Extending as-a-Service Modeling to Edge Event-Driven Applications

In the first part of this series, we looked at the possibility of modeling “as-a-service” offerings using a data model, with the goal of deciding whether a common approach to modeling all manner of services could be created. That could facilitate the development of a generalized way of handling edge computing applications, both in terms of lifecycle management and in terms of presenting services to users. It’s the need to represent the service in runtime terms, not just in lifecycle terms, that we’ll now explore.

We’ll kick this off by returning to software-as-a-service. The resource-oriented aaS offerings can be viewed as relating primarily to lifecycle management; whatever gets run on a resource-as-a-service offering of any sort is then responsible for making its own runtime visible. SaaS, in contrast, has to present the “service” the way an application would present it, as a runtime interface/API for access. This is interesting and important because many of the 5G applications of edge computing relate to lifecycle management, while edge computing overall is likely to be driven by IoT applications, where a model of an application/service would be modeling runtime execution.

Most applications run in or via an as-a-service offering are represented by an IP address and an API. If the application is loaded by the user onto IaaS, PaaS, or a container service, then the user’s loading of the application creates the API. In SaaS, the API is provided as the on-ramp by the cloud provider. In either case, you can see that there is an “administrative” API that represents the platform (for non-SaaS), and a runtime API that represents the application or service.
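
To make that split concrete, here’s a minimal sketch in Python. The class and endpoint names are hypothetical, invented only for illustration; the point is simply that the administrative API belongs to the platform and the runtime API belongs to whatever was deployed onto it.

```python
# Hypothetical sketch of the two-API split: an administrative API owned by
# the platform, and a runtime API exposed by the deployed application.

class AdministrativeAPI:
    """Platform-facing: deploy a workload and report its lifecycle status."""
    def __init__(self):
        self.deployments = {}

    def deploy(self, name: str, image: str) -> None:
        self.deployments[name] = {"image": image, "state": "run"}

    def status(self, name: str) -> str:
        return self.deployments[name]["state"]


class RuntimeAPI:
    """Service-facing: the interface the running application itself exposes."""
    def handle(self, request: dict) -> dict:
        return {"echo": request}


platform = AdministrativeAPI()
platform.deploy("order-service", "registry/order:1.0")  # administrative path
service = RuntimeAPI()                                   # created by the deployed application
print(platform.status("order-service"), service.handle({"order": 1}))
```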

One complication to this can be seen in the “serverless” or function-as-a-service applications. These are almost always event-driven, and because the functions are stateless, there has to be a means of state control provided, which is another way of saying that event flows have to be orchestrated. In AWS Lambda, this is usually done via Step Functions, but as I’ve been saying in my blogs on event-driven systems, the general approach would be to use finite- or hierarchical-state-machine design. That same design approach was used in my ExperiaSphere project to manage events associated with service lifecycle automation.
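
To make the orchestration idea concrete, here’s a minimal, table-driven FSM sketch in Python. It’s a generic pattern, not AWS Step Functions syntax, and the states, events, and handler functions are hypothetical; the point is that the functions stay stateless while the orchestrator holds the state.

```python
# Minimal table-driven FSM for orchestrating stateless functions.
# States, events, and handlers are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

Handler = Callable[[dict], None]

@dataclass
class EventOrchestrator:
    state: str
    # (current_state, event) -> (handler_to_invoke, next_state)
    table: Dict[Tuple[str, str], Tuple[Handler, str]] = field(default_factory=dict)

    def on(self, state: str, event: str, handler: Handler, next_state: str) -> None:
        self.table[(state, event)] = (handler, next_state)

    def dispatch(self, event: str, payload: dict) -> None:
        handler, next_state = self.table[(self.state, event)]
        handler(payload)          # the stateless function does the work...
        self.state = next_state   # ...while the state lives in the orchestrator


def validate(p: dict) -> None: print("validate", p)
def charge(p: dict) -> None: print("charge", p)

fsm = EventOrchestrator(state="new")
fsm.on("new", "order_received", validate, "validated")
fsm.on("validated", "payment_ok", charge, "complete")

fsm.dispatch("order_received", {"order": 123})
fsm.dispatch("payment_ok", {"order": 123})
```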

Given that we can use FSM/HSM for lifecycle automation and for the orchestration of event-driven applications, wouldn’t it be nice if we could somehow tie the processes together? There would seem to be three general ways that could be done.

The first way would be to simply extend our two-API model for SaaS, and say that the administrative API represents the exposed lifecycle automation HSM, and the service API the service HSM. We have two independent models, the juncture of which would be the state of the service from the lifecycle HSM reflected into the service HSM. That means that if the lifecycle HSM says that the service is in the “run” state, then the service HSM is available.
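
As a sketch of that single point of coupling (in Python, with hypothetical names; the internal states on both sides are deliberately hidden from each other), the only thing the service model ever learns from the lifecycle model is whether the service is in the “run” state:

```python
# Option one: two independent models whose only juncture is the lifecycle
# "run" state reflected into the service model. Names are hypothetical.

class ServiceModel:
    """Runtime-side HSM, reduced here to an availability flag plus a handler."""
    def __init__(self):
        self.available = False

    def set_available(self, available: bool) -> None:
        self.available = available

    def handle(self, event: str) -> None:
        if not self.available:
            raise RuntimeError("service is not in the run state")
        print("service processing", event)


class LifecycleModel:
    """Lifecycle-side HSM; its internal states stay hidden from the service."""
    def __init__(self, service: ServiceModel):
        self.state = "deploying"
        self.service = service

    def set_state(self, state: str) -> None:
        self.state = state
        self.service.set_available(state == "run")  # the single reflection point


service = ServiceModel()
lifecycle = LifecycleModel(service)
lifecycle.set_state("run")
service.handle("user_request")
```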

The second approach would be to say that there is a lifecycle HSM linked to each element of the service, each individual component. We’d have a service HSM whose elements were the actual software components being orchestrated, and each of those elements would have its own lifecycle HSM. Those HSMs could still be “reflected” upward to the administrative API so you’d have a service lifecycle view available. This would make the lifecycle state of each component available as an event to that component, and also let component logic conditions generate a lifecycle event, so that logic faults not related to the hosting/running of the components could be reflected in lifecycle automation.
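
A sketch of that per-component coupling might look like the following (again Python, with component names, states, and events that are purely illustrative): lifecycle state flows down to the component as an event, and a logic fault raised by the component flows up as a lifecycle event.

```python
# Option two: each service component is paired with its own lifecycle HSM,
# and events cross the boundary in both directions. Names are hypothetical.

class ComponentLifecycle:
    TRANSITIONS = {"deployed": "run", "host_fault": "failed", "logic_fault": "failed"}

    def __init__(self, name: str):
        self.name = name
        self.state = "deploying"
        self.component = None                  # set when a component attaches

    def lifecycle_event(self, event: str) -> None:
        self.state = self.TRANSITIONS.get(event, self.state)
        print(f"lifecycle[{self.name}] -> {self.state}")   # also reflected upward
        if self.component:
            self.component.on_lifecycle_state(self.state)  # downward reflection


class ServiceComponent:
    def __init__(self, name: str, lifecycle: ComponentLifecycle):
        self.name = name
        self.lifecycle = lifecycle
        lifecycle.component = self
        self.runnable = False

    def on_lifecycle_state(self, state: str) -> None:
        self.runnable = (state == "run")

    def process(self, event: dict) -> None:
        if not self.runnable:
            return
        if event.get("malformed"):
            # a logic fault the hosting layer can't see, raised into lifecycle management
            self.lifecycle.lifecycle_event("logic_fault")
            return
        print(f"{self.name} handled", event)


lc = ComponentLifecycle("edge-fn-1")
comp = ServiceComponent("edge-fn-1", lc)
lc.lifecycle_event("deployed")        # hosting layer reports the component is up
comp.process({"order": 1})
comp.process({"malformed": True})     # component logic raises a lifecycle event
```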

The final approach would be to fully integrate the two HSM sets, so that a single HSM contained both service event flow orchestration events and lifecycle events. The FSM/HSM tables would be integrated, which means that either lifecycle automation or service orchestration could easily influence the other, which might be a benefit. The problem is that if this is to be in any way different from the second approach above, the lifecycle and service tables would have to be tightly coupled, which would create the risk of a “brittle” relationship, one that might require reconfiguring FSM/HSM process identities, and even events, if there were a change in how the components of an event-driven service were deployed.

Selecting an option here starts easy and then gets complicated (you’re probably not surprised!). The easy part is dismissing the third option as adding more complexity than advantage. The harder part is deciding between the first two. The first approach is simpler and more consistent with current practices, which don’t mix runtime and lifecycle processes in any way, so we should assume option one unless we can define a good reason for option two.

The difference between our remaining options is the coupling between runtime behavior and lifecycle behavior, which means coupling between “logic conditions” detected by the actual application or service software and lifecycle decisions. Are there situations where such coupling is likely justified?

One such situation would be where load balancing and event steering decisions are made at runtime. Service meshes, load balancers, and so forth are all expected to act to optimize event pathways and component selection among available instances, including scaling. Those functions are also often part of lifecycle processes, either because program logic doesn’t include the capabilities or because it’s more logical and efficient to view them as arising from changes in resource behavior, which are visible to lifecycle processes.

This seems to be a valid use case for option two, but the idea would work only if the service mesh, API broker, load balancer, or whatever, had the ability to generate standard events into lifecycle management, or vice versa. You could argue that things like service meshes or load balancers should support event exchange with lifecycle processes because they’re a middleware layer that touches resource and application optimization, and it’s hard to separate that from responding to resource conditions that impact QoE or an SLA.

That point is likely to be more an argument for integration between lifecycle management and service-level meshing and load-balancing than against our second option. One could make a very solid argument for requiring that any form of event communication or scalability have some resource-level awareness. That doesn’t mean what I’ve characterized as “primitive” event awareness, because specific resource links of any sort create a brittle implementation. It might mean that we need a set of “standard” lifecycle events, and even lifecycle states, to allow lifecycle management to be integrated with service-layer behaviors.
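
One way to picture such a standard vocabulary is a shared set of lifecycle states and events that a mesh or load balancer could emit and consume without binding to any specific resource. The particular names below are my own illustrative assumptions, not anything drawn from a standards body:

```python
# Hypothetical "standard" lifecycle vocabulary shared between service-layer
# middleware (mesh, load balancer) and lifecycle management.

from enum import Enum

class LifecycleState(Enum):
    ORDERED = "ordered"
    DEPLOYING = "deploying"
    RUN = "run"
    DEGRADED = "degraded"          # running, but the SLA is at risk
    FAILED = "failed"
    DECOMMISSIONED = "decommissioned"

class LifecycleEvent(Enum):
    DEPLOY = "deploy"
    ACTIVATED = "activated"
    SLA_VIOLATION = "sla_violation"
    SCALE_OUT = "scale_out"        # e.g. raised by a load balancer under load
    SCALE_IN = "scale_in"
    FAULT = "fault"
    TEARDOWN = "teardown"
```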

That’s what I found with ExperiaSphere; it was possible to define both standard service/application element states and events outside the “ControlTalker” elements that actually controlled resources. Since those were defined in the resource layer anyway, they weren’t part of modeling lifecycle automation, only the way it mapped to specific management APIs. I think the case can be made for at least a form of option two, meaning a “connection” between service and lifecycle models at the service/application element level. The best approach seems to be borrowed from option one, though; the service layer can both report and receive an event that signals a transition into or out of the “running” state, meaning an SLA violation or the lack of one. The refined state/event structures of the service and lifecycle models are hidden from each other, because neither needs to know the other’s details.

In this structure, though, there has to be some dependable way of relating the two models, which clearly has to be at the service/application element level. These would be the lowest level of the runtime service, the components that make up the event flows. Logically, they could also be the lowest service-layer model elements, and so there would be both a service logic and a lifecycle model for these, linked to allow for the exchange of events described above. Service logic modeling of the event flows wouldn’t require any hierarchy; it’s describing flows of real events. The lifecycle model could bring these service-layer bottom elements back through a hierarchy that could represent, for example, functional grouping (access, etc.), then administrative ownership (Operator A, B, etc.), upward to the service.
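
A data-model sketch of that linkage might look like the following, in Python. The structure and names (functional groups, operators, element names) are illustrative assumptions; the point is that the service-logic view stays flat while the lifecycle model builds a hierarchy above the same bottom-level elements.

```python
# The bottom-level elements are shared by both models; only the lifecycle
# model adds a hierarchy above them. Names and groupings are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServiceElement:
    """Lowest-level component: appears in both the service and lifecycle models."""
    name: str
    runtime_state: str = "idle"         # service-logic FSM state
    lifecycle_state: str = "deploying"  # reflected from the lifecycle model

@dataclass
class LifecycleNode:
    """Hierarchy exists only on the lifecycle side; leaves point at elements."""
    name: str
    children: List["LifecycleNode"] = field(default_factory=list)
    element: Optional[ServiceElement] = None

# Flat service-logic view: just the elements whose event flows are orchestrated.
elements = [ServiceElement("edge-fn-1"), ServiceElement("edge-fn-2")]

# Lifecycle view: functional grouping, then ownership, upward to the service.
service_model = LifecycleNode("RetailIoTService", children=[
    LifecycleNode("OperatorA", children=[
        LifecycleNode("Access", children=[
            LifecycleNode(e.name, element=e) for e in elements
        ]),
    ]),
])
```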

If we thought of this graphically, we would see a box representing each component of a real-time, event-driven application. The component, with its own internal state/event process, would have a parallel link to the lifecycle model, which would have its own lifecycle state/event processes. This same approach could be used to provide a “commercial” connection, for billing and journaling. That’s what we’ll talk about in the last blog in this series.

