Wednesday 11 September 2019

FogLAMP, the back story - part 2


In the first post on this blog I outlined the idea and major design blocks of FogLAMP. I want to continue with that back story and fill in a few more of the details behind the architecture and philosophy of FogLAMP. In later posts I intend to talk more about the things that can be done with it, some of the interesting discussions we have had with users and potential users, and scenarios for how to glue all the bits together. These initial posts are worthwhile, I hope, as they cast light on some of the reasons why things are the way they are.

The use of micro services was a natural choice for FogLAMP, so much so that it hardly needed talking about. The reasons behind this choice are the traditional reasons for using micro services, plus one or two more specific ones:
  • Isolation - we wanted separate micro services to isolate the different machines and protocols used to monitor sensors, buffer data or send it onwards.
  • Reliability - with isolation you also get reliability. A failure in one protocol or sensor plugin cannot affect another. Also, if a micro service fails it can be restarted without impacting the other services within the system.
  • Scale out - as the system grows, more micro services can be spun up to satisfy that growth, either within a single machine or distributed across many machines.
  • Embedding - a rather different advantage from those normally stated for a micro service architecture, but relevant to the Internet of Things. By isolating functionality in small units it becomes easier to create embedded versions of those micro services within sensor devices themselves.
  • Ease of extension - micro services give a framework in which it is easy to extend the functionality of a system by adding more specialised micro services to perform tasks not initially part of the system. Because these services act independently within the system there is no need to fully understand each service when adding new services for specific tasks; the new services interact with the rest of the system using well defined interfaces and hence are unable to adversely affect the operation of the existing services.
  • Asynchronous operation - in a system in which radically different requirements exist for real time or near real time execution, it is important that the implementation of one time critical function cannot interfere with another. This is particularly the case for the south interface to hardware and machine monitoring protocols.
Major FogLAMP Services

The other key decision made early on was the use of plugins to provide extensibility, the goal being to make it as easy as possible for any user to extend FogLAMP to support a new sensor, protocol or north system. It should be a matter of hours or days, rather than weeks or months, to add such support. Each type of plugin would have a minimal, well defined and easy to implement set of entry points; these should be limited to half a dozen at most. Additionally, in order to provide protection against misconfiguration, this interface would allow FogLAMP, when loading a plugin, to determine whether the plugin is of the right type for the operation it is being asked to perform. All plugins, regardless of where in the system they will be used, support a base set of lifecycle operations:
  • Information - allows FogLAMP to obtain information about the plugin. This is not just what type of plugin it is, but also what version of the plugin API it supports, what configuration the plugin needs, how the plugin expects to be run and the version of the plugin itself.
  • Initialisation - starts the plugin operating and provides the initial configuration of the plugin.
  • Shutdown - terminates the plugin prior to a service or system shutdown or restart.
As well as these generic interfaces, each class of plugin also has its own additional entry points specific to that plugin type.
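To make this concrete, the lifecycle entry points can be sketched as a minimal Python plugin. The entry point names follow the convention used by FogLAMP's Python plugins, but the contents of the structures returned here are simplified and illustrative, not the exact FogLAMP API.

```python
# Illustrative sketch of the three generic plugin lifecycle entry points.
# The field names in the info structure are assumptions for illustration.

_DEFAULT_CONFIG = {
    'plugin': {
        'description': 'Example sensor plugin',
        'type': 'string',
        'default': 'example',
    }
}

def plugin_info():
    """Information entry point: describes the plugin to FogLAMP."""
    return {
        'name': 'example',
        'version': '1.0.0',          # version of the plugin itself
        'type': 'south',             # lets FogLAMP verify the plugin type
        'interface': '1.0',          # version of the plugin API supported
        'mode': 'poll',              # how the plugin expects to be run
        'config': _DEFAULT_CONFIG,   # configuration the plugin needs
    }

def plugin_init(config):
    """Initialisation entry point: start the plugin with its configuration."""
    handle = dict(config)            # per-instance state handed back to FogLAMP
    return handle

def plugin_shutdown(handle):
    """Shutdown entry point: terminate prior to service shutdown or restart."""
    handle.clear()
```

The type and interface version returned by the information entry point are what allow the service loading the plugin to reject a plugin of the wrong type before it is ever initialised.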

One thing that concerned me from the beginning was how to make the configuration as extensible as the system itself. As new micro services or plugins are added, the configuration of FogLAMP should also be extended, but extended in such a way that it still looks like a single configuration engine. Configuration therefore became an early component to be designed and implemented, and a number of requirements for it were drawn up:
  • New components, both services and plugins, must be able to extend the configuration.
  • The configuration must be discoverable by external entities without those entities having prior knowledge of what configurable items might exist.
  • The system must be able to operate in a 24/7 mode, therefore all reconfiguration must be dynamic. It should not be a requirement to restart the system, or indeed a service within the system, for new configuration values to take effect.
  • It should be possible to upgrade components of the system whilst the system is running. An upgraded component must be able to add new configuration items, or deprecate existing ones, relative to earlier versions of that component's configuration. Values entered by users or administrators must of course be preserved during these operations.
To this end a component of the core micro service was designed to manage the configuration. Configuration data would be stored as JSON objects within a hierarchy of configuration categories. These JSON objects would contain not just the configuration data itself, but also metadata about each configuration item. This metadata would allow a client application, such as a graphical user interface, to discover what configuration was available and what rules might exist for the items. It would include a description of the item, a type for the item, and constraints such as minimum value, maximum value, length and so on. A default value for the item was also included. Later this metadata would be augmented with rules for validating item values and for expressing dependencies between items.
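As an illustration only, a category carrying this kind of metadata might look like the sketch below. The item names and the exact set of fields are hypothetical; the point is that each item bundles its value with the description, type, constraints and default that a client can discover.

```python
import json

# A hypothetical configuration category for a polling south service.
# Each item carries metadata alongside its current value, so a GUI
# can discover the items and validate edits against the constraints.
category = json.loads("""
{
    "pollInterval": {
        "description": "Interval between sensor polls in milliseconds",
        "type": "integer",
        "minimum": "100",
        "maximum": "60000",
        "default": "1000",
        "value": "1000"
    },
    "assetName": {
        "description": "Name of the asset to attach to readings",
        "type": "string",
        "default": "sensor1",
        "value": "machine3"
    }
}
""")
```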

Any component, be that a micro service, a plugin or a logical component within a micro service, that wished to have configuration would create a configuration category for its configuration items. Upon startup the component would first obtain the current content of its configuration category. It would then merge its internal default category contents with what it had retrieved from the configuration manager. This merging operation allowed components to be updated with new data within the category and to add that to the existing configuration data. The merge process would preserve user entered values for configuration items that already existed in the category, whilst adding new items, taking the value of each new item from its default as defined in the component's internal configuration category. This allows the configuration to be updated without loss of user values. Once merged, the component would set the category back into the configuration manager so that it could be persisted for future executions. This functionality allows plugins and components not just to extend the configuration of the system, but also to add and deprecate configuration as components are updated.
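The merge described above can be sketched as follows. This is an illustrative simplification of the behaviour, not FogLAMP's actual implementation; the function name and category representation are assumptions.

```python
def merge_category(stored, default):
    """Merge a component's internal default category with the stored one:
    preserve stored (possibly user-entered) values for items that already
    exist, add new items with their default as the initial value, and drop
    items the component no longer declares (deprecation)."""
    merged = {}
    for name, item in default.items():
        if name in stored:
            # Item already existed: keep the stored value, but take the
            # rest of the metadata from the component's new defaults.
            merged[name] = dict(item, value=stored[name].get('value', item['default']))
        else:
            # New item: its initial value comes from the declared default.
            merged[name] = dict(item, value=item['default'])
    return merged
```

Note that deprecation falls out naturally: an item present in the stored category but absent from the component's defaults simply does not survive the merge.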

The configuration manager also allowed components to register interest in a configuration category. This meant that if a category was updated by another component, including via the administrative REST API, the interested component would receive a callback from the configuration manager to inform it that the category had changed. This callback mechanism worked not only within a micro service but also between micro services. It is the key part of the implementation that allowed for dynamic reconfiguration of FogLAMP.
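The register-and-callback pattern can be sketched like this; the class and method names are illustrative, not FogLAMP's actual API:

```python
class ConfigurationManager:
    """Minimal sketch of interest registration and change notification."""

    def __init__(self):
        self._categories = {}
        self._listeners = {}   # category name -> list of callbacks

    def register_interest(self, category, callback):
        """A component asks to be told whenever this category changes."""
        self._listeners.setdefault(category, []).append(callback)

    def set_category(self, category, content):
        """Store new category content and notify every interested component."""
        self._categories[category] = content
        for callback in self._listeners.get(category, []):
            callback(category, content)
```

In the real system the callback crosses micro service boundaries, but the contract is the same: the component reacts to the new category content without any restart.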

The configuration manager was, rather unusually, probably the first foundation of FogLAMP to be completed. It is, however, extremely important to the philosophy of extensibility, discoverability and always-on operation that is fundamental to FogLAMP.
