Thursday 5 September 2019

Entering the Fog

The past two and a bit years have been a very interesting time for me: they mark a new direction for my work life at least, a move into the world of the Internet of Things, Fog computing and Edge devices. As we move through the third quarter of 2019, the project I have been working on is about to enter an interesting new phase, but more of that when it actually happens. The event that prompted this change of direction in my life was the creation of a new company, Dianomic Systems, and the start of a new open source project under the guidance of that company. I am lucky enough to be employed by Dianomic as the architect of that project, FogLAMP. The original concept was to create something open source that could be used to gather data from multiple sensors and devices and push that data into a common store. The initial target for this data store was the OSIsoft PI Data Archive, Dianomic being part owned and funded by OSIsoft.

During our early research it soon became clear that there was an entire movement around edge gateways and Fog computing, and that any solution we designed in this area should aim to be part of that ethos. Hence the Fog in the name of the project. The LAMP part came from the LAMP stack and a desire to build something that would help unify the bewildering number of different implementations and approaches to gathering data from sensors and devices. OSIsoft also had a desire to gather data into their systems that was currently bypassing the traditional industrial control systems. The new breed of IoT sensors was a threat to everyone: data would end up in silos, and much of the value of collecting that data would be lost without the ability to incorporate it into facility-wide analytics systems.

It has long been a principle of mine that when designing any system I want to make it as configurable and extensible as possible: able to adapt to and incorporate new requirements without major redesigns or re-implementations. We knew we wanted to gather data from many different sources, using mechanisms that ranged from protocol implementations to talking to raw hardware devices. This data would then be sent to the PI Data Archive to join the data lake within that system. The simplistic approach might be to construct a product for each data source, possibly making use of a set of libraries to implement the common features. Although possible, this was distasteful in my opinion: it came with big support and versioning issues, and it made the task of supporting a new device a specialist activity. I wanted to make it as simple as possible for a new device or protocol to be supported. I fell back on my previous experience architecting MaxScale, a MySQL proxy server, and decided we would make use of plugins for the device connections.
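
To make that concrete, here is a rough sketch of the kind of contract a device plugin might satisfy. The function names and reading format are purely illustrative, not the actual FogLAMP plugin API; they simply show how little a plugin author would need to write.

    # Illustrative sketch of a south (device) plugin -- hypothetical names,
    # not the real FogLAMP plugin interface.
    import random
    from datetime import datetime, timezone

    def plugin_info():
        """Describe the plugin so the core can discover and configure it."""
        return {
            "name": "dummy_temperature",
            "type": "south",
            "version": "0.1",
            "config": {"asset_name": "temperature", "poll_interval_ms": 1000},
        }

    def plugin_init(config):
        """Create per-plugin state; the returned handle is opaque to the core."""
        return {"asset_name": config.get("asset_name", "temperature")}

    def plugin_poll(handle):
        """Produce one reading; a real plugin would talk to hardware or a protocol stack here."""
        return {
            "asset": handle["asset_name"],
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "readings": {"temperature": round(random.uniform(18.0, 26.0), 2)},
        }

Supporting a new sensor or protocol then becomes a matter of writing a handful of small functions like these, rather than building and maintaining an entire product.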

Once the decision was made to use plugins to interface with the devices and sensors, it seemed natural to also use plugins for sending the data onward to external systems. Although in the first implementation there would only be a single system into which data was to be sent, it seemed like a worthwhile policy for the future-proofing of FogLAMP. Now we had the second big extension point within FogLAMP: the plugins for sending data.
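
In the same spirit, a sending plugin only needs to accept a batch of buffered readings and deliver them to the destination system. The sketch below uses made-up names and a made-up endpoint, not the real sending-side interface:

    # Illustrative sketch of a north (sending) plugin -- the endpoint and
    # function names are assumptions, not the real FogLAMP interface.
    import json
    import urllib.request

    def plugin_init(config):
        """Remember where readings should be delivered."""
        return {"url": config.get("url", "http://pi-server.example.com/ingest")}

    def plugin_send(handle, readings):
        """Forward a batch of readings, returning how many were accepted so the
        buffer is trimmed only when delivery actually succeeded."""
        body = json.dumps(readings).encode("utf-8")
        request = urllib.request.Request(
            handle["url"], data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:
            return len(readings) if response.status == 200 else 0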

It was around about this time that we decided we needed a convenient naming structure, so we went for the compass point system that seems to be in use in a number of IoT systems. North for the route to the cloud and south for the devices and sensors. Hence we have south plugins to talk to devices and north plugins to talk to systems such as the PI Data Archive.

The next big component that was discussed was buffering. There were a couple of reasons we felt buffering was important:
  • To deal with situations where the connection from the south devices to the north system was unreliable or not always available. It was felt FogLAMP would be running close to the south devices rather than with the north systems.
  • In order to adhere to the principles of Fog computing having some historical data, even if only for a short time frame, available locally would seem to be important.
The questions were: how do we buffer the data, what semantics do we require, and how much data do we need to concern ourselves with?

The answer to all these questions is, of course, the same: it depends. Given this, how could we design a buffering system that would be all things to all people? We had already answered this, of course; it was the same way we could support any kind of device to the south or any kind of system to the north. We would use storage plugins to provide the different scales and semantics of data buffering required.
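
To illustrate the idea, a storage plugin can be thought of as anything that honours a small buffering contract. An in-memory version might look like the sketch below; the names are hypothetical, and a real plugin could just as easily wrap SQLite or a full database server behind the same contract.

    # Illustrative in-memory storage plugin -- hypothetical interface.
    from collections import deque

    class MemoryBuffer:
        """Bounded FIFO buffer: the oldest readings are dropped once the limit is hit."""

        def __init__(self, max_readings=10_000):
            self._buffer = deque(maxlen=max_readings)

        def append(self, readings):
            """Accept a list of readings from the south services."""
            self._buffer.extend(readings)

        def fetch(self, count):
            """Hand the oldest readings to a north service without removing them."""
            return list(self._buffer)[:count]

        def purge(self, count):
            """Discard readings a north plugin has confirmed as delivered."""
            for _ in range(min(count, len(self._buffer))):
                self._buffer.popleft()

Swapping one implementation for another changes the scale, durability and semantics of the buffering without the south or north sides ever knowing.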

The final addition to the functional blocks we had designed so far was a central core to orchestrate and manage the other components. Having defined the functional blocks, the next question was how to implement them. The answer, to me at least, seemed obvious, and again came from previous experience in creating scalable solutions: use a micro-services architecture. I could envisage us needing multiple micro-services for the different south side devices and sensors, giving us isolation between devices and scalability. Likewise we could use the same approach to the north and provide for multiple destinations for the data.

Following this logic it seemed obvious the storage should also be done this way, with another service for the orchestrator. Hence the FogLAMP architecture was born. This is how it started and how it remains to this day. A few more specialist micro-services have been or will be added to this structure, but two years on we are still building things within this basic set of services.
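
To give a flavour of what the core does, think of it as a registry that the other micro-services announce themselves to and query in order to find one another. The sketch below is deliberately simplified and the names are invented; the real orchestration also covers configuration, scheduling and monitoring of the services.

    # Simplified sketch of the core's service registry idea -- hypothetical,
    # not the actual FogLAMP management API.
    class ServiceRegistry:
        def __init__(self):
            self._services = {}

        def register(self, name, service_type, address):
            """South, north and storage micro-services announce themselves to the core."""
            self._services[name] = {"type": service_type, "address": address}

        def find(self, service_type):
            """Let one service locate another, e.g. a south service finding storage."""
            return [dict(name=name, **info)
                    for name, info in self._services.items()
                    if info["type"] == service_type]

    registry = ServiceRegistry()
    registry.register("sensehat", "south", "http://localhost:6001")
    registry.register("pi-server", "north", "http://localhost:6002")
    registry.register("sqlite-buffer", "storage", "http://localhost:6003")
    print(registry.find("storage"))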
