We are familiar with the metrics Azure IoT Hub offers. The Azure cloud tells us, for example, how many messages are received or how many devices are connected.
Azure IoT Edge was a different story: in the past, you had to collect your own custom metrics.
Because IoT Edge modules are Docker containers, and therefore sandboxed, you had to rely on (third-party) logic to capture host metrics. Metrics about the edge agent and hub were simply not available.
With the most recent IoT Edge runtime, agent, and hub, we do have access to edge metrics.
Both the edgeAgent and edgeHub modules expose their metrics over an HTTP endpoint:
Within the Moby runtime, each of these modules exposes port 9600. Outside the runtime, we have to map them to distinct host ports to prevent a port collision.
Let's see how this looks and how we can harvest these metrics in a custom container.
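As a sketch of what such a harvesting module could do, the snippet below pulls the Prometheus-style text the metrics endpoint serves and parses each sample line. The URL is an assumption: 9601 stands in for whatever host port you mapped to a module's internal 9600.

```python
import re
import urllib.request

def parse_metric_line(line):
    """Parse one Prometheus text-format sample into (name, labels, value)."""
    match = re.match(r'^(\w+)(?:\{(.*)\})?\s+([0-9.eE+-]+)$', line.strip())
    if not match:
        return None
    name, raw_labels, value = match.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

def scrape_metrics(url="http://localhost:9601/metrics"):
    # 9601 is a hypothetical host port mapped to the module's internal 9600
    with urllib.request.urlopen(url, timeout=5) as response:
        text = response.read().decode("utf-8")
    # skip comments (# HELP / # TYPE) and blank lines
    samples = (parse_metric_line(line) for line in text.splitlines()
               if line and not line.startswith("#"))
    return [sample for sample in samples if sample is not None]
```

A custom container could run this on a timer and forward the parsed samples to the cloud as telemetry.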
Azure IoT Edge is a powerful solution for your edge computing needs. It can collect telemetry, make local decisions, and send data to the cloud. This works great while an internet connection is available, and even if the connection temporarily drops, everything keeps working: the telemetry is persisted locally so no data is lost.
Here, child devices are made part of the local routing mechanism of the edge. The child devices are configured to send their telemetry to the edge device. From there, the same telemetry is sent to the cloud as if it were sent by the child device itself.
The main advantages are:
If no internet connection is available, the child telemetry is stored on the edge until the connection is restored. The child devices have no notion of the edge gateway, hence 'transparent'.
The logic running on the edge has access to the telemetry coming from child devices, so it can be combined with other data to make local decisions.
This architecture is also known as a transparent gateway with downstream devices.
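For reference, the forwarding itself is plain edgeHub routing. A minimal sketch of the routes section of a deployment manifest that sends everything, including telemetry from downstream (child) devices, upstream:

```json
{
  "routes": {
    "upstream": "FROM /messages/* INTO $upstream"
  }
}
```

The /messages/* source matches both module output messages and telemetry arriving from downstream devices, so one route covers the whole gateway scenario.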
I already wrote a blog on this topic previously. In there, some test apps stole the show.
Now, let’s see this in action with an actual industrial device. We also check out sending telemetry back:
Getting started with Azure IoT Edge is easy. Microsoft offers quite a few tutorials for several operating systems for setting up an edge gateway.
Once you have created your first IoT edge solution and played with it, you discover Azure IoT Edge takes a bit more time to master.
In real life, IoT is hard, though…
This is because there are more moving parts like security, provisioning, managing, monitoring, etc.
For example, take a look at the ‘iotedge check’ output on your edge device:
This feature of the iotedge runtime makes it possible to check how well your runtime is hardened against common failure scenarios (e.g. running out of disk space due to excessive logging, or firewalls blocking certain protocols).
In this case, a message is shown indicating the runtime is using a development (x509) certificate which will expire within ninety days. Communication between the edge modules will stop after that date. A reboot/restart of the runtime is needed to get it running again for another ninety days.
What is the purpose of this certificate and why do we need this to be fixed?
IoT Edge certificates are used by the modules and downstream IoT devices to verify the identity and legitimacy of the IoT Edge hub runtime module.
So, apart from the secure connection with the cloud (either with a symmetric key, an x509 certificate, or a TPM endorsement), this certificate is used to secure the communication between modules and any downstream devices. If the certificate expires, edge communication comes to a halt.
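To keep an eye on this yourself, you could periodically fetch the certificate the edgeHub presents and compute how many days remain. Below is a minimal sketch; the host and port (8883, the MQTT TLS port) are assumptions for your setup, and the date format is the one the Python ssl module reports in peer certificates.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate expires.

    not_after uses the format found in ssl peer certs,
    e.g. 'Jun  1 00:00:00 2026 GMT'.
    """
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def fetch_edge_hub_cert_pem(host="localhost", port=8883):
    # grab the certificate presented on the (assumed) MQTT TLS port;
    # extracting its notAfter field from the PEM needs e.g. the 'cryptography' package
    return ssl.get_server_certificate((host, port))
```

Wiring an alert to the day count gives you a heads-up well before the ninety days run out.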
Let’s check out how to ruggedize the communication.
Azure IoT Edge is based on the concept of modules. A module is a container holding some logic executed on the edge device. These containers are actual Docker containers.
These can be generic containers, like a NodeJS container you have produced yourself, an open-source container, or a commercial one. A module can also be a container supporting Azure IoT Edge module twins and routing between modules, using one of the Azure IoT Edge SDKs.
Anyway, the modules have to be deployed at one point in time.
By default, Azure IoT Edge devices are constructed with two basic modules registered, the edgeAgent (which is responsible for the life-and-death of the other modules) and the edgeHub (which enables message routing between modules and acts as the local gateway towards the cloud):
With the life-and-death of other modules I mean the edgeAgent is responsible for keeping the module configuration on the Azure IoT Edge device in sync with the registration and configuration in the IoT Hub device registration.
Each time the configuration of an edge device registration in the IoT Hub changes, a new version of the deployment manifest is offered to the Edge Agent. It contains both the module descriptions and their configuration and a description of the message routing on the edge.
The Edge Agent then picks up the deployment manifest and compares it with the last manifest it received. If there are any configuration changes, or modules have been added or removed, the edgeAgent starts synchronizing the deployment.
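Conceptually, that synchronization boils down to a three-way diff between what is running and what the manifest asks for. A toy sketch of the idea (this is not the edgeAgent's actual code):

```python
def plan_reconciliation(running, desired):
    """Diff the currently running modules against a new deployment manifest.

    running/desired map a module name to its configuration (image, createOptions, ...).
    Returns what to create, what to stop, and what to recreate with a new config.
    """
    to_create = {name: cfg for name, cfg in desired.items() if name not in running}
    to_remove = [name for name in running if name not in desired]
    to_update = {name: cfg for name, cfg in desired.items()
                 if name in running and running[name] != cfg}
    return to_create, to_remove, to_update
```

For example, bumping a module's image tag lands it in the update set, so the agent pulls the new image and recreates that container while leaving the others alone.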
If you check the documentation, three ways of altering the IoT Edge configuration (and thus deploying a new deployment manifest) are described:
CrateDB is a distributed SQL database built on a NoSQL foundation. It is familiar to use, simple to scale, and versatile for handling any type of structured or unstructured data with real-time query performance.
It’s always nice to be able to choose from several services, like databases. So I checked out how to develop a simple application and an Azure IoT Edge module against CrateDB running in a container.
In this blog, we see how we can use CrateDB in Azure IoT Edge.
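CrateDB conveniently exposes an HTTP endpoint (/_sql, by default on port 4200) that accepts SQL statements as JSON, so a module can talk to it with nothing but the standard library. A minimal sketch, assuming a CrateDB module reachable under the hostname 'crate' on the same edge network:

```python
import json
import urllib.request

CRATE_URL = "http://crate:4200/_sql"  # 'crate' is the assumed module/host name

def sql_payload(stmt, args=None):
    """Build the JSON body CrateDB's HTTP endpoint expects."""
    body = {"stmt": stmt}
    if args is not None:
        body["args"] = args  # positional parameters for '?' placeholders
    return json.dumps(body).encode("utf-8")

def execute(stmt, args=None):
    request = urllib.request.Request(
        CRATE_URL,
        data=sql_payload(stmt, args),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())

# e.g. execute("INSERT INTO telemetry (device, temp) VALUES (?, ?)", ["dev1", 21.5])
```

The official 'crate' Python client offers a richer (DB-API) interface, but the raw HTTP endpoint keeps the module image small.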
Each Azure IoT Edge module, deployed to a device, has its own Module twin.
A Module twin is the same concept as a Device twin for an Azure IoT device: it stores state information including metadata, configurations, and conditions.
A Module twin is essentially a JSON document which lives both in the cloud (in the IoT Hub) and on the device and is kept in sync when communication between device and cloud is possible:
In the IoT Hub, the tags are writable and readable. These can be used to identify a specific device with an alternative key and/or to filter subsets of devices.
Also in the cloud, the desired properties can be written with updated values. These updated values (e.g. properties or settings) are picked up by the device once it is connected. So it could take days or weeks for an updated desired property to be picked up if the device is offline in the meantime.
But the desired properties are patient…
Once the updated values of changed desired properties arrive at a device, a callback on the device is triggered to handle them.
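What such a handler receives is a partial patch, not the full twin, and the merge semantics are worth knowing: nested objects are merged recursively and a null value deletes a key. A small sketch of applying a patch to a locally cached twin (the property names in the example are made up):

```python
def apply_twin_patch(twin, patch):
    """Merge a desired-properties patch into a local twin copy, in place.

    Twin semantics: null deletes a key, nested objects merge recursively,
    everything else overwrites.
    """
    for key, value in patch.items():
        if value is None:
            twin.pop(key, None)
        elif isinstance(value, dict) and isinstance(twin.get(key), dict):
            apply_twin_patch(twin[key], value)
        else:
            twin[key] = value
    return twin
```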
As a good citizen, an IoT Edge module should report back to the cloud how it has applied the desired properties. This is done using the reported properties in the Module twin.
This closes the loop for the administrator: I can publish a desired property change for one or more devices, and after a while check the reported properties to see which devices have picked up the change and which need some attention.
Did you notice that it's also possible to read the reported properties on the module side?
Write data, read data… that is enough to persist data on the edge, isn’t it?
Let’s see how we can use this for persisting local state.
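A minimal sketch of the idea: write local state into the reported properties and read it back after a restart. The FakeTwinClient below is only a stand-in so the sketch is self-contained; its method names mirror those of the azure-iot-device Python SDK (patch_twin_reported_properties, get_twin), and 'localState' is a made-up property name.

```python
class FakeTwinClient:
    """Stand-in for an IoT module client, for illustration only."""
    def __init__(self):
        self._reported = {}

    def patch_twin_reported_properties(self, patch):
        self._reported.update(patch)

    def get_twin(self):
        return {"desired": {}, "reported": dict(self._reported)}

def save_state(client, state):
    # persist local state by writing it into the reported properties
    client.patch_twin_reported_properties({"localState": state})

def load_state(client):
    # after a module restart, restore the state from the twin
    return client.get_twin()["reported"].get("localState")
```

Keep in mind that reported properties are meant for modest amounts of state (twins have size limits), not for bulk data.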
In my previous blog, I showed how regular Docker containers can be rolled out using Azure IoT Edge.
But what about databases, can these be deployed too?
Yes, I showed how to deploy and connect to SQL Server in the past and it works very well if you like SQL Server.
But what about MySQL? Can we connect to that database too?
Many of the world’s largest and fastest-growing organizations including Facebook, Google, Adobe, Alcatel Lucent, and Zappos rely on MySQL to save time and money powering their high-volume Web sites, business-critical systems, and packaged software.
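As a sketch, a MySQL container could be registered as a module with create options like these. The root password and volume path are placeholder assumptions; in production you would inject the password from a secret store rather than a plain environment variable:

```json
{
  "Env": [
    "MYSQL_ROOT_PASSWORD=<your-secret-password>"
  ],
  "HostConfig": {
    "PortBindings": {
      "3306/tcp": [ { "HostPort": "3306" } ]
    },
    "Binds": [
      "/var/edge-mysql:/var/lib/mysql"
    ]
  }
}
```

The bind mount is the important part: without it, the database files live inside the container and are lost when the edgeAgent recreates the module.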
The heart of Azure IoT Edge is the ability to add Docker containers with the functionality of your choice.
You can create your own module using the VS Code or Visual Studio extension for IoT Edge in various languages (e.g. C#, NodeJS, Python, Java, C).
But you can also use existing modules. IoT Edge is capable of shipping whatever container you have, as long as it is available in a container registry. The only constraint is getting it running using the zero-touch deployment approach of Azure IoT Edge.
Microsoft has created a special marketplace where modules are advertised and ready for deployment to the IoT Edge device of your choice. Here is a selection of what is offered:
Note: the filter is not functional at this moment.
On this marketplace, Microsoft also advertises its four cognitive services. These analyze text on the edge and in the cloud using container support:
These modules are:
Language Detection Container – For up to 120 languages, detects and reports in which language the input text is written.
Sentiment Analysis Container – Analyzes raw text for clues about positive or negative sentiment, for a limited number of languages.
Key Phrase Extraction Container – Extracts key phrases to identify the main points, for a limited number of languages.
Language Understanding Container – Loads a trained or published Language Understanding model, also known as a LUIS app, into a Docker container and provides access to the query predictions from the container's API endpoints.
Let’s check out how we can deploy and use them on the edge.
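As a sketch of what the deployment could look like, these containers need the licensing and billing arguments passed at startup. Below is an assumed set of container create options for one of the text analytics images; the endpoint URI and API key are placeholders taken from your Cognitive Services resource, and the exposed port (5000) follows the containers' documented default:

```json
{
  "Cmd": [
    "Eula=accept",
    "Billing=<your-cognitive-services-endpoint>",
    "ApiKey=<your-api-key>"
  ],
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [ { "HostPort": "5000" } ]
    }
  }
}
```

With the container running, other modules on the edge can call its local REST endpoint while the container reports usage to the cloud for billing.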