Right from the start back in 2015, The Things Network has supported connecting its LoRaWAN backend platform to other (cloud) platforms.
Connecting to the Azure cloud was no exception to this.
There was no default integration, but support for the MQTT protocol opened this door. Back then, I already blogged about using MQTT to connect. Based on that logic, bridges appeared for both uplink and downlink messages.
Later on, Microsoft also introduced the secure and flexible IoT Central bridge to connect to The Things Network, based on uplink support for webhooks.
Even with the new V3 portal of The Things Network, this integration was still based on either the Webhook integration or the original V3 MQTT interface.
All these techniques require an extra layer of logic between the two clouds.
The main features of the Azure IoT Hub Integration are:
Devices only need to be registered in the Azure IoT Hub. The related TTN application device registrations are kept in sync by TTN. Just use IoT Hub Device Twin tags to control your TTN device registration
Uplink messages appear both as telemetry and as Device Twin reported property updates
Downlink messages are sent using the Device Twin desired properties
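To make that downlink feature concrete: the raw LoRa payload has to end up base64 encoded inside a desired-properties patch before TTN can pick it up. Here is a minimal Python sketch; note that the `downlinks` property name and the field layout are my assumptions based on TTN's downlink message format (`f_port`, `frm_payload`, `priority`), so verify them against the integration documentation.

```python
import base64
import json

def build_downlink_patch(payload: bytes, f_port: int = 1, priority: str = "NORMAL") -> dict:
    # Wrap a raw LoRa payload in a Device Twin desired-properties patch.
    # The 'downlinks' property name and field layout mirror TTN's downlink
    # message format; check the integration docs for the exact schema.
    return {
        "properties": {
            "desired": {
                "downlinks": [
                    {
                        "f_port": f_port,
                        "frm_payload": base64.b64encode(payload).decode("ascii"),
                        "priority": priority,
                    }
                ]
            }
        }
    }

patch = build_downlink_patch(b"\x01\x02", f_port=15)
print(json.dumps(patch, indent=2))
```

Such a patch can then be applied to the device twin using the IoT Hub service SDK or the REST API.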
The current documentation is pretty straightforward (although the DPS service logo is shown, DPS is not part of the standard solution).
Let’s check out how this integration works and let’s dive deeper into this solution architecture.
It all started with this workshop back in 2016 that I built together with my friend Jan Willem Groenenberg where we connected the TTN backend with the Azure Cloud. Over the years, we organized many events based on the workshop.
We needed a ‘bridge’ to bring two worlds together: The Things Network backend applications and the Azure cloud.
I created this TTN Azure bridge based on the MQTT protocol, supporting a stateful exchange of D2C (uplink) messages from LoRa devices to an Azure IoT Hub and C2D (downlink) messages back to the devices.
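To give an idea of what such a bridge has to do on the uplink side, here is a minimal sketch. TTN v3 publishes uplinks on the MQTT topic `v3/<application id>@<tenant>/devices/<device id>/up`, with the raw LoRa bytes base64 encoded in `frm_payload`; the topic and JSON shape follow the TTN v3 data format, while the helper function itself is my own illustration:

```python
import base64
import json

def parse_uplink(topic: str, payload: bytes) -> dict:
    # Topic shape (TTN v3): v3/<application id>@<tenant>/devices/<device id>/up
    device_id = topic.split("/")[3]
    message = json.loads(payload)
    # frm_payload holds the raw LoRa bytes, base64 encoded by TTN.
    raw = base64.b64decode(message["uplink_message"]["frm_payload"])
    return {"device_id": device_id, "raw": raw}

uplink = parse_uplink(
    "v3/my-app@ttn/devices/my-device/up",
    b'{"uplink_message": {"frm_payload": "AQI="}}',
)
print(uplink)
```

In the actual bridge, an MQTT client (paho-mqtt, for instance) subscribes to `v3/+/devices/+/up` and hands each message to a parser like this before forwarding it as a D2C message to the IoT Hub.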
Since then, the TTN backend migrated twice and now we have this new Version 3 backend with lots of goodies!
I already received some questions about the original bridge, and I was informed it is not sufficient anymore, so I took some time to revisit the MQTT uplink and downlink support in TTN applications:
We will see why this is still a solid solution, but we will also look at a possible alternative.
Microsoft created this concept called Azure IoT Plug and Play:
IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration.
The idea is that the device describes itself using some identification key. This key, the Model ID (or DTMI, Digital Twin Model Identifier), is bound to a complete model written in DTDL (Digital Twins Definition Language).
Using this model, the interface (or capabilities) of this device can be read:
Properties (Azure IoT Device Twin desired properties and reported properties)
Telemetry (the D2C messages)
Command methods (based on Azure IoT Direct methods)
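As an illustration, a minimal DTDL v2 interface covering these three capability types could look like the fragment below (the `com:example` identifier and the capability names are made up for this sketch):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
    { "@type": "Command", "name": "reboot" }
  ]
}
```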
Once a device starts communicating using this deterministic interface, a User Interface can be provided dynamically.
This is the same principle as Plug and Play devices like a mouse or a webcam. If you plug it in, the device is identified as a mouse or webcam and the correct device driver is downloaded from the internet and installed. In the time before Plug&Play, each device came with a CD or floppy disk containing the driver. This was always a hassle; Plug&Play has taken away that pain.
The actual model is stored in a global Device Models Repository. You can create your own repository too.
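The public repository follows a simple naming convention: the DTMI is lowercased, the `:` separators become path segments, and the version after `;` becomes a `-` suffix, ending in `.json`. A small Python sketch of that lookup path:

```python
def dtmi_to_path(dtmi: str) -> str:
    # Map a DTMI to its file path inside a device models repository,
    # e.g. dtmi:com:example:Thermostat;1 -> dtmi/com/example/thermostat-1.json
    if not dtmi.startswith("dtmi:") or ";" not in dtmi:
        raise ValueError(f"not a valid DTMI: {dtmi}")
    return dtmi.lower().replace(":", "/").replace(";", "-") + ".json"

print(dtmi_to_path("dtmi:com:example:Thermostat;1"))
```

The same convention applies when you host your own repository, so tooling can resolve any Model ID to a file without extra configuration.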
Look at this Azure IoT Plug&Play architecture/flow:
Here we see the different steps needed to build a solution based on Azure IoT Plug&Play:
Devices exposing their Model ID
IoT Hub storing the Model ID as a reference in the IoT Hub registry
Consuming this Model ID by other Azure resources
Looking up the actual Model in a Device Models Repository
Building up a tree of device capabilities based on the device model
Once the device capabilities are known, an actual UI can be generated for this device so users can interact with it without any extra effort.
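The 'building up a tree of device capabilities' step can be sketched in a few lines: once the model JSON is retrieved, grouping its `contents` by `@type` already gives the capability tree a UI generator would walk. The model below is a made-up minimal example:

```python
import json

def capability_tree(model_json: str) -> dict:
    # Group a DTDL interface's contents by capability type.
    # (@type may be a list when semantic types are used, hence the list check.)
    model = json.loads(model_json)
    tree = {"Property": [], "Telemetry": [], "Command": []}
    for item in model.get("contents", []):
        types = item["@type"] if isinstance(item["@type"], list) else [item["@type"]]
        for kind in tree:
            if kind in types:
                tree[kind].append(item["name"])
    return tree

model = """{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Sensor;1",
  "@type": "Interface",
  "contents": [
    {"@type": "Telemetry", "name": "temperature", "schema": "double"},
    {"@type": "Property", "name": "interval", "schema": "integer", "writable": true}
  ]
}"""
print(capability_tree(model))
```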
OPC-UA is a modern protocol to unlock M2M communication.
The OPC Unified Architecture (UA) is a platform independent service-oriented architecture that integrates all the functionality of the individual OPC Classic specifications into one extensible framework. Building on the success of OPC Classic, OPC UA was designed to enhance and surpass the capabilities of the OPC Classic specifications.
Its popularity is still growing in many markets and for multiple reasons!
Most importantly, from an IoT developer's view, the protocol enables devices to offer a secure communication layer, and the exposed tags can be made human-readable.
For example, this OPC-UA client looks at the exposed tags of an OPC-UA server running on an Advantech Wise 710:
In this Prosys OPC-UA client, the tag values are shown:
The nodes are actually mapped Modbus values read from a Wise 4012E.
As you can see, it exposes these six values. The potentiometer value shown is made available as a double.
I was asked to provide documentation like this, listing all exposed nodes together with the DataType.
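Generating that list can be automated: browse the server's address space with an OPC-UA client library (asyncua, for instance), collect the browse name, node id, and data type per node, and render a table. The rendering part is a few lines; the node ids below are hypothetical, not the actual Wise 710 values:

```python
def nodes_to_markdown(nodes) -> str:
    # Render browsed OPC-UA nodes as a Markdown documentation table.
    # Each entry: (browse name, node id, data type), e.g. collected by
    # walking the address space with an OPC-UA client library.
    lines = ["| Node | NodeId | DataType |", "| --- | --- | --- |"]
    for name, node_id, data_type in nodes:
        lines.append(f"| {name} | {node_id} | {data_type} |")
    return "\n".join(lines)

table = nodes_to_markdown([
    ("Potentiometer", "ns=2;s=AI1", "Double"),   # hypothetical node ids
    ("RelayOutput", "ns=2;s=DO1", "Boolean"),
])
print(table)
```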
This version of Azure IoT Security, now labeled 'classic', was based on AuditD and filled an important need: getting insight into the security of IoT devices.
Though, as with many things, the world moves on.
Microsoft reconsidered the solution and decided to spice it up a little bit.
They now offer a new edition supporting both an agent-based and an agent-less solution.
If you are interested (and you are when you own large networks with many devices!) in the agent-less solution, please check out this great demonstration on the Internet of Things show.
This agent-less solution is especially powerful in large solutions with many devices on the network:
You just install this Azure Defender for IoT ‘sensor’ device within the network and it starts inspecting that network for possible threats based on deep-packet inspection and updated threat-analysis logic coming from Microsoft.
In contrast, here is how the current agent-based solution is rolled out:
As seen in the picture, an agent running as a daemon process on your (Linux) host (Azure RTOS is supported too) checks for possible vulnerabilities and passes them on to the cloud, to an IoT Hub.
There, the situation (and possible threats) is visualized in the Azure Defender for IoT portal panes.