Extending the AZ-220 Digital Twins hands-on lab with 3D visualization

Azure Digital Twins is advertised as a “platform as a service (PaaS) offering that enables the creation of twin graphs based on digital models of entire environments, which could be buildings, factories, farms, energy networks, railways, stadiums, and more—even entire cities”.

This sounds promising, but it does not really paint a clear picture yet, does it?

Fortunately, besides the excellent documentation, Microsoft provides a great learning path in MS Learn as part of the AZ-220 Azure IoT developer exam preparations.

There, you will learn how Azure Digital Twins offers new opportunities for representing an Internet of Things solution via twin models, twin relations, and a runtime environment.
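
To make "twin models" and "twin relations" a bit more concrete, here is a minimal sketch in Python using the azure-digitaltwins-core SDK. The model ID, property names, and twin ID are made up for illustration; the lab defines its own, richer models:

from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# Hypothetical DTDL interface for a cheese cave (not the lab's actual model).
cheese_cave_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:com:example:CheeseCave;1",
    "@type": "Interface",
    "displayName": "Cheese Cave",
    "contents": [
        {"@type": "Property", "name": "desiredTemperature", "schema": "double"},
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        # Relationships are what turn individual twins into a graph;
        # an optional "target" could restrict which models it may point to.
        {"@type": "Relationship", "name": "contains"},
    ],
}

# The URL is the host name of your own Azure Digital Twins instance.
client = DigitalTwinsClient(
    "https://<your-instance>.api.<region>.digitaltwins.azure.net",
    DefaultAzureCredential(),
)
client.create_models([cheese_cave_model])

# A twin is an instance of a model; relationships between twins are added similarly.
client.upsert_digital_twin(
    "cave-1",
    {"$metadata": {"$model": "dtmi:com:example:CheeseCave;1"}, "desiredTemperature": 12.0},
)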

You finish the learning path with a hands-on lab where you build a model around a cheese factory and ingest sensor telemetry:

In the demo, the telemetry flows through the runtime and ends up in Time Series Insights.

Yes, the learning path is a good start and will prepare you for the exam or the assessment (you need to pass this assessment for a free one-year certification renewal).

On the other hand, many extra features could be added to turn this good start into a great start!

Think about propagating Azure Digital Twins events and twin property changes through the graph and visualizing live updates of twins in a 3D model, complete with alerts.
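
To give a feel for what such propagation could look like: an Azure Function subscribed to twin change notifications (routed through Event Grid) can read the JSON Patch from the event and write the new value to a parent twin. The lab's own functions are written in C#; the snippet below is only a rough Python sketch, and the twin IDs, property paths, and the ADT_SERVICE_URL setting are placeholders:

import os
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# "ADT_SERVICE_URL" is a placeholder app setting holding the instance URL.
client = DigitalTwinsClient(os.environ["ADT_SERVICE_URL"], DefaultAzureCredential())

def main(event: func.EventGridEvent):
    # A twin change notification carries the JSON Patch applied to the source twin.
    patch = event.get_json().get("patch", [])
    for operation in patch:
        if operation.get("path") == "/temperature":
            # Push the changed value up to a (hypothetical) parent twin in the graph.
            client.update_digital_twin(
                "factory-1",
                [{"op": "replace", "path": "/lastReportedTemperature", "value": operation["value"]}],
            )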

Let’s check out some of these additional features and see what you need to do to extend the ADT example.

Continue reading “Extending the AZ-220 Digital Twins hands-on lab with 3D visualization”

IoT Plug and Play, modeling IoT Central devices

If you have ever built an IoT device yourself and finally got it to send telemetry to the cloud, you are probably familiar with the scenario where you have to repeat the hard work of describing the messages all over again on the ingestion side.

IoT devices expose device-to-cloud (D2C) telemetry and can also support cloud-to-device (C2D) communication. This interface is usually unique to that device, so to get insights from a device you have to know its interface and react to it.

Wouldn’t it be nice if a device were able to provide metadata about its interface once it connects to the cloud? That way, the incoming D2C telemetry could automatically result in, for example, a full user interface, and all C2D interactions could be represented by pre-configured input controls.

With IoT Plug and Play this is all possible:

IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device model that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application.

https://docs.microsoft.com/en-us/azure/iot-pnp/overview-iot-plug-and-play

Microsoft provides a way to describe the interface of a device and annotate every feature.
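
Such a description is, again, a DTDL model, this time describing the device itself. Below is a minimal, made-up device interface (written here as a Python dict) with one telemetry field, one writable property, and one command; a real device announces the model ID (the dtmi) when it connects, so the cloud side can resolve the full definition:

# Hypothetical IoT Plug and Play device model (DTDL v2); all names are illustrative only.
thermostat_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:com:example:SimpleThermostat;1",
    "@type": "Interface",
    "displayName": "Simple thermostat",
    "contents": [
        # D2C: telemetry the device sends, annotated with a schema and a semantic unit.
        {"@type": ["Telemetry", "Temperature"], "name": "temperature", "schema": "double", "unit": "degreeCelsius"},
        # C2D: a writable property the cloud can set on the device.
        {"@type": "Property", "name": "targetTemperature", "schema": "double", "writable": True},
        # C2D: a command the cloud can invoke.
        {"@type": "Command", "name": "reboot"},
    ],
}

From exactly this kind of model, a dashboard can derive a chart for the telemetry, an input control for the writable property, and a button for the command.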

To experience how this works, we will look at the best example: Azure IoT Central.

This SaaS IoT dashboard makes perfect use of the way devices can expose their meta-model.

Let’s see how.

Continue reading “IoT Plug and Play, modeling IoT Central devices”