Extending the AZ-220 Digital Twins hands-on lab with 3D visualization

Azure Digital Twins is advertised as a “platform as a service (PaaS) offering that enables the creation of twin graphs based on digital models of entire environments, which could be buildings, factories, farms, energy networks, railways, stadiums, and more—even entire cities”.

This sounds promising, but it does not make things very concrete, does it?

Fortunately, besides the excellent documentation, Microsoft provides a great learning path in MS Learn as part of the AZ-220 Azure IoT developer exam preparations.

There, you will learn how Azure Digital Twins offers new opportunities for representing an Internet of Things solution via twin models, twin relations, and a runtime environment.

You finish the learning path with a hands-on lab where you build a model around a cheese factory and ingest sensor telemetry:

In the demo, the telemetry flows through the runtime and ends up in Time Series Insights.

Yes, the learning path is a good start and will prepare you for the exam or the assessment (you need to pass this assessment for a free one-year certification renewal).

On the other hand, many extra features could be added to turn this good start into a great start!

Think about propagating Azure Digital Twins events and twin property changes through the graph and visualizing live updates of twins in a 3D model, complete with alerts.

Let’s check out some of these additional features and see what you need to do to extend the ADT example.

This post is part one of a series of posts about Azure Digital Twins:

  1. Extending the AZ-220 Digital Twins hands-on lab with 3D visualization
  2. ADX Kusto plug-in for Azure Digital Twins history
  3. Exploring Azure Digital Twins Graph history

Note: This blog post assumes you have already completed the MS Learn hands-on lab. The lab is also available on GitHub as part of the AZ-220 training labs. Any extensions seen here are made available on GitHub under the MIT license.

As seen in the training lab, the Azure Function App needs access to the ADT environment using the ‘Azure Digital Twins Data Owner’ role assigned to the Azure Function system-assigned identity:

az functionapp identity assign -g [resourcegroup name]-neu-rg -n [function app name] --query principalId -o tsv

// this returns the identity id guid

az dt role-assignment create --dt-name [azure digital twins name] --assignee [identity id guid] --role "Azure Digital Twins Data Owner"

The Identity ID GUID is also seen at:

Function App | Identity | System Assigned

You can check the role assignment afterward when looking at the IAM of both services.
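The same check can also be done from the CLI; the `-o table` output format is just a readability choice:

```shell
# List the role assignments on the Azure Digital Twins instance to verify
# the 'Azure Digital Twins Data Owner' assignment is in place.
az dt role-assignment list --dt-name [azure digital twins name] -o table
```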

Note: It seems the code samples as seen in the training are not on par with the latest documentation. For example, in the Azure Functions, the way to authenticate the Digital Twins client has changed. Now, it’s just these two lines.

using Azure.Identity;
using Azure.DigitalTwins.Core;

var cred = new DefaultAzureCredential();
var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred);

Because the lab is hosted in an Azure subscription you already own, not in a temporary sandbox environment, you can still play with it and extend it afterward.

When you have completed the lab, the solution will look much like this:

These features are already available in the original lab:

  • A live Azure Digital Twins environment around a cheese factory with three caves. Each cave has a temperature and humidity sensor that computes alerts too
  • For cave 1, a sensor device, number 55, is sending telemetry to an IoT Hub
  • An Azure Function picks up the sensor telemetry ingested by the IoT Hub and sends it to the right digital twin sensor representation, both as property patches (for alert properties) and as telemetry (non-visual temperature and humidity)
  • Sensor twin telemetry events are routed by ADT and outputted to an Event Hub endpoint using internal event routes
  • Events from that telemetry event route are picked up by another Azure Function and enriched so they can be shown in Time Series Insights

Today, we will look into adding these new or updated features:

  • More elaborate device simulation, supporting multiple devices running next to each other
  • Visual feedback of incoming telemetry in the existing graph?
  • Sensor telemetry must update cave temperature and humidity, by propagating Azure Digital Twins events through the graph
  • Sensor twin properties must update cave alerts, by propagating Azure Digital Twins properties through the graph
  • Visualization of the ADT model in a 3D environment

This will turn the current architecture into this extended architecture:

Looks promising, doesn’t it?

Apart from a more helpful device simulation and some directions on how to check the Digital Twins graph for twin updates, three beneficial additions are provided.

These will turn the lab outcome into an appealing visual representation of the physical world!

Let’s dive into each new or updated feature.

More elaborate device simulation, supporting multiple devices

The device simulation sends telemetry messages containing a temperature value and a humidity value.

Each message is accompanied by user properties for fan alert, temperature alert, and humidity alert.
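Put together, a single device message could look like this (illustrative values; in the lab’s device code the alert flags travel as string-valued user properties next to the JSON body):

```json
{
  "body": {
    "temperature": 58.9,
    "humidity": 77.2
  },
  "properties": {
    "fanAlert": "false",
    "temperatureAlert": "false",
    "humidityAlert": "false"
  }
}
```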

This logic is a bit hard to follow but it makes sense in the end. There are upper limits and lower limits involved.

I did not change the logic but added more elaborate output to the console of the app, showing what is happening on the device:

Start the application and see that both the telemetry and properties are shown in the logging, together with the upper limits and lower limits of both temperature and humidity:

The alert properties are shown too.

Notice you have to provide a device connection string first.

I extended the code so the console app now takes an appsettings.json file to read the device connection string.

Add this file to the project folder:

    {
        "Settings": {
            "cs": "[device connection string]"
        }
    }
Note: this specific file is not checked in (it contains a secret) using the .gitignore file. So, you have to create it yourself.

This way, the application is easy to duplicate so you can run multiple simulation devices on the same (development) machine at the same time.
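Reading that setting can be sketched as follows; this assumes the Microsoft.Extensions.Configuration.Json NuGet package is referenced and the key matches the appsettings.json file shown above (the repo code may differ slightly):

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Build the configuration from the appsettings.json in the output folder.
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json")
    .Build();

// 'Settings:cs' matches the section and key in the file shown above.
string deviceConnectionString = config["Settings:cs"];
```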

Visual feedback of incoming telemetry?

At the end of exercise 7 of the original ADT training, it says:

“You should be able to see that the fanAlert, temperatureAlert and humidityAlert properties have been updated.”

Well, because the simulated sensor telemetry changes quite slowly (it can take several minutes, if not hours, before alert values change), it’s not clear whether the telemetry is actually arriving in the model.

You can check the Azure Function logging output (are messages ingested or denied?) but we want to see proof in the twin model graph.

I found a quicker way to see if the sensor-th-0055 twin is updated, by looking at the metadata.

All properties have a lastUpdateTime in the user interface. Refresh the graph a number of times (e.g. by skipping to another twin and back). You should see e.g. the fanAlert lastUpdateTime changing over time:

By the way, I also experimented with extending the sensor twin model to show temperature and humidity as properties in the sensor twin.

Still, this feels like a bit of duplicate logic. Why should I add extra properties? Updating the already existing parent cave temperature and humidity properties is a more elegant way.

So, in the end, instead of updating the sensor twin with unnecessary properties, I went for updating the parent cave’s existing properties.

But you are free to play with this now-abandoned experiment. Check out this readme in the DTDL models section for more details.

Sensor telemetry must update cave temperature and humidity, by propagating Azure Digital Twins events through the graph

Event propagation is a very important part of Azure Digital Twins, if not the most important one.

In the lab, you have learned how an ADT route and ADT route endpoint are used to route sensor twin telemetry towards an Event Hub so the (now enriched) telemetry (the deviceId/twinId is added) can be picked up by Time Series Insights.

As shown above, we want to have the parent cave twin properties updated when the related child sensor twin receives new telemetry messages.

We do this by adding a new Azure Function that listens to the same sensor twin telemetry used for updating TSI:

This new function, named “FuncToGraphFunction”, gets its own copy of each telemetry message; notice the specific Event Hub consumer group named ‘graph’:
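If that consumer group does not exist yet, it can be created with the CLI; the resource names are placeholders for your own lab deployment:

```shell
# Create a dedicated 'graph' consumer group on the telemetry Event Hub
# so this function does not compete with the TSI function for messages.
az eventhubs eventhub consumer-group create \
  --resource-group [resourcegroup name] \
  --namespace-name [event hub namespace] \
  --eventhub-name [event hub name] \
  --name graph
```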

The new function first checks for the right message type and twin model:
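That check can be sketched like this; it assumes the CloudEvents attributes ADT attaches to routed messages (‘cloudEvents:type’ and ‘cloudEvents:dataschema’) are available as Event Hub application properties:

```csharp
using Azure.Messaging.EventHubs;

// Accept only telemetry events coming from the cheese cave device model.
static bool IsSensorTelemetry(EventData eventData)
{
    return eventData.Properties.TryGetValue("cloudEvents:type", out var type)
        && (string)type == "microsoft.iot.telemetry"
        && eventData.Properties.TryGetValue("cloudEvents:dataschema", out var schema)
        && ((string)schema).Contains("cheese_cave_device");
}
```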

This is because the code following this check wants to modify twins based on the relation between sensors and caves.

Note: to make this code more flexible, and able to handle other kinds of messages too, a strategy pattern implementation would be a good addition.

Now, we have received a telemetry message from a sensor twin. So, we can try to get the sensor parent cave twin ID:

Using the relationship, the parent cave twin is found.

We then construct this patch message containing the temperature and humidity and send it to the parent cave twin (using the UpdateDigitalTwinAsync call).
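A minimal sketch of that patch, using the JsonPatchDocument type from the Azure SDK (the caveTwinId and the values are assumed to come from the surrounding function code):

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.DigitalTwins.Core;

// Patch the parent cave twin with the latest sensor values.
static async Task PatchCaveAsync(
    DigitalTwinsClient client, string caveTwinId,
    double temperature, double humidity)
{
    var patch = new JsonPatchDocument();
    patch.AppendReplace("/temperature", temperature);
    patch.AppendReplace("/humidity", humidity);
    await client.UpdateDigitalTwinAsync(caveTwinId, patch);
}
```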

I noticed we receive this twin ID naming format of the sensor twin:

  • adt-az220-training-sve220101.api.weu.digitaltwins.azure.net/digitaltwins/sensor-th-0055

We try to find the parent cave of child sensors by looking at the relationship. For this, we need the short name: ‘sensor-th-0055’.

So, I trimmed the full name of the twin into this short notation and now I can check for any parent with this helper method:
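A sketch of such a helper; the relationship name ‘rel_has_devices’ is an assumption based on the lab’s cave model, so adjust it if your model uses another name:

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

// Find the $dtId of the parent cave twin for a given sensor twin.
static async Task<string> FindParentCaveTwinIdAsync(
    DigitalTwinsClient client, string fullSensorTwinId)
{
    // Trim '...azure.net/digitaltwins/sensor-th-0055' down to 'sensor-th-0055'.
    string childId = fullSensorTwinId.Substring(fullSensorTwinId.LastIndexOf('/') + 1);

    string query = "SELECT Parent FROM digitaltwins Parent " +
                   "JOIN Child RELATED Parent.rel_has_devices " +
                   $"WHERE Child.$dtId = '{childId}'";

    await foreach (JsonElement row in client.QueryAsync<JsonElement>(query))
    {
        return row.GetProperty("Parent").GetProperty("$dtId").GetString();
    }

    return null; // no parent cave found
}
```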

When we deploy the new function and check the log messages, we get a good insight about what is happening:

We see the event type, the schema type, and Twin ID, made available in the message application properties.

Using the telemetry and the discovered parent cave Twin ID, we can send this patch.

Now, go back to the Azure Digital Twin explorer and check the graph:

There, we see both the temperature and humidity of the sensor parent cave being updated based on child sensor telemetry.

But what about those other properties like the cave alert properties?

Sensor twin properties must update cave alerts, by propagating Azure Digital Twins properties through the graph

At this moment, we have successfully updated a cave twin based on related sensor twin telemetry.

Can we update the cave twin also, based on the sensor twin property changes (the alert property patch seen in the original solution)?

Of course, we could use the original ‘TelemetryFunction’ and patch both the sensor twin properties and the parent cave twin properties.

But in a more mature Azure Digital Twins environment, you want to sync parents and children using the routes and additional Azure Functions.

So, let’s introduce another new Azure Function in conjunction with a second route, created especially for twin updates:

In the ADT environment, twin property changes, in general, will generate events of type ‘Microsoft.DigitalTwins.Twin.Update’.

We can route them to an Event Hub using a new ADT route:
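Creating the route can be sketched with the CLI; the Event Hub endpoint is assumed to exist already, and the endpoint and route names here are examples:

```shell
# Route all twin property update events to an existing Event Hub endpoint.
az dt route create \
  --dt-name [azure digital twins name] \
  --endpoint-name [twin update event hub endpoint] \
  --route-name twin-update-route \
  --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```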

Once this route is in place, we see the arrival of any twin update in this Event Hub:

In the free GitHub Repo containing all code samples as seen in this blog post, a new function called ‘PropFuncToGraphFunction’ is made available:

Note: it’s a good practice to always use a separate consumer group, preventing others from ‘hijacking’ your messages.

This function listens to sensor twin property updates:

In the previous function, handling telemetry messages, both the event type and data schema were part of the message application properties.

Here, the schema is omitted from the message application properties.

We have to check the body too, for the schema:

Because the message body is a JSON patch, I wrote a converter to turn this message format into a C# class:

    {
        "modelId": "dtmi:com:contoso:digital_factory:cheese_factory:cheese_cave_device;1",
        "patch": [
            {
                "value": false,
                "path": "/fanAlert",
                "op": "replace"
            },
            {
                "value": true,
                "path": "/temperatureAlert",
                "op": "replace"
            },
            {
                "value": true,
                "path": "/humidityAlert",
                "op": "replace"
            }
        ]
    }
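Deserializing that body can be sketched with two small classes; the class and property names here are illustrative, not taken from the repo:

```csharp
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// The twin update body: the model of the updated twin plus a JSON patch.
public class TwinUpdateMessage
{
    [JsonPropertyName("modelId")]
    public string ModelId { get; set; }

    [JsonPropertyName("patch")]
    public List<PatchOperation> Patch { get; set; }
}

public class PatchOperation
{
    [JsonPropertyName("op")]
    public string Op { get; set; }

    [JsonPropertyName("path")]
    public string Path { get; set; }

    [JsonPropertyName("value")]
    public JsonElement Value { get; set; }
}

// Usage: var update = JsonSerializer.Deserialize<TwinUpdateMessage>(body);
```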

We know we have received a valid sensor twin patch so we can try to find the parent cave. If found, we patch its alert properties:

The function log shows all the details regarding handling device twin updates:

As you can see, the right event type and the schema from the sensor twin update will be checked.

If the related parent is found, a patch is sent and the parent is updated.

There is one catch: this twin update event will generate another twin update event…

To prevent circular updates, you need to test for the right conditions!

Here is an example where wrong twin updates are rejected:
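One possible guard, assuming the model ID of the updated twin is available from the message body: only updates originating from the sensor device model are propagated, so the cave updates caused by our own patches are rejected:

```csharp
// Reject twin updates that do not come from the sensor device model;
// this stops a cave patch from triggering yet another cave patch.
static bool ShouldPropagate(string modelId)
{
    return modelId ==
        "dtmi:com:contoso:digital_factory:cheese_factory:cheese_cave_device;1";
}
```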

Now, the cave twin alert properties are updated based on changing sensor twin alerts:

This twin update logic opens new possibilities like updating IoT Hub device twin desired properties when ADT twin properties are changing. This way, full integration between ADT twins and IoT Hub device twins can be established.

By now, it must be clear that each digital twin model synchronization requirement is implemented with Azure Functions.

Right now, you have some basic tooling available to manipulate the graph using the ADT query language.

Personally, I am very impressed by the graph abilities of the Azure Digital Twins environment. I get full insight into the hierarchy. But it’s not something I would show to my boss; it’s way too complex.

How about showing those fancy 3D models everybody is talking about, to my boss?

Visualization of the ADT model in a 3D environment

Perhaps you have seen the ISS demonstration based on Azure Digital Twins?

If you check out the Azure Digital Twins explorer (the one accessible from the Azure Portal), you will come across this ISS demonstration yourself:

You can try it out here too.

Inspired by this, I would like to have a 3D demonstration of the cheese factory.

Is this possible?


All you need is an already running cheese factory demo with one or more simulation devices running and generating sensor data (wait, we have that already!), together with a 3D representation.

In a more elaborate situation, you want full control over how to represent the 3D environment so you probably will build something yourself.

But Microsoft offers a low-code solution inside the ADT explorer which is already good enough in many cases.

In the ADT Explorer, just go to the next tab:

There you see an introduction to the low-code 3D representation which already supports a world map to place your models on, widgets representing properties in legends, alert icons, layers, etc. Please check the tour to learn about this low-code solution.

There is also a nice tutorial showing a factory floor with multiple robot arms on it.

The 3D image of that factory is accessible in the Windows 11 3D viewer:

Using Paint3D, I was also able to open this model because it is saved in the (universal) GLB format, the same format Paint3D uses:

For my Cheese factory, I needed another model of a… ahem… factory…

Luckily, the 3D library of Paint3D already contains this factory:

It’s perfect. With a little bit of imagination, I can recognize three caves.

So, with my impressive drawing skills, this cheese factory was created, having three caves:

Well, that was easy.

Notice there are now four separate visual components on this canvas. This is important because the whole drawing of the factory is just one component. For example, the doors of the factory will not be selectable as a representation of a twin. That is why I added those three numbers.

Note: If you are actually skilled in 3D modeling, you could create a more compelling factory by e.g. making the three caves ‘glow’.

A copy of this model is available for you in the GitHub repo.

Back in the ADT explorer, where can we upload/combine that new 3D model?

When you close the tutorial, you probably get this ‘No environment added’ message:

Note: I say ‘probably’ because it seems the browser cache is involved here. In my example, I have already added an environment but if you open the same page again in another type of browser or on another machine, this message will appear…

You first need to attach a storage account to the ADT runtime using this ‘Configure Environment’ dialog:

Regarding the instance URL, you can select the current one from the drop-down list.

For the Azure Storage container URL, you have to go through a number of steps, as seen in this tutorial:

The first step is creating a completely new storage account:

az storage account create --resource-group <your-resource-group> --name <name-for-your-storage-account> --location <region> --sku Standard_RAGRS

The output of this command will show the storage account ID, which has to be used together with YOUR OWN email address to grant you ‘Storage Blob Data Owner’ role permissions on this new storage account:

az role assignment create --role "Storage Blob Data Owner" --assignee <your-Azure-email> --scope <ID-of-your-storage-account>

Note: This is just another role assignment, you could check this (or even assign this) in the Azure portal. Check the IAM page of the storage account.

Note 2: YOUR OWN email address needs to be used. I expect it’s needed for two reasons. First, so you can execute the next command. Second, once you want to use the 3D scene editor, you need access to the container.

Then, enable CORS for the storage account so the ADT explorer can access the storage account (using your identity/role permissions):

az storage cors add --services b --methods GET OPTIONS POST PUT --origins https://explorer.digitaltwins.azure.net --allowed-headers Authorization x-ms-version x-ms-blob-type --account-name <your-storage-account>

Now, create a private container inside the new storage account:

az storage container create --name <name-for-your-container> --public-access off --account-name <your-storage-account>

The last step is to arrange the URL of your new container. It will look like this:

https://[your storage account].blob.core.windows.net/[your container]
Fill the URL address of your storage account container into the dialog:

Once this is saved, a ‘scene’ list is shown. It’s probably empty for now.

Using the storage container, resources for multiple scenes can be added.

In the background, you will upload 3D models to this container, and each scene is added to this single ‘3DScenesConfiguration.json’ file inside the container:

Note: an example of this file is made available on GitHub.

So, let’s add a scene:

Give the scene a unique name, flip the switch to show it on a globe, and upload the 3D image.

The scene is added to the list:

Check out the Globe View:

The cheese factory is located on the map as a button.

Notice that the location of the scene button seems fixed. I do not think it’s possible to update the location live. I can imagine a vehicle twin can have a changing location.

Click on the factory button and start building (notice the Build state in the upper right corner):

You can rescale and turn the 3D image. It’s not perfect (sometimes it goes out of focus) but it works well enough for now.

As you can see, this 3D representation is shown without elements and without behaviors (as seen on the left side of the screen).

In this post, we will add the second cave as an element inside the 3D picture. This is because I have device simulation 66 running (the sensor child of the second cave).

There are three main tasks to perform for this new element:

  1. Select a twin and a mesh, a visual component within the picture to represent the element. Here, I will select the ‘number 2’ mesh for cave 2
  2. We have to add behaviors to the element. I will add three alerts and two gauges inside a legend to visualize the cave
  3. Start viewing the model

Adding a twin and mesh

Once you click ‘New element’ you get a dialog to create the element:

Select the twin named ‘cave_2’, give the element a descriptive name like ‘Cave 2’ and click on the right mesh, the visual ‘2’ in the 3D picture:

As you can see, the mesh is selected and related to this twin.

Adding an alert

We continue editing the new element by adding behaviors.

Click on the Behaviors tab and add a new behavior:

We are adding a temperature alert, so fill in as Display name:

Temperature alert

Click on the Alerts tab and enter the boolean alert property as trigger expression:

PrimaryTwin.temperatureAlert
Select a nice icon (there is a thermometer) and color (I select yellow for temperature) and fill in as scenario description:

Temperature alert: ${PrimaryTwin.temperatureAlert}

Save/create this alert behavior:

After saving this, when you go back to the ‘Cave 2’ element, you will notice it now has one behavior:

Let’s add two more alert behaviors. As seen above, fill in these values:

Humidity alert
Humidity alert: ${PrimaryTwin.humidityAlert}

Fan alert
Fan alert: ${PrimaryTwin.fanAlert}

After saving the behaviors, at this point, Cave 2 has three alert behaviors:

Adding a legend (widgets)

We will add a legend showing both the latest temperature and latest humidity as gauges.

Add a new behavior (click Add behavior), name it ‘Legend’, and open the Widgets tab.

Click Add widget.

For now, three kinds of widgets can be added:

Note: the Link widget will actually show a link that can be clicked so you can combine the 3D visualization with some other system (like a link to some Power BI page). This link will open in a separate browser tab. It could even trigger some logic (an Azure Function?) to make the 3D representation actionable. The link can look like [https://www.somewebsite.nl/twinrelatedsubselection/${PrimaryTwin.$dtId}]

Select Gauge and a new dialog is added:

Add a Display name:

Latest temperature

Add a measure:


Select the ‘temperature’ property from the list (taken from the properties from the related twin):

This will show as:


Only a certain range of temperatures is tolerated for the cave so we will add three ranges:

0-55 red
55-65 yellow
65-100 red

Note: these ranges are on par with the device simulation logic. They are hard-coded; I’m not sure if we can program against this logic.

The widget will look like this:

Add a second gauge to the legend for the latest humidity:

Latest humidity

0-69 red
69-89 blue
89-100 red

Note: these ranges are also on par with the device simulation logic.

In the end, the second widget will look like this:

Save the behavior changes.

Although not demonstrated here, you can offer limited interaction, like sending commands, using the Link widget.

There, you can add a clickable URL that can be even partially filled with twin properties:

Note: because this call is a GET, all information is seen by others on the same network/internet. So, they could try to execute the same URLs without any initial security limitations.

I recommend limiting this to e.g. manuals or documentation links.

See that Cave 2 now has four behaviors:

It’s time to see the model in action!

Add a Status

Next to alerts and widgets, you can also add a Status.

Although not demonstrated here, you can give a complete mesh (or multiple) a certain color based on a condition. This way, you can give segments of the complete 3D model different colors to show the status of these parts.

It’s up to the user to do something with it.

View the result

At this point, the device simulation should still run and generate telemetry.

Click the ‘View’ button in the right upper corner.

This will bring us to the 3D model viewer:

In my case, to the left, I already get the notification showing Cave 2 has two alerts. This is on par with the messages sent by the simulation, and the properties of both the sensor 66 and cave 2 twins in the graph.

When I hover over the mesh representing cave 2, I get this popup:

When I click on this popup, the Cave 2 legend is shown, together with the current alerts of Cave 2:

Wait for a minute and you will see the gauge values change automatically based on generated telemetry.

Here you have it, a live updated 3D representation of the cheese factory.

There are many more features available in the low-code 3D editor and viewer, but I leave those for you to discover.

By the way, adding a second and third cave element is surprisingly easy. Adding all the behaviors to those twin elements is as easy as selecting them from a dropdown list:

Because the administration of the behaviors, etc. is saved in that ‘3DScenesConfiguration.json’ file in the storage container, you can reuse them on other elements.


We have experienced how the (fairly straightforward) AZ-220 lab example has turned into a beautiful demonstration of many of the Azure Digital Twins features.

Using my code examples available for free on GitHub, you can start building your own Digital Twins solution.

I’m excited about the many Digital Twins possibilities now available like having a simplified live 3D representation of something very difficult to understand.

I’m interested in what you are doing with the extended lab. So please leave a note below, in the GitHub repo, or on Twitter via @svelde.