One Azure IoT accelerator to rule them all

The family of Azure IoT resources is very diverse. If you know what you are doing and have developers available, you can have a great time with the many PaaS cloud resources.

If you have devices that need internet connectivity but no developers available, you can check out IoT Central, the SaaS IoT solution.

Recently, Microsoft announced a very powerful integration with other leading IoT platforms like SAP Leonardo and PTC ThingWorx. Both can connect directly to the Azure IoT Hub, the cloud gateway. This opens up a broad range of integration opportunities.

And last but not least, you can start with prebuilt verticals: Azure IoT accelerators, formerly known as Azure IoT suites. If you have developers available but you do not want to start from scratch, check them out. You can deploy a typical accelerator in 15 minutes to see how it behaves. And the smart thing is, all the code behind the logic is available for free on GitHub.

The most known accelerators are:

  • Remote Monitoring (version two is based on microservices)
  • Connected Factory (supports the OPC UA protocol)
  • Predictive Maintenance

But there are also third-party accelerators.

If you are a developer or architect, it’s time well spent checking them out!

Remote monitoring

The Remote Monitoring accelerator is a good starting point; it has a lot of out-of-the-box features.

In one of our current projects, we were looking for a rule engine. And while playing with the demo of the Remote Monitoring Accelerator, we stumbled on one.

It’s not immediately clear how this rules engine works, but you can try to read about it or check out the code on GitHub.

The features of this rules engine are both simple and powerful:

  • Define rules for alarms or even actions as JSON files in blob storage
  • Bind rules to groups of devices (defined in a CSV file in blob storage)
  • Rules can react to ‘instant’ messages using JavaScript comparisons
  • Rules can react to time-window aggregations using JavaScript comparisons

And the best feature is that the rules engine is based on Azure Stream Analytics. Therefore it’s modular, and it can be separated out and reused completely in your own solution.

In this blog, we will see how it’s done.

Scope

We are going to reuse the same rules engine that ships with the accelerator.

You will see we only need to add and configure some Azure resources in our own resource group.

We will ignore the UI for maintaining the rules engine configuration.

The Stream Analytics Job

To start with the easy part, just generate your own Remote Monitoring accelerator.

Yes, this accelerator will burn some of your Azure credits if you keep it running for days, but we only need it for a short period, just to see how it behaves. We focus on the Stream Analytics job inside it. And we can see how it’s working because the accelerator spins up some device simulations on startup.

Note: Once you have duplicated the logic of the rules engine, you can safely remove the accelerator from your subscription.

Once you check out the job, you will notice the following moving parts:

Inputs

Telemetry is received from an IoTHub. Incoming telemetry looks like:

{
  "temperature": 99.1307423000134,
  "temperature_unit": "F",
  "humidity": 369.0896886846003,
  "humidity_unit": "%",
  "pressure": 457.464665566787,
  "pressure_unit": "psig",
}

Yes, this format is pretty simple, so you will probably need to rewrite part of the rules engine to support your own message format. But that is relatively little work compared to what is already done for you.
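
By the way, besides the payload itself, Stream Analytics enriches every event coming from an IoT Hub input with metadata, and the query relies on some of it. Roughly, an incoming event looks like this to the query (a sketch; the exact set of properties depends on your input configuration):

{
  "temperature": 99.1307423000134,
  "temperature_unit": "F",
  "humidity": 369.0896886846003,
  "humidity_unit": "%",
  "pressure": 457.464665566787,
  "pressure_unit": "psig",
  "PartitionId": 0,
  "EventEnqueuedUtcTime": "2019-03-09T19:33:51.102Z",
  "IoTHub": { "ConnectionDeviceId": "chiller-01.0" }
}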

The groups of devices are defined in a CSV file in blob storage, used as reference data.

The structure is simple: each device registered in the IoTHub is also listed in this file and connected to a group ID.

Notice the path of this file:

Note: I use the Azure Storage Explorer to update and insert files in Azure Storage.

The blob is put in a container using the path: YYYY-MM-DD/HH-mm/devicegroups.csv
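
For example, a device groups file uploaded at 2019-03-09/19-00/devicegroups.csv could look like this (the column names and the elevator device are illustrative; use the file generated by the accelerator as the reference):

"DeviceId","GroupId"
"chiller-01.0","default_Chillers"
"elevator-01.0","default_Elevators"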

In the same folder we also see the other reference input file, the JSON file with the rules:

[
  {
    "Id": "default_Chiller_Pressure_High",
    "Name": "Chiller pressure is too high",
    "Description": "Pressure > 298",
    "GroupId": "default_Chillers",
    "Severity": "Critical",
    "AggregationWindow": "instant",
    "Fields": [
      "pressure"
    ],
    "__rulefilterjs": "return (record.__aggregates.pressure > 298) ? true : false;"
  },
  {
    "Id": "default_Chiller_Temperature_High",
    "Name": "Chiller temperature is too high",
    "Description": "Temperature > 298",
    "GroupId": "default_Chillers",
    "Severity": "Critical",
    "AggregationWindow": "instant",
    "Fields": [
      "temperature"
    ],
    "Actions": ["Close valves", "Notify manager"],
    "__rulefilterjs": "return (record.__aggregates.temperature > 98) ? true : false;"
  },
  ....
]

Here we see two different rules. Both are connected to the same GroupId, “default_Chillers” (as seen in the CSV file with device groups).

We see a nice description and the specific part of a JavaScript function.

Note: You need to specify the fields used by the filter in the Fields property, and these have to match the fields referenced in the JavaScript line.

Check out the second rule. It contains an extra property named “Actions”. It’s an array with two actions here.

There is also another rule format, based on a time window:

...
  {
    "Id": "default_Elevator_Vibration_Stopped",
    "Name": "Elevator vibration stopped",
    "Description": "Vibration < 0.1",
    "GroupId": "default_Elevators",
    "Severity": "Warning",
    "AggregationWindow": "tumblingwindow10minutes",
    "Fields": [
      "vibration"
    ],
    "__rulefilterjs": "return (record.__aggregates.vibration.avg < 0.1) ? true : false;"
  },
...

Here a rule is defined that raises an alarm if the average vibration of a device in the “default_Elevators” group is below 0.1 over the last ten minutes.

Here are all predefined intervals:

  • tumblingwindow1minutes
  • tumblingwindow5minutes
  • tumblingwindow10minutes

It’s easy to add more of these predefined intervals to the original query.
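
As an impression, a new interval boils down to one extra aggregation branch in the query. Here is a minimal sketch, modeled on the existing one/five/ten-minute branches; the column names are assumed, so treat the real query on GitHub as the reference:

-- Sketch: a hypothetical 'tumblingwindow20minutes' branch
SELECT
  T.IoTHub.ConnectionDeviceId AS deviceid,
  AVG(T.pressure) AS [pressure.avg],
  'tumblingwindow20minutes' AS aggregationWindow
FROM
  DeviceTelemetry T TIMESTAMP BY T.EventEnqueuedUtcTime
GROUP BY
  T.IoTHub.ConnectionDeviceId, TumblingWindow(minute, 20)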

This concludes the input into the query.

Outputs

Before we check out the query, let’s look at the output.

Originally, Microsoft defined three different outputs: one to an Event Hub and two to the same location in Cosmos DB. Strangely, the Messages output to Cosmos DB never made it into the query. So it’s safe to ignore this unneeded output.

This leaves us with two different outputs.

For convenience, I used two Azure Functions while testing the rules engine. Later on, we will see what arrives at these outputs.

Functions

A large part of the magic is executed using three user-defined JavaScript functions:

  1. flattenMeasurements
  2. removeUnusedProperties
  3. applyRuleFilter

The third function, applyRuleFilter, gets the JavaScript snippet from the rule injected into its own JavaScript. You can experiment with more complex filters, e.g. by referencing other, new JavaScript functions.
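
To make this a bit less abstract, here is a minimal sketch of how such a UDF could evaluate the injected snippet. The names and exact mechanics are my assumption; check the actual UDFs in the GitHub repository:

// Sketch only: an ASA JavaScript UDF has a single main() entry point.
// We assume the rule's __rulefilterjs string travels with the record
// (after the join with the rules reference data) and is compiled on the fly.
function main(record) {
    if (!record.__rulefilterjs) {
        return false; // no filter defined, so no alarm
    }
    var filter = new Function('record', record.__rulefilterjs);
    return filter(record) === true; // true means: raise an alarm
}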

For now, check out the ‘record’ variable, which represents either the original message or the aggregation.
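
Based on the rule filters we saw earlier, the __aggregates part of that record comes in two shapes (my interpretation of the rules shown above):

// 'instant' rule: __aggregates holds the raw reading
{ "__aggregates": { "pressure": 457.464665566787 } }

// tumbling-window rule: __aggregates holds the computed aggregates
{ "__aggregates": { "vibration": { "avg": 0.05 } } }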

Query

After I deployed my own accelerator, I noticed the script only supports alarms and tasks; the Messages output was not used. I checked this against the original code on GitHub.

GitHub contains two queries:

  1. A script named alarmsOnlyQuery.asaql
  2. A script named script.asaql

This second script also supports the Messages output, and that’s the only difference.

Let’s check out the last part of the query, outputting the messages:

...

-- Output alarm events
SELECT
  CA.[doc.schemaVersion],
  CA.[doc.schema],
  CA.[status],
  CA.[logic],
  CA.[created],
  CA.[modified],
  CA.[rule.description],
  CA.[rule.severity],
  CA.[rule.id],
  CA.[device.id],
  CA.[device.msg.received]
INTO
  Alarms
FROM
  CombineAlarms CA PARTITION BY PartitionId

-- Output action events
SELECT
  CA.[created],
  CA.[modified],
  CA.[rule.description],
  CA.[rule.severity],
  CA.[rule.id],
  CA.[rule.actions],
  CA.[device.id],
  CA.[device.msg.received]
INTO
  Actions
FROM
  CombineAlarms CA PARTITION BY __partitionid
WHERE
  CA.[rule.actions] IS NOT NULL

-- Output origin telemetry messages
SELECT
  CONCAT(T.IoTHub.ConnectionDeviceId, ';', CAST(DATEDIFF(millisecond, '1970-01-01T00:00:00Z', T.EventEnqueuedUtcTime) AS nvarchar(max))) as id,
  1 as [doc.schemaVersion],
  'd2cmessage' as [doc.schema],
  T.IoTHub.ConnectionDeviceId as [device.id],
  'device-sensors;v1' as [device.msg.schema],
  'StreamingJobs' as [data.schema],
  DATEDIFF(millisecond, '1970-01-01T00:00:00Z', System.Timestamp) as [device.msg.created],
  DATEDIFF(millisecond, '1970-01-01T00:00:00Z', T.EventEnqueuedUtcTime) as [device.msg.received],
  udf.removeUnusedProperties(T) as Data
INTO
  Messages
FROM
  DeviceTelemetry T PARTITION BY PartitionId TIMESTAMP BY T.EventEnqueuedUtcTime

Just copy one of the two scripts into your own Stream Analytics job.

Generating alarms and tasks

At this point, you will have copied the rules engine:

  1. Created an IoTHub with a consumer group named “asa”
  2. Created an ASA job
  3. Added the IoTHub stream input and the two reference inputs
  4. Copied the two reference files to your own blob storage container
  5. Made sure your IoT device is known in the device groups reference file
  6. Added the three JavaScript UDFs
  7. Added one of the scripts
  8. Created two or three outputs

In this example, I created three Azure Functions, triggered by an HTTP call. These can be used as ASA outputs.
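
If you want to follow along, a minimal HTTP-triggered function that just logs whatever Stream Analytics posts to it is enough. A sketch in JavaScript (the function name and bindings are up to you):

// Minimal HTTP-triggered Azure Function acting as an ASA output sink.
// It only logs the batch of records that Stream Analytics posts to it.
module.exports = async function (context, req) {
    context.log('ASA output received:', JSON.stringify(req.body));
    context.res = { status: 200 }; // acknowledge, so ASA does not retry
};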

So just start the job and send the same message as seen at the start of this blog.
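
For example, with the Azure CLI and the azure-iot extension installed, a single device-to-cloud message can be sent like this (hub name and device ID are placeholders):

az iot device send-d2c-message --hub-name {your-hub-name} --device-id chiller-01.0 --data '{ "temperature": 99.13, "temperature_unit": "F", "humidity": 369.09, "humidity_unit": "%", "pressure": 457.46, "pressure_unit": "psig" }'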

I received this message:

[
  {
    "id": "chiller-01.0;1552160031102",
    "doc.schemaversion": 1,
    "doc.schema": "d2cmessage",
    "device.id": "chiller-01.0",
    "device.msg.schema": "device-sensors;v1",
    "data.schema": "StreamingJobs",
    "device.msg.created": 1552160031102,
    "device.msg.received": 1552160031102,
    "data": {
      "temperature": 99.1307423000134,
      "temperature_unit": "F",
      "humidity": 369.0896886846003,
      "humidity_unit": "%",
      "pressure": 457.464665566787,
      "pressure_unit": "psig",
      "PartitionId": 0
    }
  }
]

This same message also triggers two alarms (temperature too high and pressure too high). Here is the message as it arrived at the Azure function connected to the Alarm output:

[
  {
    "doc.schemaversion": 1,
    "doc.schema": "alarm",
    "status": "open",
    "logic": "1Rule-1Device-1Message",
    "created": 1552160031102,
    "modified": 1552160031102,
    "rule.description": "Pressure > 298",
    "rule.severity": "Critical",
    "rule.id": "default_Chiller_Pressure_High",
    "device.id": "chiller-01.0",
    "device.msg.received": 1552160031102
  },
  {
    "doc.schemaversion": 1,
    "doc.schema": "alarm",
    "status": "open",
    "logic": "1Rule-1Device-1Message",
    "created": 1552160031102,
    "modified": 1552160031102,
    "rule.description": "Temperature > 298",
    "rule.severity": "Critical",
    "rule.id": "default_Chiller_Temperature_High",
    "device.id": "chiller-01.0",
    "device.msg.received": 1552160031102
  }
]

Note: The timestamps are written as Unix epoch milliseconds; this matches the DATEDIFF(millisecond, '1970-01-01T00:00:00Z', …) expressions in the query.
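
If you want a human-readable timestamp again, converting such an epoch value back is a one-liner, for example in JavaScript:

// Convert epoch milliseconds back to an ISO 8601 timestamp
new Date(1552160031102).toISOString(); // "2019-03-09T19:33:51.102Z"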

The rule for the high temperature also contains this Actions property:

"Actions": ["Close valves", "Notify manager"],

So we also see the arrival of an action:

[
  {
    "created": 1552160031102,
    "modified": 1552160031102,
    "rule.description": "Temperature > 298",
    "rule.severity": "Critical",
    "rule.id": "default_Chiller_Temperature_High",
    "rule.actions": [
      "Close valves",
      "Notify manager"
    ],
    "device.id": "chiller-01.0",
    "device.msg.received": 1552160031102
  }
]

Conclusion

This rules engine is quite powerful and a great start for your own projects.

You can add and update rules on the fly: upload a new rules file under a newer YYYY-MM-DD/HH-mm path, and the Stream Analytics job will pick it up as refreshed reference data.

If you have a more complex format for incoming or outgoing messages, expect some serious rework and testing to get it supported in your updated rules engine. Still, you now know what to expect from a solution like this.

And perhaps this is a good motivation to adopt the complete Remote Monitoring accelerator.

 
