IoT Hub supports uploading files, perfect for Cognitive Services vision

Usually, when I mention the Azure IoT Hub, you can expect that some telemetry is involved. But the Azure IoT Hub is evolving: with a new feature, it is now possible to upload blobs.

Why would we want to upload blobs? Well, I think it’s great to have the possibility to upload a file full of telemetry in one go.

But sometimes the telemetry is not a bunch of values; it’s more like a big bag of bytes 🙂 In the example below, we will upload an image and pass it on to Microsoft Cognitive Services (previously known as Project Oxford) for visual analysis.

Uploading a file is not that hard. In the Azure portal, you only have to attach an Azure Blob Storage account to the IoT Hub:


Yes, files uploaded using the IoT Hub end up in Azure Blob Storage. You only have to define a container where the blobs will be put.

Next, we create a device client app. First, we create this AzureIoTHub class which can be generated using the IoT Hub Connected Service extension:

internal static class AzureIoTHub
{
    private const string DeviceConnectionString = ";DeviceId=CameraOne;SharedAccessKey=[device api key]";

    public static async Task SendToBlobAsync(string fileName)
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(DeviceConnectionString, TransportType.Amqp);
        Console.WriteLine("Uploading file: {0}", fileName);
        using (var sourceData = new FileStream(fileName, FileMode.Open))
        {
            await deviceClient.UploadToBlobAsync(Path.GetFileName(fileName), sourceData);
        }
    }

    public static async Task<string> ReceiveCloudToDeviceMessageAsync()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(DeviceConnectionString, TransportType.Amqp);
        while (true)
        {
            var receivedMessage = await deviceClient.ReceiveAsync();
            if (receivedMessage != null)
            {
                var messageData = Encoding.ASCII.GetString(receivedMessage.GetBytes());
                await deviceClient.CompleteAsync(receivedMessage);
                return messageData;
            }
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}

I changed the generated class a bit; we now pass a file to deviceClient.UploadToBlobAsync().

Note: I tried this example using a UWP app, a WPF app and a Console app. This code only works in the UWP app. At this moment, not every platform flavor of the V1.0.21 device client supports the upload functionality. According to the comments here, we just have to wait.

Let’s call the two methods: one for sending a file, the other for listening for commands.

Just to make it interesting, I will demonstrate how to pass an image and have it analyzed by the Computer Vision API of Microsoft Cognitive Services (AKA Project Oxford). Project Oxford is a set of cognitive APIs:

“Microsoft Cognitive Services let you build apps with powerful algorithms using just a few lines of code. They work across devices and platforms such as iOS, Android, and Windows, keep improving, and are easy to set up.”

Most of the services are free for limited use. The service I introduce here is “Computer Vision”. This service tells you what’s in a picture. It recognizes people, objects, backgrounds, etc. Really cool! So just create a free account and get a unique API key for the Computer Vision service.

First, this is the code you need to send an image to the IoT Hub:

private static void Main(string[] args)
{
    Console.WriteLine("Upload a file");
    AzureIoTHub.SendToBlobAsync("picture01.jpg").Wait(); // local image to upload (example file name)
    Console.WriteLine("Waiting for description...");
    var command = AzureIoTHub.ReceiveCloudToDeviceMessageAsync().Result;
    Console.WriteLine($"Response is {command}");
    Console.WriteLine("Press a key to exit");
    Console.ReadKey();
}

The image will end up in the Azure Blob Storage. Normally this would raise an issue: if only the file name were used to save the file, we would lose track of which device sent the image. The solution is simple: the path of each uploaded file is prefixed with the name of the device. So inside the container, the first-level directories reflect the devices:


Every time a blob is created, we trigger an Azure Function. This is the code of the function:

#r "System.Runtime"
#r "System.Threading.Tasks"
#r "System.IO"

using System.IO;
using System;
using Microsoft.Azure.Devices;
using System.Text;
using Newtonsoft.Json;
using Microsoft.ProjectOxford.Vision;
using System.Runtime;
using System.Threading.Tasks;

public static async Task Run(Stream myBlob, string name, TraceWriter log)
{
  var fileName = Path.GetFileName(name);
  var device = Path.GetDirectoryName(name);

  log.Info($"IoT Blob Name:{fileName} of device {device}; Size: {myBlob.Length} Bytes");

  var connectionString = ";SharedAccessKeyName=iothubowner;SharedAccessKey=[access key]";

  var serviceClient = ServiceClient.CreateFromConnectionString(connectionString);

  // vision

  var visionServiceClient = new VisionServiceClient("[Vision API key]");

  var visualFeatures = new VisualFeature[] { VisualFeature.Description };

  var analysisResult = await visionServiceClient.AnalyzeImageAsync(myBlob, visualFeatures);

  string result = string.Empty;

  foreach (var descriptionCaption in analysisResult.Description.Captions)
  {
    result += $"'{descriptionCaption.Text} ({descriptionCaption.Confidence})'";
  }

  result += " (";

  foreach (var descriptionTag in analysisResult.Description.Tags)
  {
    result += $"'{descriptionTag}'";
  }

  result += ")";

  var commandMessage = new Message(Encoding.ASCII.GetBytes(result));
  await serviceClient.SendAsync(device, commandMessage);
  log.Info($"Command sent to {device}");
}

We first determine the name of the device. We need it to pass a command back to the device.
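To make the path convention concrete, here is a minimal standalone sketch (with a hypothetical blob name) of how the function splits the blob name into device and file name:

```csharp
// Minimal sketch with a hypothetical blob name: the IoT Hub prefixes each
// uploaded blob with the device id, so the directory part is the device.
using System;
using System.IO;

internal static class BlobNameDemo
{
    private static void Main()
    {
        var name = "CameraOne/picture01.jpg"; // blob name as handed to the function

        var device = Path.GetDirectoryName(name); // "CameraOne"
        var fileName = Path.GetFileName(name);    // "picture01.jpg"

        Console.WriteLine($"Device: {device}, file: {fileName}");
    }
}
```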

Next, we call the Vision service. For simplicity, we are only interested in the description (check out the wide range of visual features). When the Vision API returns its decision, we pass it back to the device.
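As a sketch (not part of the original function), asking for additional visual features only means extending the array inside the function; the enum members below are my reading of the Microsoft.ProjectOxford.Vision package, so verify them against your package version:

```csharp
// Hypothetical extension of the features array inside Run(): request tags
// and faces alongside the description.
var visualFeatures = new VisualFeature[]
{
    VisualFeature.Description, // captions plus a tag list, as used above
    VisualFeature.Tags,        // standalone tags with confidence scores
    VisualFeature.Faces        // face rectangles with age/gender estimates
};

var analysisResult = await visionServiceClient.AnalyzeImageAsync(myBlob, visualFeatures);

// The extra results then show up on analysisResult.Tags and analysisResult.Faces.
```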

To connect to both the Azure IoT Hub and Project Oxford, we need to load some NuGet packages (Microsoft.Azure.Devices and Microsoft.ProjectOxford.Vision). Reference them (along with some dependencies) in the project.json file:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.AspNet.WebApi.Client": "5.2.3",
        "Microsoft.AspNet.WebApi.Core": "5.2.3",
        "Microsoft.Azure.Amqp": "1.1.5",
        "Microsoft.Azure.Devices": "1.0.9",
        "Newtonsoft.Json": "8.0.3",
        "Microsoft.ProjectOxford.Vision": "1.0.354"
      }
    }
  }
}

And that’s it.

So let’s look at this picture:


This will result in a description like this:

“‘a group of people sitting at a desk (0.516866604774419)’ (‘person”indoor”table”people”group”computer”man”sitting”food”laptop”office”room”woman”large”desk”standing’)”

Not bad, is it?


3 thoughts on “IoT Hub supports uploading files, perfect for Cognitive Services vision”

  1. I don’t see how you have passed the device name as part of the file path when you upload?

    Also – is there any point in using IoT Hub? Why not just upload to a container (without using IoT Hub) – the Azure Function will still get called.

    1. The device name is added to the path automatically (so when there are multiple devices, each device has its own ‘directory’). Extract it with “var device = Path.GetDirectoryName(name);”. I could use the plain Azure Storage library, but then I would have to reuse the same credentials/secret for all devices, or use an alternative way to create SAS tokens for each device using e.g. a WebApi service. But with the IoT Hub, I get the same level of security for free and I do not have to write a single line of code.

      1. Oh I see. So each device just provides its own key, whereas my suggestion of just uploading the blob directly would require shared credentials to upload (which is potentially insecure).

        Will try this out. Good article.
