Microsoft describes Azure IoT Operations as a “range of modular, scalable, and highly available services” that run on Azure Arc-enabled edge Kubernetes clusters. It natively integrates with other Microsoft services such as Event Hubs or Fabric, the latter of which has been generally available since November 2023. Our blog summarises the new Azure service’s features and deployment.
Features of Azure IoT Operations
From an architectural perspective, Azure IoT Operations consists of two elements: IoT Operations itself and the associated Experience Portal. The former serves as an umbrella term for the containerised services it comprises, such as the Azure IoT Data Processor or the Azure IoT OPC UA Broker. The Experience Portal, on the other hand, enables resources and data pipelines in IoT Operations to be managed via a clear web interface. The most striking new feature of IoT Operations is the use of Kubernetes as the orchestration system for the individual services. Azure Arc enables the clusters to be managed from the cloud and other Azure services to be integrated. The advantages of focussing on Kubernetes are obvious:
Scalability and high availability of the services used
Lower barrier to entry thanks to platform-independent and powerful edge infrastructure
Modularisation of the services to be used and therefore easier adaptation to the given requirements
Use of custom resource definitions (CRDs) developed in-house or already publicly available
Simplified operationalisation with the help of custom locations in Azure and the option to use familiar GitOps tools such as Flux or ArgoCD
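The GitOps option mentioned above can be sketched with the Azure CLI: once a cluster is Arc-enabled, a Flux v2 configuration can point it at a Git repository. The resource group, cluster name, repository URL and kustomization path below are placeholders, not values from this article:

```shell
# Attach a Flux v2 GitOps configuration to an Arc-enabled cluster
# (all resource names and the repository URL are placeholders)
az k8s-configuration flux create \
  --resource-group rg-iot-edge \
  --cluster-name my-edge-cluster \
  --cluster-type connectedClusters \
  --name edge-gitops \
  --url https://github.com/example-org/edge-config \
  --branch main \
  --kustomization name=apps path=./deploy prune=true
```

Flux then reconciles the manifests in the repository against the edge cluster, so deployments can be driven entirely from Git.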
The new Azure IoT Operations (AIO) service offers many components for tailoring a solution to its requirements. As shown in the overview of the supplied components, data can, for example, be pre-processed on the edge cluster using the Data Processor. In contrast to the existing IoT Edge, this is much easier and more user-friendly thanks to the new pipeline-based logic. For comparison: with IoT Edge, corresponding edge modules must first be developed and deployed on the edge devices. With the IoT Operations Experience Portal, on the other hand, data pipelines can be defined and configured directly in the UI. Connectors to various Azure services, such as Event Hubs, ADLS or Fabric, are of course also provided. Alternatively or additionally, data can be published via the MQTT broker. The Azure IoT Orchestrator is also deployed on the cluster to orchestrate the services used. Overall, the shift from services developed as modules towards containerised, portable microservices in Kubernetes becomes clear, especially compared to IoT Edge. This simplifies development and deployment in IoT Operations.
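To make the pipeline-based logic more concrete, the following minimal Python sketch illustrates the idea of composable processing stages applied to incoming messages. This is purely illustrative and is not the Data Processor's actual API; in AIO, such stages are configured in the Experience Portal rather than coded by hand:

```python
import json

# Hypothetical stage functions illustrating pipeline-based processing;
# the real Data Processor defines stages in the Experience Portal UI.
def parse(msg: bytes) -> dict:
    """Deserialise a raw MQTT payload into a record."""
    return json.loads(msg)

def convert_units(record: dict) -> dict:
    """Example transform: derive a Fahrenheit value from a Celsius reading."""
    record["temperature_f"] = record["temperature_c"] * 9 / 5 + 32
    return record

def run_pipeline(msg: bytes, stages) -> dict:
    """Apply each stage in order, like a Data Processor pipeline."""
    result = msg
    for stage in stages:
        result = stage(result)
    return result

payload = json.dumps({"temperature_c": 20}).encode()
print(run_pipeline(payload, [parse, convert_units]))
# {'temperature_c': 20, 'temperature_f': 68.0}
```

The point of the comparison with IoT Edge is exactly this: instead of building and deploying a custom edge module for such a transformation, the stages are simply declared and chained.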
The following illustration shows how IoT Operations can be integrated into b.telligent's reference architecture for Azure IoT.
Deployment of Azure IoT Operations
Azure IoT Operations can be integrated into various network architectures. The two most commonly used architectures are:
Single Layer Networks (see https://learn.microsoft.com/de-de/azure/iot-operations/get-started/quickstart-deploy?tabs=linux)
Layered Networks (see https://learn.microsoft.com/de-de/azure/iot-operations/manage-layered-network/overview-layered-network)
Layered networks are used in practice in Industrial IoT and are often mapped using the Purdue Enterprise Reference Architecture (PERA) (see https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture). The classification of the edge layers into the different network levels is shown in the figure below.
The following section looks at the deployment of Azure IoT Operations in single layer networks. To deploy the modular and containerised services in IoT Operations, some prerequisites are required first:
Azure Subscription
Service Principal to enable communication between the cluster and the cloud resources
Edge device (e.g. single node cluster, 8 core, 16 GB RAM, see documentation)
Azure CLI with the azure-iot-ops extension
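The CLI-related prerequisites can be set up roughly as follows; the service principal name and scope are placeholders for your own subscription and resource group:

```shell
# Install (or upgrade) the IoT Operations extension for the Azure CLI
az extension add --upgrade --name azure-iot-ops

# Create a service principal for cluster-to-cloud communication
# (display name, role and scope are placeholders)
az ad sp create-for-rbac --name sp-aio-edge \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/rg-iot-edge
```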
The first step is to set up a Kubernetes cluster on the edge device (in this case K3s). It is important to establish the connection to Azure using Azure Arc. A key vault must then be created, which will be required later in the deployment to manage secrets. The following resources are therefore created in Azure before deployment:
Azure Arc (Kubernetes)
Key Vault
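Under the assumption of a K3s single-node cluster, these preparation steps might look as follows; cluster, resource group and Key Vault names are placeholders:

```shell
# Install K3s on the edge device
curl -sfL https://get.k3s.io | sh -

# Connect the cluster to Azure via Azure Arc (names are placeholders)
az connectedk8s connect --name my-edge-cluster --resource-group rg-iot-edge

# Create the Key Vault used for secret management during deployment
az keyvault create --name kv-aio-demo --resource-group rg-iot-edge
```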
IoT Operations can then be deployed to the Arc-enabled cluster. To do this, the subscription, resource group and cluster name must first be entered in the Basics tab of the deployment process. A simulated PLC is activated in the subsequent Configuration tab, and the MQ mode is set to Auto. The previously created Key Vault is then selected in the Automation tab. This generates an Azure CLI command that must be executed on the edge device. Deployment then takes place and can be monitored using familiar kubectl commands.
Please note that the pods are deployed in the azure-iot-operations namespace. In addition, the cluster is now also visible in the Azure Portal in the resource group used.
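The generated command and the subsequent monitoring might look roughly like this; the resource names are placeholders, and the exact flags of the preview CLI may differ between versions:

```shell
# Deploy IoT Operations to the Arc-enabled cluster
# (the portal generates this command; flags may vary across preview versions)
az iot ops init --cluster my-edge-cluster \
  --resource-group rg-iot-edge \
  --kv-id $(az keyvault show --name kv-aio-demo --query id -o tsv)

# Watch the IoT Operations pods come up in their namespace
kubectl get pods -n azure-iot-operations --watch
```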
Conclusion
Even though IoT Operations is still in the preview phase and therefore not yet suitable for production use, the new IoT service already shows great potential. After looking at the deployment in this first part of our blog post series, the second part will illustrate the configuration of the deployed services using an example.
Who is b.telligent?
Do you want to replace the IoT core with a multi-cloud solution and utilise the benefits of other IoT services from Azure or Amazon Web Services? Then get in touch with us and we will support you in the implementation with our expertise and the b.telligent partner network.