Install and run the Spatial Analysis container (preview)
The Spatial Analysis container enables you to analyze real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. Containers help you meet specific security and data governance requirements.
Prerequisites
- Azure subscription - Create one for free
- Once you have your Azure subscription, create a Computer Vision resource for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
- You'll need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
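If you prefer the Azure CLI to the portal, a minimal sketch like the following creates the Vision resource and prints its key and endpoint (the resource name, resource group, and region are placeholders you choose):
# Create a Computer Vision resource on the Standard S1 tier (all names and the region are placeholders)
az cognitiveservices account create --name "<vision-resource-name>" --resource-group "<resource-group-name>" --kind ComputerVision --sku S1 --location "<your-region>" --yes
# Print the key and endpoint to use later
az cognitiveservices account keys list --name "<vision-resource-name>" --resource-group "<resource-group-name>"
az cognitiveservices account show --name "<vision-resource-name>" --resource-group "<resource-group-name>" --query "properties.endpoint"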
Spatial Analysis container requirements
To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, NVIDIA Tesla T4, A2, 1080Ti, or 2080Ti). The container runs on any desktop machine that meets the minimum requirements. We'll refer to this device as the host computer.
Minimum hardware requirements
- 4 GB of system RAM
- 4 GB of GPU RAM
- 8 core CPU
- One NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, NVIDIA Tesla T4, A2, 1080Ti, or 2080Ti)
- 20 GB of HDD space
Recommended hardware
- 32 GB of system RAM
- 16 GB of GPU RAM
- 8 core CPU
- Two NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, NVIDIA Tesla T4, A2, 1080Ti, or 2080Ti)
- 50 GB of SSD space
In this article, you'll download and install the following software packages on the host computer (instructions follow below):
- NVIDIA graphics drivers and NVIDIA CUDA Toolkit. The minimum GPU driver version is 460 with CUDA 11.1.
- Configurations for NVIDIA MPS (Multi-Process Service).
- Docker CE and NVIDIA-Docker2
- Azure IoT Edge runtime.
| Requirement | Description |
|---|---|
| Camera | The Spatial Analysis container isn't tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
| Linux OS | Ubuntu Desktop 18.04 LTS must be installed on the host computer. |
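As an optional sanity check that a camera meets these requirements, you can probe its stream from the host computer, for example with ffprobe from the ffmpeg package (the RTSP URL below is a placeholder for your camera's actual address and credentials):
# Inspect the camera's RTSP stream; look for an H.264 video stream at 1080p and 15 FPS
sudo apt-get install -y ffmpeg
ffprobe -rtsp_transport tcp -i "rtsp://<user>:<password>@<camera-ip>:554/<stream-path>" -show_streams -select_streams v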
Set up the host computer
If you're configuring a desktop machine, follow the instructions below. If you're using an Azure VM with GPU instead, select the Virtual machine instructions.
Install the NVIDIA CUDA Toolkit and NVIDIA graphics drivers on the host computer
Use the following bash script to install the required NVIDIA graphics drivers and CUDA Toolkit.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda
Reboot the machine, and run the following command.
nvidia-smi
Confirm that the output lists your GPU(s) and reports a driver version of at least 460 with CUDA 11.1 or later.
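If you want a compact summary instead of the full table, nvidia-smi can also query specific fields (field availability can vary slightly by driver version):
# Print GPU name, driver version, and total GPU memory in CSV form
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv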
Install Docker CE and nvidia-docker2 on the host computer
Install Docker CE on the host computer.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Install the nvidia-docker2 software package.
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y docker-ce nvidia-docker2
sudo systemctl restart docker
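Before moving on, you can confirm that containers can reach the GPU(s) by running nvidia-smi inside a test container. This is an optional check, and the CUDA base image tag below is only an example; it may need to be adjusted to a tag currently published on Docker Hub.
# Run nvidia-smi inside a container to verify GPU access through nvidia-docker2
sudo docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu18.04 nvidia-smi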
Enable NVIDIA MPS on the host computer
Tip
- Don't install MPS if your GPU compute capability is less than 7.x (pre-Volta). See CUDA Compatibility for reference.
- Run the MPS instructions from a terminal window on the host computer. Not inside your Docker container instance.
For best performance and utilization, configure the host computer's GPU(s) for NVIDIA Multi-Process Service (MPS).
# Set GPU(s) compute mode to EXCLUSIVE_PROCESS
sudo nvidia-smi --compute-mode=EXCLUSIVE_PROCESS
# Cronjob for setting GPU(s) compute mode to EXCLUSIVE_PROCESS on boot
echo "SHELL=/bin/bash" > /tmp/nvidia-mps-cronjob
echo "PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" >> /tmp/nvidia-mps-cronjob
echo "@reboot root nvidia-smi --compute-mode=EXCLUSIVE_PROCESS" >> /tmp/nvidia-mps-cronjob
sudo chown root:root /tmp/nvidia-mps-cronjob
sudo mv /tmp/nvidia-mps-cronjob /etc/cron.d/
# Service entry for automatically starting MPS control daemon
echo "[Unit]" > /tmp/nvidia-mps.service
echo "Description=NVIDIA MPS control service" >> /tmp/nvidia-mps.service
echo "After=cron.service" >> /tmp/nvidia-mps.service
echo "" >> /tmp/nvidia-mps.service
echo "[Service]" >> /tmp/nvidia-mps.service
echo "Restart=on-failure" >> /tmp/nvidia-mps.service
echo "ExecStart=/usr/bin/nvidia-cuda-mps-control -f" >> /tmp/nvidia-mps.service
echo "" >> /tmp/nvidia-mps.service
echo "[Install]" >> /tmp/nvidia-mps.service
echo "WantedBy=multi-user.target" >> /tmp/nvidia-mps.service
sudo chown root:root /tmp/nvidia-mps.service
sudo mv /tmp/nvidia-mps.service /etc/systemd/system/
sudo systemctl --now enable nvidia-mps.service
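You can verify that the compute mode was applied and that the MPS control daemon is running:
# Confirm the GPU compute mode is EXCLUSIVE_PROCESS and the MPS service is active
nvidia-smi --query-gpu=compute_mode --format=csv
sudo systemctl status nvidia-mps.service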
Configure Azure IoT Edge on the host computer
To deploy the Spatial Analysis container on the host computer, create an instance of an Azure IoT Hub service using the Standard (S1) or Free (F0) pricing tier.
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the Azure portal.
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
sudo az login
sudo az account set --subscription "<name or ID of Azure Subscription>"
sudo az group create --name "<resource-group-name>" --location "<your-region>"
See Region Support for available regions.
sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>"
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
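The device-identity commands rely on the azure-iot extension for the Azure CLI. If the previous command reports that the command group isn't found, add the extension first; you can then list the registered devices to confirm the edge device identity was created:
# Add the IoT extension if needed, then confirm the edge device identity exists
sudo az extension add --name azure-iot
sudo az iot hub device-identity list --hub-name "<iothub-name>"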
You need to install the Azure IoT Edge runtime (the 1.1 release). Follow these steps to download the correct version:
Ubuntu Server 18.04:
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
Copy the generated list.
sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
Install the Microsoft GPG public key.
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
Update the package lists on your device.
sudo apt-get update
Install the 1.1 release:
sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
Next, register the host computer as an IoT Edge device in your IoT Hub instance, using a connection string. Copy the connection string from the IoT Edge device you created earlier, or retrieve it with the following Azure CLI command.
sudo az iot hub device-identity connection-string show --device-id "<device-name>" --hub-name "<iothub-name>"
On the host computer, open /etc/iotedge/config.yaml for editing. Replace ADD DEVICE CONNECTION STRING HERE with the connection string. Save and close the file.
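For reference, after editing, the provisioning section of /etc/iotedge/config.yaml should look roughly like this (manual provisioning with your device connection string; the value shown is a placeholder):
# Manual provisioning with a device connection string (placeholder values)
provisioning:
  source: "manual"
  device_connection_string: "HostName=<iothub-name>.azure-devices.net;DeviceId=<device-name>;SharedAccessKey=<key>"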
Run this command to restart the IoT Edge service on the host computer.
sudo systemctl restart iotedge
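To confirm the runtime came back up with the new configuration, check the service status and run the built-in configuration checks:
# Verify the IoT Edge daemon is running and the device configuration is valid
sudo systemctl status iotedge
sudo iotedge check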
Deploy the Spatial Analysis container as an IoT Module on the host computer, either from the Azure portal or Azure CLI. If you're using the portal, set the image URI to the location of your Azure Container Registry.
Use the below steps to deploy the container using the Azure CLI.
IoT Deployment manifest
To streamline container deployment on multiple host computers, you can create a deployment manifest file to specify the container creation options and environment variables. You can find example deployment manifests for desktop machines and Azure VMs with GPU on GitHub.
The following table shows the various environment variables used by the IoT Edge module. You can also set them in the deployment manifest linked above, using the env attribute in spatialanalysis. A manifest excerpt follows the Important note below.
| Setting Name | Value | Description |
|---|---|---|
| ARCHON_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values |
| ARCHON_SHARED_BUFFER_LIMIT | 377487360 | Do not modify |
| ARCHON_PERF_MARKER | false | Set this to true for performance logging, otherwise this should be false |
| ARCHON_NODES_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values |
| OMP_WAIT_POLICY | PASSIVE | Do not modify |
| QT_X11_NO_MITSHM | 1 | Do not modify |
| APIKEY | your API Key | Collect this value from the Azure portal for your Vision resource. You can find it in the Keys and endpoint section for your resource. |
| BILLING | your Endpoint URI | Collect this value from the Azure portal for your Vision resource. You can find it in the Keys and endpoint section for your resource. |
| EULA | accept | This value needs to be set to accept for the container to run |
| DISPLAY | :1 | This value needs to be the same as the output of echo $DISPLAY on the host computer. Azure Stack Edge devices don't have a display, so this setting isn't applicable to them |
| KEY_ENV | ASE Encryption key | Add this environment variable if Video_URL is an obfuscated string |
| IV_ENV | Initialization vector | Add this environment variable if Video_URL is an obfuscated string |
Important
The Eula, Billing, and ApiKey options must be specified to run the container; otherwise, the container won't start. For more information, see Billing.
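For reference, in the deployment manifest these settings are supplied through the env attribute of the spatialanalysis module, using the standard IoT Edge manifest format. A trimmed excerpt with placeholder values might look like this:
"spatialanalysis": {
    "env": {
        "EULA": { "value": "accept" },
        "BILLING": { "value": "<your-endpoint-uri>" },
        "APIKEY": { "value": "<your-api-key>" },
        "DISPLAY": { "value": ":1" },
        "ARCHON_SHARED_BUFFER_LIMIT": { "value": "377487360" }
    }
}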
Once you update the Deployment manifest for a desktop machine or Azure VM with GPU with your own settings and selection of operations, you can use the below Azure CLI command to deploy the container on the host computer, as an IoT Edge Module.
sudo az login
sudo az extension add --name azure-iot
sudo az iot edge set-modules --hub-name "<iothub-name>" --device-id "<device-name>" --content DeploymentManifest.json --subscription "<name or ID of Azure Subscription>"
| Parameter | Description |
|---|---|
| --hub-name | Your Azure IoT Hub name. |
| --content | The name of the deployment file. |
| --device-id | Your IoT Edge device name for the host computer. |
| --subscription | Subscription ID or name. |
This command will start the deployment. Navigate to the page of your Azure IoT Hub instance in the Azure portal to see the deployment status. The status may show as 417 - The device's deployment configuration is not set until the device finishes downloading the container images and starts running.
Validate that the deployment is successful
There are several ways to validate that the container is running. Locate the Runtime Status in the IoT Edge Module Settings for the Spatial Analysis module in your Azure IoT Hub instance on the Azure portal. Validate that the Desired Value and Reported Value for the Runtime Status is Running.
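You can also check from the host computer itself. The IoT Edge CLI lists the modules the device is running and can show a module's logs; the module name below assumes the module is named spatialanalysis, as in the example manifest:
# List running IoT Edge modules and view the Spatial Analysis module's logs
sudo iotedge list
sudo iotedge logs spatialanalysis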
Once the deployment is complete and the container is running, the host computer starts sending events to the Azure IoT Hub. If you used the .debug version of the operations, you'll see a visualizer window for each camera you configured in the deployment manifest. You can now define the lines and zones you want to monitor in the deployment manifest and follow the instructions to deploy again.
Configure the operations performed by Spatial Analysis
Use the Spatial Analysis operations to configure the container for the cameras you've connected, the operations to run, and more. For each camera device you configure, the Spatial Analysis operations generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
Use the output generated by the container
If you want to start consuming the output generated by the container, see the following articles:
- Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see Read device-to-cloud messages from the built-in endpoint.
- Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc.
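As a quick check before wiring up an SDK, you can also watch events arriving at the hub's built-in endpoint with the Azure CLI (this requires the azure-iot extension installed earlier; the timeout is in seconds):
# Monitor device-to-cloud messages arriving at the IoT Hub built-in endpoint
sudo az iot hub monitor-events --hub-name "<iothub-name>" --timeout 60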
Troubleshooting
If you encounter issues when starting or running the container, see Telemetry and troubleshooting for steps to resolve common issues. That article also explains how to generate and collect logs and how to collect system health information.
If you're having trouble running an Azure AI services container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Azure AI containers from functioning as expected.
To get the container, use the following docker pull command:
docker pull mcr.microsoft.com/azure-cognitive-services/diagnostic
Then run the container. Replace {ENDPOINT_URI} with your endpoint, and replace {API_KEY} with the key for your resource:
docker run --rm mcr.microsoft.com/azure-cognitive-services/diagnostic \
eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
The container will test for network connectivity to the billing endpoint.
Billing
The Spatial Analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
Summary
In this article, you learned the concepts and workflow for downloading, installing, and running the Spatial Analysis container. In summary:
- Spatial Analysis is a Linux container for Docker.
- Container images are downloaded from the Microsoft Container Registry.
- Container images run as IoT Modules in Azure IoT Edge.
- Configure the container and deploy it on a host machine.