Tutorial: Analyze live video by using OpenVINO™ Model Server – AI Extension from Intel

This tutorial shows you how to use the OpenVINO™ Model Server – AI Extension from Intel to analyze a live video feed from a (simulated) IP camera. You'll see how this inference server gives you access to models for detecting objects (a person, a vehicle, or a bike), and a model for classifying vehicles. A subset of the frames in the live video feed is sent to this inference server, and the results are sent to IoT Edge Hub.

This tutorial uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the Detect motion and emit events quickstart.

Note

This tutorial requires the use of an x86-64 machine as your Edge device.

Prerequisites

Tip

When installing Azure IoT Tools, you might be prompted to install Docker. You can ignore the prompt.

Review the sample video

When you set up the Azure resources, a short video of a parking lot is copied to the Linux VM in Azure that you're using as the IoT Edge device. This quickstart uses the video file to simulate a live stream.

Open an application such as VLC media player. Select Ctrl+N and then paste a link to the video to start playback. You see footage of vehicles in a parking lot; most of them are parked, and one is moving.

In this quickstart, you'll use Live Video Analytics on IoT Edge along with the OpenVINO™ Model Server – AI Extension from Intel to detect objects such as vehicles, or to classify them. You'll publish the resulting inference events to IoT Edge Hub.

Overview

Overview diagram

This diagram shows how the signals flow in this quickstart. An edge module simulates an IP camera hosting a Real-Time Streaming Protocol (RTSP) server. An RTSP source node pulls the video feed from this server and sends video frames to the HTTP extension processor node.

The HTTP extension node plays the role of a proxy. It samples the incoming video frames at the rate set by the samplingOptions field and converts the video frames to the specified image type. Then it relays the image over REST to another edge module that runs AI models behind an HTTP endpoint. In this example, that edge module is the OpenVINO™ Model Server – AI Extension from Intel. The HTTP extension processor node gathers the detection results and publishes events to the IoT Hub sink node, which then sends those events to IoT Edge Hub.
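For reference, the sampling behavior is controlled by the samplingOptions field on the HTTP extension processor node in the topology. The sketch below is illustrative; the field values shown here are assumptions, so check the topology file used in this tutorial for the actual settings:

```json
"samplingOptions": {
  "skipSamplesWithoutAnnotation": "false",
  "maximumSamplesPerSecond": "5"
}
```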

In this tutorial, you will:

  1. Create and deploy the media graph, and modify it.
  2. Interpret the results.
  3. Clean up resources.

About OpenVINO™ Model Server – AI Extension from Intel

The Intel® Distribution of OpenVINO™ toolkit (open visual inference and neural network optimization) is a free software kit that helps developers and data scientists speed up computer vision workloads, streamline deep learning inference and deployments, and enable easy, heterogeneous execution across Intel® platforms from edge to cloud. It includes the Intel® Deep Learning Deployment Toolkit with a model optimizer and inference engine, and the Open Model Zoo repository with more than 40 optimized pre-trained models.

To build complex, high-performance live video analytics solutions, the Live Video Analytics on IoT Edge module should be paired with a powerful inference engine that can leverage the scale at the edge. In this tutorial, inference requests are sent to the OpenVINO™ Model Server – AI Extension from Intel, an edge module that has been designed to work with Live Video Analytics on IoT Edge. This inference server module contains the OpenVINO™ Model Server (OVMS), an inference server powered by the OpenVINO™ toolkit that is highly optimized for computer vision workloads and developed for Intel® architectures. An extension has been added to OVMS for easy exchange of video frames and inference results between the inference server and the Live Video Analytics on IoT Edge module, empowering you to run any OpenVINO™ toolkit supported model (you can customize the inference server module by modifying the code). You can further select from the wide variety of acceleration mechanisms provided by Intel® hardware, including CPUs (Atom, Core, Xeon), FPGAs, and VPUs.

In the initial release of this inference server, you have access to the following models:

  • Vehicle Detection (inference URL: http://{module-name}:4000/vehicleDetection)
  • Person/Vehicle/Bike Detection (inference URL: http://{module-name}:4000/personVehicleBikeDetection)
  • Vehicle Classification (inference URL: http://{module-name}:4000/vehicleClassification)
  • Face Detection (inference URL: http://{module-name}:4000/faceDetection)
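All four inference URLs follow the same pattern, so scripts that call the extension can build them from the module name and model name. The helper below is a hypothetical sketch for illustration and is not part of the sample code:

```python
# Hypothetical helper (not part of the sample): builds the inference URL
# for one of the models exposed by the OpenVINO(TM) Model Server module.
MODELS = {
    "vehicleDetection",
    "personVehicleBikeDetection",
    "vehicleClassification",
    "faceDetection",
}

def inference_url(module_name: str, model: str, port: int = 4000) -> str:
    """Return the HTTP endpoint for a given model."""
    if model not in MODELS:
        raise ValueError(f"unknown model: {model}")
    return f"http://{module_name}:{port}/{model}"
```

For the deployment used later in this tutorial, the module is named openvino, so inference_url("openvino", "vehicleDetection") yields http://openvino:4000/vehicleDetection.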

Note

By downloading and using the Edge module: OpenVINO™ Model Server – AI Extension from Intel, and the included software, you agree to the terms and conditions under the License Agreement. Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

Create and deploy the media graph

Examine and edit the sample files

As part of the prerequisites, you downloaded the sample code to a folder. Follow these steps to examine and edit the sample files.

  1. In Visual Studio Code, go to src/edge. You see your .env file and a few deployment template files.

    The deployment template refers to the deployment manifest for the edge device. It includes some placeholder values. The .env file includes the values for those variables.

  2. Go to the src/cloud-to-device-console-app folder. Here you see your appsettings.json file and a few other files:

    • c2d-console-app.csproj - The project file for Visual Studio Code.

    • operations.json - A list of the operations that you want the program to run.

    • Program.cs - The sample program code. This code:

      • Loads the app settings.
      • Invokes direct methods that the Live Video Analytics on IoT Edge module exposes. You can use the module to analyze live video streams by invoking its direct methods.
      • Pauses so that you can examine the program's output in the TERMINAL window and examine the events that were generated by the module in the OUTPUT window.
      • Invokes direct methods to clean up resources.
  3. Edit the operations.json file:

    • Change the link to the graph topology:

      "topologyUrl" : "https://raw.githubusercontent.com/Azure/live-video-analytics/master/MediaGraph/topologies/httpExtensionOpenVINO/2.0/topology.json"

    • Under GraphInstanceSet, edit the name of the graph topology to match the value in the preceding link:

      "topologyName" : "InferencingWithOpenVINO"

    • Under GraphTopologyDelete, edit the name:

      "name": "InferencingWithOpenVINO"

Generate and deploy the IoT Edge deployment manifest

  1. Right-click the src/edge/deployment.openvino.template.json file and then select Generate IoT Edge Deployment Manifest.

    Generate IoT Edge deployment manifest

    The deployment.openvino.amd64.json manifest file is created in the src/edge/config folder.

  2. If you completed the Detect motion and emit events quickstart, then skip this step.

    Otherwise, near the AZURE IOT HUB pane in the lower-left corner, select the More actions icon and then select Set IoT Hub Connection String. You can copy the string from the appsettings.json file. Or, to ensure you've configured the proper IoT hub within Visual Studio Code, use the Select IoT hub command.

    Set IoT Hub connection string

Note

You might be asked to provide Built-in endpoint information for the IoT hub. To get that information, in the Azure portal, navigate to your IoT hub and look for the Built-in endpoints option in the left navigation pane. Click there and look for the Event Hub-compatible endpoint under the Event Hub compatible endpoint section. Copy and use the text in the box. The endpoint will look something like this:
Endpoint=sb://iothub-ns-xxx.servicebus.chinacloudapi.cn/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
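The endpoint string is a set of semicolon-separated key=value pairs. If you need to script against it, a minimal sketch of a parser (a hypothetical helper, not part of the sample code) might look like this:

```python
def parse_endpoint(conn_str: str) -> dict:
    """Split an Event Hub-compatible endpoint string into its key/value parts.

    Splits only on the first '=' in each segment, because SharedAccessKey
    values are base64-encoded and may themselves contain '='.
    """
    parts = {}
    for segment in conn_str.split(";"):
        if not segment:
            continue
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts
```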

  1. Right-click src/edge/config/deployment.openvino.amd64.json and select Create Deployment for Single Device.

    Create deployment for single device

  2. When you're prompted to select an IoT Hub device, select lva-sample-device.

  3. After about 30 seconds, in the lower-left corner of the window, refresh Azure IoT Hub. The edge device now shows the following deployed modules:

    • The Live Video Analytics module, named lvaEdge
    • The rtspsim module, which simulates an RTSP server and acts as the source of a live video feed
    • The openvino module, which is the OpenVINO™ Model Server – AI Extension module from Intel

Prepare to monitor events

Right-click the Live Video Analytics device and select Start Monitoring Built-in Event Endpoint. You need this step to monitor the IoT Hub events in the OUTPUT window of Visual Studio Code.

Start monitoring

Run the sample program to detect vehicles

If you open the graph topology for this tutorial in a browser, you will see that the value of inferencingUrl has been set to http://openvino:4000/vehicleDetection, which means the inference server will return results after detecting vehicles, if any, in the live video.

  1. In Visual Studio Code, open the Extensions tab (or press Ctrl+Shift+X) and search for Azure IoT Hub.

  2. Right-click and select Extension Settings.

    Extension Settings

  3. Search for and enable "Show Verbose Message".

    Show Verbose Message

  4. To start a debugging session, select the F5 key. You see messages printed in the TERMINAL window.

  5. The operations.json code starts off with calls to the direct methods GraphTopologyList and GraphInstanceList. If you cleaned up resources after you completed previous quickstarts, then this process will return empty lists and then pause. To continue, select the Enter key.

    The TERMINAL window shows the next set of direct method calls:

    • A call to GraphTopologySet that uses the preceding topologyUrl

    • A call to GraphInstanceSet that uses the following body:

      {
        "@apiVersion": "2.0",
        "name": "Sample-Graph-1",
        "properties": {
          "topologyName": "InferencingWithOpenVINO",
          "description": "Sample graph description",
          "parameters": [
            {
              "name": "rtspUrl",
              "value": "rtsp://rtspsim:554/media/lots_015.mkv"
            },
            {
              "name": "rtspUserName",
              "value": "testuser"
            },
            {
              "name": "rtspPassword",
              "value": "testpassword"
            }
          ]
        }
      }
      
    • A call to GraphInstanceActivate that starts the graph instance and the flow of video

    • A second call to GraphInstanceList that shows that the graph instance is in the running state
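    The activation call in the list above takes only the instance name. A sketch of the GraphInstanceActivate body (the name matches the GraphInstanceSet body shown earlier; field names follow the same pattern as the other calls):

    ```json
    {
      "@apiVersion": "2.0",
      "name": "Sample-Graph-1"
    }
    ```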

  6. The output in the TERMINAL window pauses at a Press Enter to continue prompt. Don't select Enter yet. Scroll up to see the JSON response payloads for the direct methods you invoked.

  7. Switch to the OUTPUT window in Visual Studio Code. You see messages that the Live Video Analytics on IoT Edge module is sending to the IoT hub. The following section of this quickstart discusses these messages.

  8. The media graph continues to run and print results. The RTSP simulator keeps looping the source video. To stop the media graph, return to the TERMINAL window and select Enter.

    The next series of calls cleans up resources:

    • A call to GraphInstanceDeactivate deactivates the graph instance.
    • A call to GraphInstanceDelete deletes the instance.
    • A call to GraphTopologyDelete deletes the topology.
    • A final call to GraphTopologyList shows that the list is empty.

Interpret results

When you run the media graph, the results from the HTTP extension processor node pass through the IoT Hub sink node to the IoT hub. The messages you see in the OUTPUT window contain a body section and an applicationProperties section. For more information, see Create and read IoT Hub messages.

In the following messages, the Live Video Analytics module defines the application properties and the content of the body.

MediaSessionEstablished event

When a media graph is instantiated, the RTSP source node attempts to connect to the RTSP server that runs on the rtspsim-live555 container. If the connection succeeds, then the following event is printed. The event type is Microsoft.Media.MediaGraph.Diagnostics.MediaSessionEstablished.

[IoTHubMonitor] [9:42:18 AM] Message received from [lvaedgesample/lvaEdge]:
{
  "body": {
    "sdp&quot;: &quot;SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 nnn.nn.0.6\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/lots_015.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/lots_015.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=Z00AKeKQCgC3YC3AQEBpB4kRUA==,aO48gA==\r\na=control:track1\r\n"
  },
  "applicationProperties": {
    "dataVersion": "1.0",
    "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
    "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/sources/rtspSource",
    "eventType": "Microsoft.Media.MediaGraph.Diagnostics.MediaSessionEstablished",
    "eventTime": "2020-07-24T16:42:18.1280000Z"
  }
}

In this message, notice these details:

  • The message is a diagnostics event. MediaSessionEstablished indicates that the RTSP source node (the subject) connected with the RTSP simulator and has begun to receive a (simulated) live feed.
  • In applicationProperties, subject indicates that the message was generated from the RTSP source node in the media graph.
  • In applicationProperties, eventType indicates that this event is a diagnostics event.
  • The eventTime indicates the time when the event occurred.
  • The body contains data about the diagnostics event. In this case, the data comprises the Session Description Protocol (SDP) details.

Inference event

The HTTP extension processor node receives inference results from the OpenVINO™ Model Server – AI Extension module. It then emits the results through the IoT Hub sink node as inference events.

In these events, the type is set to entity to indicate that it's an entity, such as a car or truck. The eventTime value is the UTC time when the object was detected.

In the following example, two vehicles were detected, with confidence values above 0.9.

[IoTHubMonitor] [9:43:18 AM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "inferences": [
      {
        "type": "entity",
        "subtype": "vehicleDetection",
        "entity": {
          "tag": {
            "value": "vehicle",
            "confidence": 0.9951713681221008
          },
          "box": {
            "l": 0.042635321617126465,
            "t": 0.4004564881324768,
            "w": 0.10961548984050751,
            "h": 0.07942074537277222
          }
        }
      },
      {
        "type": "entity",
        "subtype": "vehicleDetection",
        "entity": {
          "tag": {
            "value": "vehicle",
            "confidence": 0.928486168384552
          },
          "box": {
            "l": 0.2506900727748871,
            "t": 0.07512682676315308,
            "w": 0.05470699071884155,
            "h": 0.07408371567726135
          }
        }
      }
    ]
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
    "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/processors/inferenceClient",
    "eventType": "Microsoft.Media.Graph.Analytics.Inference",
    "eventTime&quot;: &quot;2020-07-24T16:43:18.1280000Z"
  }
}

In the messages, notice the following details:

  • In applicationProperties, subject references the node in the graph topology from which the message was generated.
  • In applicationProperties, eventType indicates that this event is an analytics event.
  • The eventTime value is the time when the event occurred.
  • The body section contains data about the analytics event. In this case, the event is an inference event, so the body contains inferences data.
  • The inferences section indicates that the type is entity. This section includes additional data about the entity.
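To act on these events in your own code, you can filter entities by confidence and map the normalized bounding box (l, t, w, and h are fractions of the frame) to pixel coordinates. The sketch below is illustrative and not part of the sample; the 1920×1080 frame size is an assumption:

```python
def extract_detections(event_body, min_confidence=0.9,
                       frame_width=1920, frame_height=1080):
    """Filter entity inferences by confidence and convert the normalized
    bounding box to pixel coordinates.

    frame_width/frame_height are illustrative defaults; use your camera's
    actual resolution.
    """
    results = []
    for inference in event_body.get("inferences", []):
        if inference.get("type") != "entity":
            continue
        entity = inference["entity"]
        confidence = entity["tag"]["confidence"]
        if confidence < min_confidence:
            continue
        box = entity["box"]
        results.append({
            "label": entity["tag"]["value"],
            "confidence": confidence,
            "pixel_box": {
                "left": round(box["l"] * frame_width),
                "top": round(box["t"] * frame_height),
                "width": round(box["w"] * frame_width),
                "height": round(box["h"] * frame_height),
            },
        })
    return results
```

Applied to the example event above, this returns both vehicles, since each has a confidence above the 0.9 threshold.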

Run the sample program to detect persons, vehicles, or bikes

To use a different model, you need to modify the graph topology as well as the operations.json file.

Copy the graph topology to a local file, say C:\TEMP\topology.json. Open that copy, and edit the value of inferencingUrl to http://openvino:4000/personVehicleBikeDetection.

Next, in Visual Studio Code, go to the src/cloud-to-device-console-app folder and open the operations.json file. Edit the line with topologyUrl to:

      "topologyFile" : "C:\\TEMP\\topology.json" 

You can now repeat the steps above to run the sample program again with the new topology. The inference results will be similar (in schema) to those of the vehicle detection model, with the subtype set to personVehicleBikeDetection.

Run the sample program to classify vehicles

In Visual Studio Code, open the local copy of topology.json from the previous step, and edit the value of inferencingUrl to http://openvino:4000/vehicleClassification. If you ran the previous example to detect persons, vehicles, or bikes, you don't need to modify the operations.json file again.

You can now repeat the steps above to run the sample program again with the new topology. A sample classification result follows.

[IoTHubMonitor] [9:44:18 AM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "inferences": [
      {
        "type": "classification",
        "subtype": "color",
        "classification": {
          "tag": {
            "value": "black",
            "confidence": 0.9179772138595581
          }
        }
      },
      {
        "type": "classification",
        "subtype": "type",
        "classification": {
          "tag": {
            "value": "truck",
            "confidence": 1
          }
        }
      }
    ]
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
    "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/processors/inferenceClient",
    "eventType": "Microsoft.Media.Graph.Analytics.Inference",
    "eventTime&quot;: &quot;2020-07-24T16:44:18.1280000Z"
  }
}
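A classification event can carry multiple tags per object (here, color and type). One way to collapse them into a lookup table, shown as an illustrative sketch rather than part of the sample code:

```python
def classification_tags(event_body):
    """Collapse classification inferences into {subtype: (value, confidence)}."""
    tags = {}
    for inference in event_body.get("inferences", []):
        if inference.get("type") != "classification":
            continue
        tag = inference["classification"]["tag"]
        tags[inference["subtype"]] = (tag["value"], tag["confidence"])
    return tags
```

For the event above, this yields a map with "color" pointing at ("black", 0.9179772138595581) and "type" pointing at ("truck", 1).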

Run the sample program to detect faces

In Visual Studio Code, open the local copy of topology.json from the previous step, and edit the value of inferencingUrl to http://openvino:4000/faceDetection. If you ran the previous example to detect persons, vehicles, or bikes, you don't need to modify the operations.json file again.

You can now repeat the steps above to run the sample program again with the new topology. A sample detection result follows. (Note: the parking lot video used above doesn't contain any detectable faces; you should use another video to try this model.)

[IoTHubMonitor] [9:54:18 AM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "inferences": [
      {
        "type": "entity",
        "subtype": "faceDetection",
        "entity": {
          "tag": {
            "value": "face",
            "confidence": 0.9997053742408752
          },
          "box": {
            "l": 0.2559490501880646,
            "t": 0.03403960168361664,
            "w": 0.17685115337371826,
            "h": 0.45835764706134796
          }
        }
      }
    ]
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/mediaservices/hubname",
    "subject": "/graphInstances/GRAPHINSTANCENAMEHERE/processors/inferenceClient",
    "eventType": "Microsoft.Media.Graph.Analytics.Inference",
    "eventTime&quot;: &quot;2020-07-24T16:54:18.1280000Z"
  }
}

Clean up resources

If you intend to try other quickstarts or tutorials, keep the resources you created. Otherwise, go to the Azure portal, go to your resource groups, select the resource group where you ran this tutorial, and delete all the resources.

Next steps

Review additional challenges for advanced users: