Quickstart: Detect motion, record video to Media Services

This article walks you through the steps to use Live Video Analytics on IoT Edge for event-based recording. It uses a Linux VM in Azure as the IoT Edge device, along with a simulated live video stream. The video stream is analyzed for the presence of moving objects. When motion is detected, events are sent to Azure IoT Hub, and the relevant portion of the video stream is recorded as an asset in Azure Media Services.

This article builds on top of the Getting Started quickstart.

Prerequisites

Complete the Getting Started quickstart referenced above; the steps below assume the Azure resources and IoT Edge modules that it deploys.

Review the sample video

As part of the steps above to set up the Azure resources, a (short) video of a parking lot is copied to the Linux VM in Azure that is being used as the IoT Edge device. This video file will be used to simulate a live stream for this tutorial.

You can use an application such as VLC Player to view it: launch the player, press Ctrl+N, and paste the parking lot video sample link to start playback. At about the 5-second mark, a white car moves through the parking lot.

When you complete the steps below, you will have used Live Video Analytics on IoT Edge to detect that motion of the car and to record a video clip starting at around that 5-second mark. The diagram below is a visual representation of the overall flow.

Event-based video recording to assets based on motion events

Use direct method calls

You can use the module to analyze live video streams by invoking direct methods. Read Direct Methods for Live Video Analytics on IoT Edge to understand all the direct methods provided by the module.
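
The walkthrough below invokes these methods through the Visual Studio Code extension, but the same calls can be made from code. Here is a minimal sketch using the Python azure-iot-hub package; the connection string is a placeholder, and the device and module names are the ones used throughout this quickstart:

import json

from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

# Placeholder: an IoT Hub connection string with service permissions.
IOTHUB_CONNECTION_STRING = "<your IoT Hub connection string>"
DEVICE_ID = "lva-sample-device"
MODULE_ID = "lvaEdge"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

def invoke_method(method_name, payload):
    """Invoke a direct method on the lvaEdge module and print its response."""
    method = CloudToDeviceMethod(method_name=method_name, payload=payload)
    response = registry_manager.invoke_device_module_method(
        DEVICE_ID, MODULE_ID, method)
    print(json.dumps(response.payload, indent=2))
    return response

# Lists all graph topologies in the module (the same call as the first step below).
invoke_method("GraphTopologyList", {"@apiVersion": "1.0"})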

Invoke GraphTopologyList

This step enumerates all the graph topologies in the module.

  1. Right-click the "lvaEdge" module and select "Invoke Module Direct Method" from the context menu.
  2. An edit box pops up at the top-middle of the Visual Studio Code window. Enter "GraphTopologyList" in the edit box and press Enter.
  3. Next, copy and paste the JSON payload below into the edit box and press Enter.
{
    "@apiVersion" : "1.0"
}

Within a few seconds, the OUTPUT window in Visual Studio Code pops up with the following response:

[DirectMethod] Invoking Direct Method [GraphTopologyList] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": {
    "value": []
  }
}

The above response is expected, as no graph topologies have been created yet.

Invoke GraphTopologySet

Using the same steps as those outlined for invoking GraphTopologyList, you can invoke GraphTopologySet to set a graph topology, using the following JSON as the payload. You will create a graph topology named "EVRtoAssetsOnMotionDetection".

{
    "@apiVersion": "1.0",
    "name": "EVRtoAssetsOnMotionDetection",
    "properties": {
      "description": "Event-based video recording to Assets based on motion events",
      "parameters": [
        {
            "name": "rtspUserName",
            "type": "String",
            "description": "rtsp source user name.",
            "default": "dummyUserName"
        },
        {
            "name": "rtspPassword",
            "type": "String",
            "description": "rtsp source password.",
            "default" : "dummyPassword"
        },
        {
            "name": "rtspUrl",
            "type": "String",
            "description": "rtsp Url"
        },
        {
            "name": "motionSensitivity",
            "type": "String",
            "description": "motion detection sensitivity",
            "default" : "medium"
        },
        {
            "name": "hubSinkOutputName",
            "type": "String",
            "description": "hub sink output name",
            "default" : "iothubsinkoutput"
        }                              
    ],         

      "sources": [
        {
          "@type": "#Microsoft.Media.MediaGraphRtspSource",
          "name": "rtspSource",
          "endpoint": {
            "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
            "url": "${rtspUrl}",
            "credentials": {
              "@type": "#Microsoft.Media.MediaGraphUsernamePasswordCredentials",
              "username": "${rtspUserName}",
              "password": "${rtspPassword}"
            }
          }
        }
      ],
      "processors": [
        {
          "@type": "#Microsoft.Media.MediaGraphMotionDetectionProcessor",
          "name": "motionDetection",
          "sensitivity": "${motionSensitivity}",
          "inputs": [
            {
              "nodeName": "rtspSource"
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor",
          "name": "signalGateProcessor",
          "inputs": [
            {
              "nodeName": "motionDetection"
            },
            {
              "nodeName": "rtspSource"
            }
          ],
          "activationEvaluationWindow": "PT1S",
          "activationSignalOffset": "PT0S",
          "minimumActivationTime": "PT30S",
          "maximumActivationTime": "PT30S"
        }
      ],
      "sinks": [
        {
          "@type": "#Microsoft.Media.MediaGraphAssetSink",
          "name": "assetSink",
          "assetNamePattern": "sampleAssetFromEVR-LVAEdge-${System.DateTime}",
          "segmentLength": "PT0M30S",
          "localMediaCacheMaximumSizeMiB": "2048",
          "localMediaCachePath": "/var/lib/azuremediaservices/tmp/",
          "inputs": [
            {
              "nodeName": "signalGateProcessor"
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphIoTHubMessageSink",
          "name": "hubSink",
          "hubOutputName": "${hubSinkOutputName}",
          "inputs": [
            {
              "nodeName": "motionDetection"
            }
          ]
        }
      ]
    }
}

The above JSON payload results in the creation of a graph topology that defines five parameters (four of which have default values). The topology has one source node (RTSP source), two processor nodes (motion detection processor and signal gate processor), and two sink nodes (IoT Hub sink and asset sink). The visual representation of the topology is shown in the diagram above.

Within a few seconds, you will see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphTopologySet] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 201,
  "payload": {
    "systemData": {
      "createdAt": "2020-05-12T22:05:31.603Z",
      "lastModifiedAt": "2020-05-12T22:05:31.603Z"
    },
    "name": "EVRtoAssetsOnMotionDetection",
    "properties": {
      "description": "Event-based video recording to assets based on motion events",
      "parameters": [
        {
          "name": "rtspUserName",
          "type": "String",
          "description": "rtsp source user name.",
          "default": "dummyUserName"
        },
        {
          "name": "rtspPassword",
          "type": "String",
          "description": "rtsp source password.",
          "default": "dummyPassword"
        },
        {
          "name": "rtspUrl",
          "type": "String",
          "description": "rtsp Url"
        },
        {
          "name": "motionSensitivity",
          "type": "String",
          "description": "motion detection sensitivity",
          "default": "medium"
        },
        {
          "name": "hubSinkOutputName",
          "type": "String",
          "description": "hub sink output name",
          "default": "iothubsinkoutput"
        }
      ],
      "sources": [
        {
          "@type": "#Microsoft.Media.MediaGraphRtspSource",
          "name": "rtspSource",
          "transport": "Tcp",
          "endpoint": {
            "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
            "url": "${rtspUrl}",
            "credentials": {
              "@type": "#Microsoft.Media.MediaGraphUsernamePasswordCredentials",
              "username": "${rtspUserName}",
              "password": "${rtspPassword}"
            }
          }
        }
      ],
      "processors": [
        {
          "@type": "#Microsoft.Media.MediaGraphMotionDetectionProcessor",
          "sensitivity": "${motionSensitivity}",
          "name": "motionDetection",
          "inputs": [
            {
              "nodeName": "rtspSource",
              "outputSelectors": []
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor",
          "activationEvaluationWindow": "PT1S",
          "activationSignalOffset": "PT0S",
          "minimumActivationTime": "PT30S",
          "maximumActivationTime": "PT30S",
          "name": "signalGateProcessor",
          "inputs": [
            {
              "nodeName": "motionDetection",
              "outputSelectors": []
            },
            {
              "nodeName": "rtspSource",
              "outputSelectors": []
            }
          ]
        }
      ],
      "sinks": [
        {
          "@type": "#Microsoft.Media.MediaGraphAssetSink",
          "localMediaCachePath": "/var/lib/azuremediaservices/tmp/",
          "localMediaCacheMaximumSizeMiB": "2048",
          "segmentLength": "PT0M30S",
          "assetNamePattern": "sampleAssetFromEVR-LVAEdge-${System.DateTime}",
          "name": "assetSink",
          "inputs": [
            {
              "nodeName": "signalGateProcessor",
              "outputSelectors": []
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphIoTHubMessageSink",
          "hubOutputName": "${hubSinkOutputName}",
          "name": "hubSink",
          "inputs": [
            {
              "nodeName": "motionDetection",
              "outputSelectors": []
            }
          ]
        }
      ]
    }
  }
}

The status returned is 201, indicating that a new graph topology was created. Try the following direct methods as next steps:

  • Invoke GraphTopologySet again and check that the status code returned is now 200. Status code 200 indicates that an existing graph topology was successfully updated.
  • Invoke GraphTopologySet again, but change the description string. Check that the status code in the response is 200 and that the description is updated to the new value.
  • Invoke GraphTopologyList as outlined in the previous section and check that you can now see the "EVRtoAssetsOnMotionDetection" topology in the returned payload. (A sketch of scripting the first two checks follows this list.)
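
If you are driving these calls from code, the first two checks above can be scripted. A sketch reusing the hypothetical invoke_method helper from the earlier example; topology.json is assumed to hold a local copy of the payload shown above:

# Reuses the invoke_method helper sketched earlier. topology.json is assumed
# to contain the GraphTopologySet payload shown above.
with open("topology.json") as f:
    topology = json.load(f)

first = invoke_method("GraphTopologySet", topology)
assert first.status in (200, 201)  # 201 on first creation; 200 if it already exists

second = invoke_method("GraphTopologySet", topology)
assert second.status == 200        # 200: an existing topology was updated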

Invoke GraphTopologyGet

Now invoke GraphTopologyGet with the following payload:


{
    "@apiVersion" : "1.0",
    "name" : "EVRtoAssetsOnMotionDetection"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphTopologyGet] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": {
    "systemData": {
      "createdAt": "2020-05-12T22:05:31.603Z",
      "lastModifiedAt": "2020-05-12T22:05:31.603Z"
    },
    "name": "EVRtoAssetsOnMotionDetection",
    "properties": {
      "description": "Event-based video recording to Assets based on motion events",
      "parameters": [
        {
          "name": "rtspUserName",
          "type": "String",
          "description": "rtsp source user name.",
          "default": "dummyUserName"
        },
        {
          "name": "rtspPassword",
          "type": "String",
          "description": "rtsp source password.",
          "default": "dummyPassword"
        },
        {
          "name": "rtspUrl",
          "type": "String",
          "description": "rtsp Url"
        },
        {
          "name": "motionSensitivity",
          "type": "String",
          "description": "motion detection sensitivity",
          "default": "medium"
        },
        {
          "name": "hubSinkOutputName",
          "type": "String",
          "description": "hub sink output name",
          "default": "iothubsinkoutput"
        }
      ],
      "sources": [
        {
          "@type": "#Microsoft.Media.MediaGraphRtspSource",
          "name": "rtspSource",
          "transport": "Tcp",
          "endpoint": {
            "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
            "url": "${rtspUrl}",
            "credentials": {
              "@type": "#Microsoft.Media.MediaGraphUsernamePasswordCredentials",
              "username": "${rtspUserName}",
              "password": "${rtspPassword}"
            }
          }
        }
      ],
      "processors": [
        {
          "@type": "#Microsoft.Media.MediaGraphMotionDetectionProcessor",
          "sensitivity": "${motionSensitivity}",
          "name": "motionDetection",
          "inputs": [
            {
              "nodeName": "rtspSource",
              "outputSelectors": []
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor",
          "activationEvaluationWindow": "PT1S",
          "activationSignalOffset": "PT0S",
          "minimumActivationTime": "PT30S",
          "maximumActivationTime": "PT30S",
          "name": "signalGateProcessor",
          "inputs": [
            {
              "nodeName": "motionDetection",
              "outputSelectors": []
            },
            {
              "nodeName": "rtspSource",
              "outputSelectors": []
            }
          ]
        }
      ],
      "sinks": [
        {
          "@type": "#Microsoft.Media.MediaGraphAssetSink",
          "localMediaCachePath": "/var/lib/azuremediaservices/tmp/",
          "localMediaCacheMaximumSizeMiB": "2048",
          "segmentLength": "PT0M30S",
          "assetNamePattern": "sampleAssetFromEVR-LVAEdge-${System.DateTime}",
          "name": "assetSink",
          "inputs": [
            {
              "nodeName": "signalGateProcessor",
              "outputSelectors": []
            }
          ]
        },
        {
          "@type": "#Microsoft.Media.MediaGraphIoTHubMessageSink",
          "hubOutputName": "${hubSinkOutputName}",
          "name": "hubSink",
          "inputs": [
            {
              "nodeName": "motionDetection",
              "outputSelectors": []
            }
          ]
        }
      ]
    }
  }
}

Note the following properties in the response payload:

  • The status code is 200, indicating success.
  • The payload has the "createdAt" and "lastModifiedAt" timestamps.

Invoke GraphInstanceSet

Next, create a graph instance that references the above graph topology. As explained here, graph instances let you analyze live video streams from many cameras with the same graph topology.

Now invoke the GraphInstanceSet direct method with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "Sample-Graph-2",
    "properties" : {
        "topologyName" : "EVRtoAssetsOnMotionDetection",
        "description" : "Sample graph description",
        "parameters" : [
            { "name" : "rtspUrl", "value" : "rtsp://rtspsim:554/media/lots_015.mkv" }
        ]
    }
}

Note the following:

  • The payload above specifies the graph topology name (EVRtoAssetsOnMotionDetection) for which the graph instance is to be created.
  • The payload contains a parameter value for "rtspUrl", which did not have a default value in the topology payload.

Within a few seconds, you will see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphInstanceSet] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 201,
  "payload": {
    "systemData": {
      "createdAt": "2020-05-12T23:30:20.666Z",
      "lastModifiedAt": "2020-05-12T23:30:20.666Z"
    },
    "name": "Sample-Graph-2",
    "properties": {
      "state": "Inactive",
      "description": "Sample graph description",
      "topologyName": "EVRtoAssetsOnMotionDetection",
      "parameters": [
        {
          "name": "rtspUrl",
          "value": "rtsp://rtspsim:554/media/lots_015.mkv"
        }
      ]
    }
  }
}

Note the following properties in the response payload:

  • The status code is 201, indicating that a new instance was created.
  • The state is "Inactive", indicating that the graph instance was created but not activated. For more information, see media graph states.

Try the following direct methods as next steps:

  • Invoke GraphInstanceSet again with the same payload and note that the returned status code is now 200.
  • Invoke GraphInstanceSet again, but with a different description, and note the updated description in the response payload, indicating that the graph instance was successfully updated.
  • Invoke GraphInstanceSet, but change the name to "Sample-Graph-3", and observe the response payload. Note that a new graph instance is created (that is, the status code is 201). Remember to clean up such duplicate instances when you are done with the quickstart. (The sketch after this list shows how such per-camera instances could be created in a loop.)
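
Because a graph instance is just a set of parameter bindings over a shared topology, creating one instance per camera reduces to a loop. A hypothetical sketch reusing the invoke_method helper from the earlier example (the second RTSP URL is made up for illustration):

# One graph instance per camera, all sharing the EVRtoAssetsOnMotionDetection
# topology. The second URL is hypothetical.
cameras = {
    "Sample-Graph-2": "rtsp://rtspsim:554/media/lots_015.mkv",
    "Sample-Graph-3": "rtsp://rtspsim:554/media/other_lot.mkv",  # hypothetical
}

for instance_name, rtsp_url in cameras.items():
    invoke_method("GraphInstanceSet", {
        "@apiVersion": "1.0",
        "name": instance_name,
        "properties": {
            "topologyName": "EVRtoAssetsOnMotionDetection",
            "parameters": [{"name": "rtspUrl", "value": rtsp_url}],
        },
    })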

Prepare to monitor events

The media graph you created uses the motion detection processor node to detect motion, and those events are relayed to your IoT Hub. To prepare for observing them, follow these steps:

  1. Open the Explorer pane in Visual Studio Code and look for Azure IoT Hub in the bottom-left corner.

  2. Expand the Devices node.

  3. Right-click lva-sample-device and choose the "Start Monitoring Built-in Event Endpoint" option.

    Start Monitoring Built-in Event Endpoint

    Within seconds, you will see the following messages in the OUTPUT window:

[IoTHubMonitor] Start monitoring message arrived in built-in endpoint for all devices ...
[IoTHubMonitor] Created partition receiver [0] for consumerGroup [$Default]
[IoTHubMonitor] Created partition receiver [1] for consumerGroup [$Default]
[IoTHubMonitor] Created partition receiver [2] for consumerGroup [$Default]
[IoTHubMonitor] Created partition receiver [3] for consumerGroup [$Default]
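
If you prefer to watch these events outside Visual Studio Code, the IoT Hub built-in endpoint is Event Hubs-compatible. A minimal sketch using the Python azure-eventhub package; the connection string is a placeholder for the Event Hubs-compatible endpoint shown on your hub's Built-in endpoints blade:

from azure.eventhub import EventHubConsumerClient

# Placeholder: the Event Hubs-compatible connection string of your IoT Hub.
EVENTHUB_CONN_STR = "<Event Hubs-compatible endpoint connection string>"

def on_event(partition_context, event):
    # Each event corresponds to one message shown in the OUTPUT window.
    print(event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONN_STR, consumer_group="$Default")
with client:
    # Blocks and calls on_event for every new message on every partition.
    client.receive(on_event=on_event, starting_position="@latest")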

Invoke GraphInstanceActivate

Now activate the graph instance, which starts the flow of live video through the module. Invoke the direct method GraphInstanceActivate with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "Sample-Graph-2"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphInstanceActivate] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": null
}

The status code of 200 in the response payload indicates that the graph instance was successfully activated.

Invoke GraphInstanceGet

Now invoke the GraphInstanceGet direct method with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "Sample-Graph-2"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphInstanceGet] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": {
    "systemData": {
      "createdAt": "2020-05-12T23:30:20.666Z",
      "lastModifiedAt": "2020-05-12T23:30:20.666Z"
    },
    "name": "Sample-Graph-2",
    "properties": {
      "state": "Active",
      "description": "Sample graph description",
      "topologyName": "EVRtoAssetsOnMotionDetection",
      "parameters": [
        {
          "name": "rtspUrl",
          "value": "rtsp://rtspsim:554/media/lots_015.mkv"
        }
      ]
    }
  }
}

Note the following properties in the response payload:

  • The status code is 200, indicating success.
  • The state is "Active", indicating that the graph instance is now in the "Active" state.

Observe results

The graph instance that you created and activated above uses the motion detection processor node to detect motion in the incoming live video stream and sends events to the IoT Hub sink node. These events are then relayed to your IoT Hub, where they can now be observed. You will see the following messages in the OUTPUT window:

[IoTHubMonitor] [4:33:04 PM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "sdp": "SDP:\nv=0\r\no=- 1589326384077235 1 IN IP4 XXX.XX.XX.XXX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/lots_015.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.04.12\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-73.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/lots_015.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=640028;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n"
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.media/mediaservices/{amsAccountName}",
    "subject": "/graphInstances/Sample-Graph-2/sources/rtspSource",
    "eventType": "Microsoft.Media.Graph.Diagnostics.MediaSessionEstablished",
    "eventTime": "2020-05-12T23:33:04.077Z",
    "dataVersion": "1.0"
  }
}
[IoTHubMonitor] [4:33:09 PM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "timestamp": 143039375044290,
    "inferences": [
      {
        "type": "motion",
        "motion": {
          "box": {
            "l": 0.48954,
            "t": 0.140741,
            "w": 0.075,
            "h": 0.058824
          }
        }
      }
    ]
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.media/mediaservices/{amsAccountName}",
    "subject": "/graphInstances/Sample-Graph-2/processors/md",
    "eventType": "Microsoft.Media.Graph.Analytics.Inference",
    "eventTime": "2020-05-12T23:33:09.381Z",
    "dataVersion": "1.0"
  }
}

Note the following properties in the above messages:

  • Each message contains a "body" section and an "applicationProperties" section. To understand what these sections represent, read the article Create and read IoT Hub messages.
  • The first message is a Diagnostics event, MediaSessionEstablished, saying that the RTSP source node (the subject) was able to establish a connection with the RTSP simulator and begin to receive a (simulated) live feed.
  • The "subject" in applicationProperties references the node in the graph topology from which the message was generated. In this case, the message originates from the RTSP source node.
  • The "eventType" in applicationProperties indicates that this is a Diagnostics event.
  • The "eventTime" indicates the time when the event occurred.
  • The "body" contains data about the diagnostic event; in this case, it's the SDP message.
  • The second message is an Analytics event. You can check that it is sent roughly 5 seconds after the MediaSessionEstablished message, which corresponds to the delay between the start of the video and when the car drives through the parking lot.
  • The "subject" in applicationProperties references the motion detection processor node in the graph, which generated this message.
  • The event is an Inference event, and hence the body contains "timestamp" and "inferences" data.
  • The "inferences" section indicates that the "type" is "motion", with additional data about that motion event. (A sketch for filtering these inference events programmatically follows this list.)
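
If you are consuming these messages with the azure-eventhub sketch from earlier, the applicationProperties travel as AMQP application properties on each event. A sketch of an on_event handler that picks out just the motion inferences (same client wiring as before; the key/value decoding is defensive, since some SDK versions deliver them as bytes):

import json

def on_event(partition_context, event):
    # applicationProperties arrive as AMQP application properties.
    props = {
        (k.decode() if isinstance(k, bytes) else k):
        (v.decode() if isinstance(v, bytes) else v)
        for k, v in (event.properties or {}).items()
    }
    if props.get("eventType") == "Microsoft.Media.Graph.Analytics.Inference":
        body = json.loads(event.body_as_str())
        for inference in body.get("inferences", []):
            if inference.get("type") == "motion":
                # Normalized bounding box of the detected motion.
                print("motion box:", inference["motion"]["box"])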

The next message you will see is the following:

[IoTHubMonitor] [4:33:10 PM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "outputType": "assetName",
    "outputLocation": "sampleAssetFromEVR-LVAEdge-20200512T233309Z"
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.media/mediaservices/{amsAccountName}",
    "subject": "/graphInstances/Sample-Graph-2/sinks/assetSink",
    "eventType": "Microsoft.Media.Graph.Operational.RecordingStarted",
    "eventTime": "2020-05-12T23:33:10.392Z",
    "dataVersion": "1.0"
  }
}
  • The third message is an Operational event. You can check that it is sent almost immediately after the motion detection message, which acted as the trigger to start recording.
  • The "subject" in applicationProperties references the asset sink node in the graph, which generated this message.
  • The body contains information about the output location: in this case, the name of the Azure Media Services asset into which video is recorded. Note down this value; you will use it later in this quickstart.

In the topology, the signal gate processor node was configured with activation times of 30 seconds, which means that the graph topology records roughly 30 seconds' worth of video into the asset. While video is being recorded, the motion detection processor node continues to emit Inference events, which show up in the OUTPUT window. After some time, you will see the following message:

[IoTHubMonitor] [4:33:31 PM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "outputType": "assetName",
    "outputLocation": "sampleAssetFromEVR-LVAEdge-20200512T233309Z"
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.media/mediaservices/{amsAccountName}",
    "subject": "/graphInstances/Sample-Graph-2/sinks/assetSink",
    "eventType": "Microsoft.Media.Graph.Operational.RecordingAvailable",
    "eventTime": "2020-05-12T23:33:31.051Z",
    "dataVersion": "1.0"
  }
}
  • This message is also an Operational event. The event, RecordingAvailable, indicates that enough data has been written to the asset for players/clients to initiate playback of the video.
  • The "subject" in applicationProperties references the asset sink node in the graph, which generated this message.
  • The body contains information about the output location: in this case, the name of the Azure Media Services asset into which video is recorded.

If you let the graph instance continue to run, you will see this message:

[IoTHubMonitor] [4:33:40 PM] Message received from [lva-sample-device/lvaEdge]:
{
  "body": {
    "outputType": "assetName",
    "outputLocation": "sampleAssetFromEVR-LVAEdge-20200512T233309Z"
  },
  "applicationProperties": {
    "topic": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.media/mediaservices/{amsAccountName}",
    "subject": "/graphInstances/Sample-Graph-2/sinks/assetSink",
    "eventType": "Microsoft.Media.Graph.Operational.RecordingStopped",
    "eventTime": "2020-05-12T23:33:40.014Z",
    "dataVersion": "1.0"
  }
}
  • This message is also an Operational event. The event, RecordingStopped, indicates that recording has stopped.
  • Note that roughly 30 seconds have elapsed since the RecordingStarted event, matching the values of the activation times in the signal gate processor node.
  • The "subject" in applicationProperties references the asset sink node in the graph, which generated this message.
  • The body contains information about the output location: in this case, the name of the Azure Media Services asset into which video is recorded.

If you let the graph instance continue to run, the RTSP simulator will eventually reach the end of the video file and stop/disconnect. The RTSP source node will then reconnect to the simulator, and the process will repeat.

Invoke additional direct method calls to clean up

Now, invoke direct methods to deactivate and delete the graph instance (in that order). A scripted version of the full cleanup sequence appears at the end of this section.

Invoke GraphInstanceDeactivate

Invoke the GraphInstanceDeactivate direct method with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "Sample-Graph-2"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphInstanceDeactivate] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": null
}

The status code of 200 indicates that the graph instance was successfully deactivated.

Try the following as a next step:

  • Invoke GraphInstanceGet as indicated in the earlier sections and observe the "state" value; it should now be "Inactive".

Invoke GraphInstanceDelete

Invoke the direct method GraphInstanceDelete with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "Sample-Graph-2"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphInstanceDelete] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": null
}

The status code of 200 in the response indicates that the graph instance was successfully deleted.

Invoke GraphTopologyDelete

Invoke the GraphTopologyDelete direct method with the following payload:

{
    "@apiVersion" : "1.0",
    "name" : "EVRtoAssetsOnMotionDetection"
}

Within a few seconds, you should see the following response in the OUTPUT window:

[DirectMethod] Invoking Direct Method [GraphTopologyDelete] to [lva-sample-device/lvaEdge] ...
[DirectMethod] Response from [lva-sample-device/lvaEdge]:
{
  "status": 200,
  "payload": null
}

The status code of 200 indicates that the graph topology was successfully deleted.

Try the following direct methods as next steps:

  • Invoke GraphTopologyList and observe that there are no graph topologies in the module.
  • Invoke GraphInstanceList with the same payload as GraphTopologyList and observe that no graph instances are enumerated.
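
The same cleanup sequence can also be scripted end to end. A sketch reusing the hypothetical invoke_method helper from the first example:

# Order matters: an active instance must be deactivated before it can be
# deleted, and a topology cannot be deleted while instances reference it.
for method_name, name in [
    ("GraphInstanceDeactivate", "Sample-Graph-2"),
    ("GraphInstanceDelete", "Sample-Graph-2"),
    ("GraphTopologyDelete", "EVRtoAssetsOnMotionDetection"),
]:
    invoke_method(method_name, {"@apiVersion": "1.0", "name": name})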

Play back the recorded video

Next, you can use the Azure portal to play back the video you recorded.

  1. Sign in to the Azure portal and type "Media Services" in the search box.

  2. Locate your Azure Media Services account and open it.

  3. Locate and select the Assets entry in the Media Services listing.

    The Assets entry in the Media Services listing

  4. If this quickstart is your first use of Azure Media Services, only the assets generated from this quickstart will be listed, and you can pick the oldest one.

  5. Otherwise, use the name of the asset that was provided as the outputLocation in the Operational events above.

  6. On the details page that opens, click the "Create new" link just below the Streaming URL text box.

    Streaming URL

  7. In the "Add streaming locator" pane that opens, accept the defaults and select "Add" at the bottom. (Steps 6 and 7 can also be scripted; see the sketch after this list.)

  8. On the asset details page, the video player should now load the first frame of the video, and you can select the play button. Check that you see the portion of the video where the car moves through the parking lot.
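
A sketch of the streaming-locator steps using the Python azure-mgmt-media package (track 2); the subscription, resource group, and account names are placeholders, and the asset name is the outputLocation value you noted earlier:

from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import StreamingLocator

# Placeholders: fill in your own subscription, resource group, and account.
client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

client.streaming_locators.create(
    "<resource-group>", "<ams-account-name>", "evr-quickstart-locator",
    StreamingLocator(
        asset_name="sampleAssetFromEVR-LVAEdge-20200512T233309Z",  # outputLocation
        streaming_policy_name="Predefined_ClearStreamingOnly"))

# The returned paths are relative; prefix them with the host name of a running
# streaming endpoint in the account to get playable URLs.
paths = client.streaming_locators.list_paths(
    "<resource-group>", "<ams-account-name>", "evr-quickstart-locator")
for streaming_path in paths.streaming_paths:
    print(streaming_path.paths)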

Note

Since the simulated live video starts when you activate the graph, the time-of-day values are not relevant and are not exposed via this player shortcut. The tutorial on continuous video recording and playback shows how you can display the timestamps.

Clean up resources

If you are not going to continue to use this application, delete the resources created in this quickstart.

Next steps

  • Learn how to invoke Live Video Analytics on IoT Edge direct methods programmatically.
  • Learn more about diagnostic messages.