Upgrading Live Video Analytics on IoT Edge from 1.0 to 2.0

This article covers the differences and the things to consider when upgrading the Live Video Analytics (LVA) on Azure IoT Edge module from 1.0 to 2.0.

Change List

| Title | Live Video Analytics 1.0 | Live Video Analytics 2.0 | Description |
|---|---|---|---|
| Container image | mcr.microsoft.com/media/live-video-analytics:1 | mcr.microsoft.com/media/live-video-analytics:2 | Docker images published by Microsoft for Live Video Analytics on Azure IoT Edge |
| MediaGraph nodes | | | |
| Sources | RTSP source<br/>IoT Hub message source | RTSP source<br/>IoT Hub message source | MediaGraph nodes that act as sources for media ingestion and messages. |
| Processors | Motion detection processor<br/>Frame rate filter processor<br/>HTTP extension processor<br/>gRPC extension processor<br/>Signal gate processor | Motion detection processor<br/>HTTP extension processor<br/>gRPC extension processor<br/>Signal gate processor | MediaGraph nodes that enable you to format the media before sending it to AI inference servers. (The frame rate filter processor is deprecated in 2.0; see Frame rate management below.) |
| Sinks | Asset sink<br/>File sink<br/>IoT Hub message sink | Asset sink<br/>File sink<br/>IoT Hub message sink | MediaGraph nodes that enable you to store the processed media. |
| Supported MediaGraphs | | | |
| Topologies | Continuous video recording<br/>Motion analysis and continuous video recording<br/>External analysis and continuous video recording<br/>Event-based motion recording<br/>Event-based AI recording<br/>Event-based recording on external events<br/>Motion analysis<br/>AI inference after motion analysis | Continuous video recording<br/>Motion analysis and continuous video recording<br/>External analysis and continuous video recording<br/>Event-based motion recording<br/>Event-based AI recording<br/>Event-based recording on external events<br/>Motion analysis<br/>AI inference after motion analysis<br/>AI composition<br/>Audio and video recording | MediaGraph topologies that enable you to define the blueprint of a graph, with parameters as placeholders for values. |

Upgrading the Live Video Analytics on IoT Edge module from 1.0 to 2.0

When upgrading the Live Video Analytics on IoT Edge module, make sure you update the following information.

Container Image

In your deployment template, look for the Live Video Analytics on IoT Edge module under the modules node and update the image field as follows:

"image": "mcr.microsoft.com/media/live-video-analytics:2"

Tip

If you haven't modified the name of the Live Video Analytics on IoT Edge module, look for lvaEdge under the modules node.

Topology file changes

In your topology files, make sure apiVersion is set to 2.0.
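For instance, the top-level properties of a topology file would start as shown below. This is a sketch; the name and description values are placeholder examples, and in topology JSON the version property is written as @apiVersion:

```json
{
  "@apiVersion": "2.0",
  "name": "SampleTopology",
  "properties": {
    "description": "A sample topology targeting the 2.0 module"
  }
}
```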

MediaGraph node changes

  • Audio from your camera source can now be passed downstream along with video. With the help of outputSelectors, you can pass audio only, video only, or both audio and video to any graph node. For example, if you want to select only video from the RTSP source and pass it downstream, add the following to the RTSP source node:
    "outputSelectors": [
      {
        "property": "mediaType",
        "operator": "is",
        "value": "video"
      }
    ]

Note

outputSelectors is an optional property. If it is not used, the media graph passes both the audio (if enabled) and the video from the RTSP camera downstream.
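Conversely, to pass only audio downstream, the same selector shape can be used with the value set to audio (a sketch following the selector format shown above):

```json
"outputSelectors": [
  {
    "property": "mediaType",
    "operator": "is",
    "value": "audio"
  }
]
```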

  • In the MediaGraphHttpExtension and MediaGraphGrpcExtension processors, note the following changes:

    Image properties

    • MediaGraphImageFormatEncoded is no longer supported.
      • Instead, use MediaGraphImageFormatBmp, MediaGraphImageFormatJpeg, or MediaGraphImageFormatPng. For example:
        "image": {
                "scale": 
                {
                    "mode": "preserveAspectRatio",
                    "width": "416",
                    "height": "416"
                },
                "format": 
                {
                    "@type": "#Microsoft.Media.MediaGraphImageFormatJpeg"
                }
            }
        
        • If you want to use RAW images, use MediaGraphImageFormatRaw. To do so, update the image node as follows:
        "image": {
                "scale": 
                {
                    "mode": "preserveAspectRatio",
                    "width": "416",
                    "height": "416"
                },
                "format": 
                {
                    "@type": "#Microsoft.Media.MediaGraphImageFormatRaw",
                    "pixelFormat": "rgba"
                }
            }
        

        Note

        Possible values of pixelFormat include: yuv420p, rgb565be, rgb565le, rgb555be, rgb555le, rgb24, bgr24, argb, rgba, abgr, bgra.

    extensionConfiguration for the gRPC extension processor

    • In the MediaGraphGrpcExtension processor, a new property called extensionConfiguration is available. It is an optional string that can be used as part of the gRPC contract. This field can be used to pass any data to the inference server, and you can define how the inference server uses that data.
      One use case for this property is when you have multiple AI models packaged in a single inference server. With this property, you do not need to expose a node for every AI model. Instead, as an extension provider, you can define per graph instance how to select among the different AI models using the extensionConfiguration property. During execution, LVA passes this string to the inference server, which can use it to invoke the desired AI model.
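As a sketch, the property is set on the gRPC extension node alongside its endpoint. The node name, endpoint URL, and the model-selection string below are purely illustrative (how the string is interpreted is up to your inference server), and other required node properties are omitted for brevity:

```json
{
  "@type": "#Microsoft.Media.MediaGraphGrpcExtension",
  "name": "grpcExtension",
  "endpoint": {
    "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
    "url": "tcp://lvaextension:44000"
  },
  "extensionConfiguration": "{\"model\": \"vehicle-detection\"}"
}
```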

    AI Composition

    • Live Video Analytics 2.0 now supports using more than one media graph extension processor within a topology. You can pass the media frames from the RTSP camera to different AI models sequentially, in parallel, or in a combination of both. See the sample topology showing two AI models being used sequentially.
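As a rough sketch of sequential composition, the second extension processor simply lists the first as its input. The node names below are hypothetical, and endpoint and image properties are omitted for brevity:

```json
"processors": [
  {
    "@type": "#Microsoft.Media.MediaGraphHttpExtension",
    "name": "firstModel",
    "inputs": [ { "nodeName": "rtspSource" } ]
  },
  {
    "@type": "#Microsoft.Media.MediaGraphHttpExtension",
    "name": "secondModel",
    "inputs": [ { "nodeName": "firstModel" } ]
  }
]
```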

Disk space management with sink nodes

  • In your File sink node, you can now specify how much disk space the Live Video Analytics on IoT Edge module can use to store the processed media. To do so, add the maximumSizeMiB field to the File sink node. A sample File sink node follows:
    "sinks": [
      {
        "@type": "#Microsoft.Media.MediaGraphFileSink",
        "name": "fileSink",
        "inputs": [
          {
            "nodeName": "signalGateProcessor",
            "outputSelectors": [
              {
                "property": "mediaType",
                "operator": "is",
                "value": "video"
              }
            ]
          }
        ],
        "fileNamePattern": "sampleFiles-${System.DateTime}",
        "maximumSizeMiB":"512",
        "baseDirectoryPath":"/var/media"
      }
    ]
    
  • In your Asset sink node, you can specify how much disk space the Live Video Analytics on IoT Edge module can use to store the processed media. To do so, add the localMediaCacheMaximumSizeMiB field to the Asset sink node. A sample Asset sink node follows:
    "sinks": [
      {
        "@type": "#Microsoft.Media.MediaGraphAssetSink",
        "name": "AssetSink",
        "inputs": [
          {
            "nodeName": "signalGateProcessor",
            "outputSelectors": [
              {
                "property": "mediaType",
                "operator": "is",
                "value": "video"
              }
            ]
          }
        ],
        "assetNamePattern": "sampleAsset-${System.GraphInstanceName}",
        "segmentLength": "PT30S",
        "localMediaCacheMaximumSizeMiB":"200",
        "localMediaCachePath":"/var/lib/azuremediaservices/tmp/"
      }
    ]
    

    Note

    The File sink path is split into a base directory path and a file name pattern, whereas the Asset sink path includes only the base directory path.

Frame rate management

  • MediaGraphFrameRateFilterProcessor is deprecated in the Live Video Analytics on IoT Edge 2.0 module.
    • To sample the incoming video for processing, add the samplingOptions property to the MediaGraph extension processors (MediaGraphHttpExtension or MediaGraphGrpcExtension):
         "samplingOptions": 
         {
           "skipSamplesWithoutAnnotation": "false",  // If true, limits the samples submitted to the extension to only samples which have associated inference(s) 
           "maximumSamplesPerSecond": "20"   // Maximum rate of samples submitted to the extension
         },
    

Module metrics in the Prometheus format using Telegraf

With this release, Telegraf can be used to send metrics to Azure Monitor. From there, the metrics can be directed to Log Analytics or an event hub.


You can easily produce a Telegraf image with a custom configuration using Docker. Learn more on the Monitoring and logging page.
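A minimal Telegraf configuration for this pipeline might look like the following. This is a sketch under stated assumptions: the scrape URL (module name and metrics port) and the Azure Monitor region and resource ID are illustrative placeholders that you must replace with your own values:

```toml
# Scrape Prometheus-format metrics from the LVA module.
# The module name and port here are assumptions; use your module's metrics endpoint.
[[inputs.prometheus]]
  urls = ["http://lvaEdge:9600/metrics"]

# Forward the collected metrics to Azure Monitor.
# region and resource_id below are placeholders for your Azure resource.
[[outputs.azure_monitor]]
  region = "<your-region>"
  resource_id = "<your-resource-id>"
```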

Next steps

Get started with Live Video Analytics on IoT Edge