Media graph extension

Live Video Analytics on IoT Edge allows you to extend the media graph processing capabilities through a graph extension node. Your analytics extension plugin can make use of traditional image-processing techniques or computer vision AI models. Graph extensions are enabled by including an extension processor node in a media graph. The extension processor node relays video frames to the configured endpoint and acts as the interface to your extension. The connection can be made to a local or remote endpoint, and it can be secured by authentication and TLS encryption if required. Additionally, the graph extension processor node allows for optional scaling and encoding of the video frames before they are submitted to your custom extension.

Live Video Analytics supports two kinds of media graph extension processors:

  • HTTP extension processor
  • gRPC extension processor

The graph extension node expects the analytics extension plugin to return the results in JSON format. Ideally, the results should follow the inference metadata schema object model.
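As a rough illustration, a result following the inference metadata schema object model might be assembled like this (field names reflect the schema's `entity` inference shape; the tag, confidence, and box values are made up, and the helper function is hypothetical):

```python
# Illustrative shape of a JSON result per the inference metadata schema
# object model. Values are placeholders, not real model output.
import json

def make_inference_result(tag, confidence, box):
    """Build one 'entity' inference with a tag and a normalized bounding box."""
    return {
        "type": "entity",
        "entity": {
            "tag": {"value": tag, "confidence": confidence},
            # l/t/w/h are normalized (0..1) left, top, width, height
            "box": dict(zip(("l", "t", "w", "h"), box)),
        },
    }

payload = {"inferences": [make_inference_result("car", 0.92, (0.2, 0.3, 0.25, 0.4))]}
print(json.dumps(payload, indent=2))
```

The extension processor forwards this JSON payload downstream as inference metadata, so keeping to the schema lets other nodes and consumers interpret the results uniformly.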

HTTP extension processor

The HTTP extension processor enables extensibility scenarios using the HTTP protocol, where performance and/or optimal resource utilization is not the primary concern. You can expose your own AI to a media graph via an HTTP REST endpoint.

Use the HTTP extension processor node when:

  • You want better interoperability with existing HTTP inferencing systems.
  • Lower-performance data transfer is acceptable.
  • You want to use a simple request-response interface to Live Video Analytics.

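The request-response pattern can be sketched as a minimal HTTP endpoint, using only the Python standard library. This is not the official sample server; the `/score`-style path, port, and the hard-coded "inference" are placeholders for where a real model would run:

```python
# Minimal sketch of an HTTP inference endpoint of the kind the HTTP
# extension processor can call. Stdlib only; a real service would decode
# the frame and run actual model inference.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The extension processor POSTs one video frame per request
        # (e.g. as image/jpeg) to the configured endpoint.
        length = int(self.headers.get("Content-Length", 0))
        frame = self.rfile.read(length)  # raw image bytes (unused here)

        # Placeholder result following the inference metadata schema shape.
        result = {
            "inferences": [
                {
                    "type": "entity",
                    "entity": {
                        "tag": {"value": "object", "confidence": 0.5},
                        "box": {"l": 0.1, "t": 0.1, "w": 0.2, "h": 0.2},
                    },
                }
            ]
        }
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# To run: HTTPServer(("0.0.0.0", 8080), ScoreHandler).serve_forever()
```

Because each frame is a separate HTTP request-response round trip, this interface is simple to implement and interoperates with existing REST inference servers, at the cost of per-request overhead.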
gRPC extension processor

The gRPC extension processor enables extensibility scenarios using gRPC, a highly performant structured protocol. It is ideal for scenarios where performance and/or optimal resource utilization is a priority. The gRPC extension processor lets you get the full benefit of structured data definitions. gRPC achieves high content transfer performance through HTTP/2 transport and compact binary serialization with Protocol Buffers.

The gRPC extension processor can be used for sending media properties along with exchanging inference messages. So, use a gRPC extension processor node when you:

  • Want to use a structured contract (for example, structured messages for requests and responses).
  • Want to use Protocol Buffers (protobuf) as the underlying message interchange format for communication.
  • Want to communicate with the gRPC server over a single streaming session rather than the traditional request-response model, which needs a custom request handler to parse incoming requests and call the right implementation functions.
  • Want low-latency, high-throughput communication between Live Video Analytics and your module.
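The points above can be pictured as a bidirectional-streaming protobuf contract. The sketch below is illustrative only: the service, message, and field names are assumptions made for this example, not the official extension contract shipped with Live Video Analytics:

```proto
// Illustrative sketch of a bidirectional-streaming gRPC contract of the
// kind the gRPC extension processor uses. All names here are placeholders.
syntax = "proto3";

service MediaGraphExtension {
  // One long-lived stream per session: frames flow to the extension and
  // inference results flow back, avoiding per-frame request overhead.
  rpc ProcessMediaStream(stream MediaStreamMessage) returns (stream MediaStreamMessage);
}

message MediaStreamMessage {
  uint64 sequence_number = 1;
  oneof payload {
    MediaSample media_sample = 2;          // a video frame
    InferenceResult inference_result = 3;  // results for a frame
  }
}

message MediaSample {
  bytes content = 1;  // encoded or raw frame bytes
}

message InferenceResult {
  string json = 1;  // e.g. JSON following the inference metadata schema
}
```

A single stream session like this lets both sides push messages as they become available, which is where the latency and throughput advantage over HTTP request-response comes from.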

Use your inferencing model with Live Video Analytics

Media graph extensions allow you to run inference models of your choice on any available inference runtime, such as ONNX, TensorFlow, PyTorch, or others, in your own Docker container. The inferencing custom extension should be deployed alongside the Live Video Analytics edge module for best performance, and it is then invoked via the HTTP extension processor or the gRPC extension processor included in your graph topology. Additionally, the frequency of calls into your custom extension can be throttled by optionally adding a motion detection processor upstream of the media extension processor.
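A fragment of a graph topology wiring a motion detection processor upstream of an HTTP extension node might look roughly like the sketch below. The node names, endpoint URL, and image settings are placeholders, and the exact `@type` values and properties should be taken from the topology reference rather than from this example:

```json
{
  "processors": [
    {
      "@type": "#Microsoft.Media.MediaGraphMotionDetectionProcessor",
      "name": "motionDetection",
      "inputs": [ { "nodeName": "rtspSource" } ]
    },
    {
      "@type": "#Microsoft.Media.MediaGraphHttpExtension",
      "name": "inferenceClient",
      "endpoint": {
        "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
        "url": "http://my-extension:8080/score"
      },
      "image": {
        "scale": { "mode": "preserveAspectRatio", "width": "416", "height": "416" }
      },
      "inputs": [ { "nodeName": "motionDetection" } ]
    }
  ]
}
```

Because the extension node's input comes from the motion detection processor, frames are only relayed to the extension when motion is detected, which throttles calls into your model; the `image` settings show where frames can be scaled before submission.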

The diagram below depicts the high-level data flow:

[Diagram: AI inference service]

Samples

You can get started with one of our quickstarts, which illustrate live video analytics with a prebuilt extension service running at low frame rates with the HTTP extension processor or at high frame rates with the gRPC extension processor.

For advanced users, you can check out some of our Jupyter notebook samples for Live Video Analytics. These notebooks provide step-by-step instructions for media graph extensions, covering:

  • How to create a Docker container image of an extension service
  • How to deploy the extension service as a container alongside the Live Video Analytics container
  • How to use a Live Video Analytics media graph with an extension client and point it to the extension endpoint (HTTP/gRPC)