Azure Time Series Insights Gen2 Event Sources

Your Azure Time Series Insights Gen2 environment can have up to two streaming event sources. Two types of Azure resources are supported as inputs:

  • Azure IoT Hub
  • Azure Event Hubs

Events must be sent as UTF-8 encoded JSON.
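As a minimal sketch, an event body meeting this requirement could be produced as follows; the field names are illustrative, not prescribed by the service:

```python
import json

# An illustrative telemetry event; the property names are hypothetical.
event = {
    "deviceId": "sensor-01",
    "temperature": 22.5,
    "time": "2021-03-01T10:00:00Z",
}

# Serialize to a UTF-8 encoded JSON payload, as the streaming pipeline requires.
payload = json.dumps(event).encode("utf-8")

print(payload.decode("utf-8"))
```

The resulting `payload` bytes are what you would hand to your Event Hubs or IoT Hub client library when sending the message.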

Create or edit event sources

Your event source resource(s) can live in the same Azure subscription as your Azure Time Series Insights Gen2 environment or in a different subscription. You can use the Azure portal, Azure CLI, ARM templates, or the REST API to create, edit, or remove your environment's event sources.

When you connect an event source, your Azure Time Series Insights Gen2 environment reads all of the events currently stored in your IoT Hub or Event Hub, starting with the oldest event.


  • You may experience high initial latency when attaching an event source to your Azure Time Series Insights Gen2 environment.
  • Event source latency depends on the number of events currently in your IoT Hub or Event Hub.
  • High latency will subside after event source data is first ingested. Submit a support ticket through the Azure portal if you experience ongoing high latency.

Streaming ingestion best practices

  • Always create a unique consumer group for your Azure Time Series Insights Gen2 environment to consume data from your event source. Reusing consumer groups can cause random disconnects and may result in data loss.

  • Configure your Azure Time Series Insights Gen2 environment and your IoT Hub and/or Event Hubs in the same Azure region. Although it is possible to configure an event source in a separate region, this scenario is not supported and high availability cannot be guaranteed.

  • Do not exceed your environment's throughput rate limit or per-partition limit.

  • Configure a lag alert so that you're notified if your environment has trouble processing data. See Production workloads below for suggested alert conditions.

  • Use streaming ingestion for near-real-time and recent data only; streaming historical data is not supported.

  • Understand how properties will be escaped and how JSON data will be flattened and stored.

  • Follow the principle of least privilege when providing event source connection strings. For Event Hubs, configure a shared access policy with the send claim only; for IoT Hub, use the service connect permission only.

Production workloads

In addition to the best practices above, we recommend that you implement the following for business-critical workloads.

  • Increase your IoT Hub or Event Hub data retention time to the maximum of 7 days.

  • Create environment alerts in the Azure portal. Alerts based on platform metrics let you validate end-to-end pipeline behavior. Suggested alert conditions:

    • IngressReceivedMessagesTimeLag is greater than 5 minutes
    • IngressReceivedBytes is 0
  • Keep your ingestion load balanced across your IoT Hub or Event Hub partitions.

Historical Data Ingestion

Using the streaming pipeline to import historical data is not currently supported in Azure Time Series Insights Gen2. If you need to import past data into your environment, follow these guidelines:

  • Do not stream live and historical data in parallel. Ingesting out-of-order data will result in degraded query performance.
  • Ingest historical data in time-ordered fashion for best performance.
  • Stay within the ingestion throughput rate limits below.
  • Disable Warm Store if the data is older than your Warm Store retention period.
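The time-ordering guideline above can be sketched as a simple pre-processing step; the event batch and its `time` property are illustrative:

```python
from operator import itemgetter

# An illustrative batch of historical events; "time" is the assumed
# timestamp property for this environment.
events = [
    {"time": "2021-03-01T10:05:00Z", "value": 3},
    {"time": "2021-03-01T10:00:00Z", "value": 1},
    {"time": "2021-03-01T10:02:00Z", "value": 2},
]

# Sort by the ISO 8601 timestamp string before sending. For UTC ("Z")
# timestamps of uniform precision, lexicographic order equals time order.
ordered = sorted(events, key=itemgetter("time"))

print([e["value"] for e in ordered])  # → [1, 2, 3]
```

Sorting each batch before it is sent keeps the ingestion stream time-ordered, which is what the guideline asks for.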

Event source timestamp

When configuring an event source, you'll be asked to provide a timestamp ID property. The timestamp property is used to track events over time; it is the time that will be used as the $event.$ts in the Query APIs and for plotting series in the Azure Time Series Insights Explorer. If no property is provided at creation time, or if the timestamp property is missing from an event, the event's IoT Hub or Event Hubs enqueued time will be used as the default. Timestamp property values are stored in UTC.

In general, users will opt to customize the timestamp property and use the time when the sensor or tag generated the reading rather than the default hub enqueued time. This is particularly necessary when devices have intermittent connectivity loss and a batch of delayed messages is forwarded to Azure Time Series Insights Gen2.

If your custom timestamp is within a nested JSON object or an array, you'll need to provide the correct property name following our flattening and escaping naming conventions. For example, if the timestamp is nested inside a values object in the JSON payload, the event source timestamp should be entered as "values.time".
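The dot-notation convention can be illustrated with a small sketch; the `flatten` helper below is illustrative and not the service's actual implementation, but it mirrors how a nested property maps to a name like `values.time`:

```python
def flatten(obj, prefix=""):
    """Flatten nested JSON objects into dot-separated property names.

    An illustrative sketch of dot-notation flattening, not the exact
    algorithm Azure Time Series Insights Gen2 uses internally.
    """
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested objects, extending the dotted prefix.
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

# A hypothetical payload whose timestamp is nested under "values".
payload = {"deviceId": "sensor-01", "values": {"time": "2021-03-01T10:00:00Z", "temp": 22.5}}

print(flatten(payload))
```

Under this convention, the nested timestamp surfaces as the property name `values.time`, which is the value you would enter as the event source timestamp.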

Time zone offsets

Timestamps must be sent in ISO 8601 format and will be stored in UTC. If a time zone offset is provided, the offset will be applied and then the time stored and returned in UTC format. If the offset is improperly formatted, it will be ignored. In situations where your solution might not have context of the original offset, you can send the offset data in an additional separate event property to ensure that it's preserved and that your application can reference it in a query response.

The time zone offset should be formatted as one of the following:
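The offset-to-UTC behavior described above can be sketched with Python's standard library; the timestamp value is illustrative, and a `±hh:mm` offset is assumed as one of the accepted forms:

```python
from datetime import datetime, timezone

# An illustrative ISO 8601 timestamp carrying a +02:00 offset.
local_ts = "2021-03-01T12:00:00+02:00"

# Parse the timestamp, apply the offset, and normalize to UTC,
# mirroring how timestamp values are stored by the service.
parsed = datetime.fromisoformat(local_ts)
utc = parsed.astimezone(timezone.utc)

print(utc.isoformat())  # → 2021-03-01T10:00:00+00:00
```

A `12:00` reading at offset `+02:00` is stored and returned as `10:00` UTC, which is why preserving the original offset in a separate property can be useful for display purposes.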


Next steps