Features and terminology in Azure Event Hubs

Azure Event Hubs is a scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability. See What is Event Hubs? for a high-level overview.

This article builds on the information in the overview article, and provides technical and implementation details about Event Hubs components and features.

Namespace

An Event Hubs namespace provides DNS-integrated network endpoints and a range of access control and network integration management features such as IP filtering, virtual network service endpoints, and Private Link, and it is the management container for one or more Event Hub instances (or topics, in Kafka parlance).

Event publishers

Any entity that sends data to an event hub is an event publisher (synonymous with event producer). Event publishers can publish events using HTTPS, AMQP 1.0, or the Kafka protocol. Event publishers use Azure Active Directory based authorization with OAuth2-issued JWT tokens, or an event hub-specific Shared Access Signature (SAS) token, to gain publishing access.
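For illustration, here is a minimal sketch of an Azure Active Directory–authenticated publisher using the azure-eventhub and azure-identity Python packages; the namespace and event hub names are placeholders, and this is one possible client setup rather than the only supported pattern.

```python
from azure.identity import DefaultAzureCredential
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder namespace and event hub names.
producer = EventHubProducerClient(
    fully_qualified_namespace="<my namespace>.servicebus.chinacloudapi.cn",
    eventhub_name="<event hub name>",
    credential=DefaultAzureCredential())  # obtains an OAuth2/JWT token from Azure AD

with producer:
    batch = producer.create_batch()
    batch.add(EventData("hello, event hub"))
    producer.send_batch(batch)
```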

Publishing an event

You can publish an event via AMQP 1.0, the Kafka protocol, or HTTPS. The Event Hubs service provides a REST API and .NET, Java, Python, JavaScript, and Go client libraries for publishing events to an event hub. For other runtimes and platforms, you can use any AMQP 1.0 client, such as Apache Qpid.

The choice between AMQP and HTTPS depends on the usage scenario. AMQP requires the establishment of a persistent bidirectional socket in addition to transport level security (TLS) or SSL/TLS. AMQP has higher network costs when initializing the session, but HTTPS requires additional TLS overhead for every request. AMQP has significantly higher performance for frequent publishers and can achieve much lower latencies when used with asynchronous publishing code.

You can publish events individually or in batches. A single publication has a limit of 1 MB, regardless of whether it is a single event or a batch. Publishing events larger than this threshold will be rejected.
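As a hedged sketch of batched publishing with the azure-eventhub Python SDK (the connection string and hub name are placeholders): the SDK sizes each batch to the service limit and raises ValueError when adding an event would exceed it, at which point you send the full batch and start a new one.

```python
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<connection string>", eventhub_name="<event hub name>")

with producer:
    batch = producer.create_batch()               # sized to the service's publication limit
    for payload in ("event-1", "event-2", "event-3"):
        try:
            batch.add(EventData(payload))
        except ValueError:                        # adding would exceed the batch size limit
            producer.send_batch(batch)
            batch = producer.create_batch()
            batch.add(EventData(payload))
    producer.send_batch(batch)                    # flush the final partial batch
```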

Event Hubs throughput is scaled by using partitions and throughput-unit allocations (see below). It is a best practice for publishers to remain unaware of the specific partitioning model chosen for an event hub, and to specify only a partition key that is used to consistently assign related events to the same partition.

Partition key

Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival. If partition keys are used with publisher policies, then the identity of the publisher and the value of the partition key must match. Otherwise, an error occurs.

Event retention

Published events are removed from an event hub based on a configurable, time-based retention policy. Here are a few important points:

  • The default value and shortest possible retention period is 1 day (24 hours).
  • For Event Hubs Standard, the maximum retention period is 7 days.
  • For Event Hubs Dedicated, the maximum retention period is 90 days.
  • If you change the retention period, it applies to all messages, including messages that are already in the event hub.

Event Hubs retains events for a configured retention time that applies across all partitions. Events are automatically removed when the retention period has been reached. If you specify a retention period of one day, the event becomes unavailable exactly 24 hours after it has been accepted. You cannot explicitly delete events.

If you need to archive events beyond the allowed retention period, you can have them automatically stored in Azure Storage or Azure Data Lake by turning on the Event Hubs Capture feature.

The reason Event Hubs limits data retention based on time is to prevent large volumes of historic customer data from getting trapped in a deep store that is only indexed by a timestamp and only allows sequential access. The architectural philosophy here is that historic data needs richer indexing and more direct access than the real-time eventing interface that Event Hubs or Kafka provide. Event stream engines are not well suited to play the role of data lakes or long-term archives for event sourcing.

Note

Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a permanent store for infinitely held event streams.

The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream.

Event Hubs Capture integrates directly with Azure Blob Storage and Azure Data Lake Storage.

Publisher policy

Event Hubs enables granular control over event publishers through publisher policies. Publisher policies are run-time features designed to facilitate large numbers of independent event publishers. With publisher policies, each publisher uses its own unique identifier when publishing events to an event hub, using the following mechanism:

//<my namespace>.servicebus.chinacloudapi.cn/<event hub name>/publishers/<my publisher name>

You don't have to create publisher names ahead of time, but they must match the SAS token used when publishing an event, in order to ensure independent publisher identities. When using publisher policies, the PartitionKey value is set to the publisher name. To work properly, these values must match.
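To make the addressing scheme concrete, the following hedged Python sketch posts a single event to a publisher-specific endpoint over HTTPS with the requests library. The namespace, event hub, publisher name, and SAS token are placeholders; the Content-Type header and the 201 success status follow the commonly documented REST send-event call and should be verified against the current API reference.

```python
import requests

namespace = "<my namespace>"          # placeholder values
eventhub = "<event hub name>"
publisher = "<my publisher name>"
sas_token = "<SAS token scoped to this publisher>"

url = (f"https://{namespace}.servicebus.chinacloudapi.cn/"
       f"{eventhub}/publishers/{publisher}/messages")

response = requests.post(
    url,
    headers={
        "Authorization": sas_token,
        "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
    },
    data=b'{"temperature": 21.5}')

print(response.status_code)  # 201 indicates the event was accepted
```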

Capture

Event Hubs Capture enables you to automatically capture the streaming data in Event Hubs and save it to your choice of either a Blob storage account or an Azure Data Lake Service account. You can enable Capture from the Azure portal, and specify a minimum size and time window to perform the capture. Using Event Hubs Capture, you specify your own Azure Blob storage account and container, or Azure Data Lake Service account, one of which is used to store the captured data. Captured data is written in the Apache Avro format.
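As a rough sketch of consuming captured data, the snippet below reads one captured file with the Apache avro Python package; the file name is a placeholder, and the record field names (Body, Offset, SequenceNumber, EnqueuedTimeUtc) reflect the Capture Avro schema as commonly documented, so verify them against your own captured files.

```python
from avro.datafile import DataFileReader
from avro.io import DatumReader

# Placeholder path to a file produced by Event Hubs Capture.
with open("<captured-file>.avro", "rb") as f:
    reader = DataFileReader(f, DatumReader())
    for record in reader:
        # Each record carries the event body plus capture metadata.
        print(record["Offset"], record["SequenceNumber"], record["EnqueuedTimeUtc"])
        body = record["Body"]  # raw event payload as bytes
    reader.close()
```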

Partitions

Event Hubs organizes sequences of events sent to an event hub into one or more partitions. As newer events arrive, they're added to the end of this sequence.


A partition can be thought of as a "commit log." Partitions hold event data that contains the body of the event, a user-defined property bag describing the event, and metadata such as its offset in the partition, its number in the stream sequence, and the service-side timestamp at which it was accepted.

Diagram showing a sequence of events from old to new.

Advantages of using partitions

Event Hubs is designed to help with processing of large volumes of events, and partitioning helps with that in two ways:

  • Even though Event Hubs is a PaaS service, there's a physical reality underneath. Maintaining a log that preserves the order of events requires that these events are kept together in the underlying storage and its replicas, and that results in a throughput ceiling for such a log. Partitioning allows multiple parallel logs to be used for the same event hub, multiplying the available raw IO throughput capacity.
  • Your own applications must be able to keep up with processing the volume of events that are being sent into an event hub. This may be complex and requires substantial, scaled-out, parallel processing capacity. The capacity of a single process to handle events is limited, so you need several processes. Partitions are how your solution feeds those processes while still ensuring that each event has a clear processing owner.

Number of partitions

The number of partitions is specified at creation and must be between 1 and 32 in Event Hubs Standard. The partition count can be up to 2000 partitions per capacity unit in Event Hubs Dedicated.

We recommend that you choose at least as many partitions as you expect to require in sustained throughput units (TUs) during the peak load of your application for that particular event hub. You should calculate with a single partition having a throughput capacity of 1 TU (1 MB per second ingress, 2 MB per second egress). You can scale the TUs on your namespace or the capacity units of your cluster independently of the partition count. An event hub with 32 partitions and an event hub with 1 partition incur exactly the same cost when the namespace is set to 1 TU capacity.
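As a worked example under that guidance: if you expect a sustained peak ingress of about 16 MB per second, you would provision at least 16 TUs on the namespace and create the event hub with at least 16 partitions, so that no single partition log becomes the throughput bottleneck.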

The partition count for an event hub in a dedicated Event Hubs cluster can be increased after the event hub has been created. However, the distribution of streams across partitions changes when that happens, because the mapping of partition keys to partitions changes, so you should try hard to avoid such changes if the relative order of events matters in your application.

Setting the number of partitions to the maximum permitted value is tempting, but always keep in mind that your event streams need to be structured such that you can indeed take advantage of multiple partitions. If you need absolute order preservation across all events or only a handful of substreams, you may not be able to take advantage of many partitions. Also, many partitions make the processing side more complex.

Mapping of events to partitions

You can use a partition key to map incoming event data into specific partitions for the purpose of data organization. The partition key is a sender-supplied value passed into an event hub. It is processed through a static hashing function, which creates the partition assignment. If you don't specify a partition key when publishing an event, a round-robin assignment is used.

The event publisher is only aware of its partition key, not the partition to which the events are published. This decoupling of key and partition insulates the sender from needing to know too much about the downstream processing. A per-device or user-unique identity makes a good partition key, but other attributes such as geography can also be used to group related events into a single partition.

Specifying a partition key enables keeping related events together in the same partition and in the exact order in which they were sent. The partition key is some string that is derived from your application context and identifies the interrelationship of the events. A sequence of events identified by a partition key is a stream. A partition is a multiplexed log store for many such streams.
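A minimal sketch of publishing with a partition key, assuming the azure-eventhub Python package and placeholder connection details: every event in the batch shares the key, so the service assigns them all to the same partition and preserves their send order. The "device-42" key is a hypothetical per-device identity.

```python
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<connection string>", eventhub_name="<event hub name>")

with producer:
    # Hypothetical per-device identity used as the partition key.
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData('{"reading": 1}'))
    batch.add(EventData('{"reading": 2}'))
    producer.send_batch(batch)  # both events land in the same partition, in order
```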

Note

While you can send events directly to partitions, we don't recommend it, especially when high availability is important to you. Doing so downgrades the availability of an event hub to the partition level. For more information, see Availability and consistency.

SAS tokens

Event Hubs uses Shared Access Signatures, which are available at the namespace and event hub level. A SAS token is generated from a SAS key and is an SHA hash of a URL, encoded in a specific format. Using the name of the key (policy) and the token, Event Hubs can regenerate the hash and thus authenticate the sender. Normally, SAS tokens for event publishers are created with only send privileges on a specific event hub. This SAS token URL mechanism is the basis for publisher identification introduced in the publisher policy. For more information about working with SAS, see Shared Access Signature Authentication with Service Bus.
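For illustration, the sketch below generates a SAS token with Python's standard library, mirroring the commonly documented scheme (an HMAC-SHA256 signature over the URL-encoded resource URI and expiry, formatted with sr, sig, se, and skn fields); the resource URI, policy name, and key are placeholders.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, policy_name, key, ttl_seconds=3600):
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}"
            f"&se={expiry}&skn={policy_name}")

# Placeholder resource URI and a send-only policy name.
token = generate_sas_token(
    "https://<my namespace>.servicebus.chinacloudapi.cn/<event hub name>",
    "send-policy", "<SAS key>")
```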

Event consumers

Any entity that reads event data from an event hub is an event consumer. All Event Hubs consumers connect via the AMQP 1.0 session, and events are delivered through the session as they become available. The client does not need to poll for data availability.

Consumer groups

The publish/subscribe mechanism of Event Hubs is enabled through consumer groups. A consumer group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets.

In a stream processing architecture, each downstream application equates to a consumer group. If you want to write event data to long-term storage, then that storage writer application is a consumer group. Complex event processing can then be performed by another, separate consumer group. You can only access partitions through a consumer group. There is always a default consumer group in an event hub, and you can create up to 20 consumer groups for a Standard tier event hub.

There can be at most 5 concurrent readers on a partition per consumer group; however, it is recommended that there is only one active receiver on a partition per consumer group. Within a single partition, each reader receives all of the messages. If you have multiple readers on the same partition, then you process duplicate messages. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.

Some clients offered by the Azure SDKs are intelligent consumer agents that automatically manage the details of ensuring that each partition has a single reader and that all partitions for an event hub are being read from. This allows your code to focus on processing the events being read from the event hub, so it can ignore many of the details of the partitions. For more information, see Connect to a partition.
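As a hedged sketch using the azure-eventhub Python package (the connection string, hub name, and consumer group are placeholders), the EventHubConsumerClient below reads on behalf of one consumer group across all partitions and hands each event to a callback.

```python
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Each invocation carries the partition it came from and the event itself.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    conn_str="<connection string>",
    consumer_group="$Default",            # or a consumer group you created
    eventhub_name="<event hub name>")

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = from the start
```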

The following examples show the consumer group URI convention:

//<my namespace>.servicebus.chinacloudapi.cn/<event hub name>/<Consumer Group #1>
//<my namespace>.servicebus.chinacloudapi.cn/<event hub name>/<Consumer Group #2>

The following figure shows the Event Hubs stream processing architecture:

Event Hubs stream processing architecture

Stream offsets

An offset is the position of an event within a partition. You can think of an offset as a client-side cursor. The offset is a byte numbering of the event. This offset enables an event consumer (reader) to specify a point in the event stream from which they want to begin reading events. You can specify the offset as a timestamp or as an offset value. Consumers are responsible for storing their own offset values outside of the Event Hubs service. Within a partition, each event includes an offset.
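A minimal sketch of starting a reader at a chosen position with the azure-eventhub Python package (placeholder connection details): starting_position accepts an offset string, a sequence number, or a timestamp, and each delivered event exposes its own offset.

```python
from datetime import datetime, timezone
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    conn_str="<connection string>",
    consumer_group="$Default",
    eventhub_name="<event hub name>")

def on_event(partition_context, event):
    print(event.offset, event.sequence_number, event.body_as_str())

with client:
    # Read partition "0" starting from events enqueued after a given timestamp.
    client.receive(
        on_event=on_event,
        partition_id="0",
        starting_position=datetime(2021, 1, 1, tzinfo=timezone.utc))
```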


Checkpointing

Checkpointing is a process by which readers mark or commit their position within a partition event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group. This responsibility means that for each consumer group, each partition reader must keep track of its current position in the event stream, and can inform the service when it considers the data stream complete.

If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint that was previously submitted by the last reader of that partition in that consumer group. When the reader connects, it passes the offset to the event hub to specify the location at which to start reading. In this way, you can use checkpointing both to mark events as "complete" by downstream applications and to provide resiliency if a failover occurs between readers running on different machines. It is possible to return to older data by specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both failover resiliency and event stream replay.
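A hedged sketch of checkpointing with the azure-eventhub and azure-eventhub-checkpointstoreblob Python packages (the storage and Event Hubs connection strings, container, and hub name are placeholders): offsets are persisted to a blob container outside the service, and update_checkpoint commits the reader's position for its consumer group.

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Placeholder storage account and container used to persist offsets.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<storage connection string>", container_name="<container name>")

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event hubs connection string>",
    consumer_group="$Default",
    eventhub_name="<event hub name>",
    checkpoint_store=checkpoint_store)

def on_event(partition_context, event):
    # ... process the event, then commit its position for this consumer group.
    partition_context.update_checkpoint(event)

with client:
    client.receive(on_event=on_event)
```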

Important

Offsets are provided by the Event Hubs service. It is the responsibility of the consumer to checkpoint as events are processed.

Note

If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of the Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running Event Hubs on Azure Stack Hub version 2002, the highest available version for the Storage service is 2017-11-09. In this case, you need to use code to target Storage service API version 2017-11-09. For an example of how to target a specific Storage API version, see the samples on GitHub.

Common consumer tasks

All Event Hubs consumers connect via an AMQP 1.0 session, a state-aware bidirectional communication channel. Each partition has an AMQP 1.0 session that facilitates the transport of events segregated by partition.

Connect to a partition

When connecting to partitions, it's common practice to use a leasing mechanism to coordinate reader connections to specific partitions. This way, it's possible for every partition in a consumer group to have only one active reader. Checkpointing, leasing, and managing readers are simplified by using the clients within the Event Hubs SDKs, which act as intelligent consumer agents.

Read events

After an AMQP 1.0 session and link is opened for a specific partition, events are delivered to the AMQP 1.0 client by the Event Hubs service. This delivery mechanism enables higher throughput and lower latency than pull-based mechanisms such as HTTP GET. As events are sent to the client, each event data instance contains important metadata such as the offset and sequence number that are used to facilitate checkpointing on the event sequence.

Event data:

  • Offset
  • Sequence number
  • Body
  • User properties
  • System properties

It is your responsibility to manage the offset.
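To show how these fields surface in code, here is a hedged sketch of an on_event callback (as used with the azure-eventhub Python consumer shown earlier) reading the metadata and body of each delivered event.

```python
def on_event(partition_context, event):
    print(event.offset)             # position of the event within its partition
    print(event.sequence_number)    # the event's number in the stream sequence
    print(event.body_as_str())      # event body
    print(event.properties)         # user-defined properties
    print(event.system_properties)  # service-set metadata such as enqueued time
```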

Next steps

For more information about Event Hubs, visit the following links: