Compatibility level for Azure Stream Analytics jobs

This article describes the compatibility level option in Azure Stream Analytics. Stream Analytics is a managed service, with regular feature updates and performance improvements. Most of the service's runtime updates are automatically made available to end users.

However, some new functionality in the service may introduce a major change, such as a change in the behavior of an existing job, or a change in the way data is consumed in running jobs. You can keep your existing Stream Analytics jobs running without major changes by leaving the compatibility level setting lowered. When you're ready for the latest runtime behaviors, you can opt in by raising the compatibility level.

Choose a compatibility level

Compatibility level controls the runtime behavior of a Stream Analytics job.

Azure Stream Analytics currently supports three compatibility levels:

  • 1.0 - Original compatibility level, introduced during general availability of Azure Stream Analytics several years ago.
  • 1.1 - Previous behavior
  • 1.2 - Newest behavior, with the most recent improvements

When you create a new Stream Analytics job, it's a best practice to create it by using the latest compatibility level. Start your job design relying on the latest behaviors, to avoid added change and complexity later on.

Set the compatibility level

You can set the compatibility level for a Stream Analytics job in the Azure portal or by using the create job REST API call.
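For the REST API path, the compatibility level is part of the job properties in the create (PUT) request body. The following is a minimal sketch of that body; the subscription, resource group, job name, `api-version`, region, and SKU shown are placeholder assumptions, not values from this article.

```python
import json

# Hypothetical ARM identifiers -- replace with your own.
subscription = "00000000-0000-0000-0000-000000000000"
resource_group = "my-resource-group"
job_name = "my-streaming-job"

# PUT against the streamingjobs resource; the api-version shown is an assumption.
url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.StreamAnalytics/streamingjobs/{job_name}"
    "?api-version=2020-03-01"
)

# The compatibility level is set in the job properties at creation time.
body = {
    "location": "West US",
    "properties": {
        "sku": {"name": "Standard"},
        "compatibilityLevel": "1.2",
    },
}

payload = json.dumps(body)
```

You would send `payload` as the body of a PUT request to `url` with an Azure AD bearer token; the sketch only builds the request, it does not send it.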

To update the compatibility level of the job in the Azure portal:

  1. Use the Azure portal to locate your Stream Analytics job.
  2. Stop the job before updating the compatibility level. You can't update the compatibility level if your job is in a running state.
  3. Under the Configure heading, select Compatibility level.
  4. Choose the compatibility level value that you want.
  5. Select Save at the bottom of the page.

Stream Analytics compatibility level in the Azure portal

When you update the compatibility level, the T-compiler validates the job with the syntax that corresponds to the selected compatibility level.

Compatibility level 1.2

The following major changes are introduced in compatibility level 1.2:

AMQP messaging protocol

1.2 level: Azure Stream Analytics uses the Advanced Message Queueing Protocol (AMQP) to write to Service Bus queues and topics. AMQP enables you to build cross-platform, hybrid applications using an open standard protocol.

Geospatial functions

Previous levels: Azure Stream Analytics used Geography calculations.

1.2 level: Azure Stream Analytics allows you to compute Geometric projected geo coordinates. There's no change in the signature of the geospatial functions. However, their semantics are slightly different, allowing more precise computation than before.

Azure Stream Analytics supports geospatial reference data indexing. Reference data containing geospatial elements can be indexed for faster join computation.

The updated geospatial functions bring the full expressiveness of the Well Known Text (WKT) geospatial format. You can specify other geospatial components that weren't previously supported with GeoJSON.

For more information, see Updates to geospatial features in Azure Stream Analytics - Cloud and IoT Edge.

Parallel query execution for input sources with multiple partitions

Previous levels: Azure Stream Analytics queries required the use of the PARTITION BY clause to parallelize query processing across input source partitions.

1.2 level: If query logic can be parallelized across input source partitions, Azure Stream Analytics creates separate query instances and runs computations in parallel.
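Conceptually, the 1.2 behavior amounts to running one independent query instance per input partition. The following is an illustrative sketch only; the event shape and the per-partition aggregation are assumptions, not the engine's actual implementation.

```python
from collections import defaultdict

# Illustrative events, each tagged with the input partition it arrived on.
events = [
    {"partition": 0, "sensor": "a", "temp": 60},
    {"partition": 0, "sensor": "a", "temp": 64},
    {"partition": 1, "sensor": "b", "temp": 70},
]

def run_query_instance(partition_events):
    """One independent query instance: here, a simple max aggregation."""
    return max(e["temp"] for e in partition_events)

# Group events by input partition, then run an instance per partition.
# In the service these instances run in parallel; sequentially here for clarity.
by_partition = defaultdict(list)
for e in events:
    by_partition[e["partition"]].append(e)

results = {p: run_query_instance(evts) for p, evts in by_partition.items()}
# results == {0: 64, 1: 70}
```

Under previous levels, the same parallelism required the query itself to state PARTITION BY; under 1.2 the engine detects that the logic is partition-local.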

Native Bulk API integration with CosmosDB output

Previous levels: The upsert behavior was insert or merge.

1.2 level: Native Bulk API integration with CosmosDB output maximizes throughput and efficiently handles throttling requests. For more information, see the Azure Stream Analytics output to Azure Cosmos DB page.

The upsert behavior is insert or replace.

DateTimeOffset when writing to SQL output

Previous levels: DateTimeOffset types were adjusted to UTC.

1.2 level: DateTimeOffset is no longer adjusted.

Long when writing to SQL output

Previous levels: Values were truncated based on the target type.

1.2 level: Values that don't fit into the target type are handled according to the output error policy.
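The 1.2 behavior can be sketched as a range check against the SQL bigint type, with values that don't fit routed to the configured error policy instead of being silently truncated. The policy names and the helper below are illustrative assumptions, not the service's actual API.

```python
# SQL Server bigint range.
BIGINT_MIN, BIGINT_MAX = -2**63, 2**63 - 1

def write_long(value, policy="Drop"):
    """Route values that don't fit SQL bigint to the output error policy.

    Returns the value written, or None if the event was dropped.
    The policy names here ("Drop" or raise-to-retry) are illustrative.
    """
    if BIGINT_MIN <= value <= BIGINT_MAX:
        return value          # fits: written as-is, no silent truncation
    if policy == "Drop":
        return None           # event dropped per the error policy
    raise ValueError(f"{value} does not fit into bigint")

print(write_long(42))      # fits the target type
print(write_long(2**70))   # handled by the error policy, not truncated
```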

Record and array serialization when writing to SQL output

Previous levels: Records were written as "Record" and arrays were written as "Array".

1.2 level: Records and arrays are serialized in JSON format.
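The difference can be sketched with a nested event: previous levels wrote a type-name placeholder for nested columns, while 1.2 writes their JSON serialization. The event shape and helper functions below are illustrative assumptions.

```python
import json

event = {
    "deviceId": "d1",
    "location": {"lat": 47.6, "lon": -122.3},  # a record column
    "readings": [1, 2, 3],                      # an array column
}

def serialize_column_previous(value):
    # Previous levels: nested records/arrays became literal type names.
    if isinstance(value, dict):
        return "Record"
    if isinstance(value, list):
        return "Array"
    return value

def serialize_column_1_2(value):
    # 1.2 level: nested records/arrays are serialized as JSON.
    if isinstance(value, (dict, list)):
        return json.dumps(value)
    return value

old_row = {k: serialize_column_previous(v) for k, v in event.items()}
new_row = {k: serialize_column_1_2(v) for k, v in event.items()}
# old_row["location"] == "Record"; new_row["location"] is a JSON string
```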

Strict validation of prefix of functions

Previous levels: There was no strict validation of function prefixes.

1.2 level: Azure Stream Analytics has strict validation of function prefixes. Adding a prefix to a built-in function causes an error. For example, myprefix.ABS(…) isn't supported.

Adding a prefix to built-in aggregates also results in an error. For example, myprefix.SUM(…) isn't supported.

Using the prefix "system" for any user-defined function results in an error.

Disallow Array and Object as key properties in Cosmos DB output adapter

Previous levels: Array and Object types were supported as a key property.

1.2 level: Array and Object types are no longer supported as a key property.

Compatibility level 1.1

The following major changes are introduced in compatibility level 1.1:

Service Bus XML format

1.0 level: Azure Stream Analytics used DataContractSerializer, so the message content included XML tags. For example:

@\u0006string\b3\u0001{ "SensorId":"1", "Temperature":64\}\u0001

1.1 level: The message content contains the stream directly, with no additional tags. For example: { "SensorId":"1", "Temperature":64}
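The difference matters to consumers parsing the message body. A sketch using the sample payloads above; the 1.0 wrapper bytes below are illustrative of the DataContractSerializer framing, not an exact reproduction of the on-wire format.

```python
import json

# 1.1 level: the body is the raw event, directly parseable as JSON.
body_1_1 = '{ "SensorId":"1", "Temperature":64}'
event = json.loads(body_1_1)

# 1.0 level: the body carried DataContractSerializer framing around the
# JSON, so consumers had to strip the wrapper first (illustrative framing).
body_1_0 = '@\u0006string\u00083\u0001{ "SensorId":"1", "Temperature":64}\u0001'
start, end = body_1_0.index("{"), body_1_0.rindex("}") + 1
event_1_0 = json.loads(body_1_0[start:end])

# Both decode to the same event once the 1.0 framing is removed.
```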

Persisting case-sensitivity for field names

1.0 level: Field names were changed to lowercase when processed by the Azure Stream Analytics engine.

1.1 level: Case-sensitivity is persisted for field names when they are processed by the Azure Stream Analytics engine.


Persisting case-sensitivity isn't yet available for Stream Analytics jobs hosted by using the Edge environment. As a result, all field names are converted to lowercase if your job is hosted on Edge.
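The effect on field names can be sketched as follows; the event shape is an illustrative assumption.

```python
event = {"SensorId": "1", "Temperature": 64}

# 1.0 level (and Edge-hosted jobs): field names are lowercased by the engine.
processed_1_0 = {name.lower(): value for name, value in event.items()}

# 1.1 level: field names pass through with their original casing.
processed_1_1 = dict(event)

# processed_1_0 has key "sensorid"; processed_1_1 keeps "SensorId"
```

This matters downstream: a 1.0-level query or output that referenced "sensorid" would need updating when the job moves to 1.1, because the original casing now survives.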


FloatNaNDeserializationDisabled

1.0 level: The CREATE TABLE command did not filter events with NaN (Not-a-Number; for example, Infinity, -Infinity) in a FLOAT column type, even though these values are out of the documented range for such numbers.

1.1 level: CREATE TABLE allows you to specify a strong schema. The Stream Analytics engine validates that the data conforms to this schema. With this model, the command can filter events with NaN values.
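The 1.1 filtering behavior can be sketched as a validation pass over a FLOAT column; the event shape and column name below are illustrative assumptions.

```python
import math

events = [
    {"temp": 64.0},
    {"temp": float("nan")},
    {"temp": float("inf")},
    {"temp": 70.5},
]

def conforms_to_float_schema(value):
    """A strong FLOAT schema excludes NaN, Infinity, and -Infinity."""
    return isinstance(value, float) and math.isfinite(value)

# 1.1 level: events that fail schema validation are filtered out.
valid = [e for e in events if conforms_to_float_schema(e["temp"])]
# valid == [{"temp": 64.0}, {"temp": 70.5}]
```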

Disable automatic conversion of datetime strings to DateTime type at ingress for JSON

1.0 level: The JSON parser would automatically convert string values with date/time/zone information to the DATETIME type at ingress, so the value immediately loses its original formatting and timezone information. Because this is done at ingress, even if that field was not used in the query, it's converted into a UTC DateTime.

1.1 level: There is no automatic conversion of string values with date/time/zone information to the DATETIME type. As a result, timezone information and original formatting are kept. However, if an NVARCHAR(MAX) field is used in the query as part of a DATETIME expression (the DATEADD function, for example), it's converted to the DATETIME type to perform the computation, and it loses its original form.
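The difference can be sketched with an ISO 8601 string carrying an offset; the sample timestamp is an illustrative assumption.

```python
from datetime import datetime, timezone

raw = "2024-05-01T10:30:00+02:00"

# 1.0 level: converted to a UTC DATETIME at ingress; the original
# formatting and the +02:00 offset are lost.
ingested_1_0 = datetime.fromisoformat(raw).astimezone(timezone.utc)

# 1.1 level: the value stays a string until a DATETIME expression
# (such as DATEADD) actually needs it, so the original form is kept.
ingested_1_1 = raw

# ingested_1_0.isoformat() == "2024-05-01T08:30:00+00:00"
# ingested_1_1 still reads "2024-05-01T10:30:00+02:00"
```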

Next steps