Outputs from Azure Stream Analytics

An Azure Stream Analytics job consists of an input, query, and an output. There are several output types to which you can send transformed data. This article lists the supported Stream Analytics outputs. When you design your Stream Analytics query, refer to the name of the output by using the INTO clause. You can use a single output per job, or multiple outputs per streaming job (if you need them) by adding multiple INTO clauses to the query.
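
For example, a minimal query reads from one input and writes the selected columns to the output named in the INTO clause. In the following sketch, [callstream] and [blobOutput] are hypothetical input and output names, and the column names are illustrative:

    -- Select a few columns from the input and route them to one output
    SELECT
        CallRecTime,
        SwitchNum
    INTO
        [blobOutput]
    FROM
        [callstream]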

To create, edit, and test Stream Analytics job outputs, you can use the Azure portal, Azure PowerShell, .NET API, and REST API.

Some output types support partitioning, as shown in the following table.

All outputs support batching, but only some support setting the output batch size explicitly. For more information, see the Output batch size section.

| Output type | Partitioning | Security |
|-------------|--------------|----------|
| Azure Functions | Yes | Access key |
| Blob storage and Azure Data Lake Gen 2 | Yes | Access key, Managed Identity |
| Azure Data Lake Storage Gen 2 | Yes | Microsoft Entra user, Managed Identity |
| Azure Event Hubs | Yes, need to set the partition key column in output configuration. | Access key, Managed Identity |
| Kafka (preview) | Yes, need to set the partition key column in output configuration. | Access key, Managed Identity |
| Azure Database for PostgreSQL | Yes | Username and password auth |
| Azure Service Bus queues | Yes | Access key, Managed Identity |
| Azure Service Bus topics | Yes | Access key, Managed Identity |
| Azure SQL Database | Yes, optional. | SQL user auth, Managed Identity |

Important

Azure Stream Analytics uses Insert or Replace API by design. This operation replaces an existing entity or inserts a new entity if it does not exist in the table.

Partitioning

Stream Analytics supports partitioning for the outputs listed in the preceding table. For more information on partition keys and the number of output writers, see the article for the specific output type you're interested in. Articles for output types are linked in the previous section.

Additionally, for more advanced tuning of the partitions, the number of output writers can be controlled by using an INTO <partition count> (see INTO) clause in your query, which can be helpful in achieving a desired job topology. If your output adapter isn't partitioned, a lack of data in one input partition causes a delay of up to the late arrival amount of time. In such cases, the output is merged to a single writer, which might cause bottlenecks in your pipeline. To learn more about the late arrival policy, see Azure Stream Analytics event order considerations.
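
For example, the following sketch repartitions an input stream and sets an explicit partition count of 10, so up to 10 output writers can be used. The [input] and [output] names and the DeviceId column are placeholder assumptions:

    -- Repartition the stream by DeviceId into 10 partitions before writing
    SELECT *
    INTO [output]
    FROM [input]
    PARTITION BY DeviceId
    INTO 10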

Output batch size

All outputs support batching, but only some support setting the batch size explicitly. Azure Stream Analytics uses variable-size batches to process events and write to outputs. Typically, the Stream Analytics engine doesn't write one message at a time; it uses batches for efficiency. When the rate of both the incoming and outgoing events is high, Stream Analytics uses larger batches. When the egress rate is low, it uses smaller batches to keep latency low.

Avro and Parquet file splitting behavior

A Stream Analytics query can generate multiple schemas for a given output. The list of columns projected, and their type, can change on a row-by-row basis. By design, the Avro and Parquet formats don't support variable schemas in a single file.

The following behaviors might occur when directing a stream with variable schemas to an output using these formats:

  • If the schema change can be detected, the current output file is closed, and a new one is initialized on the new schema. Splitting files this way severely slows down the output when schema changes happen frequently and can significantly degrade the overall performance of the job.
  • If the schema change can't be detected, the row is most likely rejected, and the job gets stuck because the row can't be output. Nested columns and multi-type arrays are situations that aren't discovered and get rejected.

We recommend that you consider outputs using the Avro or Parquet format to be strongly typed, or schema-on-write, and queries targeting them to be written as such (explicit conversions and projections for a uniform schema).
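
For example, the following sketch makes the output schema uniform with explicit projections and conversions. The column names, types, and the [parquetOutput] name are illustrative assumptions:

    -- Pin the projected columns and their types so every row shares one schema
    SELECT
        CAST(DeviceId AS nvarchar(max)) AS DeviceId,
        TRY_CAST(Temperature AS float) AS Temperature,
        CAST(EventTime AS datetime) AS EventTime
    INTO [parquetOutput]
    FROM [input]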

If multiple schemas need to be generated, consider creating multiple outputs and splitting records into each destination by using a WHERE clause.
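
For example, the following sketch splits a stream that carries two record shapes into two destinations. The SensorType discriminator column and the output names are hypothetical:

    -- Route temperature readings to one output...
    SELECT DeviceId, Temperature
    INTO [temperatureOutput]
    FROM [input]
    WHERE SensorType = 'temperature'

    -- ...and humidity readings to another, so each output sees a single schema
    SELECT DeviceId, Humidity
    INTO [humidityOutput]
    FROM [input]
    WHERE SensorType = 'humidity'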

Parquet output batching window properties

When you use Azure Resource Manager template deployment or the REST API, the two batching window properties are:

  1. timeWindow

    The maximum wait time per batch. The value should be a string in Timespan format, for example, 00:02:00 for two minutes. After this time, the batch is written to the output even if the minimum rows requirement isn't met. The default value is 1 minute, and the allowed maximum is 2 hours. If your blob output has path pattern frequency, the wait time can't be higher than the partition time range.

  2. sizeWindow

    The minimum number of rows per batch. For Parquet, every batch creates a new file. The current default value is 2,000 rows, and the allowed maximum is 10,000 rows.

These batching window properties are supported only by API version 2017-04-01-preview or higher. Here's an example of the JSON payload for a REST API call:

"type": "stream",
      "serialization": {
        "type": "Parquet",
        "properties": {}
      },
      "timeWindow": "00:02:00",
      "sizeWindow": "2000",
      "datasource": {
        "type": "Microsoft.Storage/Blob",
        "properties": {
          "storageAccounts" : [
          {
            "accountName": "{accountName}",
            "accountKey": "{accountKey}",
          }
          ],

Next steps