Ingest data from a container/ADLS into Azure Data Explorer

One-click ingestion enables you to quickly ingest data in JSON, CSV, and other formats into a table and easily create mapping structures. The data can be ingested from storage, from a local file, or from a container, as either a one-time or a continuous ingestion process.

This document describes using the intuitive one-click wizard to ingest CSV data from a container into a new table. Ingestion can be done as a one-time operation, or as a continuous process by setting up an Event Grid ingestion pipeline that responds to new files in the source container and ingests qualifying data into your table. This process can be adapted slightly to cover a variety of use cases.

For an overview of one-click ingestion, see One-click ingestion. For information about ingesting data into an existing table in Azure Data Explorer, see One-click ingestion to an existing table.


Ingest new data

  1. In the left menu of the Web UI, right-click a database and select Ingest new data.

    Ingest new data.

  2. In the Ingest new data window, the Destination tab is selected. The Cluster and Database fields are automatically populated.

    1. To add a new connection to a cluster, select Add cluster connection below the auto-populated cluster name.

      Ingest new data tab- add a new cluster connection.

    2. In the popup window, enter the Connection URI for the cluster you're connecting to.

    3. Enter a Display Name that you want to use to identify this cluster, and select Add.

      Add cluster URI and description to add a new cluster connection in Azure Data Explorer.

  3. In Table, check Create new table and enter a name for the new table. You can use alphanumeric characters, hyphens, and underscores. Special characters aren't supported.


    Table names must be between 1 and 1024 characters.

    Create a new table one-click ingestion.

  4. Select Next: Source.
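The naming rules in step 3 can be checked before you start the wizard. The following is a minimal sketch in Python; the helper name is illustrative, and the regex is derived from the rules stated above (alphanumeric characters, hyphens, and underscores; 1 to 1024 characters):

```python
import re

# Illustrative pattern derived from the stated rules:
# alphanumeric characters, hyphens, and underscores, 1-1024 characters.
TABLE_NAME_RE = re.compile(r"[A-Za-z0-9_-]{1,1024}")

def is_valid_table_name(name: str) -> bool:
    """Return True if the name satisfies the stated table-name rules."""
    return bool(TABLE_NAME_RE.fullmatch(name))

print(is_valid_table_name("StormEvents_2023"))  # True
print(is_valid_table_name("bad name!"))         # False (space and special character)
```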

Select an ingestion type

Under Source type, do the following steps:

  1. Select From blob container (blob container, ADLS Gen2 container). You can ingest up to 5000 blobs from a single container.
  2. In the Link to storage field, add the blob URI with SAS token or Account key of the container, and optionally enter the sample size. To ingest from a folder within this container, see Ingest from folder in a container.


The SAS URL can be created manually or automatically.

One-click ingestion from container.

Ingest from folder in a container

To ingest from a specific folder within a container, generate a string of the following format:

<URL to the folder>;<storage account access key>

You'll use this string instead of the SAS URL in Select an ingestion type.

  1. Navigate to the storage account, and select Storage Explorer > Blob Containers.

    Screenshot access blob containers in Azure Storage account.

  2. Browse to the selected folder, and select Copy URL. Paste this value into a temporary file and add ; to the end of this string.

    Screenshot of copy URL in folder in blob container - Azure Storage account.

  3. On the left menu under Settings, select Access keys.

    screenshot of Access keys storage account copy Key string.

  4. Under key 1, copy the Key string. Paste this value at the end of your string from step 2.
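Steps 2 through 4 above amount to simple string assembly. The following is a minimal sketch in Python; the function name is illustrative, and the URL and key values are placeholders, not real credentials:

```python
def build_folder_connection_string(folder_url: str, account_key: str) -> str:
    """Join the copied folder URL and the account key with a semicolon,
    as described in steps 2-4 above."""
    return f"{folder_url};{account_key}"

# Placeholder values for illustration only.
url = "https://mystorageaccount.blob.core.windows.net/mycontainer/myfolder"
key = "<account-key>"
print(build_folder_connection_string(url, key))
```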

Storage subscription error

If you get the following error message when ingesting from a storage account:

Couldn't find the storage under your selected subscriptions. Please add the storage account storage_account_name subscription to your selected subscriptions in the portal.

  1. Select the icon from the top-right menu tray. A Directory + subscription pane opens.

  2. In the All subscriptions dropdown, add your storage account's subscription to the selected list.

    Screenshot of Directory + subscription pane with subscription dropdown highlighted by a red box.

Filter data

Optionally, filter the data to ingest only files that end with specific characters.

For example, filter for all files that end with a .csv extension.

One click ingestion filter.

The system selects one of the files at random, and the schema is generated based on that schema-defining file. You can select a different file.
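The filter described above is a suffix match on blob names, with one matching file picked at random as the schema-defining file. The following is a minimal sketch, assuming the blob names are available as a plain list; the function name is illustrative:

```python
import random

def filter_blobs(blob_names, suffix=".csv"):
    """Keep only blobs whose names end with the given suffix."""
    return [name for name in blob_names if name.endswith(suffix)]

blobs = ["events1.csv", "events2.csv", "readme.txt"]
matching = filter_blobs(blobs)
schema_file = random.choice(matching)  # the schema is inferred from one file

print(matching)      # ['events1.csv', 'events2.csv']
print(schema_file)   # one of the matching files, chosen at random
```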

Edit the schema

Select Next: Schema to view and edit your table column configuration. The service automatically identifies whether the data is compressed by looking at the name of the source.

In the Schema tab:

  1. Confirm the format selected in Data format:

    In this case, the data format is CSV.

  2. You can select the check box Ignore the first record to ignore the heading row of the file.

    Select include column names.

  3. In the Mapping name field, enter a mapping name. You can use alphanumeric characters and underscores. Spaces, special characters, and hyphens aren't supported.

Edit the table

When ingesting to a new table, you can alter various aspects of the table as you create it.

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing
Table type     | Mapping type     | Available adjustments
New table      | New mapping      | Change data type, Rename column, New column, Delete column, Update column, Sort ascending, Sort descending
Existing table | New mapping      | New column (on which you can then change data type, rename, and update), Update column, Sort ascending, Sort descending
Existing table | Existing mapping | Sort ascending, Sort descending


When adding a new column or updating a column, you can change mapping transformations. For more information, see Mapping transformations.


For tabular formats, you can't map a column twice. To map to an existing column, first delete the new column.

Command editor

Above the Editor pane, select the v button to open the editor. In the editor, you can view and copy the automatic commands generated from your inputs.

One click ingestion edit view.

Select Next: Summary to create a table and mapping and to begin data ingestion.

Complete data ingestion

In the Data ingestion completed window, all three steps will be marked with green check marks when data ingestion finishes successfully.

One click ingestion complete.

Explore quick queries and tools

In the tiles below the ingestion progress, explore Quick queries or Tools:

  • Quick queries includes links to the Web UI with example queries.

  • Tools includes links to Undo or Delete new data on the Web UI, which enable you to troubleshoot issues by running the relevant .drop commands.


    You might lose data when you use .drop commands. Use them carefully. Drop commands will only revert the changes that were made by this ingestion flow (new extents and columns). Nothing else will be dropped.

Create continuous ingestion

Continuous ingestion enables you to create an Event Grid data connection that listens for new files in the source container. Any new file that meets the criteria of the predefined parameters (prefix, suffix, and so on) is automatically ingested into the destination table.
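The prefix/suffix criteria can be illustrated with a simple check. This is a sketch only; the actual evaluation happens inside the Event Grid data connection, and the function and parameter names here are illustrative:

```python
def qualifies(blob_path: str, prefix: str = "", suffix: str = "") -> bool:
    """Return True if a newly created blob matches the predefined
    prefix and suffix filter parameters."""
    return blob_path.startswith(prefix) and blob_path.endswith(suffix)

print(qualifies("exports/2023/data.csv", prefix="exports/", suffix=".csv"))  # True
print(qualifies("logs/data.json", prefix="exports/", suffix=".csv"))         # False
```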

  1. Select Event Grid in the Continuous ingestion tile to open the Azure portal. The data connection page opens with the Event Grid data connector selected and with source and target parameters already entered (source container, tables, and mappings).

    continuous ingestion button.

Data connection: Basics

  1. The Data connection blade opens with the Basics tab selected.
  2. Enter the Storage account.
  3. Choose the Event type that will trigger ingestion.
  4. Select Next: Ingest properties.

Screen shot of Data connection blade with Basics tab selected. Fields that should be selected are highlighted by a red box.

Ingest properties

The Ingest properties tab opens with pre-filled routing settings. The target table name, format, and mapping name are taken from the table created above.

Screen shot of Ingest properties blade.

Select Next: Review + create.

Review + create

Review the resources, and select Create.

Screen shot of review and create blade.

Next steps