The queued ingestion REST API allows you to programmatically submit one or more blobs for ingestion into a specified database and table. This method is ideal for automated workflows and external systems that need to trigger ingestion dynamically.
Permissions
To use the REST API for queued ingestion, you need:
- Ingestor role with table scope to ingest data into an existing table.
- Database User role to access the target database.
For more information, see Role-based access control.
HTTP Endpoint
URL: /v1/rest/ingestion/queued/{database}/{table}
Method: POST
| Parameter | Type | Required | Description |
|---|---|---|---|
| database | string | ✔️ | The name of the target database. |
| table | string | ✔️ | The name of the target table. |
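The endpoint path can be assembled from the database and table names. A minimal sketch, assuming a hypothetical ingestion cluster base URL; the percent-encoding is a client-side precaution for names with special characters:

```python
# Build the queued-ingestion endpoint URL. The cluster base URL below is a
# placeholder, not a real service endpoint.
from urllib.parse import quote

def queued_ingestion_url(cluster, database, table):
    # Percent-encode the names in case they contain reserved URL characters.
    return f"{cluster}/v1/rest/ingestion/queued/{quote(database)}/{quote(table)}"

url = queued_ingestion_url("https://ingest-mycluster.example.com",
                           "MyDatabase", "MyTable")
```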
Request body parameters
The request must be a JSON object with the following structure.
Top-level fields
| Field | Type | Required | Description |
|---|---|---|---|
| blobs | array | ✔️ | A list of blob objects to be ingested. See Blob object for details. |
| properties | object | ✔️ | An object containing ingestion properties. See Supported ingestion properties. |
| timestamp | datetime | No | Optional timestamp indicating when the ingestion request was created. |
Blob object
Each item in the blobs array must follow this structure:
| Field | Type | Required | Description |
|---|---|---|---|
| url | string | ✔️ | The URL of the blob to ingest. The service performs light validation on this field. The URL must be accessible by the service. For non-public blobs, include authentication information as part of the URL (for example, a SAS token). See storage connection strings for details. |
| sourceId | Guid | No | An identifier for the source blob. |
| rawSize | integer | No | The size of the blob before compression (nullable). |
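A blob object only requires url; the other fields are optional. A small sketch of building one, with make_blob as a hypothetical helper name; the GUID check is an illustrative client-side convenience, not something the API requires:

```python
# Build a blob object for the "blobs" array. Only "url" is required;
# "sourceId" and "rawSize" are optional per the table above.
import uuid

def make_blob(url, source_id=None, raw_size=None):
    blob = {"url": url}
    if source_id is not None:
        uuid.UUID(source_id)  # raises ValueError if it is not a valid GUID
        blob["sourceId"] = source_id
    if raw_size is not None:
        blob["rawSize"] = raw_size
    return blob
```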
Supported ingestion properties
| Property | Type | Description |
|---|---|---|
| format | string | Data format (for example, csv, json). |
| enableTracking | bool | If true, returns an ingestionOperationId for status tracking. |
| tags | array | List of tags to associate with the ingested data. |
| skipBatching | bool | If true, disables batching of blobs. |
| deleteAfterDownload | bool | If true, deletes the blob after ingestion. |
| ingestionMappingReference | string | Reference to a predefined ingestion mapping. |
| creationTime | string | ISO 8601 timestamp for the ingested data extents. |
| ingestIfNotExists | array | Prevents ingestion if data with matching tags already exists. |
| ignoreFirstRecord | bool | If true, skips the first record (for example, a header row). |
| validationPolicy | string | JSON string defining validation behavior. |
| zipPattern | string | Regex pattern for extracting files from zipped blobs. |
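Because all properties are optional key-value pairs, a simple guard against misspelled keys can catch mistakes before the request is sent. A sketch, with make_properties as a hypothetical helper; the property names come from the table above:

```python
# Client-side allowlist of the ingestion properties documented above.
SUPPORTED_PROPERTIES = {
    "format", "enableTracking", "tags", "skipBatching", "deleteAfterDownload",
    "ingestionMappingReference", "creationTime", "ingestIfNotExists",
    "ignoreFirstRecord", "validationPolicy", "zipPattern",
}

def make_properties(**props):
    # Reject keys that are not in the documented property set.
    unknown = set(props) - SUPPORTED_PROPERTIES
    if unknown:
        raise ValueError(f"unsupported ingestion properties: {sorted(unknown)}")
    return props
```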
Example
```http
POST /v1/rest/ingestion/queued/MyDatabase/MyTable
Content-Type: application/json
Authorization: Bearer <access_token>
```

```json
{
  "timestamp": "2025-10-01T12:00:00Z",
  "blobs": [
    {
      "url": "https://example.com/blob1.csv.gz",
      "sourceId": "123a6999-411e-4226-a333-a79992dd9b95",
      "rawSize": 1048576
    }
  ],
  "properties": {
    "format": "csv",
    "enableTracking": true,
    "tags": ["ingest-by:rest"],
    "ingestionMappingReference": "csv_mapping",
    "creationTime": "2025-10-01T11:00:00Z"
  }
}
```
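The request above can be issued with only the Python standard library. A minimal sketch, assuming a placeholder cluster URL and token; the function names are illustrative, and error handling is intentionally minimal:

```python
# Build and send a queued-ingestion POST request. The cluster URL and bearer
# token passed by the caller are placeholders for real values.
import json
import urllib.request

def build_ingestion_request(cluster, database, table, body, token):
    return urllib.request.Request(
        f"{cluster}/v1/rest/ingestion/queued/{database}/{table}",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

def submit_queued_ingestion(cluster, database, table, body, token):
    req = build_ingestion_request(cluster, database, table, body, token)
    # urlopen raises urllib.error.HTTPError on 4xx/5xx responses.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```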
Note
Setting enableTracking to true returns a non-empty ingestionOperationId in the response, which you can use to monitor ingestion status with Operations Results - Get.
Response
| Condition | Response |
|---|---|
| Tracking enabled (enableTracking: true) | Returns a non-empty ingestionOperationId. |
| Tracking disabled or omitted | Returns an empty ingestionOperationId. |
Tracking enabled

```json
{
  "ingestionOperationId": "ingest_op_12345"
}
```

Tracking disabled

```json
{
  "ingestionOperationId": ""
}
```
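Since a disabled-tracking response still contains the field (as an empty string), a caller can normalize the two cases before deciding whether to poll. A small sketch with a hypothetical helper name:

```python
# Normalize the response: return the operation ID, or None when tracking
# was disabled (the service returns an empty string in that case).
def operation_id(response):
    op = response.get("ingestionOperationId", "")
    return op or None
```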
Performance tips
- Submit up to 20 blobs per request; more than 20 blobs in a single request is not supported.
- Use enableTracking to monitor ingestion status via the status endpoint.
- Avoid setting skipBatching unless ingestion latency is critical.
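To stay under the 20-blob limit, a client can split a larger blob list into compliant batches before submitting. A minimal sketch; the helper name and default are illustrative:

```python
# Split a blob list into batches of at most 20, the per-request limit
# stated above.
def chunk_blobs(blobs, max_per_request=20):
    for i in range(0, len(blobs), max_per_request):
        yield blobs[i:i + max_per_request]
```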