Azure Blob storage trigger for Azure Functions
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as input to the function.
Tip
There are several ways to execute your function code based on changes to blobs in a storage container. If you choose to use the Blob storage trigger, note that two implementations are offered: a polling-based one (described in this article) and an event-based one. The event-based implementation is recommended because it has lower latency. Also, the Flex Consumption plan supports only the event-based Blob storage trigger.
For details about differences between the two implementations of the Blob storage trigger, as well as other triggering options, see Working with blobs.
For information on setup and configuration details, see the overview.
Important
This article uses tabs to support multiple versions of the Node.js programming model. The v4 model is currently in preview and is designed to have a more flexible and intuitive experience for JavaScript and TypeScript developers. Learn more about the differences between v3 and v4 in the upgrade guide.
Important
This article uses tabs to support multiple versions of the Python programming model. The v2 model is generally available and is designed to provide a more code-centric way for authoring functions through decorators. For more details about how the v2 model works, refer to the Azure Functions Python developer guide.
Example
A C# function can be created using one of the following C# modes:
- In-process class library: compiled C# function that runs in the same process as the Functions runtime.
- Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated process is required to support C# functions running on .NET 5.0.
- C# script: used primarily when creating C# functions in the Azure portal.
Important
Support will end for the in-process model on November 10, 2026. We highly recommend that you migrate your apps to the isolated worker model for full support.
The following example is a C# function that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the test-samples-trigger container. It reads a text file from the test-samples-input container and creates a new text file in an output container based on the name of the triggered file.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobFunction
{
    [Function(nameof(BlobFunction))]
    [BlobOutput("test-samples-output/{name}-output.txt")]
    public static string Run(
        [BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem,
        [BlobInput("test-samples-input/sample1.txt")] string myBlob,
        FunctionContext context)
    {
        var logger = context.GetLogger("BlobFunction");
        logger.LogInformation("Triggered Item = {myTriggerItem}", myTriggerItem);
        logger.LogInformation("Input Item = {myBlob}", myBlob);

        // Blob output
        return "blob-output content";
    }
}
This function writes a log when a blob is added or updated in the myblob container.
@FunctionName("blobprocessor")
public void run(
@BlobTrigger(name = "file",
dataType = "binary",
path = "myblob/{name}",
connection = "MyStorageAccountAppSetting") byte[] content,
@BindingName("name") String filename,
final ExecutionContext context
) {
context.getLogger().info("Name: " + filename + " Size: " + content.length + " bytes");
}
The following example shows a blob trigger in TypeScript. The function writes a log when a blob is added or updated in the samples-workitems container.
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
import { app, InvocationContext } from '@azure/functions';

export async function storageBlobTrigger1(blob: Buffer, context: InvocationContext): Promise<void> {
    context.log(
        `Storage blob function processed blob "${context.triggerMetadata.name}" with size ${blob.length} bytes`
    );
}

app.storageBlob('storageBlobTrigger1', {
    path: 'samples-workitems/{name}',
    connection: 'MyStorageAccountAppSetting',
    handler: storageBlobTrigger1,
});
The following example shows a blob trigger in JavaScript. The function writes a log when a blob is added or updated in the samples-workitems container.
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
const { app } = require('@azure/functions');

app.storageBlob('storageBlobTrigger1', {
    path: 'samples-workitems/{name}',
    connection: 'MyStorageAccountAppSetting',
    handler: (blob, context) => {
        context.log(
            `Storage blob function processed blob "${context.triggerMetadata.name}" with size ${blob.length} bytes`
        );
    },
});
The following example demonstrates how to create a function that runs when a file is added to the source blob storage container. The function configuration file (function.json) includes a binding with the type of blobTrigger and direction set to in.
{
  "bindings": [
    {
      "name": "InputBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "source/{name}",
      "connection": "MyStorageAccountConnectionString"
    }
  ]
}
Here's the associated code for the run.ps1 file.
param([byte[]] $InputBlob, $TriggerMetadata)
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes"
This example uses SDK types to directly access the underlying BlobClient object provided by the Blob storage trigger:
import logging

import azure.functions as func
import azurefunctions.extensions.bindings.blob as blob

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)


@app.blob_trigger(
    arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_trigger(client: blob.BlobClient):
    logging.info(
        f"Python blob trigger function processed blob \n"
        f"Properties: {client.get_blob_properties()}\n"
        f"Blob content head: {client.download_blob().read(size=1)}"
    )
For examples of using other SDK types, see the ContainerClient and StorageStreamDownloader samples.
To learn more, including how to enable SDK type bindings in your project, see SDK type bindings.
This example logs information from the incoming blob metadata.
import logging

import azure.functions as func

app = func.FunctionApp()


@app.function_name(name="BlobTrigger1")
@app.blob_trigger(arg_name="myblob",
                  path="PATH/TO/BLOB",
                  connection="CONNECTION_SETTING")
def test_function(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
Attributes
Both in-process and isolated worker process C# libraries use the BlobTriggerAttribute attribute to define the function. C# script instead uses a function.json configuration file, as described in the C# scripting guide.
The attribute's constructor takes the following parameters:
Parameter | Description |
---|---|
BlobPath | The path to the blob. |
Connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
Access | Indicates whether you will be reading or writing. |
Source | Sets the source of the triggering event. Use BlobTriggerSource.EventGrid for an Event Grid-based blob trigger, which provides much lower latency. The default is BlobTriggerSource.LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
Here's a BlobTrigger attribute in a method signature:
[Function(nameof(BlobFunction))]
[BlobOutput("test-samples-output/{name}-output.txt")]
public static string Run(
    [BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem,
    [BlobInput("test-samples-input/sample1.txt")] string myBlob,
    FunctionContext context)
When you're developing locally, add your application settings in the local.settings.json file in the Values collection.
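For example, a minimal local.settings.json for the earlier C# example might look like the following sketch. The MyStorageAccountAppSetting name is illustrative; it must match the connection name used by your trigger and input bindings.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MyStorageAccountAppSetting": "<STORAGE_CONNECTION_STRING>"
  }
}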
Decorators
Applies only to the Python v2 programming model.
For Python v2 functions defined using decorators, the following properties on the blob_trigger decorator define the Blob storage trigger:
Property | Description |
---|---|
arg_name | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the blob. |
path | The container to monitor. May be a blob name pattern. |
connection | The storage account connection string. |
source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides much lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
For Python functions defined by using function.json, see the Configuration section.
Annotations
The @BlobTrigger attribute is used to give you access to the blob that triggered the function. Refer to the trigger example for details. Use the source property to set the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides much lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container.
Configuration
Applies only to the Python v1 programming model.
The following table explains the properties that you can set on the options object passed to the app.storageBlob() method.
Property | Description |
---|---|
path | The container to monitor. May be a blob name pattern. |
connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides much lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
The following table explains the binding configuration properties that you set in the function.json file.
function.json property | Description |
---|---|
type | Must be set to blobTrigger. This property is set automatically when you create the trigger in the Azure portal. |
direction | Must be set to in. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the usage section. |
name | The name of the variable that represents the blob in function code. |
path | The container to monitor. May be a blob name pattern. |
connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides much lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
See the Example section for complete examples.
Metadata
The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code. These values have the same semantics as the CloudBlob type.

Property | Type | Description |
---|---|---|
BlobTrigger | string | The path to the triggering blob. |
Uri | System.Uri | The blob's URI for the primary location. |
Properties | BlobProperties | The blob's system properties. |
Metadata | IDictionary<string,string> | The user-defined metadata for the blob. |
The following example logs the path to the triggering blob, including the container:
public static void Run(string myBlob, string blobTrigger, ILogger log)
{
    log.LogInformation($"Full blob path: {blobTrigger}");
}
Metadata
The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code.
Property | Description |
---|---|
blobTrigger | The path to the triggering blob. |
uri | The blob's URI for the primary location. |
properties | The blob's system properties. |
metadata | The user-defined metadata for the blob. |
Metadata
Metadata is available through the $TriggerMetadata parameter.
Usage
The binding types supported by the Blob trigger depend on the extension package version and the C# mode used in your function app.
The blob trigger can bind to the following types:
Type | Description |
---|---|
string | The blob content as a string. Use when the blob content is simple text. |
byte[] | The bytes of the blob content. |
JSON serializable types | When a blob contains JSON data, Functions tries to deserialize the JSON data into a plain-old CLR object (POCO) type. |
Stream | (Preview1) An input stream of the blob content. |
BlobClient, BlockBlobClient, PageBlobClient, AppendBlobClient, BlobBaseClient | (Preview1) A client connected to the blob. This offers the most control for processing the blob and can be used to write back to the blob if the connection has sufficient permission. |
1 To use these types, you need to reference Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs 5.1.1-preview2 or later and the common dependencies for SDK type bindings.
Binding to string or byte[] is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a Stream or BlobClient type. For more information, see Concurrency and memory usage.
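As a minimal sketch of the streaming approach in the isolated worker model (assuming the preview package noted above; the container name is illustrative), a function can bind to a Stream and read the blob incrementally:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class StreamingBlobFunction
{
    [Function(nameof(StreamingBlobFunction))]
    public static async Task Run(
        [BlobTrigger("samples-workitems/{name}")] Stream blobStream,
        string name,
        FunctionContext context)
    {
        var logger = context.GetLogger(nameof(StreamingBlobFunction));

        // Read the blob line by line instead of loading it all into memory.
        using var reader = new StreamReader(blobStream);
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            logger.LogInformation("Line: {line}", line);
        }

        logger.LogInformation("Processed blob {name}", name);
    }
}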
If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to the correct Storage SDK version.
You can also use the StorageAccountAttribute to specify the storage account to use. You can do this when you need to use a different storage account than other functions in the library. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
    [FunctionName("BlobTrigger")]
    [StorageAccount("FunctionLevelStorageAppSetting")]
    public static void Run( //...
    {
        // ...
    }
}
The storage account to use is determined in the following order:

1. The BlobTrigger attribute's Connection property.
2. The StorageAccount attribute applied to the same parameter as the BlobTrigger attribute.
3. The StorageAccount attribute applied to the function.
4. The StorageAccount attribute applied to the class.
5. The default storage account for the function app, which is defined in the AzureWebJobsStorage application setting.
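For illustration, here's a sketch of the parameter-level form in the in-process model; the setting and container names are hypothetical:

[FunctionName("BlobTriggerParamLevel")]
public static void Run(
    [BlobTrigger("samples-workitems/{name}")]
    [StorageAccount("ParameterLevelStorageAppSetting")] string myBlob,
    string name,
    ILogger log)
{
    log.LogInformation($"Processed blob: {name}");
}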
The @BlobTrigger attribute is used to give you access to the blob that triggered the function. Refer to the trigger example for details.
Access the blob data via a parameter that matches the name designated by the binding's name parameter in the function.json file.

Access blob data via the parameter typed as InputStream. Refer to the trigger example for details.
Functions also supports Python SDK type bindings for Azure Blob storage, which let you work with blob data using underlying SDK types such as BlobClient, ContainerClient, and StorageStreamDownloader.
Important
SDK types support for Python is currently in preview and is only supported for the Python v2 programming model. For more information, see SDK types in Python.
Connections
The connection property is a reference to environment configuration that specifies how the app should connect to Azure Blobs. It may specify:
- The name of an application setting containing a connection string
- The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys. The connection string must be for a general-purpose storage account, not a Blob storage account.
This connection string should be stored in an application setting with a name matching the value specified by the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage connection string in the app setting that is named AzureWebJobsStorage.
Identity-based connections
If you're using version 5.x or higher of the extension, instead of using a connection string with a secret, you can have the app use an Azure Active Directory identity. To use an identity, you define settings under a common prefix that maps to the connection property in the trigger and binding configuration.

If you're setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all other connections, the extension requires the following properties:
Property | Environment variable template | Description | Example value |
---|---|---|---|
Blob Service URI | <CONNECTION_NAME_PREFIX>__serviceUri 1 | The data plane URI of the blob service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.blob.core.chinacloudapi.cn |
1 <CONNECTION_NAME_PREFIX>__blobServiceUri can be used as an alias. If the connection configuration will be used by a blob trigger, blobServiceUri must also be accompanied by queueServiceUri. See below.
The serviceUri form can't be used when the overall connection configuration is to be used across blobs, queues, and/or tables. The URI can only designate the blob service. As an alternative, you can provide a URI specifically for each service, allowing a single connection to be used. If both versions are provided, the multi-service form is used. To configure the connection for multiple services, instead of <CONNECTION_NAME_PREFIX>__serviceUri, set:
Property | Environment variable template | Description | Example value |
---|---|---|---|
Blob Service URI | <CONNECTION_NAME_PREFIX>__blobServiceUri | The data plane URI of the blob service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.blob.core.chinacloudapi.cn |
Queue Service URI (required for blob triggers2) | <CONNECTION_NAME_PREFIX>__queueServiceUri | The data plane URI of a queue service, using the HTTPS scheme. This value is only needed for blob triggers. | https://<storage_account_name>.queue.core.chinacloudapi.cn |
2 The blob trigger handles failure across multiple retries by writing poison blobs to a queue. In the serviceUri form, the AzureWebJobsStorage connection is used. However, when specifying blobServiceUri, a queue service URI must also be provided with queueServiceUri. It's recommended that you use the service from the same storage account as the blob service. You also need to make sure the trigger can read and write messages in the configured queue service by assigning a role like Storage Queue Data Contributor.
Other properties may be set to customize the connection. See Common properties for identity-based connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-assigned identity is used by default, although a user-assigned identity can be specified with the credential and clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized. See Local development with identity-based connections.
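As an illustrative sketch, a connection named MyBlobConnection (a hypothetical name you would reference from the connection property of your trigger) could be configured for a blob trigger with app settings like these; the last two entries apply only when using a user-assigned identity:

MyBlobConnection__blobServiceUri = https://<storage_account_name>.blob.core.chinacloudapi.cn
MyBlobConnection__queueServiceUri = https://<storage_account_name>.queue.core.chinacloudapi.cn
MyBlobConnection__credential = managedidentity
MyBlobConnection__clientId = <user_assigned_identity_client_id>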
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You need to assign a role in Azure RBAC, using either built-in or custom roles that provide those permissions.
Important
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would want to ensure the role assignment is scoped only over the resources that need to be read.
You need to create a role assignment that provides access to your blob container at runtime. Management roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the Blob Storage extension in normal operation. Your application may require further permissions based on the code you write.
Binding type | Example built-in roles |
---|---|
Trigger | Storage Blob Data Owner and Storage Queue Data Contributor.1 Extra permissions must also be granted to the AzureWebJobsStorage connection.2 |
Input binding | Storage Blob Data Reader |
Output binding | Storage Blob Data Owner |
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it's configured to use an identity-based connection, it needs extra permissions beyond the default requirement. The required permissions are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage Account Contributor roles. To learn more, see Connecting to host storage with an identity.
Blob name patterns
You can specify a blob name pattern in the path property in function.json or in the BlobTrigger attribute constructor. The name pattern can be a filter or binding expression. The following sections provide examples.
Tip
A container name can't contain a resolver in the name pattern.
Get file name and extension
The following example shows how to bind to the blob file name and extension separately:
"path": "input/{blobname}.{blobextension}",
If the blob is named original-Blob1.txt, the values of the blobname and blobextension variables in function code are original-Blob1 and txt.
Filter on blob name
The following example triggers only on blobs in the input container that start with the string "original-":
"path": "input/original-{name}",
If the blob name is original-Blob1.txt, the value of the name variable in function code is Blob1.txt.
Filter on file type
The following example triggers only on .png files:
"path": "samples/{name}.png",
Filter on curly braces in file names
To look for curly braces in file names, escape the braces by using two braces. The following example filters for blobs that have curly braces in the name:
"path": "images/{{20140101}}-{name}",
If the blob is named {20140101}-soundfile.mp3, the name variable value in the function code is soundfile.mp3.
Polling and latency
Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time, with a continuation token used between intervals. If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs after the function app has gone idle.
Warning
Storage logs are created on a "best effort" basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
If you require faster or more reliable blob processing, you should consider switching your hosting to use an App Service plan with Always On enabled, which may result in increased costs. You might also consider using a trigger other than the classic polling blob trigger. For more information and a comparison of the various triggering options for blob storage containers, see Trigger on a blob container.
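For reference, switching a C# isolated worker trigger to the Event Grid source is a matter of setting the Source property, as in this sketch; the container and function names are illustrative, and the Event Grid subscription must be configured separately:

[Function("EventGridBlobTrigger")]
public static void Run(
    [BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid)] string myBlob,
    string name,
    FunctionContext context)
{
    context.GetLogger("EventGridBlobTrigger").LogInformation("Processed blob {name}", name);
}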
Blob receipts
The Azure Functions runtime ensures that no blob trigger function gets called more than once for the same new or updated blob. To determine if a given blob version has been processed, it maintains blob receipts.
Azure Functions stores blob receipts in a container named azure-webjobs-hosts in the Azure storage account for your function app (defined by the app setting AzureWebJobsStorage). A blob receipt has the following information:
- The triggered function (<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>, for example: MyFunctionApp.Functions.CopyBlob)
- The container name
- The blob type (BlockBlob or PageBlob)
- The blob name
- The ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF)
To force reprocessing of a blob, manually delete the blob receipt for that blob from the azure-webjobs-hosts container. While reprocessing might not occur immediately, it's guaranteed to occur at a later point in time. To reprocess immediately, the scaninfo blob in azure-webjobs-hosts/blobscaninfo can be updated. Any blobs with a last modified timestamp after the LatestScan property will be scanned again.
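As an unofficial sketch of deleting receipts programmatically (assuming receipts are stored under a blobreceipts/ prefix, which may vary by host version), you could use the Azure.Storage.Blobs SDK:

using System;
using Azure.Storage.Blobs;

// Sketch: delete the blob receipts for one function to force reprocessing.
// The "blobreceipts/" prefix and the function ID below are assumptions;
// verify the receipt layout in your own azure-webjobs-hosts container.
string connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
var container = new BlobContainerClient(connectionString, "azure-webjobs-hosts");

await foreach (var receipt in container.GetBlobsAsync(prefix: "blobreceipts/"))
{
    if (receipt.Name.Contains("MyFunctionApp.Functions.CopyBlob"))
    {
        await container.DeleteBlobAsync(receipt.Name);
    }
}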
Poison blobs
When a blob trigger function fails for a given blob, Azure Functions retries that function a total of five times by default. If all five tries fail, Azure Functions adds a message to a Storage queue named webjobs-blobtrigger-poison. The maximum number of retries is configurable; the same MaxDequeueCount setting is used for poison blob handling and poison queue message handling. The queue message for poison blobs is a JSON object that contains the following properties:
- FunctionId (in the format <FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>)
- BlobType (BlockBlob or PageBlob)
- ContainerName
- BlobName
- ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF)
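For illustration, a poison-blob queue message might look like the following; all values shown are hypothetical:

{
  "FunctionId": "MyFunctionApp.Functions.CopyBlob",
  "BlobType": "BlockBlob",
  "ContainerName": "samples-workitems",
  "BlobName": "sample.txt",
  "ETag": "0x8D1DC6E70A277EF"
}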
Memory usage and concurrency
When you bind to an output type that doesn't support streaming, such as string or byte[], the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. When possible, use a type that supports streaming. Type support depends on the C# mode and extension version. For more information, see Binding types.
At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs.
Memory usage can be further impacted when multiple function instances are concurrently processing blob data. If you are having memory issues when using a Blob trigger, consider reducing the number of concurrent executions permitted. Of course, reducing the concurrency can have the side effect of increasing the backlog of blobs waiting to be processed. The memory limits of your function app depend on the plan. For more information, see Service limits.
The way that you can control the number of concurrent executions depends on the version of the Storage extension you are using.
When using version 5.0.0 of the Storage extension or a later version, you control trigger concurrency by using the maxDegreeOfParallelism setting in the blobs configuration in host.json.
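For example, the following host.json fragment limits each blob-triggered function to four concurrent invocations; the value 4 is arbitrary and shown only for illustration:

{
  "version": "2.0",
  "extensions": {
    "blobs": {
      "maxDegreeOfParallelism": 4
    }
  }
}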
Limits apply separately to each function that uses a blob trigger.
host.json properties
The host.json file contains settings that control blob trigger behavior. See the host.json settings section for details regarding available settings.