Storage considerations for Azure Functions

Azure Functions requires an Azure Storage account when you create a function app instance. The following storage services may be used by your function app:

  • Azure Blob Storage: Maintains bindings state and function keys¹. Used by default for task hubs in Durable Functions. May also store function app code for Linux Consumption remote builds or as part of external package URL deployments.
  • Azure Files²: A file share used to store and run your function app code on the Consumption and Premium plans.
  • Azure Queue Storage: Used by default for task hubs in Durable Functions. Used for failure and retry handling by specific Azure Functions triggers and for object tracking by the Blob Storage trigger.
  • Azure Table Storage: Used by default for task hubs in Durable Functions.

¹ Blob storage is the default store for function keys, but you can configure an alternate store.

² Azure Files is set up by default, but you can create an app without Azure Files under certain conditions.

Important considerations

Carefully consider the following facts about the storage accounts used by your function apps:

  • When your function app is hosted on the Consumption plan or Premium plan, your function code and configuration files are stored in Azure Files in the linked storage account. If you delete this storage account, the content is deleted and can't be recovered. For more information, see Storage account was deleted.

  • Important data, such as function code, access keys, and other important service-related data, may be persisted in the storage account. You must carefully manage access to the storage accounts used by function apps in the following ways:

    • Audit and limit the access of apps and users to the storage account based on a least-privilege model. Permissions to the storage account can come from data actions in the assigned role or through permission to perform the listKeys operation.

    • Monitor both control plane activity (such as retrieving keys) and data plane operations (such as writing to a blob) in your storage account. Consider maintaining storage logs in a location other than Azure Storage. For more information, see Storage logs.

Storage account requirements

Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to work with the new function app. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. You can also use an existing storage account with your function app. The following restrictions apply to storage accounts used by your function app, so you must make sure an existing storage account meets these requirements:

  • The account type must support Blob, Queue, and Table storage. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage. To learn more about storage account types, see Storage account overview.

  • Storage accounts already secured by using firewalls or virtual private networks can't be used in the portal creation flow. For more information, see Restrict your storage account to a virtual network.

  • When creating your function app in the portal, you're only allowed to choose an existing storage account in the same region as the function app you're creating. This is a performance optimization and not a strict limitation. To learn more, see Storage account location.

  • When creating your function app on a plan with availability zone support enabled, only zone-redundant storage accounts are supported.

Storage account guidance

Every function app requires a storage account to operate. If that account is deleted, your function app won't run. To troubleshoot storage-related issues, see How to troubleshoot storage-related issues. The following other considerations apply to the storage accounts used by function apps.

Storage account location

For best performance, your function app should use a storage account in the same region as the app, which reduces latency. The Azure portal enforces this best practice. If for some reason you need to use a storage account in a region different from your function app's, you must create your function app outside of the portal.

The storage account must be accessible to the function app. If you need to use a secured storage account, consider restricting your storage account to a virtual network.

Storage account connection setting

By default, function apps configure the AzureWebJobsStorage connection as a connection string stored in the AzureWebJobsStorage application setting, but you can also configure AzureWebJobsStorage to use an identity-based connection without a secret.

Function apps are configured to use Azure Files by storing a connection string in the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING application setting and providing the name of the file share in the WEBSITE_CONTENTSHARE application setting.
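
As an illustrative fragment (the account, key, and share names are placeholders, not real values), these application settings typically take the following shape. For the identity-based alternative, the AzureWebJobsStorage connection string is replaced by settings such as AzureWebJobsStorage__accountName:

```json
{
  "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
  "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
  "WEBSITE_CONTENTSHARE": "<file-share-name>"
}
```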


A storage account connection string must be updated when you regenerate storage keys. To learn more, see Storage key management.

Shared storage accounts

It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the Azurite storage emulator. In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.

You may need to use separate storage accounts to avoid host ID collisions.

Lifecycle management policy considerations

You shouldn't apply lifecycle management policies to the Blob Storage account used by your function app. Functions uses Blob storage to persist important information, such as function access keys, and a policy may remove blobs (such as keys) that the Functions host needs. If you must use policies, exclude the containers used by Functions, which are prefixed with azure-webjobs or scm.
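
If you do need a policy, one approach consistent with this guidance is to scope the policy's filters to your own data prefixes so the Functions containers are never matched. The following is a hypothetical policy fragment (the container name and retention period are placeholders):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "expire-old-app-data",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "app-data/" ]
        }
      }
    }
  ]
}
```

Because the rule's prefixMatch is limited to app-data/, blobs in containers prefixed with azure-webjobs or scm are left untouched.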

Storage logs

Because function code and keys may be persisted in the storage account, logging of activity against the storage account is a good way to monitor for unauthorized access. Azure Monitor resource logs can be used to track events against the storage data plane. See Monitoring Azure Storage for details on how to configure and examine these logs.

The Azure Monitor activity log shows control plane events, including the listKeys operation. However, you should also configure resource logs for the storage account to track subsequent use of keys or other identity-based data plane operations. You should have at least the StorageWrite log category enabled to be able to identify modifications to the data outside of normal Functions operations.

To limit the potential impact of any broadly scoped storage permissions, consider using a nonstorage destination for these logs, such as Log Analytics. For more information, see Monitoring Azure Blob Storage.

Optimize storage performance

To maximize performance, use a separate storage account for each function app. This is particularly important when you have Durable Functions or Event Hub triggered functions, which both generate a high volume of storage transactions. When your application logic interacts with Azure Storage, either directly (using the Storage SDK) or through one of the storage bindings, you should use a dedicated storage account. For example, if you have an Event Hub-triggered function writing some data to blob storage, use two storage accounts—one for the function app and another for the blobs being stored by the function.

Working with blobs

A key scenario for Functions is processing files in a blob container, such as for image processing or sentiment analysis. To learn more, see Process file uploads.

Trigger on a blob container

There are several ways to execute your function code based on changes to blobs in a storage container. Use the following comparison to determine which function trigger best fits your needs:

  • Blob Storage trigger (standard): The default trigger behavior, which relies on polling the container for updates. High latency (up to 10 minutes). Blob-only accounts aren't supported¹, and any extension version works. Processes existing blobs, filters on a blob name pattern, doesn't require an event subscription, and doesn't support high scale². For more information, see the examples in the Blob storage trigger reference.
  • Blob Storage trigger (event-based): Consumes Blob Storage events from an event subscription and requires a Source parameter value of EventGrid. Low latency. General-purpose v1 accounts aren't supported, and the trigger requires Storage extension version 5.x or later plus an event subscription. Doesn't process existing blobs, filters on storage events, and supports high scale². For more information, see Tutorial: Trigger Azure Functions on blob containers using an event subscription.
  • Queue Storage trigger: The blob name string is manually added to a storage queue when a blob is added to the container, and that value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function. Medium latency. No storage account limitations, any extension version works, and no event subscription is required. Doesn't process existing blobs, offers no filters, and supports high scale².
  • Event Grid trigger: Provides the flexibility of triggering on events besides those coming from a storage container; use it when nonstorage events must also trigger your function. Low latency. General-purpose v1 accounts aren't supported, any extension version works, and an event subscription is required. Doesn't process existing blobs, filters on events, and supports high scale². For more information, see How to work with Event Grid triggers and bindings in Azure Functions.

¹ Blob Storage input and output bindings support blob-only accounts.
² High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.

Storage data encryption

Azure Storage encrypts all data in a storage account at rest. For more information, see Azure Storage encryption for data at rest.

By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of blob and file data. These keys must be present in Azure Key Vault for Functions to be able to access the storage account. To learn more, see Encryption at rest using customer-managed keys.

In-region data residency

When all customer data must remain within a single region, the storage account associated with the function app must be one with in-region redundancy. An in-region redundant storage account must also be used with Azure Durable Functions.

Host ID considerations

Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By default, this ID is autogenerated from the name of the function app, truncated to the first 32 characters. This ID is then used when storing per-app correlation and tracking information in the linked storage account. When you have function apps with names longer than 32 characters and when the first 32 characters are identical, this truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function app.
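
As a rough sketch of this truncation behavior (the app names and helper function here are illustrative, not the platform's actual implementation):

```python
def default_host_id(app_name: str) -> str:
    # Simplified model of the documented default: the host ID is derived
    # from the function app name, truncated to the first 32 characters.
    return app_name.lower()[:32]

# Two app names whose first 32 characters are identical...
id_a = default_host_id("contoso-orders-processing-v2-production")
id_b = default_host_id("contoso-orders-processing-v2-prod-eu")

# ...produce the same default host ID, so sharing one storage account
# between these two apps would cause a host ID collision.
print(id_a == id_b)  # True
```

Because both names share the same first 32 characters, both apps compute the same host ID, which is exactly the collision scenario described above.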


This same kind of host ID collision can occur between a function app in a production slot and the same function app in a staging slot, when both slots use the same storage account.

Starting with version 3.x of the Functions runtime, host ID collision is detected and a warning is logged. In version 4.x, an error is logged and the host is stopped, resulting in a hard failure. More details about host ID collision can be found in this issue.

Avoiding host ID collisions

You can use the following strategies to avoid host ID collisions:

  • Use a separate storage account for each function app or slot involved in the collision.
  • Rename one of your function apps to a value fewer than 32 characters in length, which changes the computed host ID for the app and removes the collision.
  • Set an explicit host ID for one or more of the colliding apps. To learn more, see Host ID override.


Changing the storage account associated with an existing function app or changing the app's host ID can impact the behavior of existing functions. For example, a Blob Storage trigger tracks whether it has processed individual blobs by writing receipts under a specific host ID path in storage. When the host ID changes or you point the app to a new storage account, previously processed blobs may be reprocessed.

Override the host ID

You can explicitly set a specific host ID for your function app in the application settings by using the AzureFunctionsWebHost__hostid setting. For more information, see AzureFunctionsWebHost__hostid.

When the collision occurs between slots, you must set a specific host ID for each slot, including the production slot. You must also mark these settings as deployment settings so they don't get swapped. To learn how to create app settings, see Work with application settings.

Create an app without Azure Files

Azure Files is set up by default for Elastic Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is deployed using an external package URL, the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.

When Azure Files isn't used, you must meet the following requirements:

  • You must deploy from an external package URL.
  • Your app can't rely on a shared writeable file system.
  • The app can't use version 1.x of the Functions runtime.
  • Log streaming experiences in clients such as the Azure portal default to file system logs. You should instead rely on Application Insights logs.

If you account for these requirements, you can create the app without Azure Files. Create the function app without specifying the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE application settings. You can avoid these settings by generating an ARM template for a standard deployment, removing the two settings, and then deploying the template.
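
As a rough sketch of that template edit (the setting names are the real ones named above, but the values and list structure are placeholders, not a complete ARM template):

```python
# Hypothetical app settings list as it might appear in an exported ARM
# template for a standard function app deployment (values are placeholders).
app_settings = [
    {"name": "AzureWebJobsStorage", "value": "<storage-connection-string>"},
    {"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING", "value": "<storage-connection-string>"},
    {"name": "WEBSITE_CONTENTSHARE", "value": "<file-share-name>"},
    {"name": "WEBSITE_RUN_FROM_PACKAGE", "value": "<external-package-url>"},
]

# Remove the two Azure Files settings so the app is created without a share.
AZURE_FILES_SETTINGS = {"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING", "WEBSITE_CONTENTSHARE"}
app_settings = [s for s in app_settings if s["name"] not in AZURE_FILES_SETTINGS]

print([s["name"] for s in app_settings])
```

After the filter, only the storage connection and the external package URL settings remain, matching the requirement to deploy from an external package URL.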

Because Functions uses Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on the Consumption and Elastic Premium plans.

Mount file shares

This functionality is currently only available when running on Linux.

You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can use existing machine learning models or other data in your functions. You can use the following command to mount an existing share to your Linux function app.

az webapp config storage-account add --resource-group <resource-group> --name <function-app-name> \
  --custom-id <custom-id> --storage-type AzureFiles --account-name <storage-account> --share-name <share-name> --access-key <access-key> --mount-path </dir-name>

In this command, share-name is the name of the existing Azure Files share, and custom-id can be any string that uniquely defines the share when mounted to the function app. Also, mount-path is the path from which the share is accessed in your function app. mount-path must be in the format /dir-name, and it can't start with /home.

For a complete example, see the scripts in Create a Python function app and mount an Azure Files share.

Currently, only a storage-type of AzureFiles is supported. You can mount up to five shares to a given function app. Mounting a file share may increase cold start time by 200-300 ms or more, especially when the storage account is in a different region.

The mounted share is available to your function code at the mount-path specified. For example, when mount-path is /path/to/mount, you can access the target directory by file system APIs, as in the following Python example:

import os

files_in_share = os.listdir("/path/to/mount")

Next steps

Learn more about Azure Functions hosting options.