A storage account contains all of your Azure Storage data objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in your storage account is durable, highly available, secure, and massively scalable.
Learn how to create a storage account.
Types of storage accounts
Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model.
The following table describes the types of storage accounts that we recommend for most scenarios. All of these types use the Azure Resource Manager deployment model.
| Type of storage account | Supported storage services | Redundancy options | Usage |
|---|---|---|---|
| Standard general-purpose v2 | Azure Blob Storage (including Azure Data Lake Storage¹), Azure Queue Storage, Azure Table Storage, and Azure Files | Locally redundant storage (LRS) / geo-redundant storage (GRS) / read-access geo-redundant storage (RA-GRS)<br><br>Zone-redundant storage (ZRS) / geo zone-redundant storage (GZRS) / read-access geo zone-redundant storage (RA-GZRS)² | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios that use Azure Storage. If you want support for Network File System (NFS) in Azure Files, use the premium file shares account type. |
| Premium block blobs³ | Blob Storage (including Data Lake Storage¹) | LRS<br><br>ZRS² | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, smaller objects, or a need for consistently low storage latency. Learn more about example workloads. |
| Premium file shares³ | Azure Files | LRS<br><br>ZRS² | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. |
| Premium page blobs³ | Page blobs only | LRS<br><br>ZRS² | Premium storage account type for page blobs only. Learn more about page blobs and sample use cases. |
¹ Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Blob Storage. For more information, see Introduction to Data Lake Storage and Create a storage account to use with Data Lake Storage.
² ZRS, GZRS, and RA-GZRS are available only for standard general-purpose v2, premium block blobs, premium file shares, and premium page blobs accounts in certain regions. For more information, see Azure Storage redundancy.
³ Premium performance storage accounts use solid-state drives (SSDs) for low latency and high throughput.
Legacy storage accounts are also supported. For more information, see Legacy storage account types.
The service-level agreement (SLA) for Azure storage accounts is available on the SLA page for online services.
Note
You can't change a storage account to a different type after it's created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
Storage account name
When you name your storage account, keep these rules in mind:
- Storage account names must be between 3 and 24 characters in length. They can contain only numbers and lowercase letters.
- Your storage account name must be unique within Azure. No two storage accounts can have the same name.
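These rules can be checked client-side before you call Azure; here's a minimal sketch (the function name is illustrative, and global uniqueness can only be confirmed by Azure itself, for example with the Azure CLI's name-availability check):

```python
import re

# Documented rules: 3-24 characters, lowercase letters and digits only.
_NAME_PATTERN = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Check the documented length and character rules.

    This validates syntax only; uniqueness within Azure must
    still be verified against the service.
    """
    return bool(_NAME_PATTERN.match(name))
```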
Storage account workloads
Azure Storage customers use various workloads to store data, access data, and derive insights to meet their business objectives. Each workload uses specific protocols for data operations based on its requirements and industry standards.
The following sections offer a high-level categorization of different primary workloads for your storage accounts.
Cloud native
Cloud-native apps are large-scale distributed applications that are built on a foundation of cloud paradigms and technologies. This modern approach focuses on cloud scale and performance capabilities.
Cloud-native apps can be based on microservices architecture, use managed services, and employ continuous delivery to achieve reliability. These applications are typically categorized into web apps, mobile apps, containerized apps, and serverless or function as a service (FaaS) types.
Analytics
Analytics is the systematic, computational analysis of data and statistics. This science involves discovering, interpreting, and communicating meaningful insights and patterns found in data.
The discovered data can be manipulated and interpreted in ways that help a business further its objectives and meet its goals. These workloads typically consist of a pipeline that ingests large volumes of data. The data is prepared, curated, and aggregated for downstream consumption via Power BI, data warehouses, or applications.
Analytics workloads can require high ingress and egress, which drives higher throughput on your storage account. Analytics types include real-time analytics, advanced analytics, predictive analytics, emotional analytics, sentiment analysis, and more. For analytics, Azure Storage is designed to give customers high-throughput access to large amounts of data in distributed storage architectures.
High-performance computing
High-performance computing (HPC) is the aggregation of multiple computing nodes that act on the same set of tasks. Together, these nodes can accomplish more in a specified time frame than a single node can.
With HPC, powerful processors work in parallel to process massive, multidimensional datasets. HPC workloads require high-throughput read and write operations for workloads like gene sequencing and reservoir simulation. HPC workloads also include applications with high input/output operations per second (IOPS) and low-latency access to a large number of small files. They use these files for workloads like seismic interpretation, autonomous driving, and risk workloads.
The primary goal is to solve complex problems at ultra-fast speeds. Other examples of high-performance computing include fluid dynamics and other physical simulations or analyses that require scalability and high throughput. Azure Storage supports HPC by making large amounts of data accessible with a high degree of concurrency.
Backup and archive
Business continuity and disaster recovery (BCDR) is a business's ability to remain operational after an adverse event. In terms of storage, this objective equates to maintaining business continuity across outages to storage systems.
With the introduction of backup-as-a-service offerings throughout the industry, BCDR data is increasingly migrating to the cloud. The backup and archive workload functions as the last line of defense against ransomware and malicious attacks. When there's a service interruption or accidental deletion or corruption of data, recovering the data in an efficient and orchestrated manner is the highest priority. Azure Storage makes it possible to store and retrieve large amounts of data in a cost-effective fashion.
Machine learning and AI
AI is technology that simulates human intelligence and problem-solving capabilities in machines. Machine learning (ML) is a subdiscipline of AI that uses algorithms to create models that enable machines to perform tasks. AI and ML workloads are the newest on Azure and are growing at a rapid pace.
You can apply this type of workload across every industry to improve metrics and meet performance goals. In medicine and health, these technologies can lead to the discovery of life-saving drugs and practices, and they can also support health assessments.
Other everyday uses of ML and AI include fraud detection, image recognition, and flagging misinformation. These workloads typically need:
- Highly specialized compute (large numbers of GPUs).
- High throughput and IOPS.
- Low-latency access to storage.
- POSIX file system access.
Azure Storage supports these types of workloads by storing checkpoints and providing storage for large-scale datasets and models. These datasets and models are read and written at a pace that keeps GPUs utilized.
Recommended workload configurations
The following table shows Microsoft's suggested storage account configurations for each workload. Changing these configuration options has cost implications.
View pricing at Block blob pricing.
| Workload | Type of account | Performance | Redundancy | Hierarchical namespace enabled | Default access tier | Soft delete enabled |
|---|---|---|---|---|---|---|
| Cloud native | General purpose v2 | Standard | ZRS, RA-GRS | No | Hot | Yes |
| Analytics | General purpose v2 | Standard | ZRS¹, RA-GRS | Yes² | Hot | Yes |
| HPC | General purpose v2 | Standard | ZRS, RA-GRS | Yes | Hot | Yes |
| Backup and archive | General purpose v2 | Standard | ZRS, RA-GRS | No | Cool³ | Yes |
| Machine learning and AI | General purpose v2 | Standard | ZRS, RA-GRS | Yes | Hot | No |
¹ ZRS is a good default for analytics workloads because it offers more redundancy compared to LRS. It protects against zonal failures while remaining fully compatible with analytics frameworks. Customers that require more redundancy for an analytics workload can also use geo-redundant storage (GRS or RA-GRS).
² The hierarchical namespace is a core capability of Data Lake Storage. It enhances data organization and access efficiency for large amounts of data, making it ideal for analytics workloads.
³ The cool access tier offers a cost-effective solution for storing infrequently accessed data (typical for a backup and archive workload). Customers can also consider the cold access tier after evaluating costs.
Storage account endpoints
A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has a URL address that includes your unique account name. The combination of the account name and the service endpoint forms the endpoints for your storage account.
You can configure your storage account to use a custom domain for the Blob Storage endpoint. For more information, see Map a custom domain to an Azure Blob Storage endpoint.
Important
When you reference a service endpoint in a client application, we recommend that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change. If you rely on a cached IP address, you might experience unexpected behavior.
Additionally, we recommend that you honor the time to live (TTL) of the DNS record and avoid overriding it. If you override the DNS TTL, you might experience unexpected behavior.
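In practice, honoring the TTL means letting the system resolver answer on each new connection instead of pinning an address in the application. A minimal standard-library sketch (the helper name is illustrative):

```python
import socket

def resolve_endpoint(host: str, port: int = 443) -> list[str]:
    """Resolve a storage endpoint at connection time.

    Calling the system resolver per connection (rather than
    caching an IP in the application) lets DNS TTLs take effect
    naturally when the storage account's IP address changes.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})
```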
Standard endpoints
A standard service endpoint in Azure Storage includes:
- The protocol. (We recommend HTTPS.)
- The storage account name as the subdomain.
- A fixed domain that includes the name of the service.
The following table lists the format for the standard endpoints for each of the Azure Storage services.
| Storage service | Endpoint |
|---|---|
| Blob Storage | https://<storage-account>.blob.core.chinacloudapi.cn |
| Static website (Blob Storage) | https://<storage-account>.web.core.chinacloudapi.cn |
| Data Lake Storage | https://<storage-account>.dfs.core.chinacloudapi.cn |
| Azure Files | https://<storage-account>.file.core.chinacloudapi.cn |
| Queue Storage | https://<storage-account>.queue.core.chinacloudapi.cn |
| Table Storage | https://<storage-account>.table.core.chinacloudapi.cn |
You can easily construct the URL for an object in Azure Storage. Append the object's location in the storage account to the endpoint. For example, the URL for a blob is similar to:
https://<mystorageaccount>.blob.core.chinacloudapi.cn/<mycontainer>/<myblob>
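That construction can be sketched in a few lines; this example assumes the China-cloud blob endpoint suffix from the table above, and the helper name is illustrative:

```python
from urllib.parse import quote

BLOB_ENDPOINT_SUFFIX = "blob.core.chinacloudapi.cn"  # from the endpoint table

def blob_url(account: str, container: str, blob: str) -> str:
    """Build the URL for a blob: endpoint + object location."""
    # Percent-encode the blob name, but keep '/' so virtual
    # directories in the name stay intact.
    return (f"https://{account}.{BLOB_ENDPOINT_SUFFIX}"
            f"/{container}/{quote(blob, safe='/')}")
```

For example, `blob_url("mystorageaccount", "mycontainer", "myblob")` produces the sample URL shown above.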
Migrating a storage account
The following table summarizes and points to guidance on how to move, upgrade, or migrate a storage account:
| Migration scenario | Details |
|---|---|
| Move a storage account to a different subscription | Azure Resource Manager provides options for moving a resource to a different subscription. For more information, see Move resources to a new resource group or subscription. |
| Move a storage account to a different resource group | Azure Resource Manager provides options for moving a resource to a different resource group. For more information, see Move resources to a new resource group or subscription. |
| Move a storage account to a different region | To move a storage account, create a copy of your storage account in another region. Then, move your data to that account by using AzCopy or another tool of your choice. For more information, see Move an Azure Storage account to another region. |
| Upgrade to a general-purpose v2 storage account | You can upgrade a general-purpose v1 storage account or legacy Blob Storage account to a general-purpose v2 account. This action can't be undone. For more information, see Upgrade to a general-purpose v2 storage account. |
| Migrate a classic storage account to Azure Resource Manager | The Azure Resource Manager deployment model is superior to the classic deployment model in terms of functionality, scalability, and security. For more information about migrating a classic storage account to Azure Resource Manager, see Platform-supported migration of IaaS resources from classic to Azure Resource Manager. |
Transferring data into a storage account
Azure provides services and utilities to import your data from on-premises storage devices or third-party cloud storage providers. Which solution you use depends on the quantity of data you're transferring. For more information, see Azure Storage migration overview.
Storage account encryption
All data in your storage account is automatically encrypted on the service side. For more information about encryption and key management, see Azure Storage encryption for data at rest.
Storage account billing
Azure Storage bills based on your storage account usage. All objects in a storage account are billed together as a group. Storage costs are calculated according to the following factors:
- Region: The geographical region in which your account is based.
- Account type: The type of storage account you're using.
- Access tier: The data usage pattern that you specified for your general-purpose v2 or Blob Storage account.
- Capacity: How much of your storage account allotment you're using to store data.
- Redundancy: How many copies of your data are maintained at one time, and in what locations.
- Transactions: All read and write operations to Azure Storage.
- Data egress: Any data transferred out of an Azure region. When an application that isn't running in the same region accesses the data in your storage account, you're charged for data egress. For information about using resource groups to group your data and services in the same region to limit egress charges, see What is an Azure resource group?
The Azure Storage pricing page provides detailed pricing information based on account type, storage capacity, replication, and transactions. The Data Transfers pricing details provides detailed pricing information for data egress. You can use the Azure Storage pricing calculator to help estimate your costs.
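How these factors combine can be illustrated with back-of-the-envelope arithmetic. The unit prices below are hypothetical placeholders, not actual rates; real prices vary by region, account type, access tier, and redundancy and are listed on the pricing page:

```python
def estimate_monthly_cost(capacity_gb: float,
                          transactions_10k: float,
                          egress_gb: float,
                          price_per_gb: float,
                          price_per_10k_tx: float,
                          price_per_egress_gb: float) -> float:
    """Sum the main billing factors: capacity, transactions, egress."""
    return (capacity_gb * price_per_gb
            + transactions_10k * price_per_10k_tx
            + egress_gb * price_per_egress_gb)

# Hypothetical rates, for illustration only:
cost = estimate_monthly_cost(capacity_gb=1000, transactions_10k=500,
                             egress_gb=100, price_per_gb=0.02,
                             price_per_10k_tx=0.004, price_per_egress_gb=0.08)
# cost is 30.0 with these example rates (20 + 2 + 8)
```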
Retired storage account types
The following account types are retired or scheduled for retirement. They aren't recommended for new deployments. If you still have these accounts, plan to migrate to a supported account type.
Important
Azure storage accounts that use the classic deployment model (ASM) type were retired on August 31, 2024. Migrate to the Azure Resource Manager deployment model. For migration guidance, see classic account migration overview. For more information, see Update on classic storage account retirement.
| Retired account type | Supported services | Redundancy options | Deployment model | Guidance |
|---|---|---|---|---|
| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic¹ | Upgrade existing general-purpose v1 accounts to general-purpose v2 to access modern features and cost-optimization capabilities. Before you upgrade, you can model capacity and operations costs by reading General-purpose v1 account migration. For the in-place upgrade, see storage account upgrade. |
| Blob Storage | Block blobs and append blobs | LRS/GRS/RA-GRS | Resource Manager | Upgrade existing legacy Blob Storage accounts to GPv2 to use access tiers and lifecycle management. See Legacy Blob Storage account migration overview and access tiers overview. |
| Classic (ASM) storage accounts | Blob Storage, Queue Storage, Table Storage, Azure Files | LRS/GRS/RA-GRS | classic | Retired. Migrate to the Resource Manager deployment model. See classic account migration overview. |
¹ Classic denotes the Azure Service Management deployment model.
Scalability targets for standard storage accounts
The following table describes default limits for Azure general-purpose v2 (GPv2), general-purpose v1 (GPv1), and Blob Storage accounts.
A few entries in the table also apply to disk access and are explicitly labeled. Disk access is a resource that is exclusively used for importing or exporting managed disks through private links.
Customers should use a GPv2 storage account, because GPv1 is being retired. You can easily upgrade a GPv1 or Blob Storage account to a GPv2 account with no downtime and no need to copy data. For more information, see Upgrade to a GPv2 storage account.
The ingress limit refers to all data sent to a storage account or disk access. The egress limit refers to all data received from a storage account or disk access.
Note
You can request higher capacity and ingress limits. To request an increase, contact Azure Support.
| Resource | Limit |
|---|---|
| Maximum number of storage accounts per region per subscription, including standard and premium storage accounts. | 250 |
| Default maximum storage account capacity. | 5 PiB¹ |
| Maximum number of blob containers, blobs, directories, and subdirectories (if hierarchical namespace is enabled), file shares, tables, queues, entities, or messages per storage account. | No limit |
| Default maximum request rate per general-purpose v2, Blob Storage account, and disk access resources in the following regions: | 40,000 requests per second¹ |
| Default maximum request rate per general-purpose v2, Blob Storage account, and disk access resources in regions that aren't listed in the previous row. | 20,000 requests per second¹ |
| Default maximum ingress per general-purpose v2, Blob Storage account, and disk access resources in the following regions: | 60 Gbps¹ |
| Default maximum ingress per general-purpose v2, Blob Storage account, and disk access resources in regions that aren't listed in the previous row. | 25 Gbps¹ |
| Default maximum ingress for general-purpose v1 storage accounts (all regions). | 10 Gbps¹ |
| Default maximum egress for general-purpose v2, Blob Storage accounts, and disk access resources in the following regions: | 200 Gbps¹ |
| Default maximum egress for general-purpose v2, Blob Storage accounts, and disk access resources in regions that aren't listed in the previous row. | 50 Gbps¹ |
| Maximum egress for general-purpose v1 storage accounts. | 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS |
| Maximum number of IP address rules per storage account. | 400 |
| Maximum number of virtual network rules per storage account. | 400 |
| Maximum number of resource instance rules per storage account. | 200 |
| Maximum number of private endpoints per storage account. | 200 |
¹ Azure Storage standard accounts support higher capacity limits and higher limits for ingress and egress by request. To request an increase in account limits, contact Azure Support.