Plan volumes in Azure Stack HCI

Applies to: Azure Stack HCI, version 20H2; Windows Server 2019

This topic provides guidance for how to plan volumes in Azure Stack HCI to meet the performance and capacity needs of your workloads, including choosing their filesystem, resiliency type, and size.

Review: What are volumes

Volumes are where you put the files your workloads need, such as VHD or VHDX files for Hyper-V virtual machines. Volumes combine the drives in the storage pool to introduce the fault tolerance, scalability, and performance benefits of Storage Spaces Direct, the software-defined storage technology behind Azure Stack HCI.


Throughout the documentation for Storage Spaces Direct, we use the term "volume" to refer jointly to the volume and the virtual disk under it, including functionality provided by other built-in Windows features such as Cluster Shared Volumes (CSV) and ReFS. Understanding these implementation-level distinctions is not necessary to plan and deploy Storage Spaces Direct successfully.


All volumes are accessible by all servers in the cluster at the same time. Once created, they show up at C:\ClusterStorage\ on all servers.


Choosing how many volumes to create

We recommend making the number of volumes a multiple of the number of servers in your cluster. For example, if you have 4 servers, you will experience more consistent performance with 4 total volumes than with 3 or 5. This allows the cluster to distribute volume "ownership" (one server handles metadata orchestration for each volume) evenly among servers.
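As a rough illustration of the balance argument, a round-robin sketch (hypothetical; the cluster's actual placement logic is internal and more sophisticated) shows why a multiple of the server count keeps ownership even:

```python
# Hypothetical round-robin assignment of volume ownership to servers.
# This only illustrates the balance argument; real cluster logic differs.
def ownership_counts(num_volumes: int, num_servers: int) -> list:
    counts = [0] * num_servers
    for v in range(num_volumes):
        counts[v % num_servers] += 1
    return counts

print(ownership_counts(4, 4))  # [1, 1, 1, 1] -- even load
print(ownership_counts(5, 4))  # [2, 1, 1, 1] -- one server does more work
```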

We recommend limiting the total number of volumes to 64 per cluster.

Choosing the filesystem

We recommend using the new Resilient File System (ReFS) for Storage Spaces Direct. ReFS is the premier filesystem purpose-built for virtualization and offers many advantages, including dramatic performance accelerations and built-in protection against data corruption. It supports nearly all key NTFS features, including Data Deduplication in Windows Server version 1709 and later. See the ReFS feature comparison table for details.

If your workload requires a feature that ReFS doesn't support yet, you can use NTFS instead.


Volumes with different file systems can coexist in the same cluster.

Choosing the resiliency type

Volumes in Storage Spaces Direct provide resiliency to protect against hardware problems, such as drive or server failures, and to enable continuous availability throughout server maintenance, such as software updates.


Which resiliency types you can choose is independent of which types of drives you have.

With two servers

With two servers in the cluster, you can use either two-way mirroring or nested resiliency.

Two-way mirroring keeps two copies of all data, one copy on the drives in each server. Its storage efficiency is 50 percent; to write 1 TB of data, you need at least 2 TB of physical storage capacity in the storage pool. Two-way mirroring can safely tolerate one hardware failure at a time (one server or drive).


Nested resiliency provides data resiliency between servers with two-way mirroring, then adds resiliency within a server with two-way mirroring or mirror-accelerated parity. Nesting provides data resilience even when one server is restarting or unavailable. Its storage efficiency is 25 percent with nested two-way mirroring and around 35-40 percent with nested mirror-accelerated parity. Nested resiliency can safely tolerate two hardware failures at a time (two drives, or a server and a drive on the remaining server). Because of this added data resilience, we recommend using nested resiliency on production deployments of two-server clusters. For more info, see Nested resiliency.


With three servers

With three servers, you should use three-way mirroring for better fault tolerance and performance. Three-way mirroring keeps three copies of all data, one copy on the drives in each server. Its storage efficiency is 33.3 percent; to write 1 TB of data, you need at least 3 TB of physical storage capacity in the storage pool. Three-way mirroring can safely tolerate at least two hardware problems (drive or server) at a time. If two nodes become unavailable, the storage pool loses quorum, because 2/3 of the disks are unavailable, and the virtual disks become inaccessible. However, a node can be down and one or more disks on another node can fail, and the virtual disks will remain online. For example, if you're rebooting one server when suddenly another drive or server fails, all data remains safe and continuously accessible.


With four or more servers

With four or more servers, you can choose for each volume whether to use three-way mirroring, dual parity (often called "erasure coding"), or a mix of the two with mirror-accelerated parity.

Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is 50.0 percent; to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7 percent storage efficiency with seven servers, and continues up to 80.0 percent. The tradeoff is that parity encoding is more compute-intensive, which can limit its performance.


Which resiliency type to use depends on the needs of your workload. Here's a table that summarizes which workloads are a good fit for each resiliency type, as well as the performance and storage efficiency of each.

| Resiliency type | Capacity efficiency | Speed | Workloads |
|---|---|---|---|
| Mirror | Three-way mirror: 33%; two-way mirror: 50% | Highest performance | Virtualized workloads; other high-performance workloads |
| Mirror-accelerated parity | Depends on the proportion of mirror and parity | Much slower than mirror, but up to twice as fast as dual parity; best for large sequential writes and reads | Archival and backup; virtualized desktop infrastructure (VDI) |
| Dual parity | 4 servers: 50%; 16 servers: up to 80% | Highest I/O latency and CPU usage on writes; best for large sequential writes and reads | Archival and backup; virtualized desktop infrastructure (VDI) |

When performance matters most

Workloads that have strict latency requirements or that need lots of mixed random IOPS, such as SQL Server databases or performance-sensitive Hyper-V virtual machines, should run on volumes that use mirroring to maximize performance.


Mirroring is faster than any other resiliency type. We use mirroring for nearly all our performance examples.

When capacity matters most

Workloads that write infrequently, such as data warehouses or "cold" storage, should run on volumes that use dual parity to maximize storage efficiency. Certain other workloads, such as traditional file servers, virtual desktop infrastructure (VDI), or others that don't generate lots of fast-drifting random IO traffic and/or don't require the best performance, may also use dual parity at your discretion. Compared to mirroring, parity inevitably increases CPU utilization and IO latency, particularly on writes.

When data is written in bulk

Workloads that write in large, sequential passes, such as archival or backup targets, have another option: one volume can mix mirroring and dual parity. Writes land first in the mirrored portion and are gradually moved into the parity portion later. This accelerates ingestion and reduces resource utilization when large writes arrive by allowing the compute-intensive parity encoding to happen over a longer time. When sizing the portions, consider that the quantity of writes that happen at once (such as one daily backup) should comfortably fit in the mirror portion. For example, if you ingest 100 GB once daily, consider using mirroring for 150 GB to 200 GB, and dual parity for the rest.
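This sizing guidance can be expressed as a simple rule of thumb. The 1.5x-2x margin below matches the article's 100 GB to 150-200 GB example; it is a heuristic, not a product-defined formula:

```python
# Rule-of-thumb sizing for the mirror portion of a mirror-accelerated
# parity volume: comfortably fit one bulk-write pass, with some headroom.
def mirror_portion_gb(bulk_write_gb: float, margin: float = 1.5) -> float:
    return bulk_write_gb * margin

print(mirror_portion_gb(100))       # 150.0 GB (lower end of the range)
print(mirror_portion_gb(100, 2.0))  # 200.0 GB (upper end of the range)
```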

The resulting storage efficiency depends on the proportions you choose.


If you observe an abrupt decrease in write performance partway through data ingestion, it may indicate that the mirror portion is not large enough or that mirror-accelerated parity isn't well suited for your use case. For example, if write performance decreases from 400 MB/s to 40 MB/s, consider expanding the mirror portion or switching to three-way mirroring.

About deployments with NVMe, SSD, and HDD

In deployments with two types of drives, the faster drives provide caching while the slower drives provide capacity. This happens automatically; for more information, see Understanding the cache in Storage Spaces Direct. In such deployments, all volumes ultimately reside on the same type of drives: the capacity drives.

In deployments with all three types of drives, only the fastest drives (NVMe) provide caching, leaving two types of drives (SSD and HDD) to provide capacity. For each volume, you can choose whether it resides entirely on the SSD tier, entirely on the HDD tier, or spans the two.


We recommend using the SSD tier to place your most performance-sensitive workloads on all-flash.

Choosing the size of volumes

We recommend limiting the size of each volume to 64 TB in Windows Server 2019.


If you use a backup solution that relies on the Volume Shadow Copy service (VSS) and the Volsnap software provider, as is common with file server workloads, limiting the volume size to 10 TB will improve performance and reliability. Backup solutions that use the newer Hyper-V RCT API, ReFS block cloning, and/or the native SQL backup APIs perform well up to 32 TB and beyond.


The size of a volume refers to its usable capacity, the amount of data it can store. This is provided by the -Size parameter of the New-Volume cmdlet and then appears in the Size property when you run the Get-Volume cmdlet.

Size is distinct from a volume's footprint, the total physical storage capacity it occupies in the storage pool. The footprint depends on the resiliency type. For example, volumes that use three-way mirroring have a footprint three times their size.
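The size-versus-footprint relationship can be sketched as follows; the multipliers come from the storage efficiencies stated earlier in this article:

```python
# Footprint = usable size x resiliency multiplier (1 / storage efficiency).
# Multipliers follow the efficiencies quoted in this article.
FOOTPRINT_MULTIPLIER = {
    "two-way mirror": 2,    # 50% efficiency
    "three-way mirror": 3,  # 33.3% efficiency
}

def footprint_tb(size_tb: float, resiliency: str) -> float:
    return size_tb * FOOTPRINT_MULTIPLIER[resiliency]

print(footprint_tb(12, "three-way mirror"))  # 36 TB consumed in the pool
```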

The footprints of your volumes need to fit in the storage pool.


Reserve capacity

Leaving some capacity in the storage pool unallocated gives volumes space to repair "in-place" after drives fail, improving data safety and performance. If there is sufficient capacity, an immediate, in-place, parallel repair can restore volumes to full resiliency even before the failed drives are replaced. This happens automatically.

We recommend reserving the equivalent of one capacity drive per server, up to 4 drives. You may reserve more at your discretion, but this minimum recommendation guarantees that an immediate, in-place, parallel repair can succeed after the failure of any drive.


For example, if you have 2 servers and 1 TB capacity drives, set aside 2 x 1 = 2 TB of the pool as reserve. If you have 3 servers and 1 TB capacity drives, set aside 3 x 1 = 3 TB as reserve. If you have 4 or more servers and 1 TB capacity drives, set aside 4 x 1 = 4 TB as reserve.
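The recommendation above reduces to a one-line calculation: one capacity drive per server, capped at four drives.

```python
# Reserve capacity: one capacity drive per server, up to a maximum of 4 drives.
def reserve_capacity_tb(num_servers: int, drive_tb: float) -> float:
    return min(num_servers, 4) * drive_tb

print(reserve_capacity_tb(2, 1))  # 2 TB
print(reserve_capacity_tb(3, 1))  # 3 TB
print(reserve_capacity_tb(8, 1))  # 4 TB -- capped at four drives
```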


In clusters with drives of all three types (NVMe + SSD + HDD), we recommend reserving the equivalent of one SSD plus one HDD per server, up to 4 drives of each.

Example: Capacity planning

Consider a four-server cluster. Each server has some cache drives plus sixteen 2 TB drives for capacity.

4 servers x 16 drives each x 2 TB each = 128 TB

From this 128 TB in the storage pool, we set aside four drives, or 8 TB, so that in-place repairs can happen without any rush to replace drives after they fail. This leaves 120 TB of physical storage capacity in the pool with which we can create volumes.

128 TB – (4 x 2 TB) = 120 TB

Suppose we need our deployment to host some highly active Hyper-V virtual machines, but we also have lots of cold storage: old files and backups we need to retain. Because we have four servers, let's create four volumes.

Let's put the virtual machines on the first two volumes, Volume1 and Volume2. We choose ReFS as the filesystem (for the faster creation and checkpoints) and three-way mirroring for resiliency to maximize performance. Let's put the cold storage on the other two volumes, Volume3 and Volume4. We choose NTFS as the filesystem (for Data Deduplication) and dual parity for resiliency to maximize capacity.

We aren't required to make all volumes the same size, but for simplicity, let's make them all 12 TB.

Volume1 and Volume2, at 33.3 percent efficiency, will each occupy 12 TB x 3 = 36 TB of physical storage capacity.

Volume3 and Volume4, at 50.0 percent efficiency, will each occupy 12 TB x 2 = 24 TB of physical storage capacity.

36 TB + 36 TB + 24 TB + 24 TB = 120 TB
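The whole worked example can be checked end to end (decimal TB throughout, footprint multipliers from the efficiencies above):

```python
# End-to-end check of the capacity-planning example (decimal TB).
servers, drives_per_server, drive_tb = 4, 16, 2
pool_tb = servers * drives_per_server * drive_tb   # 128 TB raw pool
reserve_tb = min(servers, 4) * drive_tb            # 8 TB set aside for repairs
usable_tb = pool_tb - reserve_tb                   # 120 TB left for volumes

mirror_tb = 2 * 12 * 3  # Volume1 + Volume2: three-way mirror (3x footprint)
parity_tb = 2 * 12 * 2  # Volume3 + Volume4: dual parity at 50% (2x footprint)

print(pool_tb, usable_tb, mirror_tb + parity_tb)  # 128 120 120 -- exact fit
```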

The four volumes fit exactly on the physical storage capacity available in our pool. Perfect!



You don't need to create all the volumes right away. You can always extend volumes or create new volumes later.

For simplicity, this example uses decimal (base-10) units throughout, meaning 1 TB = 1,000,000,000,000 bytes. However, storage quantities in Windows appear in binary (base-2) units. For example, each 2 TB drive would appear as 1.82 TiB in Windows, and the 128 TB storage pool would appear as 116.41 TiB. This is expected.
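The decimal-to-binary conversion behind those figures:

```python
# Convert decimal terabytes (10^12 bytes) to binary tebibytes (2^40 bytes),
# which is how Windows reports storage quantities.
def tb_to_tib(tb: float) -> float:
    return tb * 10**12 / 2**40

print(round(tb_to_tib(2), 2))  # 1.82 -- how a 2 TB drive appears
print(tb_to_tib(128))          # ~116.41 -- how the 128 TB pool appears
```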


See Creating volumes in Azure Stack HCI.

Next steps

For more information, see also: