Host network requirements for Azure Stack HCI

Applies to Azure Stack HCI, version 20H2

This topic discusses Azure Stack HCI host networking considerations and requirements. For information on data center architectures and the physical connections between servers, see Physical network requirements.

Network traffic types

Azure Stack HCI network traffic can be classified by its intended purpose:

  • Compute traffic - traffic originating from or destined for a virtual machine (VM)
  • Storage traffic - traffic for Storage Spaces Direct (S2D) using Server Message Block (SMB)
  • Management traffic - traffic important to an administrator for cluster management, such as Active Directory, Remote Desktop, Windows Admin Center, and Windows PowerShell.

Selecting a network adapter

For Azure Stack HCI, we require a network adapter that has achieved the Windows Server Software-Defined Data Center (SDDC) certification with the Standard or Premium Additional Qualification (AQ). These adapters support the most advanced platform features and have undergone the most testing by our hardware partners. Typically, this level of scrutiny leads to a reduction in hardware and driver-related quality issues.

You can identify an adapter that has the Standard or Premium AQ by reviewing the Windows Server Catalog entry for the adapter and the applicable operating system version. Below is an example of the notation for Premium AQ:

Windows Certified

Overview of key network adapter capabilities

Important network adapter capabilities leveraged by Azure Stack HCI include:

  • Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ or d.VMMQ)
  • Remote Direct Memory Access (RDMA)
  • Guest RDMA
  • Switch Embedded Teaming (SET)

Dynamic VMMQ

All network adapters with the Premium AQ support Dynamic VMMQ. Dynamic VMMQ requires the use of Switch Embedded Teaming.

Applicable traffic types: compute

Certifications required: Premium

Dynamic VMMQ is an intelligent receive-side technology that builds upon its predecessors of Virtual Machine Queue (VMQ), Virtual Receive Side Scaling (vRSS), and VMMQ to provide three primary improvements:

  • Optimizes host efficiency by use of CPU cores
  • Automatically tunes network traffic processing to CPU cores, thus enabling VMs to meet and maintain expected throughput
  • Enables "bursty" workloads to receive the expected amount of traffic

For more information on Dynamic VMMQ, see the blog post Synthetic Accelerations.
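Dynamic VMMQ is enabled by default when the platform requirements are met, but you can inspect and adjust the per-VM settings with PowerShell. The following is a minimal sketch; the VM name and queue-pair count are illustrative values, not recommendations:

    # Check whether VMMQ is enabled on a VM's network adapters (VM name is illustrative)
    Get-VMNetworkAdapter -VMName "VM01" |
        Format-List Name, VrssEnabled, VmmqEnabled, VmmqQueuePairs

    # Optionally adjust the VMMQ settings for that VM
    Set-VMNetworkAdapter -VMName "VM01" -VmmqEnabled $true -VmmqQueuePairs 8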

RDMA

RDMA is a network stack offload to the network adapter allowing SMB storage traffic to bypass the operating system for processing.

RDMA enables high-throughput, low-latency networking while using minimal host CPU resources. These host CPU resources can then be used to run additional VMs or containers.

Applicable traffic types: host storage

Certifications required: Standard

All adapters with the Standard or Premium AQ support RDMA (Remote Direct Memory Access). RDMA is the recommended deployment choice for storage workloads in Azure Stack HCI and can be optionally enabled for storage workloads (using SMB) for VMs. See the Guest RDMA section later in this topic.
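As a quick check, you can see which adapters report RDMA support and enable it from PowerShell. This is a minimal sketch; the adapter name is illustrative:

    # Check whether each adapter has RDMA enabled
    Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

    # Enable RDMA on a specific adapter (adapter name is illustrative)
    Enable-NetAdapterRdma -Name "NIC1"

    # Verify that SMB sees the interfaces as RDMA-capable
    Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable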

Azure Stack HCI supports RDMA using either the iWARP or RoCE protocol implementation.

Important

RDMA adapters only work with other RDMA adapters that implement the same RDMA protocol (iWARP or RoCE).

Not all network adapters from vendors support RDMA. The following table lists those vendors (in alphabetical order) that offer Premium-certified RDMA adapters. However, there are hardware vendors not included in this list that also support RDMA. See the Windows Server Catalog to verify RDMA support.

Note

InfiniBand (IB) is not supported with Azure Stack HCI.

NIC vendor                 iWARP   RoCE
Broadcom                   No      Yes
Chelsio                    Yes     No
Intel                      Yes     Yes (some models)
Marvell (Qlogic/Cavium)    Yes     Yes
Nvidia (Mellanox)          No      Yes

For more information on deploying RDMA, download the Word doc from the SDN GitHub repo.

Internet Wide Area RDMA Protocol (iWARP)

iWARP uses the Transmission Control Protocol (TCP), and can be optionally enhanced with Data Center Bridging (DCB) Priority-based Flow Control (PFC) and Enhanced Transmission Selection (ETS).

We recommend that you use iWARP if:

  • You have little or no network experience or are uncomfortable managing network switches
  • You do not control your ToR switches
  • You will not be managing the solution after deployment
  • You already have deployments using iWARP
  • You are unsure which option to choose

RDMA over Converged Ethernet (RoCE)

RoCE uses User Datagram Protocol (UDP), and requires Data Center Bridging (DCB) PFC and ETS to provide reliability.

We recommend that you use RoCE if:

  • You already have deployments with RoCE in your data center
  • You are comfortable managing the DCB network requirements

Guest RDMA

Guest RDMA enables SMB workloads for VMs to gain the same benefits of using RDMA on hosts.

Applicable traffic types: guest-based storage

Certifications required: Premium

The primary benefits of using Guest RDMA are:

  • CPU offload to the NIC for network traffic processing
  • Extremely low latency
  • High throughput

For more information, including how to deploy Guest RDMA, download the Word doc from the SDN GitHub repo.
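The deployment details are in that Word doc, but as a rough illustration, exposing RDMA to a VM's network adapter is done with the Set-VMNetworkAdapterRdma cmdlet. The VM name below is illustrative, and the underlying host adapters must already be configured for RDMA:

    # Expose RDMA to a VM's network adapter (VM name is illustrative)
    Set-VMNetworkAdapterRdma -VMName "VM01" -RdmaWeight 100

    # Confirm the setting
    Get-VMNetworkAdapterRdma -VMName "VM01"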

Switch Embedded Teaming (SET)

Switch Embedded Teaming (SET) is a software-based teaming technology that has been included in the Windows Server operating system since Windows Server 2016. SET is not dependent on the type of network adapters used.

Applicable traffic types: compute, storage, and management

Certifications required: none (built into the OS)

SET is the only teaming technology supported by Azure Stack HCI. Load Balancing/Failover (LBFO) is another teaming technology commonly used with Windows Server but is not supported with Azure Stack HCI. See the blog post Teaming in Azure Stack HCI for more information on LBFO in Azure Stack HCI. SET works well with compute, storage, and management traffic alike.
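As a minimal sketch, a SET team is created when you create a Hyper-V virtual switch bound to more than one physical adapter; the switch and adapter names below are illustrative:

    # Create a virtual switch with Switch Embedded Teaming across two symmetric adapters
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

    # Confirm the team members and teaming configuration
    Get-VMSwitchTeam -Name "ConvergedSwitch"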

SET is important for Azure Stack HCI as it is the only teaming technology that enables:

  • Teaming of RDMA adapters (if needed)
  • Guest RDMA
  • Dynamic VMMQ
  • Other key Azure Stack HCI features (see Teaming in Azure Stack HCI)

SET provides additional features over LBFO, including quality and performance improvements. To do this, SET requires the use of symmetric (identical) adapters; teaming of asymmetric adapters is not supported. Symmetric network adapters are those that have the same:

  • make (vendor)
  • model (version)
  • speed (throughput)
  • configuration

The easiest way to identify whether adapters are symmetric is to check that the speeds are the same and the interface descriptions match. They can deviate only in the numeral listed in the description. Use the Get-NetAdapterAdvancedProperty cmdlet to ensure the configuration reported lists the same property values, as in the sketch that follows the table below.

See the following table for an example of the interface descriptions deviating only by numeral (#):

Name    Interface Description    Link Speed
NIC1    Network Adapter #1       25 Gbps
NIC2    Network Adapter #2       25 Gbps
NIC3    Network Adapter #3       25 Gbps
NIC4    Network Adapter #4       25 Gbps
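One way to check for symmetry from PowerShell is shown below. This is a sketch; the adapter names are illustrative:

    # Compare make, model, driver, and speed across the candidate adapters
    Get-NetAdapter -Name "NIC1","NIC2" |
        Format-Table Name, InterfaceDescription, DriverVersion, LinkSpeed

    # Compare the advanced properties reported by each adapter
    Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" |
        Sort-Object DisplayName |
        Format-Table Name, DisplayName, DisplayValue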

RDMA traffic considerations

If you implement Data Center Bridging (DCB), you must ensure that the PFC and ETS configuration is implemented properly across every network port, including network switches. DCB is required for RoCE and optional for iWARP.

For detailed information on how to deploy RDMA, download the Word doc from the SDN GitHub repo.

RoCE-based Azure Stack HCI implementations require the configuration of three PFC traffic classes, including the default traffic class, across the fabric and all hosts (a host-side configuration sketch follows the class descriptions below):

Cluster traffic class

This traffic class ensures there is enough bandwidth reserved for cluster heartbeats:

  • Required: Yes
  • PFC enabled: No
  • Recommended traffic priority: Priority 7
  • Recommended bandwidth reservation:
    • 10 GbE or lower RDMA networks = 2%
    • 25 GbE or higher RDMA networks = 1%

RDMA traffic class

This traffic class ensures there is enough bandwidth reserved for lossless RDMA communications using SMB Direct:

  • Required: Yes
  • PFC enabled: Yes
  • Recommended traffic priority: Priority 3 or 4
  • Recommended bandwidth reservation: 50%

Default traffic class

This traffic class carries all other traffic not defined in the cluster or RDMA traffic classes, including VM traffic and management traffic:

  • Required: By default (no configuration necessary on the host)
  • Flow control (PFC) enabled: No
  • Recommended traffic class: By default (Priority 0)
  • Recommended bandwidth reservation: By default (no host configuration required)
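For RoCE, the host side of this configuration is done with the NetQos cmdlets. The following is a condensed sketch that assumes priority 3 for SMB Direct, priority 7 for cluster heartbeats, the example bandwidth reservations above, and illustrative adapter names; the authoritative end-to-end steps are in the deployment Word doc referenced earlier:

    # Tag SMB Direct (port 445) traffic with priority 3 and cluster heartbeat traffic with priority 7
    New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    New-NetQosPolicy -Name "Cluster" -Cluster -PriorityValue8021Action 7

    # Enable Priority Flow Control only for the RDMA (SMB) priority
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Reserve bandwidth with ETS: 50% for SMB and 1% for cluster heartbeats (example values from this topic)
    New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    New-NetQosTrafficClass -Name "Cluster" -Priority 7 -BandwidthPercentage 1 -Algorithm ETS

    # Apply DCB to the physical adapters and ignore DCBX settings advertised by the switch
    Enable-NetAdapterQos -Name "NIC1","NIC2"
    Set-NetQosDcbxSetting -Willing $false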

Storage traffic models

SMB provides many benefits as the storage protocol for Azure Stack HCI, including SMB Multichannel. While SMB Multichannel is out of scope for this topic, it is important to understand that traffic is multiplexed across every possible link that SMB Multichannel can use.

Note

We recommend using multiple subnets and VLANs to separate storage traffic in Azure Stack HCI.

Consider the following example of a four-node cluster. Each server has two storage ports (left and right side). Because each adapter is on the same subnet and VLAN, SMB Multichannel will spread connections across all available links. Therefore, the left-side port on the first server (192.168.1.1) will make a connection to the left-side port on the second server (192.168.1.2). The right-side port on the first server (192.168.1.12) will connect to the right-side port on the second server. Similar connections are established for the third and fourth servers.

However, this creates unnecessary connections and causes congestion at the interlink (multi-chassis link aggregation group, or MC-LAG) that connects the top-of-rack (ToR) switches (marked with Xs). See the following diagram:

Same subnet used across a four-node cluster

The recommended approach is to use separate subnets and VLANs for each set of adapters. In the following diagram, the right-hand ports now use subnet 192.168.2.x /24 and VLAN2. This allows traffic on the left-side ports to remain on TOR1 and the traffic on the right-side ports to remain on TOR2. See the following diagram and the configuration sketch after it:

Different subnets used across a four-node cluster
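A sketch of how the second set of storage ports might be placed on their own subnet and VLAN (the adapter name, VLAN ID, and IP address are illustrative):

    # Put the right-side storage port on its own VLAN (adapter name and VLAN ID are illustrative)
    Set-NetAdapter -Name "Storage-Right" -VlanID 2 -Confirm:$false

    # Give it an address in the second storage subnet
    New-NetIPAddress -InterfaceAlias "Storage-Right" -IPAddress 192.168.2.1 -PrefixLength 24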

Traffic bandwidth allocation

The table below shows example bandwidth allocations of various traffic types, using common adapter speeds, in Azure Stack HCI. Note that this is an example of a converged solution, where all traffic types (compute, storage, and management) run over the same physical adapters and are teamed using SET.

Since this use case poses the most constraints, it represents a good baseline. However, considering the permutations for number of adapters and speeds, this should be considered an example and not a support requirement.

The following assumptions are made for this example:

  • There are two adapters per team

  • Storage Bus Layer (SBL), Cluster Shared Volume (CSV), and Hyper-V (Live Migration) traffic:

    • Use the same physical adapters
    • Use SMB
  • SMB is given a 50% bandwidth allocation using Data Center Bridging

    • SBL/CSV is the highest priority traffic and receives 70% of the SMB bandwidth reservation, and:
    • Live Migration (LM) is limited using the Set-SMBBandwidthLimit cmdlet and receives 29% of the remaining bandwidth
      • If the available bandwidth for Live Migration is >= 5 Gbps, and the network adapters are capable, use RDMA. Use the following cmdlet to do so:

        Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
        
      • If the available bandwidth for Live Migration is < 5 Gbps, use compression to reduce blackout times. Use the following cmdlet to do so:

        Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
        
  • If using RDMA with Live Migration, use an SMB bandwidth limit to ensure that Live Migration traffic cannot consume the entire bandwidth allocated to the RDMA traffic class. Be careful: this cmdlet takes input in bytes per second (Bps), whereas network adapter speeds are listed in bits per second (bps). For example, use the following cmdlet to set a bandwidth limit of 6 Gbps:

    Set-SMBBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB


    Note

    750 MBps in this example equates to 6 Gbps.
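You can read the configured limit back to confirm it (a small verification sketch; the value is reported in bytes per second):

    # Confirm the configured Live Migration bandwidth limit
    Get-SmbBandwidthLimit -Category LiveMigration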

Here is the example bandwidth allocation table:

NIC Speed   Teamed Bandwidth   SMB Bandwidth Reservation**   SBL/CSV %   SBL/CSV Bandwidth   Live Migration %   Max Live Migration Bandwidth   Heartbeat %   Heartbeat Bandwidth
10 Gbps     20 Gbps            10 Gbps                       70%         7 Gbps              *                  *                              2%            200 Mbps
25 Gbps     50 Gbps            25 Gbps                       70%         17.5 Gbps           29%                7.25 Gbps                      1%            250 Mbps
40 Gbps     80 Gbps            40 Gbps                       70%         28 Gbps             29%                11.6 Gbps                      1%            400 Mbps
50 Gbps     100 Gbps           50 Gbps                       70%         35 Gbps             29%                14.5 Gbps                      1%            500 Mbps
100 Gbps    200 Gbps           100 Gbps                      70%         70 Gbps             29%                29 Gbps                        1%            1 Gbps
200 Gbps    400 Gbps           200 Gbps                      70%         140 Gbps            29%                58 Gbps                        1%            2 Gbps

*- Compression should be used rather than RDMA, because the bandwidth allocation for Live Migration traffic is < 5 Gbps

**- 50% is the SMB bandwidth reservation used in this example

Stretched cluster considerations

Stretched clusters provide disaster recovery that spans multiple data centers. In its simplest form, a stretched Azure Stack HCI cluster network looks like this:

Stretched cluster

Stretched clusters have the following requirements and characteristics:

  • RDMA is limited to a single site, and is not supported across different sites or subnets.

  • Servers in the same site must reside in the same rack and Layer-2 boundary.

  • Communication between sites crosses a Layer-3 boundary; stretched Layer-2 topologies are not supported.

  • If a site uses RDMA for its storage adapters, these adapters must be on a separate subnet and VLAN that cannot route between sites. This prevents Storage Replica from using RDMA across sites.

  • Adapters used for communication between sites:

    • Can be physical or virtual (host vNIC). If virtual, you must provision one vNIC in its own subnet and VLAN per physical NIC.
    • Must be on their own subnet and VLAN that can route between sites.
    • Must have RDMA disabled using the Disable-NetAdapterRDMA cmdlet. We recommend that you explicitly require Storage Replica to use specific interfaces using the Set-SRNetworkConstraint cmdlet (see the sketch after this list).
    • Must meet any additional requirements for Storage Replica.
  • In the event of a failover to another site, you must ensure that enough bandwidth is available to run the workloads at the other site. A safe option is to provision sites at 50% of their available capacity. This is not a hard requirement if you are able to tolerate lower performance during a failover.
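A sketch of the two cmdlets mentioned in the list above, with illustrative values (the computer names, replication group names, and interface identifiers depend on your Storage Replica configuration):

    # Disable RDMA on the adapters used for cross-site traffic (adapter name is illustrative)
    Disable-NetAdapterRdma -Name "Stretch1"

    # Constrain Storage Replica to the cross-site interfaces, identified by interface index
    # (all names and indexes below are illustrative)
    Set-SRNetworkConstraint -SourceComputerName "Node01" -SourceRGName "RG01" -SourceNWInterface 2 -DestinationComputerName "Node03" -DestinationRGName "RG02" -DestinationNWInterface 2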

Next steps