Extending storage to Azure Stack Hub

This article provides Azure Stack Hub storage infrastructure information to help you decide how to integrate Azure Stack Hub into your existing networking environment. After a general discussion of extending your datacenter, the article presents two different scenarios: connecting to a Windows file storage server, and connecting to a Windows iSCSI server.

Overview of extending storage to Azure Stack Hub

There are scenarios where having your data located in the public cloud isn't enough. Perhaps you have a compute-intensive virtualized database workload that is sensitive to latency, and the round-trip time to the public cloud could affect the performance of the database workload. Perhaps there is data on-premises, held on a file server, NAS, or iSCSI storage array, that needs to be accessed by on-premises workloads and must reside on-premises to meet regulatory or compliance goals. These are just two of the scenarios where keeping data on-premises remains important for many organizations.

So, why not just host that data in storage accounts on Azure Stack Hub, or inside virtualized file servers running on the Azure Stack Hub system? Unlike in Azure, Azure Stack Hub storage is finite. The capacity available for your use depends entirely on the per-node capacity you chose to purchase and the number of nodes you have. And because Azure Stack Hub is a hyperconverged solution, growing your storage capacity to meet usage demands also means growing your compute footprint by adding nodes. This can be cost prohibitive, especially if the extra capacity is needed for cold, archival storage that could be added at low cost outside of the Azure Stack Hub system.

Which brings you to the scenario covered below: how to connect Azure Stack Hub systems, and the virtualized workloads running on them, simply and efficiently to storage systems outside of Azure Stack Hub that are accessible via the network.

Design for extending storage

The diagram depicts a scenario where a single virtual machine running a workload connects to, and uses, storage that is external to both the VM and the Azure Stack Hub itself, for reading and writing data. For this article, you'll focus on simple retrieval of files, but you can expand this example to more complex scenarios, such as the remote storage of database files.

In the diagram, you'll see that the VM on the Azure Stack Hub system has been deployed with multiple NICs. Both for redundancy and as a storage best practice, it's important to have multiple paths between target and destination. Where things become more complex is that VMs in Azure Stack Hub have both public and private IPs, just as in Azure. If the external storage needs to reach the VM, it can only do so via the public IP, because the private IPs are used within the Azure Stack Hub system, inside vNets and subnets. The external storage can't communicate with the private IP space of the VM unless it passes through a site-to-site VPN into the vNet itself. So, for this example, we'll focus on communication via the public IP space. One thing to notice about the public IP space in the diagram is that there are 2 different public IP pool subnets. By default, Azure Stack Hub requires just one pool for public IP addresses, but for redundant routing you might consider adding a second. However, at this time it isn't possible to select an IP address from a specific pool, so you may indeed end up with VMs that have public IPs from the same pool across multiple virtual network cards.

For the purposes of this discussion, we'll assume that the routing between the border devices and the external storage is taken care of, and that traffic can traverse the network appropriately. For this example, it doesn't matter whether the backbone is 1 GbE, 10 GbE, 25 GbE, or even faster; however, this is important to consider as you plan your integration, to address the performance needs of any applications accessing this external storage.

Connect to a Windows Server iSCSI Target

In this scenario, we will deploy and configure a Windows Server 2019 virtual machine on Azure Stack Hub and prepare it to connect to an external iSCSI Target, which will also be running Windows Server 2019. Where appropriate, we will enable key features such as MPIO to optimize performance and connectivity between the VM and the external storage.

Deploy the Windows Server 2019 VM on Azure Stack Hub

  1. From your Azure Stack Hub administration portal (assuming this system has been correctly registered and is connected to the marketplace), select Marketplace Management, then, assuming you don't already have a Windows Server 2019 image, select Add from Azure, search for Windows Server 2019, and add the Windows Server 2019 Datacenter image.

    Downloading the Windows Server 2019 image may take some time.

  2. Once you have a Windows Server 2019 image in your Azure Stack Hub environment, sign in to the Azure Stack Hub user portal.

  3. Once signed in to the Azure Stack Hub user portal, ensure you have a subscription to an offer that allows you to provision IaaS resources (compute, storage, and network).

  4. Once you have a subscription available, back on the dashboard in the Azure Stack Hub user portal, select Create a resource, select Compute, and then select the Windows Server 2019 Datacenter gallery item.

  5. On the Basics blade, complete the information as follows:

    a. Name: VM001

    b. Username: localadmin

    c. Password and Confirm password: <password of your choice>

    d. Subscription: <subscription of your choice, with compute/storage/network resources>

    e. Resource group: storagetesting (create new)

    f. Select OK

  6. On the Choose a size blade, select Standard_F8s_v2, and then select Select.

  7. On the Settings blade, select the Virtual network, and in the Create virtual network blade, adjust the address space to 10.10.10.0/23, update the Subnet address range to 10.10.10.0/24, and then select OK.

  8. Select the Public IP address, and in the Create public IP address blade, select the Static radio button.

  9. In the Select public inbound ports dropdown, select RDP (3389).

  10. Leave the other defaults and select OK.

  11. Read the summary, wait for validation, and then select OK to begin the deployment. The deployment should complete in around 10 minutes.

  12. Once the deployment has completed, under Resource, select the virtual machine name, VM001, to open its Overview blade.

  13. Under DNS name, select Configure, provide the DNS name label vm001, select Save, and then select VM001.

  14. On the right-hand side of the Overview blade, select storagetesting-vnet/default under the Virtual network/subnet text.

  15. Within the storagetesting-vnet blade, select Subnets, then +Subnet, and in the new Add subnet blade, enter the following information, then select OK:

    a. Name: subnet2

    b. Address range (CIDR block): 10.10.11.0/24

    c. Network security group: None

    d. Route table: None

  16. Once saved, select VM001.

  17. From the left-hand side of the Overview blade, select Networking.

  18. Select Attach network interface, and then select Create network interface.

  19. On the Create network interface blade, enter the following information:

    a. Name: vm001nic2

    b. Subnet: ensure the subnet is 10.10.11.0/24

    c. Network security group: VM001-nsg

    d. Resource group: storagetesting

  20. Once the network interface has been created, select VM001 and select Stop to shut down the VM.

  21. Once the VM is stopped (deallocated), on the left-hand side of the Overview blade, select Networking, select Attach network interface, select vm001nic2, and then select OK. The additional NIC will be added to the VM in a few moments.

  22. Still on the Networking blade, select the vm001nic2 tab, and then select Network Interface: vm001nic2.

  23. On the vm001nic2 interface blade, select IP configurations, and in the center of the blade, select ipconfig1.

  24. On the ipconfig1 settings blade, select Enabled for Public IP address, select Configure required settings, select Create new, enter vm001nic2pip for the name, select Static, select OK, and then select Save.

  25. Once successfully saved, return to the VM001 Overview blade and select Start to start your configured Windows Server 2019 VM.

  26. Once started, establish an RDP session into VM001.

  27. Once connected inside the VM, open CMD (as administrator) and enter hostname to retrieve the computer name of the OS. It should match VM001. Make a note of this for later.
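
If you'd rather script the deployment than click through the portal, the following is a minimal sketch using the Az PowerShell modules. It assumes your session is already signed in to your Azure Stack Hub user environment (for example, via Add-AzEnvironment and Connect-AzAccount); the region name "local", the resource names vm001nic1 and VM001-pip, and the image SKU are assumptions that mirror the portal steps, and NSG/RDP rule creation is omitted:

    # A sketch only: names and values mirror the portal walkthrough above.
    $location = "local"  # placeholder: use your own Azure Stack Hub region name

    # Resource group, virtual network (10.10.10.0/23), and default subnet (10.10.10.0/24)
    New-AzResourceGroup -Name "storagetesting" -Location $location
    $subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.10.10.0/24"
    $vnet = New-AzVirtualNetwork -Name "storagetesting-vnet" -ResourceGroupName "storagetesting" `
        -Location $location -AddressPrefix "10.10.10.0/23" -Subnet $subnet

    # Static public IP and the primary NIC
    $pip = New-AzPublicIpAddress -Name "VM001-pip" -ResourceGroupName "storagetesting" `
        -Location $location -AllocationMethod Static
    $nic = New-AzNetworkInterface -Name "vm001nic1" -ResourceGroupName "storagetesting" `
        -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

    # Windows Server 2019 Datacenter VM, size Standard_F8s_v2
    $cred = Get-Credential -UserName "localadmin" -Message "Password for VM001"
    $vmConfig = New-AzVMConfig -VMName "VM001" -VMSize "Standard_F8s_v2" |
        Set-AzVMOperatingSystem -Windows -ComputerName "VM001" -Credential $cred |
        Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" `
            -Skus "2019-Datacenter" -Version "latest" |
        Add-AzVMNetworkInterface -Id $nic.Id -Primary
    New-AzVM -ResourceGroupName "storagetesting" -Location $location -VM $vmConfig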

Configure the second network adapter on the Windows Server 2019 VM on Azure Stack Hub

By default, Azure Stack Hub assigns a default gateway to the first (primary) network interface attached to the virtual machine. Azure Stack Hub does not assign a default gateway to additional (secondary) network interfaces attached to a virtual machine. Therefore, by default, you are unable to communicate with resources outside the subnet that a secondary network interface is in. Secondary network interfaces can, however, communicate with resources outside their subnet, though the steps to enable communication differ between operating systems.

  1. If you do not already have a connection open, establish an RDP connection into VM001.

  2. Open CMD as administrator and run route print, which should return the two interfaces (Hyper-V Network Adapters) inside this VM.

  3. Now run ipconfig to see which IP address is assigned to the secondary network interface. In this example, 10.10.11.4 is assigned to interface 6. No default gateway address is returned for the secondary network interface.

  4. To route all traffic destined for addresses outside the subnet of the secondary network interface to the gateway for that subnet, run the following command from CMD (a PowerShell equivalent is sketched after these steps):

    route add -p 0.0.0.0 MASK 0.0.0.0 <ipaddress> METRIC 5015 IF <interface>
    

    <ipaddress> is the .1 address of the current subnet, and <interface> is the interface number.

  5. To confirm the added route is in the route table, enter the route print command.

  6. You can also validate outbound communication by running a ping command:
    ping 8.8.8.8 -S 10.10.11.4
    The -S flag allows you to specify a source address; in this case, 10.10.11.4 is the IP address of the NIC that now has a default gateway.

  7. Close CMD.
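
If you prefer PowerShell over route add, a minimal equivalent using the built-in NetTCPIP module is shown below. Interface index 6 and the gateway 10.10.11.1 are the example values from this walkthrough, so substitute your own:

    # Same effect as the persistent route add command above; New-NetRoute
    # writes to both the active and persistent stores by default.
    New-NetRoute -DestinationPrefix "0.0.0.0/0" `
        -InterfaceIndex 6 `
        -NextHop "10.10.11.1" `
        -RouteMetric 5015

    # Confirm the default route now exists for the secondary interface
    Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Format-Table ifIndex, NextHop, RouteMetric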

Configure the Windows Server 2019 iSCSI Target

For the purposes of this scenario, you'll be validating a configuration where the Windows Server 2019 iSCSI Target is a virtual machine running on Hyper-V, outside of the Azure Stack Hub environment. This virtual machine will be configured with 8 virtual processors, a single VHDX file, and, most importantly, 2 virtual network adapters. In an ideal scenario, these network adapters would be on different routable subnets, but in this validation they will be on the same subnet.

Your iSCSI Target server could be Windows Server 2016 or 2019, physical or virtual, running on Hyper-V, VMware, or an alternative appliance of your choice, such as a dedicated physical iSCSI SAN. The key focus here is connectivity into and out of the Azure Stack Hub system; however, having multiple paths between the source and destination is preferable, as it provides additional redundancy and allows the use of more advanced capabilities, such as MPIO, to drive increased performance.

I'd encourage you to update your Windows Server 2019 iSCSI Target with the latest cumulative updates and fixes, rebooting if necessary, before proceeding with the configuration of file shares.

Once updated and rebooted, you can configure this server as an iSCSI Target. The steps below use Server Manager; a PowerShell equivalent is sketched after them.

  1. Open Server Manager and select Manage, then Add Roles and Features.

  2. Once opened, select Next, select Role-based or feature-based installation, and proceed through the selections until you reach the Select server roles page.

  3. Expand File and Storage Services, expand File & iSCSI Services, tick the iSCSI Target Server box, accept any popup prompts to add new features, and then proceed through to completion.

    Once completed, close Server Manager.

  4. Open File Explorer, navigate to C:\, and create a new folder named iSCSI.

  5. Reopen Server Manager and select File and Storage Services from the left-hand menu.

  6. Select iSCSI, and then select the "To create an iSCSI virtual disk, start the New iSCSI Virtual Disk Wizard" link in the right pane. A wizard pops up.

  7. On the Select iSCSI virtual disk location page, select the radio button for Type a custom path, browse to C:\iSCSI, and select Next.

  8. Give the iSCSI virtual disk a name of iSCSIdisk1 and, optionally, a description, then select Next.

  9. Set the size of the virtual disk to 10GB, select Fixed size, and select Next.

  10. Since this is a new target, select New iSCSI target and select Next.

  11. On the Specify target name page, enter TARGET1 and select Next.

  12. On the Specify access servers page, select Add. This opens a dialog to enter the specific initiators that will be authorized to connect to the iSCSI Target.

  13. In the Add initiator ID window, select Enter a value for the selected type, and under Type, ensure IQN is selected in the drop-down menu. Enter iqn.1991-05.com.microsoft:<computername>, where <computername> is the computer name of VM001, then select Next.

  14. On the Enable Authentication page, leave the boxes blank, then select Next.

  15. Confirm your selections and select Create, then Close. You should see your iSCSI virtual disk created in Server Manager.
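
As an alternative to the wizard, a minimal sketch using the iSCSITarget PowerShell module is shown below, reusing the names from the steps above; <computername> is again the computer name of VM001, and the .vhdx path is an assumption matching the wizard's output:

    # Install the iSCSI Target Server role (same as the Server Manager steps above)
    Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools

    # Fixed-size 10 GB virtual disk under C:\iSCSI
    New-Item -Path "C:\iSCSI" -ItemType Directory -Force
    New-IscsiVirtualDisk -Path "C:\iSCSI\iSCSIdisk1.vhdx" -SizeBytes 10GB -UseFixed

    # Target that only the VM001 initiator IQN may connect to
    New-IscsiServerTarget -TargetName "TARGET1" `
        -InitiatorIds @("IQN:iqn.1991-05.com.microsoft:<computername>")

    # Present the virtual disk through the target
    Add-IscsiVirtualDiskTargetMapping -TargetName "TARGET1" -Path "C:\iSCSI\iSCSIdisk1.vhdx"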

Configure the Windows Server 2019 iSCSI Initiator and MPIO

To set up the iSCSI Initiator, first sign back in to the Azure Stack Hub user portal on your Azure Stack Hub system and navigate to the Overview blade for VM001. (A scripted equivalent of the steps below is sketched at the end of this section.)

  1. Establish an RDP connection to VM001. Once connected, open Server Manager.

  2. Select Add roles and features, and accept the defaults until you reach the Features page.

  3. On the Features page, add Multipath I/O and select Next.

  4. Tick the Restart the destination server automatically if required box, select Install, and then select Close. A reboot will most likely be required; once it has completed, reconnect to VM001.

  5. Back in Server Manager, wait for the MPIO install to complete, select Close, and then select Tools and MPIO.

  6. Select the Discover Multi-Paths tab, tick the Add support for iSCSI devices box, select Add, and then select Yes to reboot VM001. If you don't receive a prompt, select OK and reboot manually.

  7. Once rebooted, establish a new RDP connection to VM001.

  8. Once connected, open Server Manager, select Tools, and select iSCSI Initiator.

  9. When the Microsoft iSCSI window pops up, select Yes to allow the iSCSI service to run by default.

  10. In the iSCSI Initiator Properties window, select the Discovery tab.

  11. You will now add 2 targets, so first select the Discover Portal button.

  12. Enter the first IP address of your iSCSI Target server, and select Advanced.

  13. In the Advanced Settings window, select the following, then select OK:

    a. Local adapter: Microsoft iSCSI Initiator.

    b. Initiator IP: 10.10.10.4.

  14. Back in the Discover Target Portal window, select OK.

  15. Repeat the process with the following:

    a. IP address: your second iSCSI Target IP address.

    b. Local adapter: Microsoft iSCSI Initiator.

    c. Initiator IP: 10.10.11.4.

  16. Your target portals should now be listed, with your own iSCSI Target IPs under the Address column.

  17. Back on the Targets tab, select your iSCSI Target from the middle of the window, and select Connect.

  18. In the Connect to Target window, tick the Enable multi-path box, and select Advanced.

  19. Enter the following information and select OK; then, in the Connect to Target window, select OK:

    a. Local adapter: Microsoft iSCSI Initiator.

    b. Initiator IP: 10.10.10.4.

    c. Target portal IP: <your first iSCSI Target IP / 3260>.

  20. Repeat the process for the second initiator/target combination:

    a. Local adapter: Microsoft iSCSI Initiator.

    b. Initiator IP: 10.10.11.4.

    c. Target portal IP: <your second iSCSI Target IP / 3260>.

  21. Select the Volumes and Devices tab, and then select Auto Configure. You should see an MPIO volume presented.

  22. Back on the Targets tab, select Devices, and you should see 2 connections to the single iSCSI VHD you created earlier.

  23. Select the MPIO button to see more information about the load-balancing policy and paths.

  24. Select OK three times to exit the windows and the iSCSI Initiator.

  25. Open Disk Management (diskmgmt.msc); you should be prompted with an Initialize Disk window.

  26. Select OK to accept the defaults, then scroll down to the new disk, right-click, and select New Simple Volume.

  27. Walk through the wizard, accepting the defaults. Change the Volume label to iSCSIdisk1, and then select Finish.

  28. The drive will then be formatted and presented with a drive letter.

  29. Open File Explorer and select This PC to see your new drive attached to VM001.
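
The initiator side can be scripted as well. The following is a rough sketch of the steps above using the MPIO and iSCSI PowerShell modules; the two target portal addresses are left as placeholders, and the initiator IPs are the example values from this walkthrough. A reboot is still typically required after the MPIO steps:

    # Install MPIO and claim iSCSI devices (reboot when prompted)
    Install-WindowsFeature -Name Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Ensure the iSCSI Initiator service is running and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # One discovery portal per path (replace the placeholders with your target IPs)
    New-IscsiTargetPortal -TargetPortalAddress "<first iSCSI Target IP>" -InitiatorPortalAddress "10.10.10.4"
    New-IscsiTargetPortal -TargetPortalAddress "<second iSCSI Target IP>" -InitiatorPortalAddress "10.10.11.4"

    # Connect the target once per path, with multipath enabled and persistence across reboots
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
        -InitiatorPortalAddress "10.10.10.4" -TargetPortalAddress "<first iSCSI Target IP>"
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
        -InitiatorPortalAddress "10.10.11.4" -TargetPortalAddress "<second iSCSI Target IP>"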

Testing external storage connectivity

To validate communication and run a rudimentary file copy test, first sign back in to the Azure Stack Hub user portal on your Azure Stack Hub system and navigate to the Overview blade for VM001.

  1. Select Connect to establish an RDP connection into VM001.

  2. Open Task Manager, select the Performance tab, and then snap the window to the right-hand side of the RDP session.

  3. Open Windows PowerShell ISE as administrator and snap it to the left-hand side of the RDP session. On the right-hand side of the ISE, close the Commands pane, and select the Script button to expand the white script pane at the top of the ISE window.

  4. In this VM, there are no native PowerShell modules to create a VHD, which we will use as a large file to test the file transfer to the iSCSI Target. In this case, we will run DiskPart to create the VHD file. In the ISE, run the following:

    1. Start-Process Diskpart

    2. A new CMD window will open; enter the following:

      Create vdisk file="c:\test.vhd" type=fixed maximum=5120

    3. This will take a few moments to create. Once created, to validate the creation, open File Explorer and navigate to C:\; you should see the new test.vhd present, with a size of 5GB.

    4. Close the CMD window and return to the ISE, then enter the following command in the script window, replacing F:\ with the iSCSI Target drive letter that was applied earlier:

    5. Copy-Item "C:\test.vhd" -Destination "F:\"

    6. Select the line in the script window, and press F8 to run it.

    7. While the command is running, observe the two network adapters and see the transfer of data taking place across both network adapters in VM001. You should also notice that each network adapter shares the load evenly. (A timed variant of this copy is sketched after these steps.)
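
To attach a rough throughput number to the test, you could wrap the copy in Measure-Command; a simple sketch, assuming the same file and drive letter as above:

    # Time the 5 GB copy to the iSCSI-backed volume (F:\ as used above)
    $elapsed = Measure-Command { Copy-Item "C:\test.vhd" -Destination "F:\" -Force }

    # Rough MB/s across both paths: 5 GB file / elapsed seconds
    $mbps = (5 * 1024) / $elapsed.TotalSeconds
    "Copy took {0:N1} seconds (~{1:N0} MB/s)" -f $elapsed.TotalSeconds, $mbps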

This scenario was designed to highlight the connectivity between a workload running on Azure Stack Hub and an external storage array, in this case, a Windows Server-based iSCSI Target. It wasn't designed to be a performance test, nor to reflect the steps you'd need to perform if you were using an alternative iSCSI-based appliance; however, it does highlight some of the core considerations you'd make when deploying workloads on Azure Stack Hub and connecting them to storage systems outside of the Azure Stack Hub environment.

Next steps

Differences and considerations for Azure Stack Hub networking