What is Software Load Balancer (SLB) for SDN?

Applies to: Azure Stack HCI, version 20H2; Windows Server 2019

Cloud Service Providers (CSPs) and enterprises that are deploying Software Defined Networking (SDN) in Azure Stack HCI can use Software Load Balancer (SLB) to evenly distribute tenant and tenant customer network traffic among virtual network resources. SLB enables multiple servers to host the same workload, providing high availability and scalability.

Software Load Balancer includes the following capabilities:

  • Layer 4 (L4) load balancing services for north/south and east/west TCP/UDP traffic.

  • Public network and internal network traffic load balancing.

  • Dynamic IP address (DIP) support on virtual Local Area Networks (VLANs) and on virtual networks that you create by using Hyper-V Network Virtualization.

  • Health probe support.

  • Readiness for cloud scale, including scale-out and scale-up capability for multiplexers and host agents.

  • A multitenant unified edge, by seamlessly integrating with SDN technologies such as the RAS Gateway, Datacenter Firewall, and Route Reflector.

For more information, see Software Load Balancer Features in this topic.

Note

Multitenancy for VLANs is not supported by Network Controller. However, you can use VLANs with SLB for service provider managed workloads, such as the datacenter infrastructure and high-density web servers.

Using Software Load Balancer, you can scale out your load balancing capabilities using SLB virtual machines (VMs) on the same Hyper-V compute servers that you use for your other VM workloads. Because of this, Software Load Balancing supports the rapid creation and deletion of load balancing endpoints that CSP operations require. In addition, Software Load Balancing supports tens of gigabytes of throughput per cluster, provides a simple provisioning model, and is easy to scale out and in.

How Software Load Balancer works

Software Load Balancer works by mapping virtual IP addresses (VIPs) to DIPs that are part of a cloud service set of resources in the datacenter.

VIPs are single IP addresses that provide public access to a pool of load-balanced VMs. For example, VIPs are IP addresses that are exposed on the internet so that tenants and tenant customers can connect to tenant resources in the cloud datacenter.

DIPs are the IP addresses of the member VMs of a load-balanced pool behind the VIP. DIPs are assigned within the cloud infrastructure to the tenant resources.

VIPs are located in the SLB Multiplexer (MUX). The MUX consists of one or more VMs. Network Controller provides each MUX with each VIP, and each MUX in turn uses Border Gateway Protocol (BGP) to advertise each VIP to routers on the physical network as a /32 route. BGP allows the physical network routers to:

  • Learn that a VIP is available on each MUX, even if the MUXes are on different subnets in a Layer 3 network.

  • Spread the load for each VIP across all available MUXes using Equal Cost Multi-Path (ECMP) routing.

  • Automatically detect a MUX failure or removal, and stop sending traffic to the failed MUX.

  • Spread the load from the failed or removed MUX across the healthy MUXes.
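The ECMP behavior described above can be sketched in a few lines of Python. This is a conceptual illustration only — the MUX names and the hash scheme are invented for the example, and real ECMP hashing happens in router hardware — but it shows why every packet of one flow stays on the same MUX while different flows spread across all of them:

```python
import hashlib

# Hypothetical pool of healthy MUX instances, each advertising the same VIP
# as a /32 route over BGP. A failed MUX is modeled by removing it from the
# list, after which the router redistributes flows across the survivors.
muxes = ["mux-01", "mux-02", "mux-03"]

def ecmp_next_hop(src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int,
                  protocol: str = "TCP") -> str:
    """Pick a next hop the way an ECMP router might: hash the flow's
    5-tuple, so every packet of the same flow takes the same path."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return muxes[digest % len(muxes)]

# All packets of this client flow hash to the same MUX.
print(ecmp_next_hop("203.0.113.7", 53211, "107.105.47.60", 80))
```

Because the hash is deterministic, no per-flow state is needed on the router; that statelessness is what lets any MUX be added or removed without disrupting other flows.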

When public traffic arrives from the internet, the SLB MUX examines the traffic, which contains the VIP as a destination, and maps and rewrites the traffic so that it arrives at an individual DIP. For inbound network traffic, this transaction is performed in a two-step process that is split between the MUX VMs and the Hyper-V host where the destination DIP is located:

  1. Load balance - the MUX uses the VIP to select a DIP, encapsulates the packet, and forwards the traffic to the Hyper-V host where the DIP is located.

  2. Network Address Translation (NAT) - the Hyper-V host removes the encapsulation from the packet, translates the VIP to a DIP, remaps the ports, and forwards the packet to the DIP VM.

The MUX knows how to map VIPs to the correct DIPs because of load balancing policies that you define by using Network Controller. These rules include the protocol, front-end port, back-end port, and distribution algorithm (5-, 3-, or 2-tuple).
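As a rough illustration of the tuple-based distribution algorithm, the sketch below hashes a 5-, 3-, or 2-tuple of a flow to pick a DIP. The DIP addresses and the hash scheme are invented for the example and do not reflect the MUX's actual implementation; the point is that fewer tuple fields means more of a client's flows land on the same DIP:

```python
import hashlib

def select_dip(dips, src_ip, src_port, dst_ip, dst_port,
               protocol="TCP", tuples=5):
    """Hash a 5-, 3-, or 2-tuple of the flow and map it onto the DIP
    pool. With a 2-tuple, all traffic between one client and the VIP
    goes to one DIP regardless of port, giving coarse affinity."""
    if tuples == 5:
        key = (src_ip, src_port, dst_ip, dst_port, protocol)
    elif tuples == 3:
        key = (src_ip, dst_ip, protocol)
    elif tuples == 2:
        key = (src_ip, dst_ip)
    else:
        raise ValueError("tuples must be 5, 3, or 2")
    digest = int(hashlib.sha256(repr(key).encode()).hexdigest(), 16)
    return dips[digest % len(dips)]

pool = ["10.10.10.5", "10.10.20.5"]   # hypothetical DIPs behind one VIP
print(select_dip(pool, "203.0.113.7", 53211, "107.105.47.60", 80))
```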

When tenant VMs respond and send outbound network traffic back to the internet or remote tenant locations, because the NAT is performed by the Hyper-V host, the traffic bypasses the MUX and goes directly from the Hyper-V host to the edge router. This MUX-bypass process is called Direct Server Return (DSR).

After the initial network traffic flow is established, the inbound network traffic bypasses the SLB MUX completely.

In the following illustration, a client computer performs a DNS query for the IP address of a company SharePoint site - in this case, a fictional company named Contoso. The following process occurs:

  1. The DNS server returns the VIP 107.105.47.60 to the client.

  2. The client sends an HTTP request to the VIP.

  3. The physical network has multiple paths available to reach the VIP located on any MUX. Each router along the way uses ECMP to pick the next segment of the path until the request arrives at a MUX.

  4. The MUX that receives the request checks the configured policies, and sees that there are two DIPs available, 10.10.10.5 and 10.10.20.5, on a virtual network to handle the request to the VIP 107.105.47.60.

  5. The MUX selects DIP 10.10.10.5 and encapsulates the packets using VXLAN so that it can send them to the host containing the DIP using the host's physical network address.

  6. The host receives the encapsulated packet and inspects it. It removes the encapsulation and rewrites the packet so that the destination is now DIP 10.10.10.5 instead of the VIP, and then sends the traffic to the DIP VM.

  7. The request reaches the Contoso SharePoint site in Server Farm 2. The server generates a response and sends it to the client, using its own IP address as the source.

  8. The host intercepts the outgoing packet in the virtual switch, which remembers that the client, now the destination, made the original request to the VIP. The host rewrites the source of the packet to be the VIP so that the client does not see the DIP address.

  9. The host forwards the packet directly to the default gateway for the physical network, which uses its standard routing table to forward the packet on to the client, which eventually receives the response.

Software load balancing process

Load balancing internal datacenter traffic

When load balancing network traffic internal to the datacenter, such as between tenant resources that are running on different servers and are members of the same virtual network, the Hyper-V virtual switch to which the VMs are connected performs NAT.

With internal traffic load balancing, the first request is sent to and processed by the MUX, which selects the appropriate DIP and then routes the traffic to it. From that point forward, the established traffic flow bypasses the MUX and goes directly from VM to VM.
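The first-packet-through-the-MUX behavior can be modeled with a small flow table. This is a conceptual sketch only — the real flow state lives in the Hyper-V virtual switch, and the names here are invented for illustration:

```python
# Maps an established flow, keyed by (source, VIP), to the DIP the MUX
# chose for it. Subsequent packets of the flow hit this table and never
# touch the MUX again.
flow_table = {}

def route_packet(src, vip, choose_dip):
    """Return (dip, via_mux): which DIP handles this packet, and whether
    the MUX had to be consulted to decide."""
    key = (src, vip)
    if key in flow_table:
        return flow_table[key], False   # established flow: bypass the MUX
    dip = choose_dip()                  # first packet: MUX selects a DIP
    flow_table[key] = dip
    return dip, True
```

Only the first packet of a flow pays the MUX hop; everything after that is VM-to-VM, which is why the MUX tier can stay small relative to the traffic it governs.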

Health probes

Software Load Balancer includes health probes to validate the health of the network infrastructure, including the following:

  • TCP probe to a port

  • HTTP probe to a port and URL

Unlike a traditional load balancer appliance, where the probe originates on the appliance and travels across the wire to the DIP, the SLB probe originates on the host where the DIP is located and goes directly from the SLB host agent to the DIP, further distributing the work across the hosts.
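A minimal sketch of the two probe types, with assumed semantics (a DIP counts as healthy if the TCP port accepts a connection, or if the probe URL answers with HTTP 200). The real host agent's probe logic and timeouts are not documented here; this only illustrates the idea:

```python
import socket
from urllib.request import urlopen
from urllib.error import URLError

def tcp_probe(ip: str, port: int, timeout: float = 2.0) -> bool:
    """TCP probe: healthy if the port accepts a connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_probe(ip: str, port: int, path: str, timeout: float = 2.0) -> bool:
    """HTTP probe: healthy if the URL answers with HTTP 200."""
    try:
        with urlopen(f"http://{ip}:{port}{path}", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

Because each host probes only its own DIPs, probe traffic never crosses the physical network, and the probing workload grows with the number of hosts rather than concentrating on one appliance.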

Software Load Balancer infrastructure

Before you can configure Software Load Balancer, you must first deploy Network Controller and one or more SLB MUX VMs.

In addition, you must configure the Azure Stack HCI hosts with the SDN-enabled Hyper-V virtual switch and ensure that the SLB Host Agent is running. The routers that serve the hosts must support ECMP routing and Border Gateway Protocol (BGP), and they must be configured to accept BGP peering requests from the SLB MUXes.

The following figure provides an overview of the SLB infrastructure.

Software Load Balancer infrastructure

The following sections provide more information about these elements of the Software Load Balancer infrastructure.

Network Controller

Network Controller hosts the SLB Manager and performs the following actions for Software Load Balancer:

  • Processes SLB commands that come in through the Northbound API from Windows Admin Center, System Center, Windows PowerShell, or another network management application.

  • Calculates policy for distribution to Azure Stack HCI hosts and SLB MUXes.

  • Provides the health status of the Software Load Balancer infrastructure.

You can use Windows Admin Center or Windows PowerShell to install and configure Network Controller and other SLB infrastructure.

SLB MUX

The SLB MUX processes inbound network traffic and maps VIPs to DIPs, then forwards the traffic to the correct DIP. Each MUX also uses BGP to publish VIP routes to edge routers. BGP Keep Alive notifies MUXes when a MUX fails, which allows the active MUXes to redistribute the load in case of a MUX failure. This essentially provides load balancing for the load balancers.

SLB Host Agent

When you deploy Software Load Balancer, you must use Windows Admin Center, System Center, Windows PowerShell, or another management application to deploy the SLB Host Agent on every host server.

The SLB Host Agent listens for SLB policy updates from Network Controller. In addition, the host agent programs rules for SLB into the SDN-enabled Hyper-V virtual switches that are configured on the local computer.

SDN-enabled Hyper-V virtual switch

For a virtual switch to be compatible with SLB, the Virtual Filtering Platform (VFP) extension must be enabled on the virtual switch. This is done automatically by the SDN deployment PowerShell scripts, the Windows Admin Center deployment wizard, and System Center Virtual Machine Manager (SCVMM) deployment.

For information on enabling VFP on virtual switches, see the Windows PowerShell commands Get-VMSystemSwitchExtension and Enable-VMSwitchExtension.

The SDN-enabled Hyper-V virtual switch performs the following actions for SLB:

  • Processes the data path for SLB.

  • Receives inbound network traffic from the MUX.

  • Bypasses the MUX for outbound network traffic, sending it to the router using DSR.

BGP router

The BGP router performs the following actions for Software Load Balancer:

  • Routes inbound traffic to the MUX using ECMP.

  • For outbound network traffic, uses the route provided by the host.

  • Listens for route updates for VIPs from the SLB MUX.

  • Removes SLB MUXes from the SLB rotation if Keep Alive fails.

Software Load Balancer Features

The following sections describe some of the features and capabilities of Software Load Balancer.

Core functionality

  • SLB provides Layer 4 load balancing services for north/south and east/west TCP/UDP traffic.

  • You can use SLB on a Hyper-V Network Virtualization-based network.

  • You can use SLB with a VLAN-based network for DIP VMs connected to an SDN-enabled Hyper-V virtual switch.

  • One SLB instance can handle multiple tenants.

  • SLB and DIPs support a scalable and low-latency return path, as implemented by DSR.

  • SLB functions when you are also using Switch Embedded Teaming (SET) or Single Root Input/Output Virtualization (SR-IOV).

  • SLB includes Internet Protocol version 6 (IPv6) and version 4 (IPv4) support.

  • For site-to-site gateway scenarios, SLB provides NAT functionality to enable all site-to-site connections to use a single public IP.
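Sharing one public IP across many connections is classic port address translation. The following conceptual sketch (addresses and the port range are invented for the example) shows the mapping idea:

```python
import itertools

PUBLIC_IP = "107.105.47.60"            # hypothetical shared public IP
_next_port = itertools.count(50000)    # hypothetical NAT port range
nat_table = {}   # (internal_ip, internal_port) -> public source port

def snat(internal_ip: str, internal_port: int):
    """Map an internal endpoint to a unique (public IP, port) pair.
    Every internal connection gets its own public source port, so
    return traffic can be routed back to the right endpoint."""
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)
    return PUBLIC_IP, nat_table[key]
```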

Scale and performance

  • Ready for cloud scale, including scale-out and scale-up capability for MUXes and Host Agents.

  • One active SLB Manager Network Controller module can support eight MUX instances.

High availability

  • You can deploy SLB to more than two nodes in an active/active configuration.

  • MUXes can be added to and removed from the MUX pool without impacting the SLB service. This maintains SLB availability while individual MUXes are being patched.

  • Individual MUX instances have an uptime of 99 percent.

  • Health monitoring data is available to management entities.

Next steps

For related information, see also: