What is Azure Load Balancer?

Load balancing refers to efficiently distributing load, or incoming network traffic, across a group of backend resources or servers. Azure offers a variety of load-balancing options that you can choose from based on your needs. This document covers Azure Load Balancer.

Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of contact for clients. Load Balancer distributes new inbound flows that arrive at the Load Balancer's front end to back-end pool instances, according to specified load-balancing rules and health probes. The back-end pool instances can be Azure Virtual Machines or instances in a virtual machine scale set.

With Azure Load Balancer, you can scale your applications and create highly available services. Load Balancer supports both inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications.

A public Load Balancer can provide outbound connections for virtual machines (VMs) inside your virtual network by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance internet traffic to your VMs.

An internal (or private) Load Balancer can be used for scenarios where only private IP addresses are needed at the frontend. Internal Load Balancers are used to load balance traffic inside a virtual network. You can also reach a Load Balancer frontend from an on-premises network in a hybrid scenario.

Load Balancer components

  • Frontend IP configurations: The IP address of the Load Balancer. It's the point of contact for clients. These can be either public or private IP addresses, creating either public or internal Load Balancers respectively.
  • Backend pool: The group of virtual machines or instances in a virtual machine scale set that serve the incoming request. To scale cost-effectively and meet high volumes of incoming traffic, computing best practices generally recommend adding more instances to the backend pool. Load Balancer instantly reconfigures itself when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the Load Balancer without additional operations on the Load Balancer resource.
  • Health probes: A health probe is used to determine the health of the instances in the backend pool. You can define the unhealthy threshold for your health probes. When a probe fails to respond, the Load Balancer stops sending new connections to the unhealthy instances. A probe failure doesn't affect existing connections. The connection continues until the application terminates the flow, an idle timeout occurs, or the VM shuts down. Load Balancer provides different health probe types for TCP, HTTP, and HTTPS endpoints. For more information, see Probe types.
  • Load-balancing rules: A load-balancing rule tells the Load Balancer how traffic that arrives at a frontend IP address and port should be distributed to the backend pool instances (a sketch after this list shows these components defined together).
  • Inbound NAT rules: An inbound NAT rule forwards traffic from a specific port of a specific frontend IP address to a specific port of a specific backend instance inside the virtual network. Port forwarding is done by the same hash-based distribution as load balancing. Common scenarios for this capability are Remote Desktop Protocol (RDP) or Secure Shell (SSH) sessions to individual VM instances inside an Azure virtual network. You can map multiple internal endpoints to ports on the same frontend IP address. You can use the frontend IP addresses to remotely administer your VMs without an additional jump box.
  • Outbound rules: An outbound rule configures outbound Network Address Translation (NAT) for all virtual machines or instances identified by the backend pool to be translated to the frontend.
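
The following is a minimal sketch of how these components come together in a single Load Balancer resource, assuming the Python azure-mgmt-network and azure-identity SDKs (a recent version with `begin_*` long-running operations). The subscription ID, resource group, and resource names are placeholders, and a Standard-SKU public IP named `my-pip` is assumed to already exist; treat this as an illustration of the resource shape, not a definitive deployment script.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG, LOCATION, LB_NAME = "my-rg", "eastus", "my-lb"
LB_ID = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
         f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}")

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Assumes a Standard-SKU public IP named "my-pip" already exists in the group.
pip = client.public_ip_addresses.get(RG, "my-pip")

poller = client.load_balancers.begin_create_or_update(
    RG, LB_NAME,
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        # Frontend IP configuration: the Load Balancer's point of contact.
        "frontend_ip_configurations": [
            {"name": "fe", "public_ip_address": {"id": pip.id}}
        ],
        # Backend pool: VMs or scale-set instances join via their NICs.
        "backend_address_pools": [{"name": "bepool"}],
        # Health probe: instance is marked unhealthy after two failed probes.
        "probes": [{
            "name": "tcp-probe", "protocol": "Tcp", "port": 80,
            "interval_in_seconds": 15, "number_of_probes": 2,
        }],
        # Load-balancing rule: distribute frontend TCP :80 to the pool.
        "load_balancing_rules": [{
            "name": "http-rule", "protocol": "Tcp",
            "frontend_port": 80, "backend_port": 80,
            "frontend_ip_configuration": {
                "id": f"{LB_ID}/frontendIPConfigurations/fe"},
            "backend_address_pool": {
                "id": f"{LB_ID}/backendAddressPools/bepool"},
            "probe": {"id": f"{LB_ID}/probes/tcp-probe"},
        }],
    },
)
print(poller.result().provisioning_state)  # wait for provisioning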

Load Balancer concepts

Load Balancer provides the following fundamental capabilities for TCP and UDP applications:

  • Load-balancing algorithm: With Azure Load Balancer, you can create a load-balancing rule to distribute traffic that arrives at the frontend to backend pool instances. Load Balancer uses a hashing algorithm for distribution of inbound flows and rewrites the headers of flows to backend pool instances. A server is available to receive new flows when a health probe indicates a healthy backend endpoint. By default, Load Balancer uses a 5-tuple hash. The hash includes source IP address, source port, destination IP address, destination port, and IP protocol number to map flows to available servers. You can create affinity to a source IP address by using a 2- or 3-tuple hash for a given rule. All packets of the same packet flow arrive on the same instance behind the load-balanced front end. When the client initiates a new flow from the same source IP, the source port changes. As a result, the 5-tuple hash might cause the traffic to go to a different backend endpoint (a conceptual sketch of this hashing follows this list). For more information, see Configure the distribution mode for Azure Load Balancer. The following image displays the hash-based distribution:

    Figure: Hash-based distribution

  • Application independence and transparency: Load Balancer doesn't directly interact with TCP or UDP or the application layer. Any TCP or UDP application scenario can be supported. Load Balancer doesn't terminate or originate flows, interact with the payload of the flow, or provide any application layer gateway function. Protocol handshakes always occur directly between the client and the back-end pool instance. A response to an inbound flow is always a response from a virtual machine. When the flow arrives on the virtual machine, the original source IP address is also preserved.

    • Every endpoint is answered by a VM. For example, a TCP handshake always occurs between the client and the selected back-end VM. A response to a request to a front end is a response generated by a back-end VM. When you successfully validate connectivity to a front end, you're validating the end-to-end connectivity to at least one back-end virtual machine.
    • Application payloads are transparent to Load Balancer. Any UDP or TCP application can be supported.
    • Because Load Balancer doesn't interact with the TCP payload or provide TLS offload, you can build end-to-end encrypted scenarios. Using Load Balancer, you gain large scale-out for TLS applications by terminating the TLS connection on the VM itself. For example, your TLS session keying capacity is only limited by the type and number of VMs you add to the back-end pool.
  • Outbound connections (SNAT): All outbound flows from private IP addresses inside your virtual network to public IP addresses on the internet can be translated to a front-end IP address of the Load Balancer. When a public front end is tied to a back-end VM by way of a load-balancing rule, Azure translates outbound connections to the public front-end IP address. This configuration has the following advantages:

    • Easy upgrade and disaster recovery of services, because the front end can be dynamically mapped to another instance of the service.
    • Easier access control list (ACL) management. ACLs expressed as front-end IPs don't change as services scale up or down or get redeployed. Translating outbound connections to a smaller number of IP addresses than machines reduces the burden of implementing safe recipient lists. For more information, see Outbound connections in Azure. Standard Load Balancer has additional SKU-specific capabilities beyond these fundamentals, as described below.
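
To illustrate the 5-tuple behavior described above, here is a conceptual Python sketch of hash-based distribution. It is illustrative only: the hash function, addresses, and ports are assumptions for the example, not Azure's published implementation.

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a flow's 5-tuple to one of the healthy backend instances."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

healthy = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]  # backend pool instances

# Every packet of the same flow shares the 5-tuple, so it always lands
# on the same backend instance...
print(pick_backend("203.0.113.7", 50123, "52.0.0.1", 80, "TCP", healthy))

# ...but a new flow from the same client gets a new source port, so it
# may hash to a different backend -- the behavior described above.
print(pick_backend("203.0.113.7", 51999, "52.0.0.1", 80, "TCP", healthy))
```

A 2-tuple ("SourceIP") or 3-tuple ("SourceIPProtocol") mode simply drops the port fields from the key, which is what creates the source-IP affinity mentioned above.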

Load Balancer SKU comparison

Load Balancer supports both Basic and Standard SKUs. These SKUs differ in scenario scale, features, and pricing. Any scenario that's possible with Basic Load Balancer can be created with Standard Load Balancer. The APIs for both SKUs are similar and are invoked through the specification of a SKU. SKU support for Load Balancer and the public IP is available starting with the 2017-08-01 API version. Both SKUs share the same general API and structure.

The complete scenario configuration might differ slightly depending on SKU. Load Balancer documentation calls out when an article applies only to a specific SKU. To compare and understand the differences, see the following table. For more information, see Azure Standard Load Balancer overview.

Note

Azure recommends Standard Load Balancer. Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both. Load Balancer and the public IP address SKU must match when you use them with public IP addresses. Load Balancer and public IP SKUs aren't mutable.
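
As a sketch of the SKU-matching requirement, the following creates a Standard public IP to pair with a Standard Load Balancer, reusing the hypothetical `client` and names from the earlier sketch; Standard public IPs use static allocation.

```python
# `client` is the NetworkManagementClient from the earlier sketch.
pip = client.public_ip_addresses.begin_create_or_update(
    "my-rg", "my-pip",
    {
        "location": "eastus",
        "sku": {"name": "Standard"},              # must match the LB's SKU
        "public_ip_allocation_method": "Static",  # Standard requires Static
    },
).result()
print(pip.ip_address)
```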

|  | Standard SKU | Basic SKU |
| --- | --- | --- |
| Backend pool size | Supports up to 1,000 instances. | Supports up to 100 instances. |
| Backend pool endpoints | Any virtual machine in a single virtual network, including a blend of virtual machines, availability sets, and virtual machine scale sets. | Virtual machines in a single availability set or virtual machine scale set. |
| Health probes | TCP, HTTP, HTTPS | TCP, HTTP |
| Health probe down behavior | TCP connections stay alive on instance probe down and on all probes down. | TCP connections stay alive on instance probe down. All TCP connections terminate when all probes are down. |
| Diagnostics | Azure Monitor, multi-dimensional metrics including byte and packet counters, health probe status, outbound connection health (SNAT successful and failed flows). | Not available. |
| HA Ports | Available for internal Load Balancer. | Not available. |
| Secure by default | Public IP, public Load Balancer endpoints, and internal Load Balancer endpoints are closed to inbound flows unless allowed by a network security group. | Open by default; network security group optional. |
| Outbound connections | You can explicitly define pool-based outbound NAT with outbound rules. You can use multiple frontends with per-load-balancing-rule opt-out. An outbound scenario must be explicitly created for the virtual machine, availability set, or virtual machine scale set to use outbound connectivity. Virtual network service endpoints can be reached without defining outbound connectivity and don't count toward data processed. Any public IP addresses, including Azure PaaS services not available as VNet service endpoints, must be reached via outbound connectivity and count toward data processed. When only an internal Load Balancer is serving a virtual machine, availability set, or virtual machine scale set, outbound connections via default SNAT aren't available; use outbound rules instead. Outbound SNAT programming is transport-protocol specific, based on the protocol of the inbound load-balancing rule. | Single frontend, selected at random when multiple frontends are present. When only an internal Load Balancer is serving a virtual machine, availability set, or virtual machine scale set, default SNAT is used. |
| Outbound rules | Declarative outbound NAT configuration, using public IP addresses, public IP prefixes, or both; configurable outbound idle timeout (4-120 minutes); custom SNAT port allocation. | Not available. |
| TCP Reset on Idle | Enable TCP Reset (TCP RST) on idle timeout on any rule. | Not available. |
| Multiple frontends | Inbound and outbound. | Inbound only. |
| Management operations | Most operations < 30 seconds. | 60-90+ seconds typical. |
| SLA | 99.99% for data path with two healthy virtual machines. | Not applicable. |
| Pricing | Charged based on the number of rules and data processed inbound and outbound associated with the resource. | No charge. |

For more information, see Load balancer limits. For Standard Load Balancer details, see overview, pricing, and SLA.

Load Balancer types

Public Load Balancer

A public Load Balancer maps the public IP address and port of incoming traffic to the private IP address and port of the VM. Load Balancer maps traffic the other way around for the response traffic from the VM. You can distribute specific types of traffic across multiple VMs or services by applying load-balancing rules. For example, you can spread the load of web request traffic across multiple web servers.

Note

You can implement only one public Load Balancer and one internal Load Balancer per availability set.

The following figure shows a load-balanced endpoint for web traffic that's shared among three VMs on the public IP address and TCP port 80. These three VMs are in a load-balanced set.

Figure: Balancing web traffic by using a public Load Balancer

Internet clients send webpage requests to the public IP address of a web app on TCP port 80. Azure Load Balancer distributes the requests across the three VMs in the load-balanced set. For more information about Load Balancer algorithms, see Load Balancer concepts.

Azure Load Balancer distributes network traffic equally among multiple VM instances by default. You can also configure session affinity. For more information, see Configure the distribution mode for Azure Load Balancer.
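
As a minimal sketch of configuring session affinity, the following switches the hypothetical rule from the earlier sketch off the default 5-tuple distribution by setting `load_distribution`, reusing the same `client` and resource names.

```python
# `client` is the NetworkManagementClient from the earlier sketch.
lb = client.load_balancers.get("my-rg", "my-lb")
for rule in lb.load_balancing_rules:
    if rule.name == "http-rule":
        # "Default" = 5-tuple hash; "SourceIP" = 2-tuple (source IP,
        # destination IP); "SourceIPProtocol" = 3-tuple.
        rule.load_distribution = "SourceIP"
client.load_balancers.begin_create_or_update("my-rg", "my-lb", lb).result()
```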

Internal Load Balancer

An internal Load Balancer directs traffic only to resources that are inside a virtual network or that use a VPN to access Azure infrastructure, in contrast to a public Load Balancer. Azure infrastructure restricts access to the load-balanced front-end IP addresses of a virtual network. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.

An internal Load Balancer enables the following types of load balancing (a frontend configuration sketch follows the figure below):

  • Within a virtual network: Load balancing from VMs in the virtual network to a set of VMs that are in the same virtual network.
  • For a cross-premises virtual network: Load balancing from on-premises computers to a set of VMs that are in the same virtual network.
  • For multi-tier applications: Load balancing for internet-facing multi-tier applications where the back-end tiers aren't internet-facing. The back-end tiers require traffic load balancing from the internet-facing tier. See the next figure.
  • For line-of-business applications: Load balancing for line-of-business applications that are hosted in Azure without additional load balancer hardware or software. This scenario includes on-premises servers that are in the set of computers whose traffic is load balanced.

Figure: Balancing multi-tier applications by using both public and internal Load Balancer
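
The following is a minimal sketch of an internal Load Balancer frontend, reusing the hypothetical `client` from the earlier sketches: instead of referencing a public IP resource, the frontend takes a private IP address from a virtual network subnet. The virtual network, subnet, and address values are placeholders.

```python
# `client` is the NetworkManagementClient from the earlier sketch.
subnet = client.subnets.get("my-rg", "my-vnet", "backend-subnet")

internal_lb = client.load_balancers.begin_create_or_update(
    "my-rg", "my-internal-lb",
    {
        "location": "eastus",
        "sku": {"name": "Standard"},
        # The frontend draws a private IP from the subnet instead of
        # referencing a public IP address resource.
        "frontend_ip_configurations": [{
            "name": "private-fe",
            "subnet": {"id": subnet.id},
            "private_ip_allocation_method": "Static",
            "private_ip_address": "10.0.1.100",  # placeholder address
        }],
        "backend_address_pools": [{"name": "bepool"}],
    },
).result()
print(internal_lb.frontend_ip_configurations[0].private_ip_address)
```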

Pricing

Standard Load Balancer usage is charged based on:

  • The number of configured load-balancing and outbound rules. Inbound NAT rules don't count in the total number of rules.
  • The amount of data processed inbound and outbound, independent of rules.

For Standard Load Balancer pricing information, see Load Balancer pricing.

Basic Load Balancer is offered at no charge.

SLA

For information about the Standard Load Balancer SLA, see SLA for Load Balancer.

Limitations

  • Load Balancer provides load balancing and port forwarding for specific TCP or UDP protocols. Load-balancing rules and inbound NAT rules support TCP and UDP, but not other IP protocols, including ICMP.

    Load Balancer doesn't terminate, respond to, or otherwise interact with the payload of a UDP or TCP flow. It's not a proxy. Successful validation of connectivity to a front end must take place in-band with the same protocol used in a load-balancing or inbound NAT rule (a connectivity-check sketch follows this list). At least one of your virtual machines must generate a response for a client to see a response from a front end.

    Not receiving an in-band response from the Load Balancer front end indicates that no virtual machines could respond. Nothing can interact with a Load Balancer front end without a virtual machine able to respond. This principle also applies to outbound connections, where port-masquerade SNAT is supported only for TCP and UDP. Any other IP protocols, including ICMP, fail. Assign an instance-level public IP address to mitigate this issue. For more information, see Understanding SNAT and PAT.

  • Internal Load Balancers don't translate outbound originated connections to the front end of an internal Load Balancer, because both are in private IP address space. Public Load Balancers provide outbound connections from private IP addresses inside the virtual network to public IP addresses. For internal Load Balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn't required.

    A side effect is that if an outbound flow from a VM in the back-end pool attempts a flow to the front end of the internal Load Balancer in its pool and is mapped back to itself, the two legs of the flow don't match. Because they don't match, the flow fails. The flow succeeds if the flow didn't map back to the same VM in the back-end pool that created the flow to the front end.

    When the flow maps back to itself, the outbound flow appears to originate from the VM to the front end, and the corresponding inbound flow appears to originate from the VM to itself. From the guest operating system's point of view, the inbound and outbound parts of the same flow don't match inside the virtual machine. The TCP stack won't recognize these halves of the same flow as being part of the same flow. The source and destination don't match. When the flow maps to any other VM in the back-end pool, the halves of the flow do match and the VM can respond to the flow.

    The symptom of this scenario is intermittent connection timeouts when the flow returns to the same back end that originated the flow. Common workarounds include inserting a proxy layer behind the internal Load Balancer and using Direct Server Return (DSR) style rules. For more information, see Multiple frontends for Azure Load Balancer.

    You can combine an internal Load Balancer with any third-party proxy, or use internal Application Gateway for proxy scenarios with HTTP/HTTPS. While you could use a public Load Balancer to mitigate this issue, the resulting scenario is prone to SNAT exhaustion. Avoid this second approach unless carefully managed.

  • In general, forwarding IP fragments isn't supported on load-balancing rules: IP fragmentation of UDP and TCP packets isn't supported. High availability ports load-balancing rules can be used to forward existing IP fragments. For more information, see High availability ports overview.
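
As a minimal sketch of the in-band validation described in the first limitation, the following checks a frontend with a TCP handshake on the rule's own port rather than with ICMP ping; the frontend IP address is a placeholder.

```python
import socket

def frontend_reachable(ip: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a TCP handshake with the frontend. Success implies at least
    one healthy backend VM completed the handshake end to end."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(frontend_reachable("52.0.0.1", 80))  # placeholder frontend IP
```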

Next steps

See Create a public Standard Load Balancer to get started with using a Load Balancer: create one, create VMs with a custom IIS extension installed, and load balance the web app between the VMs.