Service Fabric networking patterns

You can integrate your Azure Service Fabric cluster with other Azure networking features. This article shows you how to create clusters that use those features.

Service Fabric runs in a standard virtual machine scale set. Any functionality that you can use in a virtual machine scale set, you can use with a Service Fabric cluster. The networking sections of the Azure Resource Manager templates for virtual machine scale sets and Service Fabric are identical. After you deploy to an existing virtual network, it's easy to incorporate other networking features, like Azure ExpressRoute, Azure VPN Gateway, a network security group, and virtual network peering.

Allowing the Service Fabric resource provider to query your cluster

Service Fabric differs from other networking features in one respect: the Azure portal internally uses the Service Fabric resource provider to call into a cluster to get information about nodes and applications. The Service Fabric resource provider requires publicly accessible inbound access to the HTTP gateway port (port 19080, by default) on the management endpoint. Service Fabric Explorer uses the management endpoint to manage your cluster. The Service Fabric resource provider also uses this port to query information about your cluster, to display in the Azure portal.

If port 19080 is not accessible from the Service Fabric resource provider, a message like "Nodes Not Found" appears in the portal, and your node and application list appears empty. If you want to see your cluster in the Azure portal, your load balancer must expose a public IP address, and your network security group must allow incoming traffic on port 19080. If your setup does not meet these requirements, the Azure portal does not display the status of your cluster.
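As a sketch, an inbound rule that satisfies this requirement can be expressed as a network security group security rule like the following. The rule name, description, and priority here are illustrative, not taken from the templates in this article:

```json
{
    "name": "AllowSFHttpGatewayInbound",
    "properties": {
        "description": "Let the Service Fabric resource provider reach the HTTP gateway",
        "protocol": "Tcp",
        "sourcePortRange": "*",
        "destinationPortRange": "19080",
        "sourceAddressPrefix": "Internet",
        "destinationAddressPrefix": "*",
        "access": "Allow",
        "priority": 3900,
        "direction": "Inbound"
    }
}
```

This fragment goes in the `securityRules` array of a `Microsoft.Network/networkSecurityGroups` resource; if your cluster uses a non-default HTTP gateway port, substitute it for 19080.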

Note

This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure PowerShell.

Templates

All Service Fabric templates are in GitHub. You should be able to deploy the templates as-is by using the following PowerShell commands. If you are deploying the existing Azure Virtual Network template or the static public IP template, first read the Initial setup section of this article.

Initial setup

Existing virtual network

In the following example, we start with an existing virtual network named ExistingRG-vnet, in the ExistingRG resource group. The subnet is named default. These default resources are created when you use the Azure portal to create a standard virtual machine (VM). You could create the virtual network and subnet without creating the VM, but the main goal of adding a cluster to an existing virtual network is to provide network connectivity to other VMs. Creating the VM gives a good example of how an existing virtual network typically is used. If your Service Fabric cluster uses only an internal load balancer, without a public IP address, you can use the VM and its public IP address as a secure jump box.

Static public IP address

A static public IP address generally is a dedicated resource that's managed separately from the VM or VMs it's assigned to. It's provisioned in a dedicated networking resource group (as opposed to in the Service Fabric cluster resource group itself). Create a static public IP address named staticIP1 in the same ExistingRG resource group, either in the Azure portal or by using PowerShell:

PS C:\Users\user> New-AzPublicIpAddress -Name staticIP1 -ResourceGroupName ExistingRG -Location chinanorth -AllocationMethod Static -DomainNameLabel sfnetworking

Name                     : staticIP1
ResourceGroupName        : ExistingRG
Location                 : chinanorth
Id                       : /subscriptions/1237f4d2-3dce-1236-ad95-123f764e7123/resourceGroups/ExistingRG/providers/Microsoft.Network/publicIPAddresses/staticIP1
Etag                     : W/"fc8b0c77-1f84-455d-9930-0404ebba1b64"
ResourceGuid             : 77c26c06-c0ae-496c-9231-b1a114e08824
ProvisioningState        : Succeeded
Tags                     :
PublicIpAllocationMethod : Static
IpAddress                : 40.83.182.110
PublicIpAddressVersion   : IPv4
IdleTimeoutInMinutes     : 4
IpConfiguration          : null
DnsSettings              : {
                             "DomainNameLabel": "sfnetworking",
                             "Fqdn": "sfnetworking.chinanorth.cloudapp.chinacloudapi.cn"
                           }

Service Fabric template

In the examples in this article, we use the Service Fabric template.json. You can use the standard portal wizard to download the template from the portal before you create a cluster. You also can use one of the sample templates, like the secure five-node Service Fabric cluster.

Existing virtual network or subnet

  1. Change the subnet parameter to the name of the existing subnet, and then add two new parameters to reference the existing virtual network:

    "subnet0Name": {
            "type": "string",
            "defaultValue": "default"
        },
        "existingVNetRGName": {
            "type": "string",
            "defaultValue": "ExistingRG"
        },
    
        "existingVNetName": {
            "type": "string",
            "defaultValue": "ExistingRG-vnet"
        },
        /*
        "subnet0Name": {
            "type": "string",
            "defaultValue": "Subnet-0"
        },
        "subnet0Prefix": {
            "type": "string",
            "defaultValue": "10.0.0.0/24"
        },*/
    
  2. Comment out the nicPrefixOverride attribute of Microsoft.Compute/virtualMachineScaleSets, because you are using an existing subnet and you disabled this variable in step 1.

    /*"nicPrefixOverride": "[parameters('subnet0Prefix')]",*/
    
  3. Change the vnetID variable to point to the existing virtual network:

    /*old "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',parameters('virtualNetworkName'))]",*/
    "vnetID": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('existingVNetRGName'), '/providers/Microsoft.Network/virtualNetworks/', parameters('existingVNetName'))]",
    
  4. Remove Microsoft.Network/virtualNetworks from your resources, so Azure does not create a new virtual network:

    /*{
    "apiVersion": "[variables('vNetApiVersion')]",
    "type": "Microsoft.Network/virtualNetworks",
    "name": "[parameters('virtualNetworkName')]",
    "location": "[parameters('computeLocation')]",
    "properties": {
        "addressSpace": {
            "addressPrefixes": [
                "[parameters('addressPrefix')]"
            ]
        },
        "subnets": [
            {
                "name": "[parameters('subnet0Name')]",
                "properties": {
                    "addressPrefix": "[parameters('subnet0Prefix')]"
                }
            }
        ]
    },
    "tags": {
        "resourceType": "Service Fabric",
        "clusterName": "[parameters('clusterName')]"
    }
    },*/
    
  5. Comment out the virtual network from the dependsOn attribute of Microsoft.Compute/virtualMachineScaleSets, so you don't depend on creating a new virtual network:

    "apiVersion": "[variables('vmssApiVersion')]",
    "type": "Microsoft.Computer/virtualMachineScaleSets",
    "name": "[parameters('vmNodeType0Name')]",
    "location": "[parameters('computeLocation')]",
    "dependsOn": [
        /*"[concat('Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'))]",
        */
        "[Concat('Microsoft.Storage/storageAccounts/', variables('uniqueStringArray0')[0])]",
    
    
  6. Deploy the template:

    New-AzResourceGroup -Name sfnetworkingexistingvnet -Location chinanorth
    New-AzResourceGroupDeployment -Name deployment -ResourceGroupName sfnetworkingexistingvnet -TemplateFile C:\SFSamples\Final\template_existingvnet.json
    

    After deployment, your virtual network should include the new scale set VMs. The virtual machine scale set node type should show the existing virtual network and subnet. You also can use Remote Desktop Protocol (RDP) to access the VM that was already in the virtual network, and to ping the new scale set VMs:

    C:\Users\user>ping 10.0.0.5 -n 1
    C:\Users\user>ping NOde1000000 -n 1
    

For another example, see one that is not specific to Service Fabric.

Static public IP address

  1. Add parameters for the existing static IP address's resource group name, name, and fully qualified domain name (FQDN):

    "existingStaticIPResourceGroup": {
                "type": "string"
            },
            "existingStaticIPName": {
                "type": "string"
            },
            "existingStaticIPDnsFQDN": {
                "type": "string"
    }
    
  2. Remove the dnsName parameter. (The static IP address already has one.)

    /*
    "dnsName": {
        "type": "string"
    },
    */
    
  3. Add a variable to reference the existing static IP address:

    "existingStaticIP": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('existingStaticIPResourceGroup'), '/providers/Microsoft.Network/publicIPAddresses/', parameters('existingStaticIPName'))]",
    
  4. Remove Microsoft.Network/publicIPAddresses from your resources, so Azure does not create a new IP address:

    /*
    {
        "apiVersion": "[variables('publicIPApiVersion')]",
        "type": "Microsoft.Network/publicIPAddresses",
        "name": "[concat(parameters('lbIPName'),)'-', '0')]",
        "location": "[parameters('computeLocation')]",
        "properties": {
            "dnsSettings": {
                "domainNameLabel": "[parameters('dnsName')]"
            },
            "publicIPAllocationMethod": "Dynamic"        
        },
        "tags": {
            "resourceType": "Service Fabric",
            "clusterName": "[parameters('clusterName')]"
        }
    }, */
    
  5. Comment out the IP address from the dependsOn attribute of Microsoft.Network/loadBalancers, so you don't depend on creating a new IP address:

    "apiVersion": "[variables('lbIPApiVersion')]",
    "type": "Microsoft.Network/loadBalancers",
    "name": "[concat('LB', '-', parameters('clusterName'), '-', parameters('vmNodeType0Name'))]",
    "location": "[parameters('computeLocation')]",
    /*
    "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', concat(parameters('lbIPName'), '-', '0'))]"
    ], */
    "properties": {
    
  6. In the Microsoft.Network/loadBalancers resource, change the publicIPAddress element of frontendIPConfigurations to reference the existing static IP address instead of a newly created one:

    "frontendIPConfigurations": [
            {
                "name": "LoadBalancerIPConfig",
                "properties": {
                    "publicIPAddress": {
                        /*"id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPName'),'-','0'))]"*/
                        "id": "[variables('existingStaticIP')]"
                    }
                }
            }
        ],
    
  7. In the Microsoft.ServiceFabric/clusters resource, change managementEndpoint to the DNS FQDN of the static IP address. If you are using a secure cluster, make sure you change http:// to https://. (Note that this step applies only to Service Fabric clusters. If you are using a virtual machine scale set, skip this step.)

    "fabricSettings": [],
    /*"managementEndpoint": "[concat('http://',reference(concat(parameters('lbIPName'),'-','0')).dnsSettings.fqdn,':',parameters('nt0fabricHttpGatewayPort'))]",*/
    "managementEndpoint": "[concat('http://',parameters('existingStaticIPDnsFQDN'),':',parameters('nt0fabricHttpGatewayPort'))]",
    
  8. Deploy the template:

    New-AzResourceGroup -Name sfnetworkingstaticip -Location chinanorth
    
    $staticip = Get-AzPublicIpAddress -Name staticIP1 -ResourceGroupName ExistingRG
    
    $staticip
    
    New-AzResourceGroupDeployment -Name deployment -ResourceGroupName sfnetworkingstaticip -TemplateFile C:\SFSamples\Final\template_staticip.json -existingStaticIPResourceGroup $staticip.ResourceGroupName -existingStaticIPName $staticip.Name -existingStaticIPDnsFQDN $staticip.DnsSettings.Fqdn
    

After deployment, you can see that your load balancer is bound to the public static IP address from the other resource group. The Service Fabric client connection endpoint and the Service Fabric Explorer endpoint point to the DNS FQDN of the static IP address.
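One way to confirm the binding from PowerShell, sketched with the Az cmdlets already used above (the resource group name matches this article's deployment example; the exact output shape depends on your Az.Network version):

```powershell
# Fetch the load balancer created by the deployment and inspect its front end.
$lb = Get-AzLoadBalancer -ResourceGroupName sfnetworkingstaticip

# The front-end configuration should reference the staticIP1 resource in ExistingRG.
$lb.FrontendIpConfigurations[0].PublicIpAddress.Id
```

The returned ID should end in /resourceGroups/ExistingRG/providers/Microsoft.Network/publicIPAddresses/staticIP1.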

Internal-only load balancer

This scenario replaces the external load balancer in the default Service Fabric template with an internal-only load balancer. See earlier in this article for the implications for the Azure portal and for the Service Fabric resource provider.

  1. Remove the dnsName parameter. (It's not needed.)

    /*
    "dnsName": {
        "type": "string"
    },
    */
    
  2. Optionally, if you use a static allocation method, add a static IP address parameter. If you use a dynamic allocation method, you do not need to do this step.

    "internalLBAddress": {
        "type": "string",
        "defaultValue": "10.0.0.250"
    }
    
  3. Remove Microsoft.Network/publicIPAddresses from your resources, so Azure does not create a new IP address:

    /*
    {
        "apiVersion": "[variables('publicIPApiVersion')]",
        "type": "Microsoft.Network/publicIPAddresses",
        "name": "[concat(parameters('lbIPName'),)'-', '0')]",
        "location": "[parameters('computeLocation')]",
        "properties": {
            "dnsSettings": {
                "domainNameLabel": "[parameters('dnsName')]"
            },
            "publicIPAllocationMethod": "Dynamic"        
        },
        "tags": {
            "resourceType": "Service Fabric",
            "clusterName": "[parameters('clusterName')]"
        }
    }, */
    
  4. Remove the IP address dependsOn attribute of Microsoft.Network/loadBalancers, so you don't depend on creating a new IP address. Add the virtual network dependsOn attribute, because the load balancer now depends on the subnet in the virtual network:

    "apiVersion": "[variables('lbApiVersion')]",
    "type": "Microsoft.Network/loadBalancers",
    "name": "[concat('LB','-', parameters('clusterName'),'-',parameters('vmNodeType0Name'))]",
    "location": "[parameters('computeLocation')]",
    "dependsOn": [
        /*"[concat('Microsoft.Network/publicIPAddresses/',concat(parameters('lbIPName'),'-','0'))]"*/
        "[concat('Microsoft.Network/virtualNetworks/',parameters('virtualNetworkName'))]"
    ],
    
  5. Change the load balancer's frontendIPConfigurations setting from using a publicIPAddress to using a subnet and privateIPAddress. privateIPAddress uses a predefined static internal IP address. To use a dynamic IP address instead, remove the privateIPAddress element, and then change privateIPAllocationMethod to Dynamic.

    "frontendIPConfigurations": [
            {
                "name": "LoadBalancerIPConfig",
                "properties": {
                    /*
                    "publicIPAddress": {
                        "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPName'),'-','0'))]"
                    } */
                    "subnet" :{
                        "id": "[variables('subnet0Ref')]"
                    },
                    "privateIPAddress": "[parameters('internalLBAddress')]",
                    "privateIPAllocationMethod": "Static"
                }
            }
        ],
    
  6. In the Microsoft.ServiceFabric/clusters resource, change managementEndpoint to point to the internal load balancer address. If you use a secure cluster, make sure you change http:// to https://. (Note that this step applies only to Service Fabric clusters. If you are using a virtual machine scale set, skip this step.)

    "fabricSettings": [],
    /*"managementEndpoint": "[concat('http://',reference(concat(parameters('lbIPName'),'-','0')).dnsSettings.fqdn,':',parameters('nt0fabricHttpGatewayPort'))]",*/
    "managementEndpoint": "[concat('http://',reference(variables('lbID0')).frontEndIPConfigurations[0].properties.privateIPAddress,':',parameters('nt0fabricHttpGatewayPort'))]",
    
  7. Deploy the template:

    New-AzResourceGroup -Name sfnetworkinginternallb -Location chinanorth
    
    New-AzResourceGroupDeployment -Name deployment -ResourceGroupName sfnetworkinginternallb -TemplateFile C:\SFSamples\Final\template_internalonlyLB.json
    

After deployment, your load balancer uses the private static IP address 10.0.0.250. If you have another machine in the same virtual network, you can go to the internal Service Fabric Explorer endpoint. Note that it connects to one of the nodes behind the load balancer.
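From a machine inside the virtual network (for example, the jump-box VM from the Initial setup section), you can also connect to the internal management endpoint with the Service Fabric PowerShell module. This is a sketch for an unsecured cluster; a secure cluster additionally needs the certificate-related parameters of the same cmdlet:

```powershell
# Requires the Service Fabric SDK/PowerShell module on the VM.
# 10.0.0.250 is the static internal load balancer address from this example;
# 19000 is the default client connection port.
Connect-ServiceFabricCluster -ConnectionEndpoint 10.0.0.250:19000
```

If the connection succeeds, subsequent Service Fabric cmdlets in the same session target this cluster.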

Internal and external load balancer

In this scenario, you start with the existing single-node-type external load balancer and add an internal load balancer for the same node type. A back-end port attached to a back-end address pool can be assigned only to a single load balancer. Choose which load balancer will have your application ports, and which load balancer will have your management endpoints (ports 19000 and 19080). If you put the management endpoints on the internal load balancer, keep in mind the Service Fabric resource provider restrictions discussed earlier in the article. In this example, the management endpoints remain on the external load balancer. You also add a port 80 application port, and place it on the internal load balancer.

In a two-node-type cluster, one node type is on the external load balancer and the other node type is on the internal load balancer. To use a two-node-type cluster, in the portal-created two-node-type template (which comes with two load balancers), switch the second load balancer to an internal load balancer. For more information, see the Internal-only load balancer section.

  1. Add the static internal load balancer IP address parameter. (For notes related to using a dynamic IP address, see earlier sections of this article.)

    "internalLBAddress": {
        "type": "string",
        "defaultValue": "10.0.0.250"
    }
    
  2. Add an application port 80 parameter.

  3. To add internal versions of the existing networking variables, copy and paste them, and add "-Int" to each name:

    /* Add internal load balancer networking variables */
    "lbID0-Int": "[resourceId('Microsoft.Network/loadBalancers', concat('LB','-', parameters('clusterName'),'-',parameters('vmNodeType0Name'), '-Internal'))]",
    "lbIPConfig0-Int": "[concat(variables('lbID0-Int'),'/frontendIPConfigurations/LoadBalancerIPConfig')]",
    "lbPoolID0-Int": "[concat(variables('lbID0-Int'),'/backendAddressPools/LoadBalancerBEAddressPool')]",
    "lbProbeID0-Int": "[concat(variables('lbID0-Int'),'/probes/FabricGatewayProbe')]",
    "lbHttpProbeID0-Int": "[concat(variables('lbID0-Int'),'/probes/FabricHttpGatewayProbe')]",
    "lbNatPoolID0-Int": "[concat(variables('lbID0-Int'),'/inboundNatPools/LoadBalancerBEAddressNatPool')]",
    /* Internal load balancer networking variables end */
    
  4. If you start with the portal-generated template that uses application port 80, the default portal template adds AppPort1 (port 80) on the external load balancer. In this case, remove AppPort1 from the external load balancer's loadBalancingRules and probes, so you can add it to the internal load balancer:

    "loadBalancingRules": [
        {
            "name": "LBHttpRule",
            "properties":{
                "backendAddressPool": {
                    "id": "[variables('lbPoolID0')]"
                },
                "backendPort": "[parameters('nt0fabricHttpGatewayPort')]",
                "enableFloatingIP": "false",
                "frontendIPConfiguration": {
                    "id": "[variables('lbIPConfig0')]"            
                },
                "frontendPort": "[parameters('nt0fabricHttpGatewayPort')]",
                "idleTimeoutInMinutes": "5",
                "probe": {
                    "id": "[variables('lbHttpProbeID0')]"
                },
                "protocol": "tcp"
            }
        } /* Remove AppPort1 from the external load balancer.
        {
            "name": "AppPortLBRule1",
            "properties": {
                "backendAddressPool": {
                    "id": "[variables('lbPoolID0')]"
                },
                "backendPort": "[parameters('loadBalancedAppPort1')]",
                "enableFloatingIP": "false",
                "frontendIPConfiguration": {
                    "id": "[variables('lbIPConfig0')]"            
                },
                "frontendPort": "[parameters('loadBalancedAppPort1')]",
                "idleTimeoutInMinutes": "5",
                "probe": {
                    "id": "[concate(variables('lbID0'), '/probes/AppPortProbe1')]"
                },
                "protocol": "tcp"
            }
        }*/
    
    ],
    "probes": [
        {
            "name": "FabricGatewayProbe",
            "properties": {
                "intervalInSeconds": 5,
                "numberOfProbes": 2,
                "port": "[parameters('nt0fabricTcpGatewayPort')]",
                "protocol": "tcp"
            }
        },
        {
            "name": "FabricHttpGatewayProbe",
            "properties": {
                "intervalInSeconds": 5,
                "numberOfProbes": 2,
                "port": "[parameters('nt0fabricHttpGatewayPort')]",
                "protocol": "tcp"
            }
        } /* Remove AppPort1 from the external load balancer.
        {
            "name": "AppPortProbe1",
            "properties": {
                "intervalInSeconds": 5,
                "numberOfProbes": 2,
                "port": "[parameters('loadBalancedAppPort1')]",
                "protocol": "tcp"
            }
        } */
    
    ],
    "inboundNatPools": [
    
  5. Add a second Microsoft.Network/loadBalancers resource. It looks similar to the internal load balancer created in the Internal-only load balancer section, but it uses the "-Int" load balancer variables and implements only the application port 80. This also removes inboundNatPools, to keep RDP endpoints on the public load balancer. If you want RDP on the internal load balancer, move inboundNatPools from the external load balancer to this internal load balancer:

    /* Add a second load balancer, configured with a static privateIPAddress and the "-Int" load balancer variables. */
    {
        "apiVersion": "[variables('lbApiVersion')]",
        "type": "Microsoft.Network/loadBalancers",
        /* Add "-Internal" to the name. */
        "name": "[concat('LB','-', parameters('clusterName'),'-',parameters('vmNodeType0Name'), '-Internal')]",
        "location": "[parameters('computeLocation')]",
        "dependsOn": [
            /* Remove public IP dependsOn, add vnet dependsOn
            "[concat('Microsoft.Network/publicIPAddresses/',concat(parameters('lbIPName'),'-','0'))]"
            */
            "[concat('Microsoft.Network/virtualNetworks/',parameters('virtualNetworkName'))]"
        ],
        "properties": {
            "frontendIPConfigurations": [
                {
                    "name": "LoadBalancerIPConfig",
                    "properties": {
                        /* Switch from a public to a private IP address.
                        "publicIPAddress": {
                            "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPName'),'-','0'))]"
                        },
                        */
                        "subnet" :{
                            "id": "[variables('subnet0Ref')]"
                        },
                        "privateIPAddress": "[parameters('internalLBAddress')]",
                        "privateIPAllocationMethod": "Static"
                    }
                }
            ],
            "backendAddressPools": [
                {
                    "name": "LoadBalancerBEAddressPool",
                    "properties": {}
                }
            ],
            "loadBalancingRules": [
                /* Add the AppPort rule. Be sure to reference the "-Int" versions of backendAddressPool, frontendIPConfiguration, and the probe variables. */
                {
                    "name": "AppPortLBRule1",
                    "properties": {
                        "backendAddressPool": {
                            "id": "[variables('lbPoolID0-Int')]"
                        },
                        "backendPort": "[parameters('loadBalancedAppPort1')]",
                        "enableFloatingIP": "false",
                        "frontendIPConfiguration": {
                            "id": "[variables('lbIPConfig0-Int')]"
                        },
                        "frontendPort": "[parameters('loadBalancedAppPort1')]",
                        "idleTimeoutInMinutes": "5",
                        "probe": {
                            "id": "[concat(variables('lbID0-Int'),'/probes/AppPortProbe1')]"
                        },
                        "protocol": "tcp"
                    }
                }
            ],
            "probes": [
            /* Add the probe for the app port. */
            {
                    "name": "AppPortProbe1",
                    "properties": {
                        "intervalInSeconds": 5,
                        "numberOfProbes": 2,
                        "port": "[parameters('loadBalancedAppPort1')]",
                        "protocol": "tcp"
                    }
                }
            ],
            "inboundNatPools": [
            ]
        },
        "tags": {
            "resourceType": "Service Fabric",
            "clusterName": "[parameters('clusterName')]"
        }
    },
    
  6. In networkProfile for the Microsoft.Compute/virtualMachineScaleSets resource, add the internal back-end address pool:

    "loadBalancerBackendAddressPools": [
        {
            "id": "[variables('lbPoolID0')]"
        },
        {
            /* Add internal BE pool */
            "id": "[variables('lbPoolID0-Int')]"
        }
    ],
    
  7. Deploy the template:

    New-AzResourceGroup -Name sfnetworkinginternalexternallb -Location chinanorth
    
    New-AzResourceGroupDeployment -Name deployment -ResourceGroupName sfnetworkinginternalexternallb -TemplateFile C:\SFSamples\Final\template_internalexternalLB.json
    

After deployment, you can see two load balancers in the resource group. If you browse the load balancers, you can see the public IP address and the management endpoints (ports 19000 and 19080) assigned to the public IP address, and the static internal IP address and the application endpoint (port 80) assigned to the internal load balancer. Both load balancers use the same virtual machine scale set back-end pool.

Notes for production workloads

The GitHub templates above are designed to work with the default SKU for Azure Load Balancer, the Basic SKU. The Basic SKU has no SLA, so for production workloads you should use the Standard SKU. For more information, see the Azure Standard Load Balancer overview. Any Service Fabric cluster that uses the Standard SKU needs to ensure that each node type has a rule allowing outbound traffic on port 443. This is necessary to complete cluster setup, and any deployment without such a rule will fail. In the "internal only" load balancer example above, an additional external load balancer must be added to the template, with a rule allowing outbound traffic on port 443.
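One way to express the outbound-443 requirement, sketched as a network security group security rule applied to each node type's subnet or NICs. The name, description, and priority are illustrative; depending on your topology, a Standard load balancer may additionally need a load-balancing or outbound rule so the nodes have outbound connectivity at all:

```json
{
    "name": "AllowOutbound443",
    "properties": {
        "description": "Permit the outbound port 443 traffic that cluster setup requires",
        "protocol": "Tcp",
        "sourcePortRange": "*",
        "destinationPortRange": "443",
        "sourceAddressPrefix": "*",
        "destinationAddressPrefix": "Internet",
        "access": "Allow",
        "priority": 2000,
        "direction": "Outbound"
    }
}
```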

Next steps

Create a cluster
