Configure Azure CNI networking in Azure Kubernetes Service (AKS)

By default, AKS clusters use kubenet, and a virtual network and subnet are created for you. With kubenet, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.

With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node is then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.

This article shows you how to use Azure CNI networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see Network concepts for Kubernetes and AKS.

Prerequisites

  • The virtual network for the AKS cluster must allow outbound internet connectivity.
  • AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range.
  • The service principal used by the AKS cluster must have at least Network Contributor permissions on the subnet within your virtual network. If you wish to define a custom role instead of using the built-in Network Contributor role, the following permissions are required (a role-assignment sketch follows this list):
    • Microsoft.Network/virtualNetworks/subnets/join/action
    • Microsoft.Network/virtualNetworks/subnets/read
  • Instead of a service principal, you can use the system assigned managed identity for permissions. For more information, see Use managed identities.
  • The subnet assigned to the AKS node pool cannot be a delegated subnet.
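
For example, if you bring your own subnet and use a service principal, you can grant the Network Contributor role at the subnet scope with the Azure CLI. The following is a minimal sketch: the resource group, virtual network, and subnet names happen to match the example used later in this article, and <appId> is a placeholder for your service principal's application ID.

# Look up the resource ID of the subnet the cluster will use.
SUBNET_ID=$(az network vnet subnet show \
    --resource-group myVnet \
    --vnet-name myVnet \
    --name default \
    --query id --output tsv)

# Grant the cluster's service principal Network Contributor on that subnet only.
az role assignment create \
    --assignee <appId> \
    --role "Network Contributor" \
    --scope $SUBNET_ID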

Plan IP addressing for your cluster

Clusters configured with Azure CNI networking require additional planning. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster.

IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network. Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI and are assigned to pods scheduled on the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet. You can also view the maximum pods per node.
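
To check the value that is actually in effect on a running cluster, you can query the allocatable pod count that each node reports. This is a small sketch and assumes you already have kubectl credentials for the cluster:

# Shows the allocatable pod capacity the kubelet reports for each node.
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_PODS:.status.allocatable.pods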

Important

The number of IP addresses required should include considerations for upgrade and scaling operations. If you set the IP address range to only support a fixed number of nodes, you cannot upgrade or scale your cluster.

  • When you upgrade your AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node, and an older node is removed from the cluster. This rolling upgrade process requires a minimum of one additional block of IP addresses to be available. Your node count is then n + 1.

    • This consideration is particularly important when you use Windows Server node pools. Windows Server nodes in AKS do not automatically apply Windows Updates; instead, you perform an upgrade on the node pool. This upgrade deploys new nodes with the latest Windows Server 2019 base node image and security patches. For more information on upgrading a Windows Server node pool, see Upgrade a node pool in AKS.
  • When you scale an AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node. Your IP address range needs to take into consideration how you may want to scale up the number of nodes and pods your cluster can support. One additional node for upgrade operations should also be included. Your node count is then n + number-of-additional-scaled-nodes-you-anticipate + 1.

If you expect your nodes to run the maximum number of pods, and regularly destroy and deploy pods, you should also factor in some additional IP addresses per node. These additional IP addresses account for the fact that it may take a few seconds for an IP address to be released when a service is deleted, before a newly deployed service can acquire it.

The IP address plan for an AKS cluster consists of a virtual network, at least one subnet for nodes and pods, and a Kubernetes service address range.

The following address ranges and Azure resources have these limits and sizing considerations:

Virtual network: The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. For example, if you configure too large of an address space, you may run into issues with overlapping other address spaces within your network.

Subnet: Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.

To calculate the minimum subnet size, including an additional node for upgrade operations: (number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)

Example for a 50-node cluster: (51) + (51 * 30 (default)) = 1,581 (/21 or larger)

Example for a 50-node cluster that also includes provision to scale up an additional 10 nodes: (61) + (61 * 30 (default)) = 1,891 (/21 or larger)
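
You can reproduce these calculations with a quick shell sketch; the node and pod counts below are simply the figures from the examples above:

# Minimum IPs = (nodes + 1 upgrade node) + ((nodes + 1) * max pods per node)
NODES=60          # 50 nodes plus 10 anticipated scale-out nodes
MAX_PODS=30       # default maximum pods per node with Azure CNI
echo $(( (NODES + 1) + (NODES + 1) * MAX_PODS ))   # prints 1891, which fits in a /21 (2,048 addresses)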

If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to 30. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see how to configure the maximum number of pods per node to set this value when you deploy your cluster.

Kubernetes service address range: This range should not be used by any network element on or connected to this virtual network. The service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters.

Kubernetes DNS service IP address: An IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Don't use the first IP address in your address range, such as .1. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.

Docker bridge address: The Docker bridge network address represents the default docker0 bridge network address present in all Docker installations. While the docker0 bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as docker build within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. The default is 172.17.0.1/16. You can reuse this range across different AKS clusters.
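
As a concrete sketch of this planning, the following Azure CLI commands create a virtual network and a /21 subnet large enough for the 50-node example above. The region and address ranges are illustrative assumptions, the parameter names assume a recent Azure CLI, and the resource group, virtual network, and subnet names match the example reused later in this article.

az group create --name myVnet --location eastus

# A /21 subnet provides 2,048 addresses, enough for the 1,581-address example above.
az network vnet create \
    --resource-group myVnet \
    --name myVnet \
    --address-prefixes 10.240.0.0/16 \
    --subnet-name default \
    --subnet-prefixes 10.240.0.0/21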

Maximum pods per node

The maximum number of pods per node in an AKS cluster is 250. The default maximum number of pods per node varies between kubenet and Azure CNI networking, and the method of cluster deployment.

Deployment method            Kubenet default    Azure CNI default    Configurable at deployment
Azure CLI                    110                30                   Yes (up to 250)
Resource Manager template    110                30                   Yes (up to 250)
Portal                       110                30                   No

Configure maximum - new clusters

You're able to configure the maximum number of pods per node at cluster deployment time or as you add new node pools. If you deploy with the Azure CLI or with a Resource Manager template, you can set the maximum pods per node value as high as 250.

If you don't specify maxPods when creating new node pools, you receive a default value of 30 for Azure CNI.

A minimum value for maximum pods per node is enforced to guarantee space for system pods critical to cluster health. The minimum value that can be set for maximum pods per node is 10, if and only if the configuration of each node pool has space for a minimum of 30 pods. For example, setting the maximum pods per node to the minimum of 10 requires each individual node pool to have a minimum of 3 nodes. This requirement applies for each new node pool created as well, so if 10 is defined as maximum pods per node, each subsequent node pool added must have at least 3 nodes.

Networking    Minimum    Maximum
Azure CNI     10         250
Kubenet       10         110

Note

The minimum value in the table above is strictly enforced by the AKS service. You cannot set a maxPods value lower than the minimum shown, because doing so can prevent the cluster from starting.

  • Azure CLI: Specify the --max-pods argument when you deploy a cluster with the az aks create command. The maximum value is 250 (a sketch follows this list).

  • Resource Manager template: Specify the maxPods property in the ManagedClusterAgentPoolProfile object when you deploy a cluster with a Resource Manager template. The maximum value is 250.

  • Azure portal: You can't change the maximum number of pods per node when you deploy a cluster with the Azure portal. Azure CNI networking clusters are limited to 30 pods per node when you deploy using the Azure portal.
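
For the CLI option, a minimal sketch looks like the following. The resource group, cluster name, and the value of 110 are illustrative; the remaining networking arguments are covered in the Configure networking - CLI section later in this article.

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --max-pods 110 \
    --generate-ssh-keys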

Configure maximum - existing clusters

The maxPods per node setting can be defined when you create a new node pool. If you need to increase the maxPods per node setting on an existing cluster, add a new node pool with the new desired maxPods count. After migrating your pods to the new pool, delete the older pool. To delete any older pool in a cluster, ensure you are setting node pool modes as defined in the system node pools document.
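
A sketch of that workflow with the Azure CLI is shown below; the pool names, node count, and maxPods value of 50 are illustrative assumptions.

# Add a new node pool with the desired maximum pods per node.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name newpool \
    --node-count 3 \
    --max-pods 50

# After migrating workloads to the new pool, remove the old pool.
az aks nodepool delete \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name oldpool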

Deployment parameters

When you create an AKS cluster, the following parameters are configurable for Azure CNI networking:

Virtual network: The virtual network into which you want to deploy the Kubernetes cluster. If you want to create a new virtual network for your cluster, select Create new and follow the steps in the Create virtual network section. For information about the limits and quotas for an Azure virtual network, see Azure subscription and service limits, quotas, and constraints.

Subnet: The subnet within the virtual network where you want to deploy the cluster. If you want to create a new subnet in the virtual network for your cluster, select Create new and follow the steps in the Create subnet section. For hybrid connectivity, the address range shouldn't overlap with any other virtual networks in your environment.

Kubernetes service address range: This is the set of virtual IPs that Kubernetes assigns to internal services in your cluster. You can use any private address range that satisfies the following requirements:

  • Must not be within the virtual network IP address range of your cluster
  • Must not overlap with any other virtual networks with which the cluster virtual network peers
  • Must not overlap with any on-premises IPs
  • Must not be within the ranges 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24

Although it's technically possible to specify a service address range within the same virtual network as your cluster, doing so is not recommended. Unpredictable behavior can result if overlapping IP ranges are used. For more information, see the FAQ section of this article. For more information on Kubernetes services, see Services in the Kubernetes documentation.

Kubernetes DNS service IP address: The IP address for the cluster's DNS service. This address must be within the Kubernetes service address range. Don't use the first IP address in your address range, such as .1. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.

Docker bridge address: The Docker bridge network address represents the default docker0 bridge network address present in all Docker installations. While the docker0 bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as docker build within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR.

Configure networking - CLI

When you create an AKS cluster with the Azure CLI, you can also configure Azure CNI networking. Use the following commands to create a new AKS cluster with Azure CNI networking enabled.

First, get the subnet resource ID for the existing subnet that the AKS cluster will be joined to:

$ az network vnet subnet list \
    --resource-group myVnet \
    --vnet-name myVnet \
    --query "[0].id" --output tsv

/subscriptions/<guid>/resourceGroups/myVnet/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/default
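
Optionally, capture that ID in a shell variable so you can pass it to the next command. This is a small convenience sketch using the same resource names as above:

SUBNET_ID=$(az network vnet subnet list \
    --resource-group myVnet \
    --vnet-name myVnet \
    --query "[0].id" --output tsv)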

Use the az aks create command with the --network-plugin azure argument to create a cluster with advanced networking. Update the --vnet-subnet-id value with the subnet ID collected in the previous step:

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-id> \
    --docker-bridge-address 172.17.0.1/16 \
    --dns-service-ip 10.2.0.10 \
    --service-cidr 10.2.0.0/24 \
    --generate-ssh-keys
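
Once the cluster is running, you can verify that pods receive addresses directly from the subnet. This is a sketch and assumes kubectl is installed locally:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# With Azure CNI, the IP column shows addresses drawn from the cluster subnet.
kubectl get pods --all-namespaces -o wide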

Configure networking - portal

The following screenshot from the Azure portal shows an example of configuring these settings during AKS cluster creation:

Advanced networking configuration in the Azure portal

Frequently asked questions

The following questions and answers apply to the Azure CNI networking configuration.

  • Can I deploy VMs in my cluster subnet?

    Yes.

  • What source IP do external systems see for traffic that originates in an Azure CNI-enabled pod?

    Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod.

  • Can I configure per-pod network policies?

    Yes, Kubernetes network policy is available in AKS. To get started, see Secure traffic between pods by using network policies in AKS.

  • Is the maximum number of pods deployable to a node configurable?

    Yes, when you deploy a cluster with the Azure CLI or a Resource Manager template. See Maximum pods per node.

    You can't change the maximum number of pods per node on an existing cluster.

  • How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.

    The complete list of properties for the virtual network and subnets that you create during AKS cluster creation can be configured in the standard virtual network configuration page in the Azure portal.

  • Can I use a different subnet within my cluster virtual network for the Kubernetes service address range?

    It's not recommended, but this configuration is possible. The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster. Azure Networking has no visibility into the service IP range of the Kubernetes cluster. Because of the lack of visibility into the cluster's service address range, it's possible to later create a new subnet in the cluster virtual network that overlaps with the service address range. If such an overlap occurs, Kubernetes could assign a service an IP that's already in use by another resource in the subnet, causing unpredictable behavior or failures. By ensuring you use an address range outside the cluster's virtual network, you can avoid this overlap risk.

Next steps

Learn more about networking in AKS in the following articles:

AKS Engine

Azure Kubernetes Service Engine (AKS Engine) is an open-source project that generates Azure Resource Manager templates you can use for deploying Kubernetes clusters on Azure.

Kubernetes clusters created with AKS Engine support both the kubenet and Azure CNI plugins. As such, both networking scenarios are supported by AKS Engine.