Azure CNI Powered by Cilium combines the robust control plane of Azure Container Networking Interface (CNI) with the data plane of Cilium to provide high-performance networking and security.
Azure CNI Powered by Cilium provides the following benefits by making use of eBPF programs loaded into the Linux kernel and a more efficient API object structure:
- Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
- Improved service routing
- More efficient network policy enforcement
- Better observability of cluster traffic
- Support for larger clusters (more nodes, pods, and services)
## IP Address Management (IPAM) with Azure CNI Powered by Cilium
You can deploy Azure CNI Powered by Cilium with two different methods for assigning pod IPs:
- Assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)
- Assign IP addresses from a virtual network (similar to existing Azure CNI with Dynamic Pod IP Assignment)
If you aren't sure which option to select, read Choose a network model.
## Versions
| Kubernetes Version | Minimum Cilium Version |
|---|---|
| 1.29 (LTS) | 1.14.19 |
| 1.30 | 1.14.19 |
| 1.31 | 1.16.6 |
| 1.32 | 1.17.0 |
| 1.33 | 1.17.0 |
For more information on AKS versioning and release timelines, see Supported Kubernetes Versions.
## Network Policy Enforcement
Cilium enforces network policies to allow or deny traffic between pods. With Cilium, you don't need to install a separate network policy engine such as Azure Network Policy Manager or Calico.
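To illustrate, the following is a minimal standard Kubernetes `NetworkPolicy` of the kind Cilium enforces without any separate policy engine. The policy name and the `app: backend` / `app: frontend` labels are hypothetical placeholders, not values from this article:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                  # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # hypothetical label
      ports:
        - port: 80
          protocol: TCP
```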
## Limitations
Azure CNI Powered by Cilium currently has the following limitations:
- Available only for Linux and not for Windows.
- Network policies can't use `ipBlock` to allow access to node or pod IPs. For details and recommended workarounds, see the frequently asked questions.
- For Cilium versions 1.16 or earlier, multiple Kubernetes services can't use the same host port with different protocols (for example, TCP and UDP) (Cilium issue #14287).
- Network policies aren't applied to pods using host networking (`spec.hostNetwork: true`) because these pods use the host identity instead of having individual identities.
- Cilium Endpoint Slices are supported in Kubernetes version 1.32 and above, but don't support configuration of how Cilium endpoints are grouped. Priority namespace through `cilium.io/ces-namespace` isn't supported.
- L7 policy isn't supported by `CiliumClusterwideNetworkPolicy` (CCNP).
## Prerequisites
- Azure CLI version 2.48.1 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
- If you're using ARM templates or the REST API, the AKS API version must be `2022-09-02-preview` or later.
Previous AKS API versions (`2022-09-02-preview` through `2023-01-02-preview`) used the field `networkProfile.ebpfDataplane=cilium`. AKS API versions `2023-02-02-preview` and later use the field `networkProfile.networkDataplane=cilium` to enable Azure CNI Powered by Cilium.
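For reference, a hedged sketch of how the current field might appear in the `networkProfile` section of an ARM template (overlay mode shown; the surrounding managed cluster resource is omitted):

```json
"networkProfile": {
  "networkPlugin": "azure",
  "networkPluginMode": "overlay",
  "networkDataplane": "cilium"
}
```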
## Create a new AKS Cluster with Azure CNI Powered by Cilium
### Option 1: Assign IP addresses from an overlay network
Use the following commands to create a cluster with an overlay network and Cilium. Replace the values for <clusterName>, <resourceGroupName>, and <location>:
```azurecli
az aks create \
    --name <clusterName> \
    --resource-group <resourceGroupName> \
    --location <location> \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --network-dataplane cilium \
    --generate-ssh-keys
```
The `--network-dataplane cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.
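As a rough capacity sketch for the overlay example, the pod CIDR determines how many per-node pod CIDRs are available. This assumes Azure CNI Overlay's default of one /24 per node; adjust the prefix lengths if your configuration differs:

```shell
POD_CIDR_BITS=16    # from --pod-cidr 192.168.0.0/16
NODE_CIDR_BITS=24   # per-node pod CIDR carved from the pod CIDR (Overlay default)
NODE_SUBNETS=$(( 2 ** (NODE_CIDR_BITS - POD_CIDR_BITS) ))
echo "$NODE_SUBNETS"   # 256 per-node CIDRs available from the /16
```

A /16 pod CIDR therefore comfortably supports a few hundred nodes before per-node CIDR exhaustion becomes a concern.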
### Option 2: Assign IP addresses from a virtual network
Run the following commands to create a resource group and virtual network with a subnet for nodes and a subnet for pods.
```azurecli
# Create the resource group
az group create --name <resourceGroupName> --location <location>

# Create a virtual network with a subnet for nodes and a subnet for pods
az network vnet create --resource-group <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
az network vnet subnet create --resource-group <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
az network vnet subnet create --resource-group <resourceGroupName> --vnet-name <vnetName> --name podsubnet --address-prefixes <address prefix, example: 10.241.0.0/16> -o none
```
Create the cluster using `--network-dataplane cilium`:
```azurecli
az aks create \
    --name <clusterName> \
    --resource-group <resourceGroupName> \
    --location <location> \
    --max-pods 250 \
    --network-plugin azure \
    --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
    --pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
    --network-dataplane cilium \
    --generate-ssh-keys
```
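Because pods draw their IPs from the pod subnet in this mode, the subnet size bounds the cluster. As a sketch, assuming the example /16 pod subnet, Azure's five reserved addresses per subnet, and the `--max-pods 250` setting above:

```shell
POD_SUBNET_BITS=16                                 # from the 10.241.0.0/16 example
USABLE_IPS=$(( 2 ** (32 - POD_SUBNET_BITS) - 5 ))  # Azure reserves 5 IPs per subnet
echo $(( USABLE_IPS / 250 ))                       # 262 nodes' worth of pod IPs at --max-pods 250
```

In practice, fewer nodes than this upper bound is advisable so that scaling operations don't exhaust the subnet.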
## Frequently asked questions
### Can I customize Cilium configuration?
No, AKS manages the Cilium configuration and it can't be modified. We recommend that customers who require more control use AKS BYO CNI and install Cilium manually.
### Can I use `CiliumClusterwideNetworkPolicy`?

Yes, `CiliumClusterwideNetworkPolicy` is supported. The following sample policy YAML shows configuring an L4 rule:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "l4-rule-ingress-backend-frontend"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
```

### Which Cilium features are supported in Azure managed CNI? Which of those require Advanced Container Networking Services?
| Supported Feature | Without ACNS | With ACNS |
|---|---|---|
| Cilium Endpoint Slices | ✔️ | ✔️ |
| K8s Network Policies | ✔️ | ✔️ |
| Cilium L3/L4 Network Policies | ✔️ | ✔️ |
| Cilium Clusterwide Network Policy | ✔️ | ✔️ |
| FQDN Filtering | ❌ | ✔️ |
| L7 Network Policies (HTTP/gRPC/Kafka) | ❌ | ✔️ |
| Container Network Observability (Metrics and Flow logs) | ❌ | ✔️ |
| eBPF Host Routing | ❌ | ✔️ |

### Why is traffic being blocked when the `NetworkPolicy` has an `ipBlock` that allows the IP address?

A limitation of Azure CNI Powered by Cilium is that a `NetworkPolicy` `ipBlock` can't select pod or node IPs.

For example, this `NetworkPolicy` has an `ipBlock` that allows all egress to `0.0.0.0/0`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0 # This will still block pod and node IPs.
```

However, when this `NetworkPolicy` is applied, Cilium blocks egress to pod and node IPs even though the IPs are within the `ipBlock` CIDR.

As a workaround, you can add `namespaceSelector` and `podSelector` to select pods. This example selects all pods in all namespaces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
        - namespaceSelector: {}
        - podSelector: {}
```

It isn't currently possible to specify a `NetworkPolicy` with an `ipBlock` to allow traffic to node IPs.

### Does AKS configure CPU or memory limits on the Cilium `daemonset`?

No, AKS doesn't configure CPU or memory limits on the Cilium `daemonset` because Cilium is a critical system component for pod networking and network policy enforcement.

### Does Azure CNI Powered by Cilium use kube-proxy?

No, AKS clusters created with the Cilium network data plane don't use `kube-proxy`. If AKS clusters on Azure CNI Overlay or Azure CNI with dynamic IP allocation are upgraded to Azure CNI Powered by Cilium, new node workloads are created without `kube-proxy`, and older workloads are also migrated to run without `kube-proxy` as part of the upgrade process.
## Dual-stack networking with Azure CNI Powered by Cilium
You can deploy your dual-stack AKS clusters with Azure CNI Powered by Cilium. This feature also allows you to control your IPv6 traffic with the Cilium Network Policy engine.
You must have Kubernetes version 1.29 or greater.
### Set up Overlay clusters with Azure CNI Powered by Cilium
Create a cluster with Azure CNI Overlay using the [az aks create][az-aks-create] command. Make sure to use the argument `--network-dataplane cilium` to specify the Cilium data plane.
```azurecli
clusterName="myOverlayCluster"
resourceGroup="myResourceGroup"
location="westcentralus"

az aks create \
    --name $clusterName \
    --resource-group $resourceGroup \
    --location $location \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --network-dataplane cilium \
    --ip-families ipv4,ipv6 \
    --generate-ssh-keys
```
## Next steps
Learn more about networking in AKS in the following articles: