Create and configure an Azure Kubernetes Service (AKS) cluster to use virtual nodes
To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you get quick provisioning of pods and pay only per second for their execution time. You don't need to wait for the Kubernetes cluster autoscaler to deploy VM compute nodes to run more pods. Virtual nodes are only supported with Linux pods and nodes.
The virtual nodes add-on for AKS is based on the open source project Virtual Kubelet.
This article gives you an overview of the region availability and networking requirements for using virtual nodes, as well as the known limitations.
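As a point of reference for how workloads land on virtual nodes, the sketch below shows a deployment that targets them by matching the node selector labels and tolerating the taints the ACI virtual node typically advertises. This is a minimal, illustrative example: the deployment name, labels, and sample image are placeholders, it assumes a cluster where the virtual nodes add-on is already enabled, and you should verify the selector labels against those shown on your own virtual node.

```bash
# Illustrative only: names and image are placeholders.
# Schedule pods onto the virtual node by matching its labels and tolerating its taints.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aci-helloworld
  template:
    metadata:
      labels:
        app: aci-helloworld
    spec:
      containers:
      - name: aci-helloworld
        image: mcr.microsoft.com/azuredocs/aci-helloworld
        ports:
        - containerPort: 80
      # Labels and tolerations commonly advertised by the ACI virtual node;
      # confirm them with 'kubectl describe node' on your cluster.
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
EOF
```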
Regional availability
All regions where ACI supports VNET SKUs are supported for virtual node deployments. For more information, see Resource availability for Azure Container Instances in Azure regions.
For available CPU and memory SKUs in each region, see Resource availability for Azure Container Instances in Azure regions - Linux container groups.
Network requirements
Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To support this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using advanced networking (Azure CNI). By default, AKS clusters are created with basic networking (kubenet).
Pods running in ACI need access to the AKS API server endpoint in order to configure networking.
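The following is a minimal sketch of provisioning a cluster that meets these requirements with the Azure CLI: a virtual network with a node subnet and a dedicated subnet for virtual nodes, an AKS cluster created with Azure CNI on the node subnet, and the virtual nodes add-on pointed at the dedicated subnet. All resource names and address ranges are placeholders, and the sketch omits any identity or permission configuration your environment may require on the virtual network.

```bash
# Illustrative only: resource group, network, and cluster names are placeholders.
# Create a virtual network with a subnet for the AKS nodes and a dedicated
# subnet for the virtual nodes add-on to use with ACI.
az network vnet create \
    --resource-group myResourceGroup \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefixes 10.240.0.0/16

az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myVirtualNodeSubnet \
    --address-prefixes 10.241.0.0/16

# Create the cluster with advanced networking (Azure CNI) on the AKS node subnet.
SUBNET_ID=$(az network vnet subnet show \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myAKSSubnet \
    --query id -o tsv)

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id "$SUBNET_ID" \
    --generate-ssh-keys

# Enable the virtual nodes add-on, pointing it at the dedicated subnet.
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons virtual-node \
    --subnet-name myVirtualNodeSubnet
```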
Limitations
Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the quotas and limits for Azure Container Instances, the following scenarios aren't supported with virtual nodes or are deployment considerations:
- Using a service principal to pull ACR images. The workaround is to use a Kubernetes secret (see the example after this list).
- Virtual network limitations, including VNet peering, Kubernetes network policies, and outbound traffic to the internet with network security groups.
- Init containers
- Host aliases
- Arguments for exec in ACI
- DaemonSets won't deploy pods to the virtual nodes
- To schedule Windows Server containers to ACI, you need to manually install the open source Virtual Kubelet ACI provider.
- Virtual nodes require AKS clusters with Azure CNI networking.
- Using API server authorized IP ranges for AKS.
- Volume mounting of Azure Files shares supports General-purpose V2 and General-purpose V1 storage accounts. However, virtual nodes currently don't support Persistent Volumes and Persistent Volume Claims. Follow the instructions for mounting a volume with an Azure Files share as an inline volume (a sketch follows after this list).
- Using IPv6 isn't supported.
- Virtual nodes don't support the Container hooks feature.
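For the ACR limitation above, a minimal sketch of the Kubernetes secret workaround: create a docker-registry secret holding registry credentials and reference it from the pod spec through imagePullSecrets. The registry name, secret name, and credentials shown are placeholders.

```bash
# Illustrative only: registry name, secret name, and credentials are placeholders.
# Create a Kubernetes docker-registry secret with ACR credentials.
kubectl create secret docker-registry acr-pull-secret \
    --docker-server=myregistry.azurecr.io \
    --docker-username=<service-principal-app-id> \
    --docker-password=<service-principal-password>

# Reference the secret from the pod spec so virtual nodes can pull the image:
#   spec:
#     imagePullSecrets:
#     - name: acr-pull-secret
```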
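For the Azure Files limitation above, the share is mounted as an inline azureFile volume rather than through a Persistent Volume Claim. A minimal sketch, assuming a hypothetical secret named azure-secret (containing the storage account name and key) and a file share named aksshare already exist:

```bash
# Illustrative only: pod, secret, and share names are placeholders.
# Mount an Azure Files share as an inline volume (no PV/PVC involved).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: files-demo
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/aci-helloworld
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  # Target the virtual node; verify these values against your cluster's virtual node.
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: aksshare
      readOnly: false
EOF
```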
Next steps
Configure virtual nodes for your clusters:
- Create virtual nodes using Azure CLI
- Create virtual nodes using the portal in Azure Kubernetes Services (AKS)
Virtual nodes are often one component of a scaling solution in AKS. For more information on scaling solutions, see the following articles: