This article explains how to enable or disable node auto-provisioning (NAP) in Azure Kubernetes Service (AKS) using the Azure CLI or Azure Resource Manager (ARM) templates.
If you want to create a NAP-enabled AKS cluster with a custom virtual network (VNet) and subnets, see Create a node auto-provisioning (NAP) cluster in a custom virtual network.
Before you begin
Before you begin, review the Overview of node auto-provisioning (NAP) in AKS article, which explains how NAP works and details its prerequisites and limitations.
Enable node auto-provisioning (NAP) on an AKS cluster
The following sections explain how to enable NAP on a new or existing AKS cluster:
Enable NAP on a new cluster
Enable node auto-provisioning on a new cluster using the `az aks create` command with the `--node-provisioning-mode` flag set to `Auto`. The following command also sets `--network-plugin` to `azure`, `--network-plugin-mode` to `overlay`, and `--network-dataplane` to `cilium`.

```azurecli
az aks create \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP \
    --node-provisioning-mode Auto \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --network-dataplane cilium \
    --generate-ssh-keys
```
Alternatively, to use an ARM template, create a file named `nap.json` and add the following template configuration with the `properties.nodeProvisioningProfile.mode` field set to `Auto`, which enables NAP. (The default setting is `Manual`.)

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {},
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2025-05-01",
      "sku": {
        "name": "Base",
        "tier": "Standard"
      },
      "name": "napcluster",
      "location": "chinanorth3",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "networkProfile": {
          "networkPlugin": "azure",
          "networkPluginMode": "overlay",
          "networkPolicy": "cilium",
          "networkDataplane": "cilium",
          "loadBalancerSku": "Standard"
        },
        "dnsPrefix": "napcluster",
        "agentPoolProfiles": [
          {
            "name": "agentpool",
            "count": 3,
            "vmSize": "standard_d2s_v3",
            "osType": "Linux",
            "mode": "System"
          }
        ],
        "nodeProvisioningProfile": {
          "mode": "Auto"
        }
      }
    }
  ]
}
```

Then create the NAP-enabled cluster using the `az deployment group create` command with the `--template-file` flag set to the path of the ARM template file.

```azurecli
az deployment group create --resource-group $RESOURCE_GROUP --template-file ./nap.json
```
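With either method, NAP provisions nodes in response to pods that can't be scheduled on the cluster's existing capacity. As a quick smoke test, you could deploy a workload whose aggregate resource requests exceed the current node capacity and watch new nodes appear. The following manifest is a hypothetical example; the deployment name, image, replica count, and request sizes are all illustrative:

```yaml
# Hypothetical test workload; name, image, replicas, and requests are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nap-scale-test
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nap-scale-test
  template:
    metadata:
      labels:
        app: nap-scale-test
    spec:
      containers:
        - name: pause
          image: mcr.microsoft.com/oss/kubernetes/pause:3.6
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
```

After applying the manifest with `kubectl apply -f`, any pods that remain pending should cause NAP to create nodes, which you can observe with `kubectl get nodes -l karpenter.sh/nodepool`.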
Enable NAP on an existing cluster
Enable node auto-provisioning on an existing cluster using the `az aks update` command with the `--node-provisioning-mode` flag set to `Auto`.

```azurecli
az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --node-provisioning-mode Auto
```
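To confirm the change took effect, you could query the cluster's provisioning mode. This is a sketch that assumes the `nodeProvisioningProfile.mode` property shown in the ARM template in this article is surfaced by `az aks show`:

```azurecli
# Query the node provisioning mode; "Auto" indicates NAP is enabled
az aks show \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP \
    --query "nodeProvisioningProfile.mode" \
    --output tsv
```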
Disable node auto-provisioning (NAP) on an AKS cluster
Important
You can only disable NAP on a cluster if the following conditions are met:
- There are no existing NAP nodes. You can use the `kubectl get nodes -l karpenter.sh/nodepool` command to check for existing NAP-managed nodes.
- All existing Karpenter `NodePools` have their `spec.limits.cpu` field set to `0`. This setting prevents new nodes from being created, but doesn't disrupt currently running nodes.
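One way to satisfy the second condition in bulk, rather than editing every manifest by hand, is to patch each `NodePool`. This is a sketch, not part of the official procedure; it assumes the Karpenter `NodePool` custom resource is registered as `nodepools.karpenter.sh`:

```bash
# Set spec.limits.cpu to 0 on every Karpenter NodePool in the cluster
for np in $(kubectl get nodepools.karpenter.sh -o name); do
  kubectl patch "$np" --type merge -p '{"spec":{"limits":{"cpu":"0"}}}'
done
```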
1. Set the `spec.limits.cpu` field to `0` for every existing Karpenter `NodePool`. For example:

    ```yaml
    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: default
    spec:
      limits:
        cpu: 0
    ```

    Important
    If you don't need every pod previously running on a NAP node to be safely migrated to a non-NAP node before disabling NAP, you can skip steps 2 and 3 and instead use the `kubectl delete node` command for each NAP-managed node. However, we don't recommend skipping these steps, as doing so might leave some pods pending and doesn't honor Pod Disruption Budgets (PDBs). When using the `kubectl delete node` command, be careful to only delete NAP-managed nodes. You can identify NAP-managed nodes using the `kubectl get nodes -l karpenter.sh/nodepool` command.

2. Add the
`karpenter.azure.com/disable:NoSchedule` taint to every Karpenter `NodePool`. For example:

    ```yaml
    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          ...
          taints:
            - key: karpenter.azure.com/disable
              effect: NoSchedule
    ```

    This action starts migrating the workloads on the NAP-managed nodes to non-NAP nodes, honoring PDBs and disruption limits. Pods migrate to non-NAP nodes if they can fit. If there isn't enough fixed-size capacity, some NAP-managed nodes remain.
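While the migration runs, you could track which pods are still scheduled on NAP-managed nodes. This sketch relies only on the `karpenter.sh/nodepool` node label used elsewhere in this article:

```bash
# List pods still running on NAP-managed nodes
for node in $(kubectl get nodes -l karpenter.sh/nodepool -o jsonpath='{.items[*].metadata.name}'); do
  echo "Pods on node $node:"
  kubectl get pods --all-namespaces --field-selector "spec.nodeName=$node"
done
```

When this reports no remaining nodes or pods, the workloads have moved to fixed-size capacity.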
3. Scale up existing fixed-size `ManagedCluster` `AgentPools` or create new fixed-size `AgentPools` to take the load from the NAP-managed nodes. As these nodes are added to the cluster, the NAP-managed nodes are drained, and work is migrated to the fixed-size nodes.

4. Verify that all NAP-managed nodes are deleted using the `kubectl get nodes -l karpenter.sh/nodepool` command. If NAP-managed nodes still exist, the cluster likely lacks fixed-size capacity. In this case, add more nodes so the remaining workloads can be migrated.
5. Update the NAP mode to `Manual` using the `az aks update` command with the `--node-provisioning-mode` flag set to `Manual`.

    ```azurecli
    az aks update \
        --name $CLUSTER_NAME \
        --resource-group $RESOURCE_GROUP \
        --node-provisioning-mode Manual
    ```
Alternatively, if you manage the cluster with an ARM template, update the `properties.nodeProvisioningProfile.mode` field to `Manual` in your template and redeploy it.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {},
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2025-05-01",
      "sku": {
        "name": "Base",
        "tier": "Standard"
      },
      "name": "napcluster",
      "location": "chinanorth3",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "networkProfile": {
          "networkPlugin": "azure",
          "networkPluginMode": "overlay",
          "networkPolicy": "cilium",
          "networkDataplane": "cilium",
          "loadBalancerSku": "Standard"
        },
        "dnsPrefix": "napcluster",
        "agentPoolProfiles": [
          {
            "name": "agentpool",
            "count": 3,
            "vmSize": "standard_d2s_v3",
            "osType": "Linux",
            "mode": "System"
          }
        ],
        "nodeProvisioningProfile": {
          "mode": "Manual"
        }
      }
    }
  ]
}
```
Next steps
For more information on node auto-provisioning in AKS, see the following articles:

- Overview of node auto-provisioning (NAP) in AKS
- Create a node auto-provisioning (NAP) cluster in a custom virtual network