Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster

A Spot node pool is a node pool backed by an Azure Spot Virtual Machine Scale Set. Using Spot VMs for the nodes in your AKS cluster allows you to take advantage of unutilized capacity in Azure at significant cost savings. The amount of available unutilized capacity varies based on many factors, including node size, region, and time of day.

When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available. There's no SLA for Spot nodes. A Spot scale set that backs the Spot node pool is deployed in a single fault domain and offers no high availability guarantees. Whenever Azure needs the capacity back, the Azure infrastructure evicts Spot nodes.

Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.

In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.

Before you begin

  • This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).
  • If you don't have an Azure subscription, create a trial subscription before you begin.
  • When you create a cluster to use a Spot node pool, the cluster must use Virtual Machine Scale Sets for node pools and the Standard SKU load balancer. You must also add another node pool after you create your cluster, which is covered in this article.
  • This article requires that you're running the Azure CLI version 2.14 or later. Run az --version to find the version, as shown below. If you need to install or upgrade, see Install Azure CLI.
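
To confirm the installed Azure CLI version and optionally upgrade it, you can run the following commands. The az upgrade command is available in Azure CLI 2.11.0 and later:

az --version

# Upgrade the Azure CLI in place (optional)
az upgrade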

Limitations

The following limitations apply when you create and manage AKS clusters with a Spot node pool:

  • A Spot node pool can't be the cluster's default node pool; it can only be used as a secondary pool.
  • The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
  • A Spot node pool must use Virtual Machine Scale Sets.
  • You can't change ScaleSetPriority or SpotMaxPrice after creation.
  • When setting SpotMaxPrice, the value must be -1 or a positive value with up to five decimal places.
  • A Spot node pool has the label kubernetes.azure.com/scalesetpriority:spot and the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule, and system pods will have anti-affinity.
  • You must add a corresponding toleration and affinity to schedule workloads on a Spot node pool, as shown later in this article.

Add a Spot node pool to an AKS cluster

You can only add a Spot node pool to an existing cluster that has multiple node pools enabled. When you create an AKS cluster with multiple node pools enabled, a node pool with a priority of Regular is created by default. To add a Spot node pool, you must specify Spot as the value for priority. For more details on creating an AKS cluster with multiple node pools, see use multiple node pools.

  • Create a node pool with a priority of Spot using the az aks nodepool add command.

    az aks nodepool add \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name spotnodepool \
        --priority Spot \
        --eviction-policy Delete \
        --spot-max-price -1 \
        --enable-cluster-autoscaler \
        --min-count 1 \
        --max-count 3 \
        --no-wait
    

In the previous command, the priority of Spot makes the node pool a Spot node pool. The eviction-policy parameter is set to Delete, which is the default value. When you set the eviction policy to Delete, nodes in the underlying scale set of the node pool are deleted when they're evicted.

You can also set the eviction policy to Deallocate, which means that the nodes in the underlying scale set are set to the stopped-deallocated state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with cluster scaling or upgrading. The priority and eviction-policy values can only be set during node pool creation. Those values can't be updated later.
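
If you want a Deallocate pool instead, creation looks the same apart from the eviction policy. The following is a minimal sketch; the pool name spotdealloc is a placeholder, and remember that deallocated nodes still count against your compute quota:

# Illustrative only: spotdealloc is a placeholder pool name
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotdealloc \
    --priority Spot \
    --eviction-policy Deallocate \
    --spot-max-price -1 \
    --node-count 1 \
    --no-wait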

The previous command also enables the cluster autoscaler, which we recommend using with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes up and down. For Spot node pools, the cluster autoscaler scales up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the maxCount value associated with the cluster autoscaler. If you don't use a cluster autoscaler, the Spot pool will eventually shrink to zero upon eviction and require a manual operation to receive any additional Spot nodes.
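
For example, if you later want to allow the Spot pool to grow larger, you can update the autoscaler settings on the node pool. The counts below are illustrative values:

# Example values; adjust the min/max counts for your workload
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --update-cluster-autoscaler \
    --min-count 1 \
    --max-count 5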

Important

Schedule only workloads that can handle interruptions, such as batch processing jobs and testing environments, on Spot node pools. We recommend you set up taints and tolerations on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on it. For example, the node pool created by the previous command has the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule, so only pods with a corresponding toleration are scheduled on its nodes.

Verify the Spot node pool

To verify that your node pool was added as a Spot node pool, use the az aks nodepool show command:

az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
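
In the output, confirm that scaleSetPriority is Spot. As a quick check, you can query that property directly; this assumes the scaleSetPriority field name returned by the CLI:

# Should print "Spot" for a Spot node pool
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool --query scaleSetPriority --output tsv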

Schedule a pod to run on the Spot node

To schedule a pod to run on a Spot node, you can add a toleration and node affinity that corresponds to the taint applied to your Spot node.

The following example shows a portion of a YAML file that defines a toleration corresponding to the kubernetes.azure.com/scalesetpriority=spot:NoSchedule taint and a node affinity corresponding to the kubernetes.azure.com/scalesetpriority=spot label used in the previous step.

spec:
  containers:
  - name: spot-example
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.azure.com/scalesetpriority"
            operator: In
            values:
            - "spot"
   ...

When you deploy a pod with this toleration and node affinity, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
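
As a usage sketch, the snippet above could be folded into a complete Pod manifest such as the following. The pod name spot-example and the nginx image are placeholders chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: spot-example
spec:
  containers:
  - name: spot-example
    # Placeholder container image for illustration
    image: nginx
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.azure.com/scalesetpriority"
            operator: In
            values:
            - "spot"

Save the manifest to a file, for example spot-example.yaml, and deploy it with kubectl apply -f spot-example.yaml.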

Upgrade a Spot node pool

When you upgrade a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, the behavior when upgrading Spot node pools is consistent with that of other node pool types.
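
For example, a single Spot node pool can be upgraded with the az aks nodepool upgrade command. The Kubernetes version below is a placeholder; substitute a version supported by your cluster:

# Replace 1.28.5 with a Kubernetes version supported by your cluster
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --kubernetes-version 1.28.5 \
    --no-wait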

For more information on upgrading, see Upgrade an AKS cluster.

Next steps

In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see Best practices for advanced scheduler features in AKS.