Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) using an ARM template.
Important
Your cluster Kubernetes version determines what KEDA version will be installed on your AKS cluster. To see which KEDA version maps to each AKS version, see the AKS managed add-ons column of the Kubernetes component version table.
For GA Kubernetes versions, AKS offers full support of the corresponding KEDA minor version in the table. Kubernetes preview versions and the latest KEDA patch are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
Note
KEDA version 2.15 introduces a breaking change that removes pod identity support. We recommend moving over to workload identity for your authentication if you're using pod identity. While the KEDA managed add-on doesn't currently run KEDA version 2.15, it will begin running it in the AKS preview version 1.31.
For more information on how to securely scale your applications with workload identity, please read our tutorial. To view KEDA's breaking change/deprecation policy, please read their official documentation.
Before you begin
- You need an Azure subscription. If you don't have an Azure subscription, you can create a free trial account.
- You need the Azure CLI installed.
- This article assumes you have an existing Azure resource group. If you don't have an existing resource group, you can create one using the `az group create` command (a minimal example follows this list).
- Ensure you have firewall rules configured to allow access to the Kubernetes API server. For more information, see Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters.
- Create an SSH key pair.
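If you need to create the resource group first, the following is a minimal example; the group name is a placeholder and eastus is only one possible region:

```bash
# Create a resource group to hold the AKS cluster and related resources.
az group create --name <resource-group-name> --location eastus
```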
Note
If you're using Microsoft Entra Workload ID and you enable KEDA before Workload ID, you need to restart the KEDA operator pods so the proper environment variables can be injected:
1. Restart the pods by running `kubectl rollout restart deployment keda-operator -n kube-system`.
2. Obtain the KEDA operator pods using `kubectl get pod -n kube-system` and find the pods that begin with `keda-operator`.
3. Verify successful injection of the environment variables by running `kubectl describe pod <keda-operator-pod> -n kube-system`. Under `Environment`, you should see values for `AZURE_TENANT_ID`, `AZURE_FEDERATED_TOKEN_FILE`, and `AZURE_AUTHORITY_HOST`.
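For reference, the same restart-and-verify sequence is sketched below as copy-and-paste commands; the pod name is a placeholder you must replace with a name returned by the `get pod` step:

```bash
# Restart the KEDA operator so the workload identity environment variables
# are injected into the new pods.
kubectl rollout restart deployment keda-operator -n kube-system

# List the restarted operator pods (names begin with "keda-operator").
kubectl get pod -n kube-system | grep keda-operator

# Verify the injected variables; replace <keda-operator-pod> with a pod
# name from the previous command.
kubectl describe pod <keda-operator-pod> -n kube-system | grep -E 'AZURE_TENANT_ID|AZURE_FEDERATED_TOKEN_FILE|AZURE_AUTHORITY_HOST'
```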
Create an SSH key pair
Navigate to the Azure CLI.
Create an SSH key pair using the `az sshkey create` command.

az sshkey create --name <sshkey-name> --resource-group <resource-group-name>
Enable the KEDA add-on with an ARM template
Deploy the ARM template for an AKS cluster.
Select Edit template.
Enable the KEDA add-on by specifying the `workloadAutoScalerProfile` field in the ARM template, as shown in the following example:

"workloadAutoScalerProfile": {
    "keda": {
        "enabled": true
    }
}
Select Save.
Update the required values for the ARM template:
- Subscription: Select the Azure subscription to use for the deployment.
- Resource group: Select the resource group to use for the deployment.
- Region: Select the region to use for the deployment.
- Dns Prefix: Enter a unique DNS name to use for the cluster.
- Linux Admin Username: Enter a username for the cluster.
- SSH public key source: Select Use existing key stored in Azure.
- Store Keys: Select the key pair you created earlier in the article.
Select Review + create > Create.
Connect to your AKS cluster
To connect to the Kubernetes cluster from your local device, you use kubectl, the Kubernetes command-line client. If you use the Azure Cloud Shell, kubectl is already installed. You can also install it locally using the `az aks install-cli` command.
- Configure `kubectl` to connect to your Kubernetes cluster using the `az aks get-credentials` command. The following example gets credentials for the AKS cluster named MyAKSCluster in the MyResourceGroup resource group:

az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
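Optionally, you can verify both the connection and that the KEDA operator is running; this is a quick sketch assuming the add-on was deployed into the kube-system namespace as described earlier:

```bash
# Verify the connection by listing the cluster nodes.
kubectl get nodes

# Verify the KEDA add-on: look for pods whose names begin with "keda"
# in the kube-system namespace.
kubectl get pods -n kube-system | grep keda
```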
Example deployment
The following snippet is a sample deployment that creates a cluster with KEDA enabled and a single node pool consisting of three Standard_D2S_v5 nodes.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"apiVersion": "2023-03-01",
"dependsOn": [],
"type": "Microsoft.ContainerService/managedClusters",
"location": "westcentralus",
"name": "myAKSCluster",
"properties": {
"kubernetesVersion": "1.27",
"enableRBAC": true,
"dnsPrefix": "myAKSCluster",
"agentPoolProfiles": [
{
"name": "agentpool",
"osDiskSizeGB": 200,
"count": 3,
"enableAutoScaling": false,
"vmSize": "Standard_D2S_v5",
"osType": "Linux",
"type": "VirtualMachineScaleSets",
"mode": "System",
"maxPods": 110,
"availabilityZones": [],
"nodeTaints": [],
"enableNodePublicIP": false
}
],
"networkProfile": {
"loadBalancerSku": "standard",
"networkPlugin": "kubenet"
},
"workloadAutoScalerProfile": {
"keda": {
"enabled": true
}
}
},
"identity": {
"type": "SystemAssigned"
}
}
]
}
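If you prefer to deploy the sample from the command line instead of the portal flow above, a minimal sketch follows; the file name and deployment name are illustrative placeholders:

```bash
# Save the snippet above as aks-keda.json, then deploy it into an
# existing resource group.
az deployment group create \
    --resource-group <resource-group-name> \
    --name aks-keda-sample \
    --template-file aks-keda.json
```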
Start scaling apps with KEDA
You can autoscale your apps with KEDA using custom resource definitions (CRDs). For more information, see the KEDA documentation.
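As an illustration only, the following sketch applies a minimal ScaledObject that scales a hypothetical my-app deployment on CPU utilization; the deployment name, namespace, replica bounds, and trigger values aren't part of this article and are shown purely as an example of the CRD shape:

```bash
# Apply a minimal ScaledObject (hypothetical target deployment "my-app").
# Note: the CPU scaler requires CPU resource requests on the target pods.
kubectl apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-app               # an existing Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"            # target average CPU utilization (%)
EOF
```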
Remove resources
Remove the resource group and all related resources using the `az group delete` command.

az group delete --name <resource-group-name>
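For example, the optional flags below skip the confirmation prompt and return without waiting for the deletion to finish:

```bash
# Delete the resource group, the AKS cluster, and all related resources.
az group delete --name <resource-group-name> --yes --no-wait
```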
Next steps
This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can deploy a sample application to start scaling apps.
For information on KEDA troubleshooting, see Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on.
To learn more, view the upstream KEDA docs.