Deploy and score a machine learning model by using an online endpoint

APPLIES TO: Azure CLI ml extension v2 (current), Python SDK azure-ai-ml v2 (current)

In this article, you learn to deploy your model to an online endpoint for use in real-time inferencing. You begin by deploying a model on your local machine to debug any errors. Then, you deploy and test the model in Azure, view the deployment logs, and monitor the service-level agreement (SLA). By the end of this article, you'll have a scalable HTTPS/REST endpoint that you can use for real-time inference.

Online endpoints are endpoints that are used for real-time inferencing. There are two types of online endpoints: managed online endpoints and Kubernetes online endpoints. For more information on endpoints and differences between managed online endpoints and Kubernetes online endpoints, see What are Azure Machine Learning endpoints?

Managed online endpoints help you deploy your machine learning models in a turnkey manner. They work with powerful CPU and GPU machines in Azure in a scalable, fully managed way, and they take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure.

The main example in this article uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this article that appear inline with the managed online endpoint discussion.

Prerequisites

APPLIES TO: Azure CLI ml extension v2 (current)

Before following the steps in this article, make sure you have the following prerequisites:

  • Azure role-based access control (Azure RBAC) is used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role that allows Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. If you use the studio to create or manage online endpoints and deployments, you need the extra permission Microsoft.Resources/deployments/write from the resource group owner. For more information, see Manage access to an Azure Machine Learning workspace.

  • (Optional) To deploy locally, you must install Docker Engine on your local computer. We highly recommend this option because it makes debugging issues easier.

  • Ensure that you have enough virtual machine (VM) quota allocated for deployment. Azure Machine Learning reserves 20% of your compute resources for performing upgrades on some VM SKUs. For example, if you request 10 instances in a deployment, you must have quota for 12 instances' worth of cores for the VM SKU. Failure to account for the extra compute resources results in an error. Some VM SKUs are exempt from the extra quota reservation. For more information on quota allocation, see virtual machine quota allocation for deployment.

  • Alternatively, you could use quota from Azure Machine Learning's shared quota pool for a limited time. Azure Machine Learning provides a shared quota pool from which users across various regions can access quota to perform testing, depending upon availability. When you use the studio to deploy Llama-2, Phi, Nemotron, Mistral, Dolly, and Deci-DeciLM models from the model catalog to a managed online endpoint, Azure Machine Learning allows you to access its shared quota pool for a short time so that you can perform testing. For more information on the shared quota pool, see Azure Machine Learning shared quota.

Prepare your system

Set environment variables

If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:

az account set --subscription <subscription ID>
az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
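
To verify that the defaults took effect, you can list them (a standard Azure CLI command, shown here as a quick sanity check):

az configure --list-defaults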

Clone the examples repository

To follow along with this article, first clone the examples repository (azureml-examples). Then, run the following code to go to the repository's cli/ directory:

git clone --depth 1 https://github.com/Azure/azureml-examples
cd azureml-examples
cd cli

Tip

Use --depth 1 to clone only the latest commit of the repository, which reduces the time to complete the operation.
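
If you later need the repository's full history, you can convert the shallow clone with plain git (unrelated to the Azure CLI):

git fetch --unshallow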

The commands in this tutorial are in the files deploy-local-endpoint.sh and deploy-managed-online-endpoint.sh in the cli directory, and the YAML configuration files are in the endpoints/online/managed/sample/ subdirectory.

Note

The YAML configuration files for Kubernetes online endpoints are in the endpoints/online/kubernetes/ subdirectory.

Define the endpoint

To define an online endpoint, specify the endpoint name and authentication mode. For more information on managed online endpoints, see Online endpoints.

Set an endpoint name

To set your endpoint name, run the following command. Replace YOUR_ENDPOINT_NAME with a name that's unique in the Azure region. For more information on the naming rules, see endpoint limits.

For Linux, run this command:

export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
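
Because endpoint names must be unique within a region, a convenient pattern for testing (illustrative, not required) is to append a random suffix:

export ENDPOINT_NAME="endpt-$RANDOM"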

Configure the endpoint

The following snippet shows the endpoints/online/managed/sample/endpoint.yml file:

$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
auth_mode: key

The following list describes the keys of the endpoint YAML format. To learn how to specify these attributes, see the online endpoint YAML reference. For information about limits related to managed endpoints, see limits for online endpoints.

  • $schema - (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding code snippet in a browser.
  • name - The name of the endpoint.
  • auth_mode - Use key for key-based authentication, aml_token for Azure Machine Learning token-based authentication, or aad_token for Microsoft Entra token-based authentication (preview). For more information on authenticating, see Authenticate clients for online endpoints.
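
After you create the endpoint later in this article, you can confirm which mode it uses by reading the auth_mode attribute with the Azure CLI's generic --query flag:

az ml online-endpoint show -n $ENDPOINT_NAME --query auth_mode -o tsv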

Define the deployment

A deployment is a set of resources required for hosting the model that does the actual inferencing. For this example, you deploy a scikit-learn model that does regression and use a scoring script score.py to execute the model upon a given input request.

To learn about the key attributes of a deployment, see Online deployments.

Configure a deployment

Your deployment configuration uses the location of the model that you wish to deploy.

The following snippet shows the endpoints/online/managed/sample/blue-deployment.yml file, with all the required inputs to configure a deployment:

blue-deployment.yml

$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model:
  path: ../../model-1/model/
code_configuration:
  code: ../../model-1/onlinescoring/
  scoring_script: score.py
environment: 
  conda_file: ../../model-1/environment/conda.yaml
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
instance_type: Standard_DS3_v2
instance_count: 1

The blue-deployment.yml file specifies the following deployment attributes:

  • model - specifies the model properties inline, using the path (where to upload files from). The CLI automatically uploads the model files and registers the model with an autogenerated name.
  • environment - specifies the environment inline, including where to upload files from. The CLI automatically uploads the conda.yaml file and registers the environment. Later, to build the environment, the deployment uses the image (in this example, mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest) as the base image and installs the conda_file dependencies on top of it.
  • code_configuration - during deployment, the local files, such as the Python source for the scoring model, are uploaded from the development environment.

For more information about the YAML schema, see the online endpoint YAML reference.

Note

To use Kubernetes endpoints instead of managed online endpoints as a compute target:

  1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using Azure Machine Learning studio.
  2. Use the endpoint YAML to target Kubernetes, instead of the managed endpoint YAML. You need to edit the YAML to change the value of compute to the name of your registered compute target. You can use this deployment.yaml that has additional properties applicable to a Kubernetes deployment.

All the commands that are used in this article for managed online endpoints also apply to Kubernetes endpoints, except for capabilities that are specific to managed online endpoints.

Understand the scoring script

Tip

The format of the scoring script for online endpoints is the same format that's used in the preceding version of the CLI and in the Python SDK.

The scoring script specified in code_configuration.scoring_script must have an init() function and a run() function.

This example uses the score.py file:

score.py

import os
import logging
import json
import numpy
import joblib


def init():
    """
    This function is called when the container is initialized/started, typically after create/update of the deployment.
    You can write the logic here to perform init operations like caching the model in memory
    """
    global model
    # AZUREML_MODEL_DIR is an environment variable created during deployment.
    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
    # Please provide your model's folder name if there is one
    model_path = os.path.join(
        os.getenv("AZUREML_MODEL_DIR"), "model/sklearn_regression_model.pkl"
    )
    # deserialize the model file back into a sklearn model
    model = joblib.load(model_path)
    logging.info("Init complete")


def run(raw_data):
    """
    This function is called for every invocation of the endpoint to perform the actual scoring/prediction.
    In the example we extract the data from the json input and call the scikit-learn model's predict()
    method and return the result back
    """
    logging.info("model 1: request received")
    data = json.loads(raw_data)["data"]
    data = numpy.array(data)
    result = model.predict(data)
    logging.info("Request processed")
    return result.tolist()

The init() function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. The init function is the place to write logic for global initialization operations like caching the model in memory (as shown in this score.py file).

The run() function is called every time the endpoint is invoked, and it does the actual scoring and prediction. In this score.py file, the run() function extracts data from a JSON input, calls the scikit-learn model's predict() method, and then returns the prediction result.
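
Because run() reads json.loads(raw_data)["data"], the request body must be a JSON object with a top-level data key that holds the feature rows. A minimal payload of that shape looks like the following sketch (illustrative only; the repository's endpoints/online/model-1/sample-request.json is the canonical sample, and the row length must match the number of features the model was trained on):

cat > my-sample-request.json <<'EOF'
{
  "data": [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  ]
}
EOF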

Deploy and debug locally by using a local endpoint

We highly recommend that you test-run your endpoint locally to validate and debug your code and configuration before you deploy to Azure. The Azure CLI and Python SDK support local endpoints and deployments, while Azure Machine Learning studio and ARM templates don't.

To deploy locally, Docker Engine must be installed and running. Docker Engine typically starts when the computer starts. If it doesn't, you can troubleshoot Docker Engine.

Tip

You can use the Azure Machine Learning inference HTTP server Python package to debug your scoring script locally without Docker Engine. Debugging with the inference server helps you validate the scoring script before you deploy to local endpoints, without being affected by the deployment container configurations.

For more information on debugging online endpoints locally before deploying to Azure, see Online endpoint debugging.

Deploy the model locally

First create an endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. Deploying models locally is useful for development and testing purposes.

az ml online-endpoint create --local -n $ENDPOINT_NAME -f endpoints/online/managed/sample/endpoint.yml

Now, create a deployment named blue under the endpoint.

az ml online-deployment create --local -n blue --endpoint $ENDPOINT_NAME -f endpoints/online/managed/sample/blue-deployment.yml

The --local flag directs the CLI to deploy the endpoint in the Docker environment.
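
Because the local endpoint runs in a Docker container, you can also inspect it with ordinary Docker tooling, independent of the Azure CLI:

docker ps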

Tip

Use Visual Studio Code to test and debug your endpoints locally. For more information, see debug online endpoints locally in Visual Studio Code.

Verify that the local deployment succeeded

Check the deployment status to see whether the model was deployed without error:

az ml online-endpoint show -n $ENDPOINT_NAME --local

The output should appear similar to the following JSON. The provisioning_state is Succeeded.

{
  "auth_mode": "key",
  "location": "local",
  "name": "docs-endpoint",
  "properties": {},
  "provisioning_state": "Succeeded",
  "scoring_uri": "http://localhost:49158/score",
  "tags": {},
  "traffic": {}
}

The possible values for provisioning_state are:

  • Creating - The resource is being created.
  • Updating - The resource is being updated.
  • Deleting - The resource is being deleted.
  • Succeeded - The create/update operation succeeded.
  • Failed - The create/update/delete operation failed.
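
In scripts, you can extract just the state by combining the show command with the generic --query flag (the same filtering approach that's used later in this article):

az ml online-endpoint show -n $ENDPOINT_NAME --local --query provisioning_state -o tsv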

Invoke the local endpoint to score data by using your model

Invoke the endpoint to score the model by using the invoke command and passing a request payload that's stored in a JSON file:

az ml online-endpoint invoke --local --name $ENDPOINT_NAME --request-file endpoints/online/model-1/sample-request.json

If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run az ml online-endpoint show --local -n $ENDPOINT_NAME. In the returned data, find the scoring_uri attribute.
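
For example, here's a sketch that combines those commands to score the sample request against the local endpoint (local endpoints generally don't enforce key authentication, so no Authorization header is included; add one if your configuration requires it):

SCORING_URI=$(az ml online-endpoint show --local -n $ENDPOINT_NAME -o tsv --query scoring_uri)
curl --request POST "$SCORING_URI" --header 'Content-Type: application/json' --data @endpoints/online/model-1/sample-request.json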

Review the logs for output from the invoke operation

In the example score.py file, the run() function logs some output to the console.

You can view this output by using the get-logs command:

az ml online-deployment get-logs --local -n blue --endpoint $ENDPOINT_NAME

Deploy your online endpoint to Azure

Next, deploy your online endpoint to Azure. As a best practice for production, we recommend that you register the model and environment that you'll use in your deployment.

Register your model and environment

We recommend that you register your model and environment before deployment to Azure so that you can specify their registered names and versions during deployment. Registering your assets allows you to reuse them without the need to upload them every time you create deployments, thereby increasing reproducibility and traceability.

Note

Unlike deployment to Azure, local deployment doesn't support using registered models and environments. Rather, local deployment uses local model files and uses environments with local files only. For deployment to Azure, you can use either local or registered assets (models and environments). In this section of the article, the deployment to Azure uses registered assets, but you have the option of using local assets instead. For an example of a deployment configuration that uploads local files to use for local deployment, see Configure a deployment.

To reference a registered model and environment in your deployment, use the form model: azureml:my-model:1 or environment: azureml:my-env:1. For registration, you can extract the YAML definitions of model and environment into separate YAML files and use the commands az ml model create and az ml environment create. To learn more about these commands, run az ml model create -h and az ml environment create -h.

  1. Create a YAML definition for the model:

    $schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
    name: my-model
    path: ../../model-1/model/
    
  2. Register the model:

    az ml model create -n my-model -v 1 -f ./model.yaml
    
  3. Create a YAML definition for the environment:

    $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
    name: my-env
    image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
    conda_file: ../../model-1/environment/conda.yaml
    
  4. Register the environment:

    az ml environment create -n my-env -v 1 -f ./environment.yaml
    

For more information on registering your model as an asset, see Register your model as an asset in Machine Learning by using the CLI. For more information on creating an environment, see Manage Azure Machine Learning environments with the CLI & SDK (v2).
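
To confirm that both registrations succeeded, you can display the assets with the standard show commands:

az ml model show --name my-model --version 1
az ml environment show --name my-env --version 1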

Configure a deployment that uses registered assets

Your deployment configuration uses the registered model and environment that you wish to deploy.

Use the registered assets (model and environment) in your deployment definition. The following snippet shows the endpoints/online/managed/sample/blue-deployment-with-registered-assets.yml file, with all the required inputs to configure a deployment:

blue-deployment-with-registered-assets.yml

$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1
code_configuration:
  code: ../../model-1/onlinescoring/
  scoring_script: score.py
environment: azureml:my-env:1
instance_type: Standard_DS3_v2
instance_count: 1

Use different CPU and GPU instance types and images

You can specify the CPU or GPU instance types and images in your deployment definition for both local deployment and deployment to Azure.

Your deployment definition in the blue-deployment-with-registered-assets.yml file uses a general-purpose instance type, Standard_DS3_v2, and a non-GPU Docker image, mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest. For GPU compute, choose a GPU compute type SKU and a GPU Docker image.

For supported general-purpose and GPU instance types, see Managed online endpoints supported VM SKUs. For a list of Azure Machine Learning CPU and GPU base images, see Azure Machine Learning base images.
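
For example, to try a GPU SKU without editing the YAML, you could override the instance type at creation time with the generic --set parameter. This is a sketch: Standard_NC6s_v3 is used purely as an illustration, you need quota and a GPU base image for whatever SKU you pick, and you should confirm --set support with az ml online-deployment create -h:

az ml online-deployment create -n blue --endpoint $ENDPOINT_NAME -f endpoints/online/managed/sample/blue-deployment-with-registered-assets.yml --set instance_type=Standard_NC6s_v3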

Note

To use Kubernetes, instead of managed endpoints, as a compute target, see Introduction to Kubernetes compute target.

Next, deploy your online endpoint to Azure.

Deploy to Azure

  1. Create the endpoint in the Azure cloud.

     az ml online-endpoint create --name $ENDPOINT_NAME -f endpoints/online/managed/sample/endpoint.yml
    
  2. Create the deployment named blue under the endpoint.

    az ml online-deployment create --name blue --endpoint $ENDPOINT_NAME -f endpoints/online/managed/sample/blue-deployment-with-registered-assets.yml --all-traffic
    

    The deployment creation can take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment are processed faster.

    Tip

    • If you prefer not to block your CLI console, you can add the flag --no-wait to the command. However, this option will stop the interactive display of the deployment status.

    Important

    The --all-traffic flag in the az ml online-deployment create command allocates 100% of the endpoint traffic to the newly created blue deployment. Though this flag is helpful for development and testing purposes, in production you might want to route traffic to the new deployment through an explicit command. For example, az ml online-endpoint update -n $ENDPOINT_NAME --traffic "blue=100".
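
    You can inspect the current traffic split at any time; traffic is an attribute of the endpoint, as seen in the earlier show output:

    az ml online-endpoint show -n $ENDPOINT_NAME --query traffic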

To debug errors in your deployment, see Troubleshooting online endpoint deployments.

Check the status of the endpoint

  1. Use the show command to display information in the provisioning_state for the endpoint and deployment:

     az ml online-endpoint show -n $ENDPOINT_NAME
    
  2. List all the endpoints in the workspace in a table format by using the list command:

    az ml online-endpoint list --output table
    

Check the status of the online deployment

Check the logs to see whether the model was deployed without error.

  1. To see log output from a container, use the following CLI command:

     az ml online-deployment get-logs --name blue --endpoint $ENDPOINT_NAME
    

    By default, logs are pulled from the inference server container. To see logs from the storage initializer container, add the --container storage-initializer flag. For more information on deployment logs, see Get container logs.
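
    For example, here's the storage initializer variant of the command from the previous step:

     az ml online-deployment get-logs --name blue --endpoint $ENDPOINT_NAME --container storage-initializer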

Invoke the endpoint to score data by using your model

  1. Use either the invoke command or a REST client of your choice to invoke the endpoint and score some data:

     az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/model-1/sample-request.json
    
  2. Get the key used to authenticate to the endpoint:

    Tip

    You can control which Microsoft Entra security principals can get the authentication key by assigning them to a custom role that allows Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action and Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action. For more information on managing authorization to workspaces, see Manage access to an Azure Machine Learning workspace.

     ENDPOINT_KEY=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME -o tsv --query primaryKey)
    
  3. Use curl to score data.

     SCORING_URI=$(az ml online-endpoint show -n $ENDPOINT_NAME -o tsv --query scoring_uri)
    
     curl --request POST "$SCORING_URI" --header "Authorization: Bearer $ENDPOINT_KEY" --header 'Content-Type: application/json' --data @endpoints/online/model-1/sample-request.json
    

    Notice that you use the show and get-credentials commands to get the authentication credentials. Also notice that the --query flag filters only the attributes that are needed. To learn more about the --query flag, see Query Azure CLI command output.

  4. To see the invocation logs, run get-logs again.

(Optional) Update the deployment

If you want to update the code, model, or environment, update the YAML file, and then run the az ml online-deployment update command.

Note

If you update instance count (to scale your deployment) along with other model settings (such as code, model, or environment) in a single update command, the scaling operation will be performed first, then the other updates will be applied. It's a good practice to perform these operations separately in a production environment.

To understand how update works:

  1. Open the file endpoints/online/model-1/onlinescoring/score.py.

  2. Change the last line of the init() function: After logging.info("Init complete"), add logging.info("Updated successfully").

  3. Save the file.

  4. Run this command:

    az ml online-deployment update -n blue --endpoint $ENDPOINT_NAME -f endpoints/online/managed/sample/blue-deployment-with-registered-assets.yml
    

    Note

    Updating by using YAML is declarative. That is, changes in the YAML are reflected in the underlying Azure Resource Manager resources (endpoints and deployments). A declarative approach facilitates GitOps: All changes to endpoints and deployments (even instance_count) go through the YAML.

    Tip

    • You can use generic update parameters, such as the --set parameter, with the CLI update command to override attributes in your YAML or to set specific attributes without passing them in the YAML file. Using --set for single attributes is especially valuable in development and test scenarios. For example, to scale up the instance_count value for the first deployment, you could use the --set instance_count=2 flag. However, because the YAML isn't updated, this technique doesn't facilitate GitOps.
    • Specifying the YAML file is NOT mandatory. For example, if you want to test a different concurrency setting for a given deployment, you can try something like az ml online-deployment update -n blue -e my-endpoint --set request_settings.max_concurrent_requests_per_instance=4 environment_variables.WORKER_COUNT=4. This approach keeps all the existing configuration and updates only the specified parameters.
  5. Because you modified the init() function, which runs when the endpoint is created or updated, the message Updated successfully will be in the logs. Retrieve the logs by running:

    az ml online-deployment get-logs --name blue --endpoint $ENDPOINT_NAME
    

The update command also works with local deployments. Use the same az ml online-deployment update command with the --local flag.
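
For example, here's the local-flavored version of the same update, targeting the local deployment that was created earlier from blue-deployment.yml:

az ml online-deployment update --local -n blue --endpoint $ENDPOINT_NAME -f endpoints/online/managed/sample/blue-deployment.yml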

Note

The update to the deployment in this section is an example of an in-place rolling update.

  • For a managed online endpoint, the deployment is updated to the new configuration with 20% nodes at a time. That is, if the deployment has 10 nodes, 2 nodes at a time are updated.
  • For a Kubernetes online endpoint, the system iteratively creates a new deployment instance with the new configuration and deletes the old one.
  • For production usage, you should consider blue-green deployment, which offers a safer alternative for updating a web service.

(Optional) Configure autoscaling

Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints support autoscaling through integration with the Azure monitor autoscale feature. To configure autoscaling, see How to autoscale online endpoints.

(Optional) Monitor SLA by using Azure Monitor

To view metrics and set alerts based on your SLA, complete the steps that are described in Monitor online endpoints.

(Optional) Integrate with Log Analytics

The get-logs command for the CLI or the get_logs method for the SDK provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see Monitor online endpoints.

Delete the endpoint and the deployment

Delete the endpoint and all its underlying deployments:

az ml online-endpoint delete --name $ENDPOINT_NAME --yes --no-wait
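
If you also created the local endpoint earlier, you can remove it the same way by adding the --local flag, as with the other commands in this article:

az ml online-endpoint delete --local --name $ENDPOINT_NAME --yes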