Deploy MLflow models to online endpoints

APPLIES TO: Azure CLI ml extension v2 (current)

In this article, learn how to deploy your MLflow model to an online endpoint for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to specify a scoring script or an environment—this functionality is known as no-code deployment.

For no-code-deployment, Azure Machine Learning:

  • Dynamically installs the Python packages listed in the model's conda.yaml file, so dependencies are installed at container runtime.
  • Provides an MLflow base image/curated environment that contains the following items:
    • azureml-inference-server-http
    • mlflow-skinny
    • A scoring script for inferencing.

Tip

Workspaces without public network access: Before you can deploy MLflow models to online endpoints without egress connectivity, you have to package the models (preview). By using model packaging, you can avoid the need for an internet connection, which Azure Machine Learning would otherwise require to dynamically install necessary Python packages for the MLflow models.

About the example

The example shows how you can deploy an MLflow model to an online endpoint to perform predictions. The example uses an MLflow model that's based on the Diabetes dataset. This dataset contains 10 baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.

The model was trained using a scikit-learn regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
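
For orientation, here's a minimal sketch of how such a model could be produced and logged. The pipeline steps and estimator are illustrative assumptions, not the exact training code behind the sample model:

# Sketch: train a scikit-learn pipeline on the Diabetes dataset and log it
# as an MLflow model. The scaler/regressor choice is an assumption.
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)

pipeline = Pipeline(
    [
        ("scaler", StandardScaler()),  # preprocessing travels with the model
        ("regressor", Ridge()),        # quantitative disease-progression target
    ]
)
pipeline.fit(X, y)

with mlflow.start_run():
    # Logging the whole pipeline makes the model end-to-end:
    # raw input columns in, predictions out.
    mlflow.sklearn.log_model(pipeline, artifact_path="model")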

The information in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to cli, if you're using the Azure CLI. If you're using the Azure Machine Learning SDK for Python, change directories to sdk/python/endpoints/online/mlflow.

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli

Follow along in Jupyter Notebook

You can follow the steps for using the Azure Machine Learning Python SDK by opening the Deploy MLflow model to online endpoints notebook in the cloned repository.

Prerequisites

Before following the steps in this article, make sure you have the following prerequisites:

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.

  • Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. For more information on roles, see Manage access to an Azure Machine Learning workspace.

  • You must have an MLflow model registered in your workspace. This article registers a model trained for the Diabetes dataset in the workspace.

  • Also, you need to install the Azure CLI and the ml extension to the Azure CLI. For more information, see Install and set up the CLI (v2).


Connect to your workspace

First, connect to the Azure Machine Learning workspace where you'll work.

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>

Register the model

You can deploy only registered models to online endpoints. In this case, you already have a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.

MODEL_NAME='sklearn-diabetes'
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "endpoints/online/ncd/sklearn-diabetes/model"
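
If you're using the Python SDK v2 instead, an equivalent registration looks like the following sketch; it assumes the azure-ai-ml and azure-identity packages are installed, and the subscription, resource group, and workspace values are placeholders:

# Sketch: register the local MLflow model with the Python SDK v2.
from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

model = ml_client.models.create_or_update(
    Model(
        name="sklearn-diabetes",
        path="endpoints/online/ncd/sklearn-diabetes/model",
        type=AssetTypes.MLFLOW_MODEL,
    )
)
print(model.name, model.version)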

What if your model was logged inside of a run?

If your model was logged inside of a run, you can register it directly.

To register the model, you need to know the location where it's stored. If you're using MLflow's autolog feature, the path to the model depends on the model type and framework. Check the job's outputs to identify the name of the model's folder. This folder contains a file named MLmodel.

If you're using the log_model method to manually log your models, pass the path to the model as an argument to the method. For example, if you log the model by using mlflow.sklearn.log_model(my_model, "classifier"), then the path where the model is stored is called classifier.

Use the Azure Machine Learning CLI v2 to create a model from a training job output. In the following example, a model named $MODEL_NAME is registered using the artifacts of a job with ID $RUN_ID. The path where the model is stored is $MODEL_PATH.

az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/artifacts/$MODEL_PATH

Note

The path $MODEL_PATH is the location where the model has been stored in the run.
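
With the SDK v2 (and the authenticated ml_client from the earlier sketch), registering from a job's output might look like this; run_id and model_path are placeholders you'd fill in from the actual job:

# Sketch: register a model from a job's artifacts with the SDK v2.
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model

run_id = "<RUN_ID>"          # ID of the training job
model_path = "<MODEL_PATH>"  # model folder under the job's artifacts, e.g. "classifier"

model = ml_client.models.create_or_update(
    Model(
        name="sklearn-diabetes",
        path=f"azureml://jobs/{run_id}/outputs/artifacts/{model_path}",
        type=AssetTypes.MLFLOW_MODEL,
    )
)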

Deploy an MLflow model to an online endpoint

  1. Configure the endpoint where the model will be deployed. The following example configures the name and authentication mode of the endpoint:

Set an endpoint name by running the following command (replace YOUR_ENDPOINT_NAME with a unique name):

export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"

Then configure the endpoint:

create-endpoint.yaml

$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
auth_mode: key

  2. Create the endpoint:

az ml online-endpoint create --name $ENDPOINT_NAME -f endpoints/online/ncd/create-endpoint.yaml

  3. Create the deployment. The deployment definition for the scikit-learn model is in the endpoints/online/ncd/sklearn-deployment.yaml file of the examples repository. The following command creates the deployment and routes all of the endpoint's traffic to it:

az ml online-deployment create --name sklearn-deployment --endpoint $ENDPOINT_NAME -f endpoints/online/ncd/sklearn-deployment.yaml --all-traffic

  4. Once the deployment completes, invoke the endpoint with a sample request file to test it:

az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/ncd/sample-request-sklearn.json

The response will be similar to the following text:

[ 
  11633.100167144921,
  8522.117402884991
]

Important

For MLflow no-code-deployment, testing via local endpoints is currently not supported.
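
If you're using the Python SDK v2 instead of the CLI, the endpoint and deployment can be created with a sketch like the following. It assumes the authenticated ml_client from the earlier registration sketch; the instance type is an assumption you'd adjust to a SKU available in your workspace:

# Sketch: create the endpoint and a no-code MLflow deployment with the SDK v2.
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint

endpoint = ManagedOnlineEndpoint(name="<YOUR_ENDPOINT_NAME>", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="sklearn-deployment",
    endpoint_name=endpoint.name,
    model="azureml:sklearn-diabetes@latest",  # asset reference, as in the YAML examples
    instance_type="Standard_DS3_v2",          # assumed SKU; pick one available to you
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Route all traffic to the new deployment.
endpoint.traffic = {"sklearn-deployment": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()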

Customize MLflow model deployments

You don't have to specify a scoring script in the deployment definition of an MLflow model to an online endpoint. However, you can opt to do so and customize how inference gets executed.

You'll typically want to customize your MLflow model deployment when:

  • The model doesn't have a PyFunc flavor.
  • You need to customize the way the model is run, for instance, by using a specific flavor to load it with mlflow.<flavor>.load_model() (see the sketch after this list).
  • You need to do pre- or post-processing in your scoring routine when it's not done by the model itself.
  • The output of the model can't be nicely represented as tabular data. For instance, it's a tensor representing an image.
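
For the flavor-specific case, the following is a minimal sketch of the difference between loading with PyFunc and loading with a native flavor; the folder name model is an assumption matching this article's example:

# Sketch: load an MLflow model with a specific flavor instead of PyFunc.
import mlflow

model_path = "model"  # assumed artifact folder, as in this article's example

# Generic interface: only predict() is available.
pyfunc_model = mlflow.pyfunc.load_model(model_path)

# scikit-learn flavor: the native estimator, with the full sklearn API
# (for example, predict_proba on classifiers, or access to pipeline steps).
sklearn_model = mlflow.sklearn.load_model(model_path)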

Important

If you choose to specify a scoring script for an MLflow model deployment, you'll also have to specify the environment where the deployment will run.

Steps

To deploy an MLflow model with a custom scoring script:

  1. Identify the folder where your MLflow model is located.

    a. Go to the Azure Machine Learning studio.

    b. Go to the Models section.

    c. Select the model you're trying to deploy and go to its Artifacts tab.

    d. Take note of the folder that is displayed. This folder was specified when the model was registered.

    Screenshot showing the folder where the model artifacts are placed.

  2. Create a scoring script. Notice how the folder name model that you previously identified is included in the init() function.

    score.py

import logging
import os
import json
import mlflow
from io import StringIO
from mlflow.pyfunc.scoring_server import infer_and_parse_json_input, predictions_to_json


def init():
    global model
    global input_schema
    # "model" is the path of the mlflow artifacts when the model was registered. For automl
    # models, this is generally "mlflow-model".
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model")
    model = mlflow.pyfunc.load_model(model_path)
    input_schema = model.metadata.get_input_schema()


def run(raw_data):
    json_data = json.loads(raw_data)
    if "input_data" not in json_data.keys():
        raise Exception("Request must contain a top level key named 'input_data'")

    serving_input = json.dumps(json_data["input_data"])
    data = infer_and_parse_json_input(serving_input, input_schema)
    predictions = model.predict(data)

    result = StringIO()
    predictions_to_json(predictions, result)
    return result.getvalue()

Tip

The preceding scoring script is provided as an example of how to perform inference with an MLflow model. You can adapt it to your needs or change any of its parts to reflect your scenario.

Warning

MLflow 2.0 advisory: The provided scoring script will work with both MLflow 1.X and MLflow 2.X. However, be advised that the expected input/output formats on those versions might vary. Check the environment definition used to ensure you're using the expected MLflow version. Notice that MLflow 2.0 is only supported in Python 3.8+.
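
Before wiring the script into a deployment, it can be useful to smoke-test init() and run() locally. The following sketch assumes a local copy of the registered model folder and fakes the AZUREML_MODEL_DIR variable that Azure Machine Learning sets inside the container; the paths and payload values are illustrative:

# Sketch: exercise score.py locally. Paths and payload values are assumptions.
import json
import os

# Azure Machine Learning sets AZUREML_MODEL_DIR inside the container; fake it
# here so init() can find the "model" folder under it. Adjust to your layout.
os.environ["AZUREML_MODEL_DIR"] = "sklearn-diabetes"

import score  # the scoring script shown above, saved as score.py

score.init()
payload = json.dumps(
    {
        "input_data": {
            "columns": ["age", "sex", "bmi", "bp", "s1", "s2", "s3", "s4", "s5", "s6"],
            "data": [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]],
            "index": [0],
        }
    }
)
print(score.run(payload))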

  3. Create an environment where the scoring script can be executed. Since the model is an MLflow model, the conda requirements are also specified in the model package. For more details about the files included in an MLflow model, see The MLmodel format. You'll then build the environment using the conda dependencies from the file. However, you also need to include the package azureml-inference-server-http, which is required for online deployments in Azure Machine Learning.

    The conda definition file is as follows:

    conda.yml

     channels:
     - conda-forge
     dependencies:
     - python=3.9
     - pip
     - pip:
       - mlflow
       - scikit-learn==1.2.2
       - cloudpickle==2.2.1
       - psutil==5.9.4
       - pandas==2.0.0
       - azureml-inference-server-http
     name: mlflow-env
    

    Note

    The azureml-inference-server-http package has been added to the original conda dependencies file.

    You'll use this conda dependencies file to create the environment:

    The environment will be created inline in the deployment configuration.
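
    If you're scripting this step with the Python SDK v2 rather than YAML, an inline environment can be declared with a sketch like the following; it mirrors the base image and conda file used in the deployment configuration below:

# Sketch: define the scoring environment (base image + conda file) with the SDK v2.
from azure.ai.ml.entities import Environment

environment = Environment(
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04",
    conda_file="sklearn-diabetes/environment/conda.yml",
)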

  4. Create the deployment:

    Create a deployment configuration file deployment.yml:

    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
    name: sklearn-diabetes-custom
    endpoint_name: my-endpoint
    model: azureml:sklearn-diabetes@latest
    environment: 
      image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04
      conda_file: sklearn-diabetes/environment/conda.yml
    code_configuration:
      code: sklearn-diabetes/src
      scoring_script: score.py
    instance_type: Standard_F2s_v2
    instance_count: 1
    

    Create the deployment:

    az ml online-deployment create -f deployment.yml
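
    The same custom deployment can be expressed with the Python SDK v2. This sketch reuses the environment object from the previous step and assumes the authenticated ml_client from earlier:

# Sketch: deployment with a custom scoring script, using the SDK v2.
from azure.ai.ml.entities import CodeConfiguration, ManagedOnlineDeployment

deployment = ManagedOnlineDeployment(
    name="sklearn-diabetes-custom",
    endpoint_name="my-endpoint",
    model="azureml:sklearn-diabetes@latest",
    environment=environment,  # inline environment defined in the previous step
    code_configuration=CodeConfiguration(
        code="sklearn-diabetes/src",
        scoring_script="score.py",
    ),
    instance_type="Standard_F2s_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()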
    
  5. Once your deployment completes, it's ready to serve requests. One way to test the deployment is by using a sample request file along with the invoke method.

    sample-request-sklearn.json

     {"input_data": {
         "columns": [
           "age",
           "sex",
           "bmi",
           "bp",
           "s1",
           "s2",
           "s3",
           "s4",
           "s5",
           "s6"
         ],
         "data": [
           [ 1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0 ],
           [ 10.0,2.0,9.0,8.0,7.0,6.0,5.0,4.0,3.0,2.0]
         ],
         "index": [0,1]
       }}
    

    Submit a request to the endpoint as follows:

az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/ncd/sample-request-sklearn.json

The response will be similar to the following text:

{
    "predictions": [ 
    11633.100167144921,
    8522.117402884991
    ]
}
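
Alternatively, you can call the scoring endpoint over plain REST. This sketch assumes you've already retrieved the scoring URI and an endpoint key (for example, with az ml online-endpoint show and az ml online-endpoint get-credentials); both values below are placeholders:

# Sketch: invoke the scoring endpoint over REST. URI and key are placeholders.
import json

import requests

scoring_uri = "https://<YOUR_ENDPOINT_NAME>.<region>.inference.ml.azure.com/score"
key = "<ENDPOINT_KEY>"

with open("endpoints/online/ncd/sample-request-sklearn.json") as f:
    payload = json.load(f)

response = requests.post(
    scoring_uri,
    headers={
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    },
    json=payload,
)
print(response.json())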

Warning

MLflow 2.0 advisory: In MLflow 1.X, the predictions key will be missing.

Clean up resources

Once you're done using the endpoint, delete its associated resources:

az ml online-endpoint delete --name $ENDPOINT_NAME --yes