Customize outputs in batch deployments

APPLIES TO: Azure CLI ml extension v2 (current) Python SDK azure-ai-ml v2 (current)

This guide explains how to create deployments that generate custom outputs and files. Sometimes you need more control over what's written as output from batch inference jobs. These cases include the following situations:

  • You need to control how predictions are written in the output. For instance, you want to append the prediction to the original data if the data is tabular.
  • You need to write your predictions in a different file format than the one supported out-of-the-box by batch deployments.
  • Your model is a generative model that can't write the output in a tabular format. For instance, models that produce images as outputs.
  • Your model produces multiple tabular files instead of a single one. For example, models that perform forecasting by considering multiple scenarios.

Batch deployments allow you to take control of the output of the jobs by letting you write directly to the output of the batch deployment job. In this tutorial, you learn how to deploy a model to perform batch inference and write the outputs in parquet format by appending the predictions to the original input data.

About this sample

This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. The model is based on the UCI Heart Disease dataset. The database contains 76 attributes, but this example uses a subset of 14 of them. The model tries to predict the presence of heart disease in a patient, as an integer value from 0 (no presence) to 1 (presence).

The model was trained using an XGBoost classifier, and all the required preprocessing was packaged as a scikit-learn pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
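
The training code itself isn't part of this example, but the following is a minimal sketch of how such an end-to-end pipeline could be assembled. The column names and preprocessing choices are illustrative assumptions, not the actual training code:

# Minimal sketch of an end-to-end scikit-learn pipeline with an XGBoost classifier.
# Column names and preprocessing choices are illustrative assumptions.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from xgboost import XGBClassifier

numeric_features = ["age", "trestbps", "chol", "thalach", "oldpeak"]
categorical_features = ["sex", "cp", "thal"]

preprocess = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), numeric_features),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_features),
    ]
)

model = Pipeline(
    steps=[
        ("preprocess", preprocess),
        ("classifier", XGBClassifier()),
    ]
)

# model.fit(X_train, y_train) produces the end-to-end pipeline that is later
# pickled and registered as the model used in this example.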

The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo and then change directories to the folder:

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli

The files for this example are in:

cd endpoints/batch/deploy-models/custom-outputs-parquet

Follow along in a Jupyter notebook

There's a Jupyter notebook that you can use to follow this example. In the cloned repository, open the notebook called custom-output-batch.ipynb.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a trial account before you begin. Try the free or paid version of Azure Machine Learning.

  • An Azure Machine Learning workspace. To create a workspace, see Manage Azure Machine Learning workspaces.

  • Ensure that you have the following permissions in the Machine Learning workspace:

    • Create or manage batch endpoints and deployments: Use an Owner, Contributor, or Custom role that allows Microsoft.MachineLearningServices/workspaces/batchEndpoints/*.
    • Create Azure Resource Manager deployments in the workspace resource group: Use an Owner, Contributor, or Custom role that allows Microsoft.Resources/deployments/write in the resource group where the workspace is deployed.
  • Install the following software to work with Machine Learning:

    Run the following command to install the Azure CLI and the ml extension for Azure Machine Learning:

    az extension add -n ml
    

    Pipeline component deployments for Batch Endpoints were introduced in version 2.7 of the ml extension for the Azure CLI. Use the az extension update --name ml command to get the latest version.


Connect to your workspace

The workspace is the top-level resource for Machine Learning. It provides a centralized place to work with all artifacts you create when you use Machine Learning. In this section, you connect to the workspace where you perform your deployment tasks.

In the following command, enter the values for your subscription ID, workspace, location, and resource group:

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
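
If you use the Python SDK (azure-ai-ml v2) instead, connecting to the workspace might look like the following sketch. The placeholder values are the same ones used in the CLI command:

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Connect to the workspace by using the same placeholder values as above.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)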

Create a batch deployment with a custom output

In this example, you create a deployment that can write directly to the output folder of the batch deployment job. The deployment uses this feature to write custom parquet files.

Register the model

You can only deploy registered models using a batch endpoint. In this case, you already have a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.

set -e

# <set_variables>
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
# </set_variables>

# <name_endpoint>
ENDPOINT_NAME="heart-classifier-custom"
# </name_endpoint>

# The following code ensures the created endpoint has a unique name
ENDPOINT_SUFFIX=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w ${1:-5} | head -n 1)
ENDPOINT_NAME="$ENDPOINT_NAME-$ENDPOINT_SUFFIX"

# <register_model>
MODEL_NAME='heart-classifier-sklpipe'
az ml model create --name $MODEL_NAME --type "custom_model" --path "model"
# </register_model>

echo "Creating compute"
# <create_compute>
az ml compute create -n batch-cluster --type amlcompute --min-instances 0 --max-instances 5
# </create_compute>

echo "Creating batch endpoint $ENDPOINT_NAME"
# <create_endpoint>
az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
# </create_endpoint>

echo "Showing details of the batch endpoint"
# <query_endpoint>
az ml batch-endpoint show --name $ENDPOINT_NAME
# </query_endpoint>

echo "Creating batch deployment $DEPLOYMENT_NAME for endpoint $ENDPOINT_NAME"
# <create_deployment>
az ml batch-deployment create --file deployment.yml --endpoint-name $ENDPOINT_NAME --set-default
# </create_deployment>

echo "Update the batch deployment as default for the endpoint"
# <set_default_deployment>
DEPLOYMENT_NAME="classifier-xgboost-custom"
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
# </set_default_deployment>

echo "Showing details of the batch deployment"
# <query_deployment>
az ml batch-deployment show --name $DEPLOYMENT_NAME --endpoint-name $ENDPOINT_NAME
# </query_deployment>

echo "Invoking batch endpoint"
# <start_batch_scoring_job>
JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.chinacloudapi.cn/data/heart-disease-uci/data --query name -o tsv)
# </start_batch_scoring_job>

echo "Showing job detail"
# <show_job_in_studio>
az ml job show -n $JOB_NAME --web
# </show_job_in_studio>

echo "List all jobs under the batch deployment"
# <list_all_jobs>
az ml batch-deployment list-jobs --name $DEPLOYMENT_NAME --endpoint-name $ENDPOINT_NAME --query [].name
# </list_all_jobs>

echo "Stream job logs to console"
# <stream_job_logs>
az ml job stream -n $JOB_NAME
# </stream_job_logs>

# <check_job_status>
STATUS=$(az ml job show -n $JOB_NAME --query status -o tsv)
echo $STATUS
if [[ $STATUS == "Completed" ]]
then
  echo "Job completed"
elif [[ $STATUS ==  "Failed" ]]
then
  echo "Job failed"
  exit 1
else 
  echo "Job status not failed or completed"
  exit 2
fi
# </check_job_status>

echo "Download scores to local path"
# <download_outputs>
az ml job download --name $JOB_NAME --output-name score --download-path ./
# </download_outputs>

echo "Delete resources"
# <delete_endpoint>
az ml batch-endpoint delete --name $ENDPOINT_NAME --yes
# </delete_endpoint>

Create a scoring script

You need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. You also want to write directly to the output folder of the job. In summary, the proposed scoring script does the following:

  1. Reads the input data as CSV files.
  2. Runs the model's predict function over the input data.
  3. Appends the predictions to a pandas.DataFrame along with the input data.
  4. Writes the data to a file named after the input file, but in parquet format.

code/batch_driver.py

import os
import pickle
import glob
import pandas as pd
from pathlib import Path
from typing import List


def init():
    global model
    global output_path

    # AZUREML_BI_OUTPUT_PATH is the path to the output folder of the job
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]

    # AZUREML_MODEL_DIR is an environment variable created during deployment.
    # It is the path to the folder that contains the registered model.
    model_path = os.environ["AZUREML_MODEL_DIR"]
    model_file = glob.glob(f"{model_path}/*/*.pkl")[-1]

    with open(model_file, "rb") as file:
        model = pickle.load(file)


def run(mini_batch: List[str]):
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        pred = model.predict(data)

        data["prediction"] = pred

        output_file_name = Path(file_path).stem
        output_file_path = os.path.join(output_path, output_file_name + ".parquet")
        data.to_parquet(output_file_path)

    return mini_batch

Remarks:

  • Notice how the environment variable AZUREML_BI_OUTPUT_PATH is used to get access to the output path of the deployment job.
  • The init() function populates a global variable called output_path that can be used later to know where to write.
  • The run method returns a list of the processed files. It's required for the run function to return a list or a pandas.DataFrame object.

Warning

Take into account that all the batch executors have write access to this path at the same time, so you need to account for concurrency. In this case, ensure that each executor writes its own file by using the input file name as the name of the output file.
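
Before you deploy, you can exercise the scoring script locally. The following is a minimal sketch of such a smoke test; the folder layout and the sample file name are assumptions based on the repository structure described earlier:

import os
import sys
import tempfile

# Point the environment variables that the deployment normally sets at local folders.
# AZUREML_MODEL_DIR is set to "." so that the */*.pkl glob resolves to model/*.pkl.
os.environ["AZUREML_MODEL_DIR"] = "."
os.environ["AZUREML_BI_OUTPUT_PATH"] = tempfile.mkdtemp()

# Import the scoring script shown above (code/batch_driver.py).
sys.path.insert(0, "code")
import batch_driver

batch_driver.init()
result = batch_driver.run(["data/heart-unlabeled-0.csv"])  # hypothetical sample file

print("Processed files:", result)
print("Files written:", os.listdir(os.environ["AZUREML_BI_OUTPUT_PATH"]))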

Create the endpoint

You now create the batch endpoint where the model is deployed.

  1. Decide on the name of the endpoint. The name of the endpoint appears in the URI associated with your endpoint, so batch endpoint names need to be unique within an Azure region. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2.

    In this case, place the name of the endpoint in a variable so you can easily reference it later.

     ENDPOINT_NAME="heart-classifier-custom"
    
  2. Configure your batch endpoint.

    The following YAML file defines a batch endpoint:

    endpoint.yml

     $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
     name: heart-classifier-batch
     description: A heart condition classifier for batch inference
     auth_mode: aad_token
    
  3. Create the endpoint:

    az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
    

Create the deployment

Follow the next steps to create a deployment using the previous scoring script:

  1. First, create an environment where the scoring script can be executed:

    No extra step is required for the Azure Machine Learning CLI. The environment definition is included in the deployment file.

    environment:
      name: batch-mlflow-xgboost
      image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
      conda_file: environment/conda.yaml
    
  2. Create the deployment. Notice that output_action is now set to summary_only.

    Note

    This example assumes you have a compute cluster with name batch-cluster. Change that name accordingly.

    To create a new deployment under the created endpoint, create a YAML configuration like the following. You can check the full batch endpoint YAML schema for extra properties.

    $schema: https://azuremlschemas.azureedge.net/latest/modelBatchDeployment.schema.json
    endpoint_name: heart-classifier-batch
    name: classifier-xgboost-custom
    description: A heart condition classifier based on XGBoost and Scikit-Learn pipelines that appends predictions to parquet files.
    type: model
    model: azureml:heart-classifier-sklpipe@latest
    environment:
      name: batch-mlflow-xgboost
      image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
      conda_file: environment/conda.yaml
    code_configuration:
      code: code
      scoring_script: batch_driver.py
    compute: azureml:batch-cluster
    resources:
      instance_count: 2
    settings:
      max_concurrency_per_instance: 2
      mini_batch_size: 2
      output_action: summary_only
      retry_settings:
        max_retries: 3
        timeout: 300
      error_threshold: -1
      logging_level: info
    

    Then, create the deployment with the following command:

    DEPLOYMENT_NAME="classifier-xgboost-parquet"
    az ml batch-deployment create -f endpoint.yml
    
  3. At this point, the batch endpoint is ready to be used.

Test the deployment

To test your endpoint, use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that's located in the cloud and is accessible from the Azure Machine Learning workspace. In this example, you upload the sample data to an Azure Machine Learning data store and register it as a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
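
If the data asset doesn't exist yet, you can register the sample data before you invoke the endpoint. The following Python SDK sketch shows one way to do it; the local folder path and the use of a workspace configuration file are assumptions:

from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data
from azure.identity import DefaultAzureCredential

# Connect by using a workspace configuration file
# (or pass the subscription, resource group, and workspace explicitly as shown earlier).
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Register the folder that contains the unlabeled CSV files as a data asset.
# The local path "data" is an assumption based on the example layout.
heart_dataset_unlabeled = Data(
    name="heart-dataset-unlabeled",
    type=AssetTypes.URI_FOLDER,
    path="data",
    description="Unlabeled heart disease data for batch scoring.",
)
ml_client.data.create_or_update(heart_dataset_unlabeled)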

  1. Invoke the endpoint by using the data asset:

    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
    

    Note

    The jq utility might not be installed on your system. You can find installation instructions on GitHub.

  2. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:

    az ml job show --name $JOB_NAME
    

Analyze the outputs

The job generates a named output called score where all the generated files are placed. Because the scoring script writes one file directly to the output folder for each input file, you can expect the same number of output files as input files. In this particular example, the output files have the same names as the inputs, but they have a parquet extension.

Note

Notice that a file predictions.csv is also included in the output folder. This file contains the summary of the processed files.

To download the predictions of the job, use the job name with the following command:

az ml job download --name $JOB_NAME --output-name score --download-path ./

Once the files are downloaded, you can open them with your favorite tool. The following example loads the predictions into a single pandas DataFrame.

import pandas as pd
import glob

output_files = glob.glob("named-outputs/score/*.parquet")
score = pd.concat((pd.read_parquet(f) for f in output_files))

The output looks as follows:

age    sex    ...    thal          prediction
63     1      ...    fixed         0
67     1      ...    normal        1
67     1      ...    reversible    0
37     1      ...    normal        0

Clean up resources

Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs aren't deleted.

az ml batch-endpoint delete --name $ENDPOINT_NAME --yes