Access the MLflow tracking server from outside Azure Databricks

You might want to log to the Azure Databricks MLflow tracking server from outside Azure Databricks, for example from your own applications or from the MLflow CLI.

This article describes the required configuration steps. Start by installing MLflow and configuring your credentials (Step 1). You can then either configure an application (Step 2) or configure the MLflow CLI (Step 3).

For information about how to launch and log to an open-source tracking server, see the open-source MLflow documentation.

Step 1: Configure your environment

If you don't have an Azure Databricks account, you can try Databricks for free.

To configure your environment to access your Azure Databricks hosted MLflow tracking server:

  1. Install MLflow using pip install mlflow.
  2. Configure authentication. Do one of the following (a short verification sketch follows this list):
    • Generate a REST API token and create a credentials file using databricks configure --token.

    • Specify credentials via environment variables:

      # Configure MLflow to communicate with a Databricks-hosted tracking server
      export MLFLOW_TRACKING_URI=databricks
      # Specify the workspace hostname and token
      export DATABRICKS_HOST="..."
      export DATABRICKS_TOKEN="..."
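
With either authentication option in place, you can verify that MLflow can reach the tracking server. The following is a minimal sketch using MLflow's Python client; it assumes a recent MLflow version (one that provides MlflowClient.search_experiments) and the credentials configured above.

import mlflow
from mlflow.tracking import MlflowClient

# Point MLflow at the Databricks-hosted tracking server.
mlflow.set_tracking_uri("databricks")

# Listing a few experiments confirms that the host and token are accepted.
client = MlflowClient()
for experiment in client.search_experiments(max_results=5):
    print(experiment.name)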
      

Step 2: Configure MLflow applications

Configure MLflow applications to log to Azure Databricks by setting the tracking URI to databricks (or to databricks://<profileName> if you specified a profile name with --profile when you created your credentials file). For example, you can achieve this by setting the MLFLOW_TRACKING_URI environment variable to "databricks".
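
As an illustration, the following sketch logs a run from a standalone Python script. It assumes the credentials from Step 1 are in place; the experiment path, parameter, and metric are placeholders, and mlflow.set_tracking_uri("databricks") is the programmatic equivalent of setting the environment variable.

import mlflow

# Equivalent to setting MLFLOW_TRACKING_URI=databricks
mlflow.set_tracking_uri("databricks")

# The experiment name is a workspace path, as in the CLI example in Step 3.
# Replace <your-username> with your Databricks username.
mlflow.set_experiment("/Users/<your-username>/my-experiment")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.72)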

Step 3: Configure the MLflow CLI

Configure the MLflow CLI to communicate with an Azure Databricks tracking server by setting the MLFLOW_TRACKING_URI environment variable. For example, to create an experiment using the CLI with the tracking URI databricks, run:

# Replace <your-username> with your Databricks username
export MLFLOW_TRACKING_URI=databricks
mlflow experiments create -n /Users/<your-username>/my-experiment
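
Any subsequent MLflow CLI command in the same shell uses the same tracking URI. If you prefer to create the experiment from Python instead of the CLI, a minimal sketch under the same assumptions (the path placeholder is illustrative) is:

import mlflow

mlflow.set_tracking_uri("databricks")

# Programmatic equivalent of the CLI command above; returns the new experiment's ID.
experiment_id = mlflow.create_experiment("/Users/<your-username>/my-experiment")
print(experiment_id)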