Develop a job on Azure Databricks by using Databricks Asset Bundles

Databricks Asset Bundles, also known simply as bundles, enable you to programmatically validate, deploy, and run Azure Databricks resources such as jobs. You can also use bundles to programmatically manage Delta Live Tables pipelines and work with MLOps Stacks. See What are Databricks Asset Bundles?.

This article describes steps that you can complete from a local development setup to use a bundle that programmatically manages a job. See Introduction to Azure Databricks Workflows.

If you have existing jobs that were created by using the Azure Databricks Workflows user interface or API that you want to move to bundles, then you must recreate them as bundle configuration files. To do so, Databricks recommends that you first create a bundle by using the steps below and then validate whether the bundle works. You can then add job definitions, notebooks, and other sources to the bundle. See Add an existing job definition to a bundle.

In addition to using the Databricks CLI to run a job deployed by a bundle, you can also view and run these jobs in the Azure Databricks Jobs UI. See View and run a job created with a Databricks Asset Bundle.

Requirements

  • Databricks CLI version 0.218.0 or above. To check your installed version of the Databricks CLI, run the command databricks -v. To install the Databricks CLI, see Install or update the Databricks CLI.
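
If the Databricks CLI is not yet installed, the following commands sketch two common ways to install it; the exact method for your platform is described in Install or update the Databricks CLI:

    # Linux or macOS, using the official install script (assumes curl is available):
    curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh

    # macOS, using Homebrew:
    brew tap databricks/tap
    brew install databricks

    # Verify the installed version:
    databricks -v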

Decision: Create the bundle by using a template or manually

Decide whether you want to create an example bundle using a template or manually:

Create the bundle by using a template

In these steps, you create the bundle by using the Azure Databricks default bundle template for Python, which consists of a notebook or Python code, paired with the definition of a job to run it. You then validate, deploy, and run the deployed job within your Azure Databricks workspace. The remote workspace must have workspace files enabled. See What are workspace files?.

Step 1: Set up authentication

For more information about how to set up authentication, see databricks authentication.
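
For example, one common approach is OAuth user-to-machine (U2M) authentication through the Databricks CLI. The exact prompts depend on your CLI version and the authentication type that you choose, so treat the following as a sketch:

    # Start an OAuth U2M login for your workspace; a browser window opens to complete sign-in,
    # and the CLI saves a configuration profile for later commands.
    # Replace <workspace-url> with your per-workspace URL.
    databricks auth login --host <workspace-url>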

Step 2: Create the bundle

A bundle contains the artifacts you want to deploy and the settings for the resources you want to run.

  1. Use your terminal or command prompt to switch to a directory on your local development machine that will contain the template's generated bundle.

  2. Use the Databricks CLI to run the bundle init command:

    databricks bundle init
    
  3. For Template to use, leave the default value of default-python by pressing Enter.

  4. For Unique name for this project, leave the default value of my_project, or type a different value, and then press Enter. This determines the name of the root directory for this bundle. This root directory is created within your current working directory.

  5. For Include a stub (sample) notebook, select yes and press Enter.

  6. For Include a stub (sample) DLT pipeline, select no and press Enter. This instructs the Databricks CLI to not define a sample Delta Live Tables pipeline in your bundle.

  7. For Include a stub (sample) Python package, select no and press Enter. This instructs the Databricks CLI to not add sample Python wheel package files or related build instructions to your bundle.
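
If you prefer to script this step instead of answering the prompts interactively, the bundle init command also accepts the template name as an argument and a --config-file option that points to a JSON file of prompt answers. The file name init-config.json below is hypothetical, and the parameter names it would contain are defined by the template and can differ between CLI versions, so treat this as a sketch:

    # Non-interactive initialization; init-config.json is a hypothetical file that supplies
    # answers to the prompts shown above (project name and the yes/no choices).
    databricks bundle init default-python --config-file ./init-config.json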

Step 3: Explore the bundle

To view the files that the template generated, switch to the root directory of your newly created bundle and open this directory with your preferred IDE, for example Visual Studio Code. Files of particular interest include the following:

  • databricks.yml: This file specifies the bundle's programmatic name, includes a reference to the job definition, and specifies settings about the target workspace.
  • resources/<project-name>_job.yml: This file specifies the job's settings, including a default notebook task.
  • src/notebook.ipynb: This file is a sample notebook that, when run, simply initializes an RDD that contains the numbers 1 through 10.

For customizing jobs, the mappings within a job declaration correspond to the create job operation's request payload as defined in POST /api/2.1/jobs/create in the REST API reference, expressed in YAML format.
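
For example, to add a nightly schedule and a failure notification email to the template-generated job, you might add mappings such as the following under the existing job entry in resources/<project-name>_job.yml. This is a sketch only: my_project_job reflects the default project name from Step 2, and the cron expression and email address are placeholders; the schedule and email_notifications mappings mirror the corresponding fields of the create job request payload:

resources:
  jobs:
    my_project_job:
      # ...existing name, tasks, and job_clusters mappings remain here...
      schedule:
        quartz_cron_expression: "0 0 2 * * ?" # run nightly at 02:00
        timezone_id: "UTC"
        pause_status: "UNPAUSED"
      email_notifications:
        on_failure:
          - someone@example.com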

Tip

You can define, combine, and override the settings for new job clusters in bundles by using the techniques described in Override cluster settings in Databricks Asset Bundles.

Step 4: Validate the project's bundle configuration file

In this step, you check whether the bundle configuration is valid.

  1. From the root directory, use the Databricks CLI to run the bundle validate command, as follows:

    databricks bundle validate
    
  2. If a summary of the bundle configuration is returned, then the validation succeeded. If any errors are returned, fix the errors, and then repeat this step.

If you make any changes to your bundle after this step, you should repeat this step to check whether your bundle configuration is still valid.

Step 5: Deploy the local project to the remote workspace

In this step, you deploy the local notebook to your remote Azure Databricks workspace and create the Azure Databricks job within your workspace.

  1. From the bundle root, use the Databricks CLI to run the bundle deploy command as follows:

    databricks bundle deploy -t dev
    
  2. Check whether the local notebook was deployed: In your Azure Databricks workspace's sidebar, click Workspace.

  3. Click into the Users > <your-username> > .bundle > <project-name> > dev > files > src folder. The notebook should be in this folder.

  4. Check whether the job was created: In your Azure Databricks workspace's sidebar, click Workflows.

  5. On the Jobs tab, click [dev <your-username>] <project-name>_job.

  6. Click the Tasks tab. There should be one task: notebook_task.

If you make any changes to your bundle after this step, you should repeat steps 4-5 to check whether your bundle configuration is still valid and then redeploy the project.

Step 6: Run the deployed project

In this step, you run the Azure Databricks job in your workspace.

  1. From the root directory, use the Databricks CLI to run the bundle run command, as follows, replacing <project-name> with the name of your project from Step 2:

    databricks bundle run -t dev <project-name>_job
    
  2. Copy the value of Run URL that appears in your terminal and paste this value into your web browser to open your Azure Databricks workspace.

  3. In your Azure Databricks workspace, after the job task completes successfully and shows a green title bar, click the job task to see the results.

If you make any changes to your bundle after this step, you should repeat steps 4-6 to check whether your bundle configuration is still valid, redeploy the project, and run the redeployed project.

Step 7: Clean up

In this step, you delete the deployed notebook and the job from your workspace.

  1. From the root directory, use the Databricks CLI to run the bundle destroy command, as follows:

    databricks bundle destroy
    
  2. Confirm the job deletion request: When prompted to permanently destroy resources, type y and press Enter.

  3. Confirm the notebook deletion request: When prompted to permanently destroy the previously deployed folder and all of its files, type y and press Enter.

  4. If you also want to delete the bundle from your development machine, you can now delete the local directory from Step 2.

You have reached the end of the steps for creating a bundle by using a template.

Create the bundle manually

In these steps, you create a bundle from scratch. This simple bundle consists of two notebooks and the definition of an Azure Databricks job to run these notebooks. You then validate, deploy, and run the deployed notebooks from the job within your Azure Databricks workspace. These steps automate the quickstart titled Create your first workflow with an Azure Databricks job.

Step 1: Create the bundle

A bundle contains the artifacts you want to deploy and the settings for the resources you want to run.

  1. Create or identify an empty directory on your development machine.
  2. Switch to the empty directory in your terminal, or open the empty directory in your IDE.

Tip

Your empty directory could be associated with a cloned repository that is managed by a Git provider. This enables you to manage your bundle with external version control and to more easily collaborate with other developers and IT professionals on your project. However, to help simplify this demonstration, a cloned repo is not used here.

If you choose to clone a repo for this demo, Databricks recommends that the repo is empty or has only basic files in it such as README and .gitignore. Otherwise, any pre-existing files in the repo might be unnecessarily synchronized to your Azure Databricks workspace.
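
If you do use a repo, a minimal .gitignore such as the following keeps local-only state out of version control. The .databricks entry covers the folder in which the Databricks CLI stores local bundle deployment state; the remaining entries are common Python build artifacts and are only suggestions:

    .databricks/
    __pycache__/
    *.egg-info/
    .venv/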

Step 2: Add notebooks to the project

In this step, you add two notebooks to your project. The first notebook gets a list of trending baby names since 2007 from the New York State Department of Health's public data sources. See Baby Names: Trending by Name: Beginning 2007 on the department's website. The first notebook then saves this data to your Azure Databricks Unity Catalog volume named my-volume in a schema named default within a catalog named main. The second notebook queries the saved data and displays aggregated counts of the baby names by first name and sex for 2014.

  1. From the directory's root, create the first notebook, a file named retrieve-baby-names.py.

  2. Add the following code to the retrieve-baby-names.py file:

    # Databricks notebook source
    import requests

    # Download the baby names CSV from the New York State Department of Health.
    response = requests.get('http://health.data.ny.gov/api/views/jxy9-yhdk/rows.csv')
    csvfile = response.content.decode('utf-8')

    # Save the file to the Unity Catalog volume, overwriting any existing copy.
    dbutils.fs.put("/Volumes/main/default/my-volume/babynames.csv", csvfile, True)
    
  3. Create the second notebook, a file named filter-baby-names.py, in the same directory.

  4. Add the following code to the filter-baby-names.py file:

    # Databricks notebook source
    # Load the saved CSV from the Unity Catalog volume into a DataFrame.
    babynames = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("/Volumes/main/default/my-volume/babynames.csv")
    babynames.createOrReplaceTempView("babynames_table")

    # Build a dropdown widget of the available years, defaulting to 2014.
    years = spark.sql("select distinct(Year) from babynames_table").toPandas()['Year'].tolist()
    years.sort()
    dbutils.widgets.dropdown("year", "2014", [str(x) for x in years])

    # Show the rows for the selected year.
    display(babynames.filter(babynames.Year == dbutils.widgets.get("year")))
    

Step 3: Add a bundle configuration schema file to the project

If you use an IDE that supports YAML files and JSON schema files, such as Visual Studio Code, PyCharm Professional, or IntelliJ IDEA Ultimate, you can use your IDE not only to create the bundle configuration schema file but also to check your project's bundle configuration file's syntax and formatting and to provide code completion hints, as follows. Note that while the bundle configuration file that you create later in Step 5 is YAML-based, the bundle configuration schema file in this step is JSON-based.

Visual Studio Code

  1. Add YAML language server support to Visual Studio Code, for example by installing the YAML extension from the Visual Studio Code Marketplace.

  2. Generate the Databricks Asset Bundle configuration JSON schema file by using the Databricks CLI to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:

    databricks bundle schema > bundle_config_schema.json
    
  3. Note that later in Step 5, you will add the following comment to the beginning of your bundle configuration file, which associates your bundle configuration file with the specified JSON schema file:

    # yaml-language-server: $schema=bundle_config_schema.json
    

    Note

    In the preceding comment, if your Databricks Asset Bundle configuration JSON schema file is in a different path, replace bundle_config_schema.json with the full path to your schema file.

PyCharm Professional

  1. Generate the Databricks Asset Bundle configuration JSON schema file by using the Databricks CLI to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:

    databricks bundle schema > bundle_config_schema.json
    
  2. Configure PyCharm to recognize the bundle configuration JSON schema file, and then complete the JSON schema mapping, by following the instructions in Configure a custom JSON schema.

  3. Note that later in Step 5, you will use PyCharm to create or open a bundle configuration file. By convention, this file is named databricks.yml.

IntelliJ IDEA Ultimate

  1. Generate the Databricks Asset Bundle configuration JSON schema file by using the Databricks CLI to run the bundle schema command and redirect the output to a JSON file. For example, generate a file named bundle_config_schema.json within the current directory, as follows:

    databricks bundle schema > bundle_config_schema.json
    
  2. Configure IntelliJ IDEA to recognize the bundle configuration JSON schema file, and then complete the JSON schema mapping, by following the instructions in Configure a custom JSON schema.

  3. Note that later in Step 5, you will use IntelliJ IDEA to create or open a bundle configuration file. By convention, this file is named databricks.yml.

Step 4: Set up authentication

For more information about how to set up authentication, see databricks authentication.
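
For example, if you authenticate with a configuration profile, a minimal .databrickscfg entry might look like the following. The fields that the Databricks CLI writes depend on the authentication type that you choose, so treat this as a sketch; the host value is the URL that the bundle configuration file in the next step must match:

    [DEFAULT]
    host = <workspace-url>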

Step 5: Add a bundle configuration file to the project

In this step, you define how you want to deploy and run the two notebooks. For this demo, you want to use an Azure Databricks job to run the first notebook and then the second notebook. Because the first notebook saves the data and the second notebook queries the saved data, you want the first notebook to finish running before the second notebook starts. You model these objectives within a bundle configuration file in your project.

  1. From the directory's root, create the bundle configuration file, a file named databricks.yml.
  2. Add the following code to the databricks.yml file, replacing <workspace-url> with your per-workspace URL, for example https://adb-1234567890123456.7.databricks.azure.cn. This URL must match the one in your .databrickscfg file:

Tip

The first line, which starts with # yaml-language-server, is needed only if your IDE supports associating YAML files with a JSON schema. See Step 3 earlier for details.

# yaml-language-server: $schema=bundle_config_schema.json
bundle:
  name: baby-names

resources:
  jobs:
    retrieve-filter-baby-names-job:
      name: retrieve-filter-baby-names-job
      job_clusters:
        - job_cluster_key: common-cluster
          new_cluster:
            spark_version: 12.2.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 1
      tasks:
        - task_key: retrieve-baby-names-task
          job_cluster_key: common-cluster
          notebook_task:
            notebook_path: ./retrieve-baby-names.py
        - task_key: filter-baby-names-task
          depends_on:
            - task_key: retrieve-baby-names-task
          job_cluster_key: common-cluster
          notebook_task:
            notebook_path: ./filter-baby-names.py

targets:
  development:
    workspace:
      host: <workspace-url>

For customizing jobs, the mappings within a job declaration correspond to the create job operation's request payload as defined in POST /api/2.1/jobs/create in the REST API reference, expressed in YAML format.

Tip

You can define, combine, and override the settings for new job clusters in bundles by using the techniques described in Override cluster settings in Databricks Asset Bundles.
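
For example, a sketch of that technique applied to this demo: keep the shared common-cluster definition at the top level, and override only the worker count for the development target by repeating the job_clusters mapping with the same job_cluster_key. The num_workers value here is purely illustrative:

targets:
  development:
    workspace:
      host: <workspace-url>
    resources:
      jobs:
        retrieve-filter-baby-names-job:
          job_clusters:
            - job_cluster_key: common-cluster # joins with the top-level definition
              new_cluster:
                num_workers: 2 # overrides the top-level value for this target only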

Step 6: Validate the project's bundle configuration file

In this step, you check whether the bundle configuration is valid.

  1. Use the Databricks CLI to run the bundle validate command, as follows:

    databricks bundle validate
    
  2. If a summary of the bundle configuration is returned, then the validation succeeded. If any errors are returned, fix the errors, and then repeat this step.

If you make any changes to your bundle after this step, you should repeat this step to check whether your bundle configuration is still valid.

Step 7: Deploy the local project to the remote workspace

In this step, you deploy the two local notebooks to your remote Azure Databricks workspace and create the Azure Databricks job within your workspace.

  1. Use the Databricks CLI to run the bundle deploy command as follows:

    databricks bundle deploy -t development
    
  2. Check whether the two local notebooks were deployed: In your Azure Databricks workspace's sidebar, click Workspace.

  3. Click into the Users > <your-username> > .bundle > baby-names > development > files folder. The two notebooks should be in this folder.

  4. Check whether the job was created: In your Azure Databricks workspace's sidebar, click Workflows.

  5. On the Jobs tab, click retrieve-filter-baby-names-job.

  6. Click the Tasks tab. There should be two tasks: retrieve-baby-names-task and filter-baby-names-task.

If you make any changes to your bundle after this step, you should repeat steps 6-7 to check whether your bundle configuration is still valid and then redeploy the project.

Step 8: Run the deployed project

In this step, you run the Azure Databricks job in your workspace.

  1. Use the Databricks CLI to run the bundle run command, as follows:

    databricks bundle run -t development retrieve-filter-baby-names-job
    
  2. Copy the value of Run URL that appears in your terminal and paste this value into your web browser to open your Azure Databricks workspace.

  3. In your Azure Databricks workspace, after the two tasks complete successfully and show green title bars, click the filter-baby-names-task task to see the query results.

If you make any changes to your bundle after this step, you should repeat steps 6-8 to check whether your bundle configuration is still valid, redeploy the project, and run the redeployed project.

Step 9: Clean up

In this step, you delete the two deployed notebooks and the job from your workspace.

  1. Use the Databricks CLI to run the bundle destroy command, as follows:

    databricks bundle destroy
    
  2. Confirm the job deletion request: When prompted to permanently destroy resources, type y and press Enter.

  3. Confirm the notebooks deletion request: When prompted to permanently destroy the previously deployed folder and all of its files, type y and press Enter.

Running the bundle destroy command deletes only the deployed job and the folder containing the two deployed notebooks. This command does not delete any side effects, such as the babynames.csv file that the first notebook created. To delete the babynames.csv file, do the following:

  1. In the sidebar of your Azure Databricks workspace, click Catalog.
  2. Browse to the main catalog, then to the default schema, and then to the my-volume volume.
  3. Click the dropdown arrow next to babynames.csv, and click Delete.
  4. If you also want to delete the bundle from your development machine, you can now delete the local directory from Step 1.
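
As an alternative to the UI steps 1 through 3 above, you can delete the file with the Databricks CLI, assuming that your CLI version supports Unity Catalog volume paths with its fs commands:

    databricks fs rm dbfs:/Volumes/main/default/my-volume/babynames.csv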

Add an existing job definition to a bundle

You can use an existing job definition as a basis to define a new job in a bundle configuration file. To do this, complete the following steps.

Note

The following steps create a new job that has the same settings as the existing job. However, the new job has a different job ID than the existing job. You cannot automatically import an existing job ID into a bundle.

Step 1: Get the existing job definition in YAML format

In this step, use the Azure Databricks workspace user interface to get the YAML representation of the existing job definition.

  1. In your Azure Databricks workspace's sidebar, click Workflows.
  2. On the Jobs tab, click your job's Name link.
  3. Next to the Run now button, click the ellipsis, and then click View YAML.
  4. On the Create tab, copy the job definition's YAML to your local clipboard by clicking Copy.

Step 2: Add the job definition YAML to a bundle configuration file

In your bundle configuration file, add the YAML that you copied in the previous step to one of the following locations labelled <job-yaml-can-go-here>, as follows:

resources:
  jobs:
    <some-unique-programmatic-identifier-for-this-job>:
      <job-yaml-can-go-here>

targets:
  <some-unique-programmatic-identifier-for-this-target>:
    resources:
      jobs:
        <some-unique-programmatic-identifier-for-this-job>:
          <job-yaml-can-go-here>

Step 3: Add notebooks, Python files, and other artifacts to the bundle

Any Python files and notebooks that are referenced in the existing job should be moved to the bundle's sources.

For better compatibility with bundles, notebooks should use the IPython notebook format (.ipynb). If you develop the bundle locally, you can export an existing notebook from an Azure Databricks workspace into the .ipynb format by clicking File > Export > IPython Notebook from the Azure Databricks notebook user interface. By convention, you should then put the downloaded notebook into the src/ directory in your bundle.
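
Alternatively, you might be able to export a notebook with the Databricks CLI instead of the notebook UI. The command below is a sketch that assumes the workspace export command, its JUPYTER format option, and the --file flag are available in your CLI version; the workspace path is a placeholder:

    databricks workspace export /Users/<your-username>/hello --format JUPYTER --file ./src/hello.ipynb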

After you add your notebooks, Python files, and other artifacts to the bundle, make sure that your job definition references them. For example, for a notebook named hello.ipynb in a src/ directory that sits in the same folder as the bundle configuration file that references it, the job definition might be expressed as follows:

resources:
  jobs:
    hello-job:
      name: hello-job
      tasks:
      - task_key: hello-task
        notebook_task:
          notebook_path: ./src/hello.ipynb

Step 4: Validate, deploy, and run the new job

  1. Validate that the bundle's configuration files are syntactically correct by running the following command:

    databricks bundle validate
    
  2. Deploy the bundle by running the following command. In this command, replace <target-identifier> with the unique programmatic identifier for the target from the bundle configuration:

    databricks bundle deploy -t <target-identifier>
    
  3. Run the job with the following command.

    databricks bundle run -t <target-identifier> <job-identifier>
    
    • Replace <target-identifier> with the unique programmatic identifier for the target from the bundle configuration.
    • Replace <job-identifier> with the unique programmatic identifier for the job from the bundle configuration.