Run a CI/CD workflow with a Databricks Asset Bundle and GitHub Actions

This article describes how to run a CI/CD (continuous integration/continuous deployment) workflow in GitHub with GitHub Actions and a Databricks Asset Bundle. See What is Databricks Asset Bundles?

You can use GitHub Actions along with Databricks CLI bundle commands to automate, customize, and run your CI/CD workflows from within your GitHub repositories.
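Before wiring these commands into GitHub Actions, it can help to see them run locally. The following is a sketch only, assuming the Databricks CLI is installed, authentication is configured (for example, DATABRICKS_HOST and DATABRICKS_TOKEN are set), and a bundle configuration file is in the current directory; it uses the same DATABRICKS_BUNDLE_ENV mechanism as the workflows in this article:

```shell
# A local pass over the commands the workflows below automate.
export DATABRICKS_BUNDLE_ENV=qa   # select the "qa" target from the bundle config

databricks bundle validate        # check the bundle configuration for errors
databricks bundle deploy          # deploy the bundle to the selected target
databricks bundle run my-job      # run the deployed job named "my-job"
```

Note that `databricks bundle deploy` performs validation as part of deployment, so an explicit `validate` step is optional but useful for fast feedback.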

You can add GitHub Actions YAML files such as the following to your repo's .github/workflows directory. The following example GitHub Actions YAML file validates, deploys, and runs the specified job in the bundle within a pre-production target named "qa", as defined in the bundle configuration file. This example GitHub Actions YAML file relies on the following:

  • A bundle configuration file in the root of the repository, explicitly declared through the GitHub Actions YAML file's setting working-directory: . (This setting can be omitted if the bundle configuration file is already in the root of the repository.) This bundle configuration file defines an Azure Databricks workflow named my-job and a target named qa. See Databricks Asset Bundle configurations.
  • A GitHub secret named SP_TOKEN, representing the Azure Databricks access token for the Azure Databricks service principal that is associated with the Azure Databricks workspace to which this bundle is deployed and run. See Encrypted secrets.
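For reference, the bundle configuration file that both workflows in this article assume could look roughly like the sketch below. Only the my-job, qa, and prod names come from this article; the bundle name, task key, notebook path, and workspace URLs are illustrative placeholders:

```yaml
# databricks.yml (sketch) -- defines the "my-job" workflow and the
# "qa" and "prod" targets that the GitHub Actions workflows reference.
bundle:
  name: my-bundle                      # illustrative bundle name

resources:
  jobs:
    my-job:
      name: my-job
      tasks:
        - task_key: my-task            # illustrative task key
          notebook_task:
            notebook_path: ./src/my_notebook.py   # illustrative path

targets:
  qa:
    workspace:
      host: https://adb-1111111111111111.1.azuredatabricks.net   # illustrative QA workspace
  prod:
    workspace:
      host: https://adb-2222222222222222.2.azuredatabricks.net   # illustrative prod workspace
```

Setting DATABRICKS_BUNDLE_ENV to qa or prod in a workflow step selects the corresponding entry under targets, so the same repository can deploy to different workspaces from different workflows.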
# This workflow validates, deploys, and runs the specified bundle
# within a pre-production target named "qa".
name: "QA deployment"

# Ensure that only a single job or workflow using the same concurrency group
# runs at a time.
concurrency: 1

# Trigger this workflow whenever a pull request is opened against the repo's
# main branch or an existing pull request's head branch is updated.
on:
  pull_request:
    types:
      - opened
      - synchronize
    branches:
      - main

jobs:
  # Used by the "pipeline_update" job to deploy the bundle.
  # Bundle validation is automatically performed as part of this deployment.
  # If validation fails, this workflow fails.
  deploy:
    name: "Deploy bundle"
    runs-on: ubuntu-latest

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Download the Databricks CLI.
      # See https://github.com/databricks/setup-cli
      - uses: databricks/setup-cli@main

      # Deploy the bundle to the "qa" target as defined
      # in the bundle's settings file.
      - run: databricks bundle deploy
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: qa

  # Validate, deploy, and then run the bundle.
  pipeline_update:
    name: "Run pipeline update"
    runs-on: ubuntu-latest

    # Run the "deploy" job first.
    needs:
      - deploy

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Use the downloaded Databricks CLI.
      - uses: databricks/setup-cli@main

      # Run the Databricks workflow named "my-job" as defined in the
      # bundle that was just deployed.
      - run: databricks bundle run my-job --refresh-all
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: qa

The following GitHub Actions YAML file can exist in the same repo as the preceding file. This file validates, deploys, and runs the specified bundle within a production target named "prod", as defined in the bundle configuration file. This example GitHub Actions YAML file relies on the following:

  • A bundle configuration file in the root of the repository, explicitly declared through the GitHub Actions YAML file's setting working-directory: . (This setting can be omitted if the bundle configuration file is already in the root of the repository.) This bundle configuration file defines an Azure Databricks workflow named my-job and a target named prod. See Databricks Asset Bundle configurations.
  • A GitHub secret named SP_TOKEN, representing the Azure Databricks access token for the Azure Databricks service principal that is associated with the Azure Databricks workspace to which this bundle is deployed and run. See Encrypted secrets.
# This workflow validates, deploys, and runs the specified bundle
# within a production target named "prod".
name: "Production deployment"

# Ensure that only a single job or workflow using the same concurrency group
# runs at a time.
concurrency: 1

# Trigger this workflow whenever a pull request is pushed to the repo's
# main branch.
on:
  push:
    branches:
      - main

jobs:
  deploy:
    name: "Deploy bundle"
    runs-on: ubuntu-latest

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Download the Databricks CLI.
      # See https://github.com/databricks/setup-cli
      - uses: databricks/setup-cli@main

      # Deploy the bundle to the "prod" target as defined
      # in the bundle's settings file.
      - run: databricks bundle deploy
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: prod

  # Validate, deploy, and then run the bundle.
  pipeline_update:
    name: "Run pipeline update"
    runs-on: ubuntu-latest

    # Run the "deploy" job first.
    needs:
      - deploy

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Use the downloaded Databricks CLI.
      - uses: databricks/setup-cli@main

      # Run the Databricks workflow named "my-job" as defined in the
      # bundle that was just deployed.
      - run: databricks bundle run my-job --refresh-all
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: prod

See also