Important
This feature is in Public Preview.
GitHub Actions can be used to trigger CI/CD workflow runs from your GitHub repository and let you automate build, test, and deployment CI/CD pipelines.
This article provides information about GitHub Actions developed for Databricks, along with examples for common use cases. For information about other Databricks CI/CD features and best practices, see CI/CD on Azure Databricks, Best practices and recommended CI/CD workflows on Databricks.
Databricks GitHub Actions
Databricks has developed the following GitHub Actions for your CI/CD workflows on GitHub. Add GitHub Actions YAML files to your repository's `.github/workflows` directory.
Note
This article covers GitHub Actions, which is developed by a third party. To contact the provider, see GitHub Actions support.
| GitHub Action | Description |
|---|---|
| databricks/setup-cli | Composite action that sets up the Databricks CLI in a GitHub Actions workflow. |
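For example, a minimal workflow using this action might look like the following sketch (the file name is hypothetical); it installs the CLI and prints its version to confirm the setup:

```yaml
# .github/workflows/databricks-cli.yml (hypothetical file name)
name: Verify Databricks CLI
on: push
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Composite action from the table above; installs the Databricks CLI.
      - uses: databricks/setup-cli@main
      # Print the installed version to confirm the CLI is on the PATH.
      - run: databricks --version
```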
Run a CI/CD workflow that updates a production Git folder
The following example GitHub Actions YAML file updates a workspace Git folder whenever its remote branch is updated. For information about the production Git folder approach to CI/CD, see Other source control tools.
This example uses workload identity federation for GitHub Actions for improved security, and requires that you first create a federation policy by following the steps in Enable workload identity federation for GitHub Actions.
```yaml
name: Sync Git Folder

concurrency: prod_environment

on:
  push:
    branches:
      # Set your base branch name here
      - git-folder-cicd-example

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: 'Update git folder'
    environment: Prod
    env:
      DATABRICKS_AUTH_TYPE: github-oidc
      DATABRICKS_HOST: ${{ vars.DATABRICKS_HOST }}
      DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
    steps:
      - uses: actions/checkout@v3
      - uses: databricks/setup-cli@main
      - name: Update git folder
        # Set your workspace path and branch name here
        run: databricks repos update /Workspace/<git-folder-path> --branch git-folder-cicd-example
```
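If you also want to run the sync by hand from the GitHub UI, a `workflow_dispatch` trigger can be added alongside `push`. This is a sketch of the modified `on:` section only, using the same placeholder branch name as above:

```yaml
on:
  push:
    branches:
      - git-folder-cicd-example
  # Also allow manual runs from the repository's Actions tab.
  workflow_dispatch:
```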
Run a CI/CD workflow with a bundle for pipeline updates
The following example GitHub Actions YAML file triggers a test deployment that validates, deploys, and runs the specified job in the bundle within a pre-production target named "dev", as defined in a bundle configuration file.
This example requires:

- A bundle configuration file at the root of the repository, declared explicitly through the GitHub Actions YAML file's setting `working-directory: .`. This bundle configuration file should define an Azure Databricks workflow named `my-job` and a target named `dev` (a minimal sketch of such a file follows this list). See Databricks Asset Bundle configuration.
- A GitHub secret named `SP_TOKEN`, representing the Azure Databricks access token for an Azure Databricks service principal associated with the Azure Databricks workspace to which this bundle is deployed and run. See Encrypted secrets.
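The following is a minimal sketch of such a `databricks.yml`; only the `my-job`, `dev`, and `prod` names come from the examples in this article, while the notebook path and workspace URL are placeholders:

```yaml
# databricks.yml (sketch; paths and URLs are placeholders)
bundle:
  name: my-bundle

resources:
  jobs:
    my-job:
      name: my-job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/notebook.py # hypothetical notebook
          # cluster settings omitted for brevity

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<workspace-url> # placeholder
  prod:
    mode: production
    workspace:
      host: https://<workspace-url> # placeholder
```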
```yaml
# This workflow validates, deploys, and runs the specified bundle
# within a pre-production target named "dev".
name: 'Dev deployment'

# Ensure that only a single job or workflow using the same concurrency group
# runs at a time.
concurrency: 1

# Trigger this workflow whenever a pull request is opened against the repo's
# main branch or an existing pull request's head branch is updated.
on:
  pull_request:
    types:
      - opened
      - synchronize
    branches:
      - main

jobs:
  # Used by the "pipeline_update" job to deploy the bundle.
  # Bundle validation is automatically performed as part of this deployment.
  # If validation fails, this workflow fails.
  deploy:
    name: 'Deploy bundle'
    runs-on: ubuntu-latest

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Download the Databricks CLI.
      # See https://github.com/databricks/setup-cli
      - uses: databricks/setup-cli@main

      # Deploy the bundle to the "dev" target as defined
      # in the bundle's settings file.
      - run: databricks bundle deploy
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: dev

  # Validate, deploy, and then run the bundle.
  pipeline_update:
    name: 'Run pipeline update'
    runs-on: ubuntu-latest

    # Run the "deploy" job first.
    needs:
      - deploy

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Use the downloaded Databricks CLI.
      - uses: databricks/setup-cli@main

      # Run the Databricks workflow named "my-job" as defined in the
      # bundle that was just deployed.
      - run: databricks bundle run my-job --refresh-all
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: dev
```
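As an aside, `DATABRICKS_BUNDLE_ENV` is one way to select the bundle target; the Databricks CLI also accepts the target directly on the command line, as the JAR example later in this article does. A sketch of an equivalent deploy step:

```yaml
# Equivalent to setting DATABRICKS_BUNDLE_ENV: dev in the step's env.
- run: databricks bundle deploy --target dev
  working-directory: .
  env:
    DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
```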
You might also want to trigger a production deployment. The following GitHub Actions YAML file can exist in the same repository as the preceding file. This file validates, deploys, and runs the specified bundle within a production target named "prod", as defined in the bundle configuration file.
```yaml
# This workflow validates, deploys, and runs the specified bundle
# within a production target named "prod".
name: 'Production deployment'

# Ensure that only a single job or workflow using the same concurrency group
# runs at a time.
concurrency: 1

# Trigger this workflow whenever a push is made to the repo's
# main branch.
on:
  push:
    branches:
      - main

jobs:
  deploy:
    name: 'Deploy bundle'
    runs-on: ubuntu-latest

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Download the Databricks CLI.
      # See https://github.com/databricks/setup-cli
      - uses: databricks/setup-cli@main

      # Deploy the bundle to the "prod" target as defined
      # in the bundle's settings file.
      - run: databricks bundle deploy
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: prod

  # Validate, deploy, and then run the bundle.
  pipeline_update:
    name: 'Run pipeline update'
    runs-on: ubuntu-latest

    # Run the "deploy" job first.
    needs:
      - deploy

    steps:
      # Check out this repo, so that this workflow can access it.
      - uses: actions/checkout@v3

      # Use the downloaded Databricks CLI.
      - uses: databricks/setup-cli@main

      # Run the Databricks workflow named "my-job" as defined in the
      # bundle that was just deployed.
      - run: databricks bundle run my-job --refresh-all
        working-directory: .
        env:
          DATABRICKS_TOKEN: ${{ secrets.SP_TOKEN }}
          DATABRICKS_BUNDLE_ENV: prod
```
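Because this workflow deploys straight to production on every push to main, you may want to gate it behind a reviewer approval. GitHub environments support this; the following is a sketch assuming a protected environment named `Prod` is configured in the repository settings, mirroring the first example in this article:

```yaml
jobs:
  deploy:
    name: 'Deploy bundle'
    runs-on: ubuntu-latest
    # If the "Prod" environment has required reviewers configured,
    # the job pauses here until a reviewer approves the run.
    environment: Prod
```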
Run a CI/CD workflow that builds a JAR and deploys a bundle
If you have a Java-based ecosystem, your GitHub Action needs to build and upload a JAR before deploying the bundle. The following example GitHub Actions YAML file triggers a deployment that builds a JAR and uploads it to a volume, then validates the bundle and deploys it to a production target named "prod", as defined in the bundle configuration file. It compiles a Java-based JAR, but the compilation steps for a Scala-based project are similar.
This example requires:

- A bundle configuration file at the root of the repository, declared explicitly through the GitHub Actions YAML file's setting `working-directory: .` (a sketch of how such a bundle can reference the uploaded JAR follows this list).
- A `DATABRICKS_TOKEN` environment variable that represents the Azure Databricks access token associated with the Azure Databricks workspace to which this bundle is deployed and run.
- A `DATABRICKS_HOST` environment variable that represents the Azure Databricks workspace host.
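For reference, the bundle's job would typically point its library at the JAR that the workflow below uploads to the volume. The following is a sketch of the relevant `databricks.yml` fragment with hypothetical job and class names; note that the workflow below suffixes the JAR name with the commit SHA, so in practice the path would need to be templated accordingly, for example via a bundle variable:

```yaml
# Fragment of databricks.yml (hypothetical names; cluster settings omitted)
resources:
  jobs:
    my-jar-job:
      name: my-jar-job
      tasks:
        - task_key: run-jar
          spark_jar_task:
            main_class_name: com.example.Main # hypothetical entry point
          libraries:
            # Matches the volume path used by the upload step below.
            - jar: /Volumes/artifacts/my-app.jar
```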
```yaml
name: Build JAR and deploy with bundles

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

jobs:
  build-test-upload:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Java
        uses: actions/setup-java@v4
        with:
          java-version: '17' # Specify the Java version used by your project
          distribution: 'temurin' # Use a reliable JDK distribution

      - name: Cache Maven dependencies
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
          restore-keys: |
            ${{ runner.os }}-maven-

      - name: Build and test JAR with Maven
        run: mvn clean verify # Use verify to ensure tests are run

      - name: Databricks CLI Setup
        uses: databricks/setup-cli@v0.9.0 # Pin to a specific version

      - name: Upload JAR to a volume
        env:
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }} # Add host for clarity
        run: |
          databricks fs cp target/my-app-1.0.jar dbfs:/Volumes/artifacts/my-app-${{ github.sha }}.jar --overwrite

  validate:
    needs: build-test-upload
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Databricks CLI Setup
        uses: databricks/setup-cli@v0.9.0

      - name: Validate bundle
        env:
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
        run: databricks bundle validate

  deploy:
    needs: validate
    if: github.event_name == 'push' && github.ref == 'refs/heads/main' # Only deploy on push to main
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Databricks CLI Setup
        uses: databricks/setup-cli@v0.9.0

      - name: Deploy bundle
        env:
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
        run: databricks bundle deploy --target prod
```