This page describes how to use target settings in Databricks Asset Bundles to override or join top-level settings. For information about bundle settings, see Databricks Asset Bundles configuration.
Artifact setting overrides
You can use the artifact settings in a targets mapping to override the artifact settings in the top-level artifacts mapping, for example:
# ...
artifacts:
  <some-unique-programmatic-identifier-for-this-artifact>:
    # Artifact settings.
targets:
  <some-unique-programmatic-identifier-for-this-target>:
    artifacts:
      <the-matching-programmatic-identifier-for-this-artifact>:
        # Any more artifact settings to join with the settings from the
        # matching top-level artifacts mapping.
If any artifact setting is defined both in the top-level artifacts mapping and in the targets mapping for the same artifact, then the setting in the targets mapping takes precedence over the setting in the top-level artifacts mapping.
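The precedence rule above can be modeled as a recursive map merge in which the target-level value wins on conflict. The following is an illustrative sketch only, not the Databricks CLI's actual implementation; the name `deep_merge` and both sample dictionaries are hypothetical.

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Join two settings mappings; on conflict, the override (target) value wins."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Nested mappings are joined field by field rather than replaced wholesale.
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value  # the target-level setting takes precedence
    return merged

top_level = {"my-artifact": {"type": "whl", "path": "./my_package"}}
target = {"my-artifact": {"path": "./my_other_package"}}

print(deep_merge(top_level, target))
# {'my-artifact': {'type': 'whl', 'path': './my_other_package'}}
```

Note that settings present only on one side (such as `type` above) survive the join; only conflicting keys are overridden.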
Example 1: Artifact settings defined only in the top-level artifacts mapping
To demonstrate how this works in practice, in the following example, path is defined in the top-level artifacts mapping, which defines all of the settings for the artifact:
# ...
artifacts:
  my-artifact:
    type: whl
    path: ./my_package
# ...
When you run databricks bundle validate for this example, the resulting graph is:
{
  "...": "...",
  "artifacts": {
    "my-artifact": {
      "type": "whl",
      "path": "./my_package",
      "...": "..."
    }
  },
  "...": "..."
}
Example 2: Conflicting artifact settings defined in multiple artifact mappings
In this example, path is defined both in the top-level artifacts mapping and in the artifacts mapping within targets. In this example, path in the artifacts mapping within targets takes precedence over path in the top-level artifacts mapping, to define the settings for the artifact:
# ...
artifacts:
  my-artifact:
    type: whl
    path: ./my_package
targets:
  dev:
    artifacts:
      my-artifact:
        path: ./my_other_package
    # ...
When you run databricks bundle validate for this example, the resulting graph is:
{
  "...": "...",
  "artifacts": {
    "my-artifact": {
      "type": "whl",
      "path": "./my_other_package",
      "...": "..."
    }
  },
  "...": "..."
}
Cluster setting overrides
You can override or join the job or pipeline cluster settings for a target.
For jobs, use job_cluster_key within a job definition to identify the job cluster settings in the top-level resources mapping to join with the job cluster settings in a targets mapping, for example:
# ...
resources:
  jobs:
    <some-unique-programmatic-identifier-for-this-job>:
      # ...
      job_clusters:
        - job_cluster_key: <some-unique-programmatic-identifier-for-this-key>
          new_cluster:
            # Cluster settings.
targets:
  <some-unique-programmatic-identifier-for-this-target>:
    resources:
      jobs:
        <the-matching-programmatic-identifier-for-this-job>:
          # ...
          job_clusters:
            - job_cluster_key: <the-matching-programmatic-identifier-for-this-key>
              # Any more cluster settings to join with the settings from the
              # resources mapping for the matching top-level job_cluster_key.
          # ...
If any cluster setting is defined both in the top-level resources mapping and in the targets mapping for the same job_cluster_key, then the setting in the targets mapping takes precedence over the setting in the top-level resources mapping.
For Lakeflow Declarative Pipelines, use label within the cluster settings of a pipeline definition to identify the cluster settings in the top-level resources mapping to join with the cluster settings in a targets mapping, for example:
# ...
resources:
  pipelines:
    <some-unique-programmatic-identifier-for-this-pipeline>:
      # ...
      clusters:
        - label: default | maintenance
          # Cluster settings.
targets:
  <some-unique-programmatic-identifier-for-this-target>:
    resources:
      pipelines:
        <the-matching-programmatic-identifier-for-this-pipeline>:
          # ...
          clusters:
            - label: default | maintenance
              # Any more cluster settings to join with the settings from the
              # resources mapping for the matching top-level label.
          # ...
If any cluster setting is defined both in the top-level resources mapping and in the targets mapping for the same label, then the setting in the targets mapping takes precedence over the setting in the top-level resources mapping.
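Unlike a plain map merge, cluster lists are joined entry by entry on a matching key: job_cluster_key for jobs, label for pipelines. The following sketch models that keyed join; it is illustrative only, not the actual CLI logic, and the helper name `merge_clusters_by_key` is hypothetical.

```python
def merge_clusters_by_key(base: list, override: list, key: str) -> list:
    """Join two lists of cluster mappings on `key`; override fields win per entry."""
    # Index the top-level entries by their key value.
    merged = {entry[key]: dict(entry) for entry in base}
    for entry in override:
        # An entry with a matching key is joined field by field, with the
        # target-level fields taking precedence; an unmatched key adds a new entry.
        merged.setdefault(entry[key], {}).update(entry)
    return list(merged.values())

top_level = [{"label": "default", "node_type_id": "Standard_DS3_v2", "num_workers": 1}]
target = [{"label": "default", "num_workers": 2}]

print(merge_clusters_by_key(top_level, target, key="label"))
# [{'label': 'default', 'node_type_id': 'Standard_DS3_v2', 'num_workers': 2}]
```

This is why the examples below only repeat the key (my-cluster or default) in the target: the remaining fields are carried over from the top-level definition.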
Example 1: New job cluster settings defined in multiple resource mappings with no setting conflicts
In this example, spark_version in the top-level resources mapping is combined with node_type_id and num_workers in the resources mapping within targets to define the settings for the job_cluster_key named my-cluster:
# ...
resources:
  jobs:
    my-job:
      name: my-job
      job_clusters:
        - job_cluster_key: my-cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
targets:
  development:
    resources:
      jobs:
        my-job:
          name: my-job
          job_clusters:
            - job_cluster_key: my-cluster
              new_cluster:
                node_type_id: Standard_DS3_v2
                num_workers: 1
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows:
{
  "...": "...",
  "resources": {
    "jobs": {
      "my-job": {
        "job_clusters": [
          {
            "job_cluster_key": "my-cluster",
            "new_cluster": {
              "node_type_id": "Standard_DS3_v2",
              "num_workers": 1,
              "spark_version": "13.3.x-scala2.12"
            }
          }
        ],
        "...": "..."
      }
    }
  }
}
Example 2: Conflicting new job cluster settings defined in multiple resource mappings
In this example, spark_version and num_workers are defined both in the top-level resources mapping and in the resources mapping within targets. In this example, spark_version and num_workers in the resources mapping within targets take precedence over spark_version and num_workers in the top-level resources mapping, to define the settings for the job_cluster_key named my-cluster:
# ...
resources:
  jobs:
    my-job:
      name: my-job
      job_clusters:
        - job_cluster_key: my-cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 1
targets:
  development:
    resources:
      jobs:
        my-job:
          name: my-job
          job_clusters:
            - job_cluster_key: my-cluster
              new_cluster:
                spark_version: 12.2.x-scala2.12
                num_workers: 2
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows:
{
  "...": "...",
  "resources": {
    "jobs": {
      "my-job": {
        "job_clusters": [
          {
            "job_cluster_key": "my-cluster",
            "new_cluster": {
              "node_type_id": "Standard_DS3_v2",
              "num_workers": 2,
              "spark_version": "12.2.x-scala2.12"
            }
          }
        ],
        "...": "..."
      }
    }
  }
}
Example 3: Pipeline cluster settings defined in multiple resource mappings with no setting conflicts
In this example, node_type_id in the top-level resources mapping is combined with num_workers in the resources mapping within targets to define the settings for the label named default:
# ...
resources:
  pipelines:
    my-pipeline:
      clusters:
        - label: default
          node_type_id: Standard_DS3_v2
targets:
  development:
    resources:
      pipelines:
        my-pipeline:
          clusters:
            - label: default
              num_workers: 1
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows:
{
  "...": "...",
  "resources": {
    "pipelines": {
      "my-pipeline": {
        "clusters": [
          {
            "label": "default",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 1
          }
        ],
        "...": "..."
      }
    }
  }
}
Example 4: Conflicting pipeline cluster settings defined in multiple resource mappings
In this example, num_workers is defined both in the top-level resources mapping and in the resources mapping within targets. num_workers in the resources mapping within targets takes precedence over num_workers in the top-level resources mapping, to define the settings for the label named default:
# ...
resources:
  pipelines:
    my-pipeline:
      clusters:
        - label: default
          node_type_id: Standard_DS3_v2
          num_workers: 1
targets:
  development:
    resources:
      pipelines:
        my-pipeline:
          clusters:
            - label: default
              num_workers: 2
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows:
{
  "...": "...",
  "resources": {
    "pipelines": {
      "my-pipeline": {
        "clusters": [
          {
            "label": "default",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2
          }
        ],
        "...": "..."
      }
    }
  }
}
Job task setting overrides
Use the tasks mapping within a job definition to join the job task settings in the top-level resources mapping with the job task settings in a targets mapping, for example:
# ...
resources:
  jobs:
    <some-unique-programmatic-identifier-for-this-job>:
      # ...
      tasks:
        - task_key: <some-unique-programmatic-identifier-for-this-task>
          # Task settings.
targets:
  <some-unique-programmatic-identifier-for-this-target>:
    resources:
      jobs:
        <the-matching-programmatic-identifier-for-this-job>:
          # ...
          tasks:
            - task_key: <the-matching-programmatic-identifier-for-this-key>
              # Any more task settings to join with the settings from the
              # resources mapping for the matching top-level task_key.
          # ...
To join the settings for the same task across the top-level resources mapping and the targets mapping, the task mappings' task_key must be set to the same value.
If any job task setting is defined both in the top-level resources mapping and in the targets mapping for the same task, then the setting in the targets mapping takes precedence over the setting in the top-level resources mapping.
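Tasks follow the same keyed-join mechanic as clusters, matched on task_key, including nested mappings such as new_cluster. The following sketch is illustrative only, not the Databricks CLI's implementation; the helper name `merge_tasks` and the inline sample data are hypothetical (the data mirrors Example 2 below).

```python
def merge_tasks(base: list, override: list) -> list:
    """Join two lists of task mappings on task_key; target-level fields win."""
    merged = {t["task_key"]: dict(t) for t in base}
    for task in override:
        existing = merged.setdefault(task["task_key"], {})
        for field, value in task.items():
            if isinstance(value, dict) and isinstance(existing.get(field), dict):
                # Nested mappings such as new_cluster are joined field by field.
                existing[field] = {**existing[field], **value}
            else:
                existing[field] = value  # the target-level setting takes precedence
    return list(merged.values())

top_level = [{"task_key": "my-task",
              "new_cluster": {"spark_version": "13.3.x-scala2.12",
                              "node_type_id": "Standard_DS3_v2",
                              "num_workers": 1}}]
target = [{"task_key": "my-task",
           "new_cluster": {"spark_version": "12.2.x-scala2.12",
                           "num_workers": 2}}]

print(merge_tasks(top_level, target))
```

Here node_type_id survives from the top-level definition while spark_version and num_workers come from the target, matching the precedence rule above.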
Example 1: Job task settings defined in multiple resource mappings with no setting conflicts
In this example, spark_version in the top-level resources mapping is combined with node_type_id and num_workers in the resources mapping within targets to define the settings for the task_key named my-task:
# ...
resources:
  jobs:
    my-job:
      name: my-job
      tasks:
        - task_key: my-task
          new_cluster:
            spark_version: 13.3.x-scala2.12
targets:
  development:
    resources:
      jobs:
        my-job:
          name: my-job
          tasks:
            - task_key: my-task
              new_cluster:
                node_type_id: Standard_DS3_v2
                num_workers: 1
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows (ellipses indicate omitted content, for brevity):
{
  "...": "...",
  "resources": {
    "jobs": {
      "my-job": {
        "tasks": [
          {
            "new_cluster": {
              "node_type_id": "Standard_DS3_v2",
              "num_workers": 1,
              "spark_version": "13.3.x-scala2.12"
            },
            "task_key": "my-task"
          }
        ],
        "...": "..."
      }
    }
  }
}
Example 2: Conflicting job task settings defined in multiple resource mappings
In this example, spark_version and num_workers are defined both in the top-level resources mapping and in the resources mapping within targets. spark_version and num_workers in the resources mapping within targets take precedence over spark_version and num_workers in the top-level resources mapping, to define the settings for the task_key named my-task (ellipses indicate omitted content, for brevity):
# ...
resources:
  jobs:
    my-job:
      name: my-job
      tasks:
        - task_key: my-task
          new_cluster:
            spark_version: 13.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 1
targets:
  development:
    resources:
      jobs:
        my-job:
          name: my-job
          tasks:
            - task_key: my-task
              new_cluster:
                spark_version: 12.2.x-scala2.12
                num_workers: 2
          # ...
When you run databricks bundle validate for this example, the resulting graph is as follows:
{
  "...": "...",
  "resources": {
    "jobs": {
      "my-job": {
        "tasks": [
          {
            "new_cluster": {
              "node_type_id": "Standard_DS3_v2",
              "num_workers": 2,
              "spark_version": "12.2.x-scala2.12"
            },
            "task_key": "my-task"
          }
        ],
        "...": "..."
      }
    }
  }
}