Jobs

A job is a way of running a notebook or JAR either immediately or on a scheduled basis. The other way to run a notebook is interactively in the notebook UI.

You can create and run jobs using the UI, the CLI, and by invoking the Jobs API. You can monitor job run results in the UI, using the CLI, by querying the API, and through email alerts. This article focuses on performing job tasks using the UI. For the other methods, see Jobs CLI and Jobs API.
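
For reference, a minimal sketch of the equivalent CLI workflow, assuming the Databricks CLI is installed and configured for your workspace; the job ID and run ID below are placeholders, and exact subcommands can vary by CLI version:

databricks jobs list
databricks jobs run-now --job-id 42
databricks runs get --run-id 113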

Important

  • The number of jobs is limited to 1000.
  • The number of jobs a workspace can create in an hour is limited to 5000 (includes “run now” and “runs submit”). This limit also affects jobs created by the REST API and notebook workflows.
  • A workspace is limited to 150 concurrent (running) job runs.
  • A workspace is limited to 1000 active (running and pending) job runs.

View jobs

Click the Jobs icon in the sidebar. The Jobs list displays. The Jobs page lists all defined jobs, the cluster definition, the schedule (if any), and the result of the last run.

In the Jobs list, you can filter jobs:

  • Using key words.
  • Selecting only jobs you own or jobs you have access to. Access to this filter depends on Jobs access control being enabled.

You can also click any column header to sort the list of jobs (descending or ascending) by that column. By default, the page is sorted by job name in ascending order.

Jobs list

Create a job

  1. Click + Create Job. The job detail page displays.

    Job detail

  2. Enter a name in the text field with the placeholder text Untitled.

  3. Specify the task type: click Select Notebook, Set JAR, or Configure spark-submit.

    • Notebook

      1. Select a notebook and click OK.
      2. Next to Parameters, click Edit. Specify key-value pairs or a JSON string representing key-value pairs. Such parameters set the values of widgets (see the notebook sketch at the end of these steps).
    • JAR: Upload a JAR, specify the main class and arguments, and click OK. To learn more about JAR jobs, see JAR job tips.

    • spark-submit: Specify the main class, the path to the library JAR, and the arguments, and click Confirm. To learn more about spark-submit, see the Apache Spark documentation.

      Note

      The following Azure Databricks features are not available for spark-submit jobs:

  4. In the Dependent Libraries field, optionally click Add and specify dependent libraries. Dependent libraries are automatically attached to the cluster on launch. Follow the recommendations in Library dependencies for specifying dependencies.

    Important

    If you have configured a library to install automatically on all clusters, or if in the next step you select an existing terminated cluster that already has libraries installed, the job execution does not wait for library installation to complete. If a job requires a certain library, you should attach the library to the job in the Dependent Libraries field.

  5. In the Cluster field, click Edit and specify the cluster on which to run the job. In the Cluster Type drop-down, choose New Job Cluster or Existing All-Purpose Cluster.

    Note

    Keep the following in mind when you choose a cluster type:

    • For production-level jobs or jobs that are important to complete, we recommend that you select New Job Cluster.
    • You can run spark-submit jobs only on new clusters.
    • When you run a job on a new cluster, the job is treated as a data engineering (job) workload subject to job workload pricing. When you run a job on an existing cluster, it is treated as a data analytics (all-purpose) workload subject to all-purpose workload pricing.
    • If you select a terminated existing cluster and the job owner has Can Restart permission, Azure Databricks starts the cluster when the job is scheduled to run.
    • Existing clusters work best for tasks such as updating dashboards at regular intervals.
    • New Job Cluster - complete the cluster configuration.
      1. In the cluster configuration, select a runtime version. For help with selecting a runtime version, see Databricks Runtime and Databricks Light.
      2. To decrease new cluster start time, select a pool in the cluster configuration.
    • Existing All-Purpose Cluster - in the drop-down, select the existing cluster.
  6. In the Schedule field, optionally click Edit and schedule the job. See Run a job.

  7. Optionally click Advanced and specify advanced job options. See Advanced job options.
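
Notebook job parameters are read inside the notebook through widgets. A minimal sketch of a Scala notebook cell that reads such a parameter; the widget name date and its default value are hypothetical:

// Declare the widget and its default, then read the value supplied by the job
dbutils.widgets.text("date", "2021-01-01")
val runDate = dbutils.widgets.get("date")
println(s"Processing data for $runDate")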

View job details

On the Jobs page, click a job name in the Name column. The job details page shows configuration parameters, active runs (running and pending), and completed runs.

Job details

Databricks maintains a history of your job runs for up to 60 days. If you need to preserve job runs, we recommend that you export job run results before they expire. For more information, see Export job run results.

On the job run page, you can view the standard error, standard output, and log4j output for a job run by clicking the Logs link in the Spark column.

Run a job

You can run a job on a schedule or immediately.

Schedule a job

To define a schedule for the job:

  1. Click Edit next to Schedule.

    Edit schedule

    The Schedule Job dialog displays.

    Schedule job

  2. Specify the schedule granularity, starting time, and time zone. Optionally select the Show Cron Syntax checkbox to display and edit the schedule in Quartz Cron Syntax (a sample expression follows these steps).

    Note

    • Azure Databricks enforces a minimum interval of 10 seconds between subsequent runs triggered by the schedule of a job, regardless of the seconds configuration in the cron expression.
    • You can choose a time zone that observes daylight saving time or UTC. If you select a zone that observes daylight saving time, an hourly job may be skipped, or may appear not to fire, for an hour or two when daylight saving time begins or ends. If you want jobs to run at every hour (absolute time), choose UTC.
    • The job scheduler, like the Spark batch interface, is not intended for low-latency jobs. Due to network or cloud issues, job runs may occasionally be delayed by up to several minutes. In these situations, scheduled jobs run as soon as the service becomes available.
  3. Click Confirm.

    Job scheduled
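
For reference, Quartz cron expressions use six fields (second, minute, hour, day of month, month, day of week). A hypothetical expression that fires at 06:00 on weekdays:

0 0 6 ? * MON-FRI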

Pause and resume a job schedule

To pause a job, click the Pause button next to the job schedule:

Job scheduled

To resume a paused job schedule, click the Resume button:

Resume job

Run a job immediately

To run the job immediately, click Run Now in the Active runs table.

Run now

Tip

Click Run Now to do a test run of your notebook or JAR when you’ve finished configuring your job. If your notebook fails, you can edit it, and the job will automatically run the new version of the notebook.

Run a job with different parameters

You can use Run Now with Different Parameters to re-run a job, specifying different parameters or different values for existing parameters.

  1. In the Active runs table, click Run Now with Different Parameters. The dialog varies depending on whether you are running a notebook job or a spark-submit job.

    • Notebook - A UI that lets you set key-value pairs or a JSON object displays. You can use this dialog to set the values of widgets:

      Run notebook with parameters

    • spark-submit - A dialog containing the list of parameters displays. For example, you could run the SparkPi estimator described in Create a job with 100 partitions instead of the default 10:

      Set spark-submit parameters

  2. Specify the parameters. The provided parameters are merged with the default parameters for the triggered run. If you delete keys, the default parameters are used.

  3. Click Run.
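
The same kind of parameter override can also be expressed through the Jobs API run-now request. A minimal sketch, assuming a notebook job with the hypothetical ID 42 and a widget named date:

POST /api/2.0/jobs/run-now
{
  "job_id": 42,
  "notebook_params": {
    "date": "2021-01-01"
  }
}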

Notebook job tips

Total notebook cell output (the combined output of all notebook cells) is subject to a 20 MB size limit. In addition, individual cell output is subject to an 8 MB size limit. If total cell output exceeds 20 MB, or if the output of an individual cell is larger than 8 MB, the run is canceled and marked as failed. If you need help finding cells that are near or beyond the limit, run the notebook against an all-purpose cluster and use this notebook autosave technique.

JAR job tips

There are some caveats you need to be aware of when you run a JAR job.

Output size limits

Note

Available in Databricks Runtime 6.3 and above.

Job output, such as log output emitted to stdout, is subject to a 20 MB size limit. If the total output is larger, the run is canceled and marked as failed.

To avoid hitting this limit, you can prevent stdout from being returned from the driver to Azure Databricks by setting the spark.databricks.driver.disableScalaOutput Spark configuration to true. By default, the flag value is false. The flag controls cell output for Scala JAR jobs and Scala notebooks. When the flag is enabled, Spark does not return job execution results to the client. The flag does not affect the data written to the cluster’s log files. Setting this flag is recommended only for job clusters used for JAR jobs, because it disables notebook results.
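
The flag is set as a cluster Spark configuration. A sketch of how it might appear in the new_cluster specification of a job created through the Jobs API; the runtime version, node type, and worker count are placeholders:

"new_cluster": {
  "spark_version": "6.3.x-scala2.11",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 2,
  "spark_conf": {
    "spark.databricks.driver.disableScalaOutput": "true"
  }
}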

Use the shared SparkContext

Because Databricks is a managed service, some code changes may be necessary to ensure that your Apache Spark jobs run correctly. JAR job programs must use the shared SparkContext API to get the SparkContext. Because Databricks initializes the SparkContext, programs that invoke new SparkContext() will fail. To get the SparkContext, use only the shared SparkContext created by Databricks:

val goodSparkContext = SparkContext.getOrCreate()
val goodSparkSession = SparkSession.builder().getOrCreate()

In addition, there are several methods you should avoid when using the shared SparkContext.

  • Do not call SparkContext.stop().
  • Do not call System.exit(0) or sc.stop() at the end of your Main program. This can cause undefined behavior.

Use try-finally blocks for job clean up

Consider a JAR that consists of two parts:

  • jobBody(), which contains the main part of the job
  • jobCleanup(), which has to be executed after jobBody(), irrespective of whether that function succeeded or returned an exception

As an example, jobBody() may create tables, and you can use jobCleanup() to drop these tables.

The safe way to ensure that the cleanup method is called is to put a try-finally block in the code:

try {
  jobBody()
} finally {
  jobCleanup()
}

You should not try to clean up using sys.addShutdownHook(jobCleanup) or the following code:

val cleanupThread = new Thread { override def run = jobCleanup() }
Runtime.getRuntime.addShutdownHook(cleanupThread)

Due to the way the lifetime of Spark containers is managed in Azure Databricks, shutdown hooks do not run reliably.

Configure JAR job parameters

JAR jobs are parameterized with an array of strings.

  • In the UI, you enter the parameters in the Arguments text box; they are split into an array by applying POSIX shell parsing rules. For more information, see the shlex documentation.
  • In the API, you input the parameters as a standard JSON array. For more information, see SparkJarTask. To access these parameters, inspect the String array passed into your main function, as in the sketch below.
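
A minimal sketch of a JAR entry point that reads its parameters; the meaning of the first argument and the fallback path are hypothetical:

object JobMain {
  def main(args: Array[String]): Unit = {
    // args holds the job parameters, in the order they were configured
    val inputPath = if (args.nonEmpty) args(0) else "dbfs:/tmp/input"
    println(s"Running job with input path: $inputPath")
  }
}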

View job run details

A job run details page contains the job output and links to the logs:

Job run details

You can view job run details from the Jobs page and the Clusters page.

  • Click the Jobs icon. In the Run column of the Completed in past 60 days table, click the run number link.

    Job run from Jobs

  • Click the Clusters icon. In a job row in the Job Clusters table, click the Job Run link.

    Job run from Clusters

Export job run results

You can export notebook run results and job run logs for all job types.

Export notebook run results

You can persist job runs by exporting their results. For notebook job runs, you can export a rendered notebook, which can later be imported into your Databricks workspace.

  1. In the job detail page, click a job run name in the Run column.

    Job run

  2. Click Export to HTML.

    Export run result

Export job run logs

You can also export the logs for your job run. To automate this process, you can set up your job so that it automatically delivers logs to DBFS through the Jobs API. For more information, see the NewCluster and ClusterLogConf fields in the Job Create API call.
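
A sketch of where cluster log delivery might be configured in a job’s new_cluster specification; other required new_cluster fields are omitted and the DBFS destination is a placeholder:

"new_cluster": {
  "cluster_log_conf": {
    "dbfs": {
      "destination": "dbfs:/cluster-logs/my-job"
    }
  }
}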

Edit a job

To edit a job, click the job name link in the Jobs list.

Delete a job

To delete a job, click the x in the Action column in the Jobs list.

Library dependencies

The Spark driver has certain library dependencies that cannot be overridden. These libraries take priority over any of your own libraries that conflict with them.

To get the full list of driver library dependencies, run the following command inside a notebook attached to a cluster running the same Spark version (or the cluster with the driver you want to examine).

%sh
ls /databricks/jars

Manage library dependencies

A good rule of thumb when dealing with library dependencies while creating JARs for jobs is to list Spark and Hadoop as provided dependencies. In Maven, add Spark and/or Hadoop as provided dependencies, as shown in the following example.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>

sbt 中,将 Spark 和 Hadoop 添加为提供的依赖项,如以下示例所示。In sbt, add Spark and Hadoop as provided dependencies as shown in the following example.

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"
libraryDependencies += "org.apache.hadoop" %% "hadoop-core" % "1.2.1" % "provided"

Tip

Specify the correct Scala version for your dependencies based on the version you are running.

Advanced job options

Maximum concurrent runs

The maximum number of runs that can run in parallel. When a new run is started, Azure Databricks skips the run if the job has already reached its maximum number of active runs. Set this value higher than the default of 1 if you want to be able to perform multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap, or if you want to trigger multiple runs that differ by their input parameters.

Alerts

Email alerts sent in case of job failure, success, or timeout. You can set up alerts for job start, job success, and job failure (including skipped jobs), providing multiple comma-separated email addresses for each alert type. You can also opt out of alerts for skipped job runs.

Configure email alerts

Integrate these email alerts with your favorite notification tools, including:

Timeout

The maximum completion time for a job. If the job does not complete in this time, Databricks sets its status to “Timed Out”.

Retries

A policy that determines when and how many times failed runs are retried.

Retry policy

Note

If you configure both Timeout and Retries, the timeout applies to each retry.
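
These options map to fields in the job settings when a job is created through the Jobs API. A hedged sketch of the relevant fields; the job name and email address are placeholders, and the field names should be verified against the Jobs API reference for your workspace:

{
  "name": "nightly-etl",
  "max_concurrent_runs": 3,
  "timeout_seconds": 3600,
  "max_retries": 2,
  "retry_on_timeout": false,
  "email_notifications": {
    "on_failure": ["ops@example.com"],
    "no_alert_for_skipped_runs": true
  }
}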

Control access to jobs

Jobs access control enables job owners and administrators to grant fine-grained permissions on their jobs. With jobs access control, job owners can choose which other users or groups can view the results of the job. Owners can also choose who can manage runs of their job (that is, invoke Run Now and Cancel).

See Jobs access control for details.