Databricks SDK for Go

In this article, you learn how to automate operations in Azure Databricks accounts, workspaces, and related resources with the Databricks SDK for Go. This article supplements the Databricks SDK for Go README, API reference, and examples.

Note

This feature is in Beta and is okay to use in production.

During the Beta period, Databricks recommends that you pin a dependency on the specific minor version of the Databricks SDK for Go that your code depends on, for example, in a project's go.mod file. For more information about pinning dependencies, see Managing dependencies.
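For example, if your project pins version 0.8.0 of the Databricks SDK for Go (the version used in the examples in this article), the require directive in your project's go.mod file looks like this:

    require github.com/databricks/databricks-sdk-go v0.8.0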

Before you begin

Before you begin to use the Databricks SDK for Go, your development machine must have:

  • Go installed.
  • An existing Go code project.
  • Azure Databricks authentication configured.
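To confirm that Go is installed, run the go version command:

    go version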

Get started with the Databricks SDK for Go

  1. On your development machine with Go already installed, an existing Go code project already created, and Azure Databricks authentication configured, create a go.mod file to track your Go code's dependencies by running the go mod init command, for example:

    go mod init sample
    
  2. Take a dependency on the Databricks SDK for Go package by running the go mod edit -require command, replacing 0.8.0 with the latest version of the Databricks SDK for Go package as listed in the CHANGELOG:

    go mod edit -require github.com/databricks/databricks-sdk-go@v0.8.0
    

    Your go.mod file should now look like this:

    module sample
    
    go 1.18
    
    require github.com/databricks/databricks-sdk-go v0.8.0
    
  3. Within your project, create a Go code file that imports the Databricks SDK for Go. The following example, in a file named main.go, lists all the clusters in your Azure Databricks workspace:

    package main
    
    import (
      "context"
    
      "github.com/databricks/databricks-sdk-go"
      "github.com/databricks/databricks-sdk-go/service/compute"
    )
    
    func main() {
      w := databricks.Must(databricks.NewWorkspaceClient())
      all, err := w.Clusters.ListAll(context.Background(), compute.ListClustersRequest{})
      if err != nil {
        panic(err)
      }
      for _, c := range all {
        println(c.ClusterName)
      }
    }
    
  4. Add any missing module dependencies by running the go mod tidy command:

    go mod tidy
    

    Note

    If you get the error go: warning: "all" matched no packages, you forgot to add a Go code file that imports the Databricks SDK for Go.

  5. Grab copies of all packages needed to support builds and tests of packages in your main module by running the go mod vendor command:

    go mod vendor
    
  6. Set up your development machine for Azure Databricks authentication.

  7. Run your Go code file, assuming a file named main.go, by running the go run command:

    go run main.go
    

    Note

    By not setting *databricks.Config as an argument in the preceding call to w := databricks.Must(databricks.NewWorkspaceClient()), the Databricks SDK for Go uses its default process for trying to perform Azure Databricks authentication. To override this default behavior, see Authenticate the Databricks SDK for Go with your Azure Databricks account or workspace.
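
    As a brief illustration, a minimal sketch of overriding the default behavior by selecting a named Databricks configuration profile in code (the profile name here is a placeholder):

    w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
      Profile: "MY_PROFILE",
    }))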

Update the Databricks SDK for Go

To update your Go project to use a later version of the Databricks SDK for Go package, as listed in the CHANGELOG, do the following:

  1. Run the go get command from the root of your project, specifying the -u flag to do an update, and providing the name and target version number of the Databricks SDK for Go package. For example, to update to version 0.12.0, run the following command:

    go get -u github.com/databricks/databricks-sdk-go@v0.12.0
    
  2. Add and update any missing and outdated module dependencies by running the go mod tidy command:

    go mod tidy
    
  3. Grab copies of all new and updated packages needed to support builds and tests of packages in your main module by running the go mod vendor command:

    go mod vendor
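
    To confirm which version of the Databricks SDK for Go your module now requires, you can run the go list command:

    go list -m github.com/databricks/databricks-sdk-go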
    

Authenticate the Databricks SDK for Go with your Azure Databricks account or workspace

The Databricks SDK for Go implements the Databricks client unified authentication standard, a consolidated and consistent architectural and programmatic approach to authentication. This approach helps make setting up and automating authentication with Azure Databricks more centralized and predictable. It enables you to configure Databricks authentication once and then use that configuration across multiple Databricks tools and SDKs without further authentication configuration changes. For more information, including more complete code examples in Go, see Databricks client unified authentication.

Some of the available coding patterns to initialize Databricks authentication with the Databricks SDK for Go include:

  • Use Databricks default authentication by doing one of the following:

    • Create or identify a custom Databricks configuration profile with the required fields for the target Databricks authentication type. Then set the DATABRICKS_CONFIG_PROFILE environment variable to the name of the custom configuration profile. (A sample profile appears at the end of this section.)
    • Set the required environment variables for the target Databricks authentication type.

    Then instantiate, for example, a WorkspaceClient object with Databricks default authentication as follows:

    import (
      "github.com/databricks/databricks-sdk-go"
    )
    // ...
    w := databricks.Must(databricks.NewWorkspaceClient())
    
  • Hard-coding the required fields is supported but not recommended, as it risks exposing sensitive information in your code, such as Azure Databricks personal access tokens. The following example hard-codes Azure Databricks host and access token values for Databricks token authentication:

    import (
      "github.com/databricks/databricks-sdk-go"
      "github.com/databricks/databricks-sdk-go/config"
    )
    // ...
    w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
      Host:  "https://...",
      Token: "...",
    }))
    

See also Authentication in the Databricks SDK for Go README.
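
For example, the configuration-profile approach can look like the following minimal sketch, which assumes Azure Databricks personal access token authentication and a profile named MY_PROFILE (a placeholder) in your .databrickscfg file:

    [MY_PROFILE]
    host  = https://...
    token = ...

With the DATABRICKS_CONFIG_PROFILE environment variable set to MY_PROFILE, the WorkspaceClient instantiation shown earlier picks up this profile automatically.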

Examples

The following code examples demonstrate how to use the Databricks SDK for Go to create and delete clusters, run jobs, and list account users. These code examples use the Databricks SDK for Go's default Azure Databricks authentication process.

For additional code examples, see the examples folder in the Databricks SDK for Go repository in GitHub.

Create a cluster

This code example creates a cluster with the latest available Databricks Runtime Long Term Support (LTS) version and the smallest available cluster node type with a local disk. This cluster has one worker, and the cluster will automatically terminate after 15 minutes of idle time. The CreateAndWait method call causes the code to pause until the new cluster is running in the workspace.

package main

import (
  "context"
  "fmt"

  "github.com/databricks/databricks-sdk-go"
  "github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
  const clusterName            = "my-cluster"
  const autoTerminationMinutes = 15
  const numWorkers             = 1

  w   := databricks.Must(databricks.NewWorkspaceClient())
  ctx := context.Background()

  // Get the full list of available Spark versions to choose from.
  sparkVersions, err := w.Clusters.SparkVersions(ctx)

  if err != nil {
    panic(err)
  }

  // Choose the latest Long Term Support (LTS) version.
  latestLTS, err := sparkVersions.Select(compute.SparkVersionRequest{
    Latest:          true,
    LongTermSupport: true,
  })

  if err != nil {
    panic(err)
  }

  // Get the list of available cluster node types to choose from.
  nodeTypes, err := w.Clusters.ListNodeTypes(ctx)

  if err != nil {
    panic(err)
  }

  // Choose the smallest available cluster node type.
  smallestWithLocalDisk, err := nodeTypes.Smallest(compute.NodeTypeRequest{
    LocalDisk: true,
  })

  if err != nil {
    panic(err)
  }

  fmt.Println("Now attempting to create the cluster, please wait...")

  runningCluster, err := w.Clusters.CreateAndWait(ctx, compute.CreateCluster{
    ClusterName:            clusterName,
    SparkVersion:           latestLTS,
    NodeTypeId:             smallestWithLocalDisk,
    AutoterminationMinutes: autoTerminationMinutes,
    NumWorkers:             numWorkers,
  })

  if err != nil {
    panic(err)
  }

  switch runningCluster.State {
  case compute.StateRunning:
    fmt.Printf("The cluster is now ready at %s#setting/clusters/%s/configuration\n",
      w.Config.Host,
      runningCluster.ClusterId,
    )
  default:
    fmt.Printf("Cluster is not running or failed to create. %s", runningCluster.StateMessage)
  }

  // Output:
  //
  // Now attempting to create the cluster, please wait...
  // The cluster is now ready at <workspace-host>#setting/clusters/<cluster-id>/configuration
}

Permanently delete a cluster

This code example permanently deletes the cluster with the specified cluster ID from the workspace.

package main

import (
  "context"

  "github.com/databricks/databricks-sdk-go"
  "github.com/databricks/databricks-sdk-go/service/clusters"
)

func main() {
  // Replace with your cluster's ID.
  const clusterId = "1234-567890-ab123cd4"

  w   := databricks.Must(databricks.NewWorkspaceClient())
  ctx := context.Background()

  err := w.Clusters.PermanentDelete(ctx, compute.PermanentDeleteCluster{
    ClusterId: clusterId,
  })

  if err != nil {
    panic(err)
  }
}

Run a job

This code example creates an Azure Databricks job that runs the specified notebook on the specified cluster. As the code runs, it gets the existing notebook's path, the existing cluster ID, and related job settings from the user at the terminal. The RunNow method call returns a waiter, and the subsequent Get call causes the code to pause until the new job has finished running in the workspace.

package main

import (
  "bufio"
  "context"
  "fmt"
  "os"
  "strings"

  "github.com/databricks/databricks-sdk-go"
  "github.com/databricks/databricks-sdk-go/service/jobs"
)

func main() {
  w   := databricks.Must(databricks.NewWorkspaceClient())
  ctx := context.Background()

  nt := jobs.NotebookTask{
    NotebookPath: askFor("Workspace path of the notebook to run:"),
  }

  jobToRun, err := w.Jobs.Create(ctx, jobs.CreateJob{
    Name: askFor("Some short name for the job:"),
    Tasks: []jobs.JobTaskSettings{
      {
        Description:       askFor("Some short description for the job:"),
        TaskKey:           askFor("Some key to apply to the job's tasks:"),
        ExistingClusterId: askFor("ID of the existing cluster in the workspace to run the job on:"),
        NotebookTask:      &nt,
      },
    },
  })

  if err != nil {
    panic(err)
  }

  fmt.Printf("Now attempting to run the job at %s/#job/%d, please wait...\n",
    w.Config.Host,
    jobToRun.JobId,
  )

  runningJob, err := w.Jobs.RunNow(ctx, jobs.RunNow{
    JobId: jobToRun.JobId,
  })

  if err != nil {
    panic(err)
  }

  jobRun, err := runningJob.Get()

  if err != nil {
    panic(err)
  }

  fmt.Printf("View the job run results at %s/#job/%d/run/%d\n",
    w.Config.Host,
    jobRun.JobId,
    jobRun.RunId,
  )

  // Output:
  //
  // Now attempting to run the job at <workspace-host>/#job/<job-id>, please wait...
  // View the job run results at <workspace-host>/#job/<job-id>/run/<run-id>
}

// Get job settings from the user.
func askFor(prompt string) string {
  var s string
  r := bufio.NewReader(os.Stdin)
  for {
    fmt.Fprint(os.Stdout, prompt+" ")
    s, _ = r.ReadString('\n')
    s = strings.TrimSpace(s)
    if s != "" {
      break
    }
  }
  return s
}

Manage files in Unity Catalog volumes

This code example demonstrates various calls to files functionality within WorkspaceClient to access a Unity Catalog volume.

package main

import (
  "context"
  "io"
  "os"

  "github.com/databricks/databricks-sdk-go"
  "github.com/databricks/databricks-sdk-go/service/files"
)

func main() {
  w := databricks.Must(databricks.NewWorkspaceClient())

  catalog          := "main"
  schema           := "default"
  volume           := "my-volume"
  volumePath       := "/Volumes/" + catalog + "/" + schema + "/" + volume // /Volumes/main/default/my-volume
  volumeFolder     := "my-folder"
  volumeFolderPath := volumePath + "/" + volumeFolder // /Volumes/main/default/my-volume/my-folder
  volumeFile       := "data.csv"
  volumeFilePath   := volumeFolderPath + "/" + volumeFile // /Volumes/main/default/my-volume/my-folder/data.csv
  uploadFilePath   := "./data.csv"

  // Create an empty folder in a volume.
  err := w.Files.CreateDirectory(
    context.Background(),
    files.CreateDirectoryRequest{DirectoryPath: volumeFolderPath},
  )
  if err != nil {
    panic(err)
  }

  // Upload a file to a volume.
  fileUpload, err := os.Open(uploadFilePath)
  if err != nil {
    panic(err)
  }
  defer fileUpload.Close()

  err = w.Files.Upload(
    context.Background(),
    files.UploadRequest{
      Contents:  fileUpload,
      FilePath:  volumeFilePath,
      Overwrite: true,
    },
  )
  if err != nil {
    panic(err)
  }

  // List the contents of a volume.
  items := w.Files.ListDirectoryContents(
    context.Background(),
    files.ListDirectoryContentsRequest{DirectoryPath: volumePath},
  )

  for items.HasNext(context.Background()) {
    item, err := items.Next(context.Background())
    if err != nil {
      break
    }
    println(item.Path)
  }

  // List the contents of a folder in a volume.
  itemsFolder := w.Files.ListDirectoryContents(
    context.Background(),
    files.ListDirectoryContentsRequest{DirectoryPath: volumeFolderPath},
  )

  for itemsFolder.HasNext(context.Background()) {
    item, err := itemsFolder.Next(context.Background())
    if err != nil {
      break
    }
    println(item.Path)
  }

  // Print the contents of a file in a volume.
  file, err := w.Files.DownloadByFilePath(
    context.Background(),
    volumeFilePath,
  )
  if err != nil {
    panic(err)
  }
  defer file.Contents.Close()

  bufDownload := make([]byte, file.ContentLength)

  for {
    n, err := file.Contents.Read(bufDownload)
    if err != nil && err != io.EOF {
      panic(err)
    }
    if n == 0 {
      break
    }

    println(string(bufDownload[:n]))
  }

  // Delete a file from a volume.
  err = w.Files.DeleteByFilePath(
    context.Background(),
    volumeFilePath,
  )
  if err != nil {
    panic(err)
  }

  // Delete a folder from a volume.
  err = w.Files.DeleteDirectory(
    context.Background(),
    files.DeleteDirectoryRequest{
      DirectoryPath: volumeFolderPath,
    },
  )
  if err != nil {
    panic(err)
  }
}

List account users

This code example lists the available users within an Azure Databricks account.

package main

import (
  "context"

  "github.com/databricks/databricks-sdk-go"
  "github.com/databricks/databricks-sdk-go/service/iam"
)

func main() {
  a := databricks.Must(databricks.NewAccountClient())
  all, err := a.Users.ListAll(context.Background(), iam.ListAccountUsersRequest{})
  if err != nil {
    panic(err)
  }
  for _, u := range all {
    println(u.UserName)
  }
}
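
Account-level APIs authenticate against the Azure Databricks account console rather than a workspace. As a minimal sketch, assuming you want to set the account fields in code rather than through environment variables or a configuration profile (the host shown is the Azure Databricks account console URL, and the account ID is a placeholder):

import (
  "github.com/databricks/databricks-sdk-go"
)
// ...
a := databricks.Must(databricks.NewAccountClient(&databricks.Config{
  Host:      "https://accounts.azuredatabricks.net",
  AccountID: "<your-account-id>",
}))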

Additional resources

For more information, see:

  • The Databricks SDK for Go README
  • The Databricks SDK for Go API reference
  • The examples folder in the Databricks SDK for Go repository in GitHub