In this tutorial, you use Databricks secrets to set up JDBC credentials for connecting to an Azure Data Lake Storage account.
Create a secret scope called jdbc.
databricks secrets create-scope jdbc
To create an Azure Key Vault-backed secret scope, follow the instructions in Manage secret scopes.
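If you want a sketch of the Key Vault-backed variant without leaving the CLI, one option is the generic api command, which posts directly to the scope-creation endpoint. Treat this as an illustrative sketch only: it assumes a current Databricks CLI that includes the api command, and the resource ID and DNS name are placeholders you must supply.

databricks api post /api/2.0/secrets/scopes/create --json '{
  "scope": "jdbc",
  "scope_backend_type": "AZURE_KEYVAULT",
  "backend_azure_keyvault": {
    "resource_id": "<azure-keyvault-resource-id>",
    "dns_name": "<azure-keyvault-dns-name>"
  }
}'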
Add the secrets username and password. Run the following commands and enter the secret values in the opened editor.
databricks secrets put-secret jdbc username
databricks secrets put-secret jdbc password
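If you prefer to skip the interactive editor, newer versions of the Databricks CLI can take the value inline. The --string-value flag below is an assumption about your CLI version; check databricks secrets put-secret --help before relying on it, and note that inline values can end up in your shell history.

databricks secrets put-secret jdbc username --string-value <username>
databricks secrets put-secret jdbc password --string-value <password>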
Use the dbutils.secrets utility to access secrets in notebooks.
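As a quick sanity check before using the secrets, you can list the scopes and keys from a notebook. This minimal Python sketch returns only metadata (scope and key names), never the secret values.

# List all secret scopes visible to you, then the keys stored in the jdbc scope
dbutils.secrets.listScopes()
dbutils.secrets.list("jdbc")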
The following example reads the secrets stored in the secret scope jdbc to configure a JDBC read operation in Python:
username = dbutils.secrets.get(scope = "jdbc", key = "username")
password = dbutils.secrets.get(scope = "jdbc", key = "password")
df = (spark.read
.format("jdbc")
.option("url", "<jdbc-url>")
.option("dbtable", "<table-name>")
.option("user", username)
.option("password", password)
.load()
)
The same example in Scala:

val username = dbutils.secrets.get(scope = "jdbc", key = "username")
val password = dbutils.secrets.get(scope = "jdbc", key = "password")
val df = spark.read
.format("jdbc")
.option("url", "<jdbc-url>")
.option("dbtable", "<table-name>")
.option("user", username)
.option("password", password)
.load()
The values fetched from the scope are redacted from the notebook output. See Secret redaction.
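For example, if you try to display a fetched value in Python, the output is masked rather than shown in plain text; the exact placeholder string may vary by Databricks Runtime version.

# Displays a redacted placeholder such as [REDACTED] instead of the secret value
print(password)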
Note
This step requires the Premium plan.
After verifying that the credentials were configured correctly, you can grant permissions on the secret scope to other users and groups in your workspace.
Grant the datascience group the READ permission on the secret scope:
databricks secrets put-acl jdbc datascience READ
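To confirm the grant, you can list the ACLs on the scope. The list-acls subcommand assumes a current Databricks CLI; verify with databricks secrets --help if you are unsure.

databricks secrets list-acls jdbc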
For more information about secret access control, see Secret ACLs.