Quickstart: Interactive Data Wrangling with Apache Spark in Azure Machine Learning

Azure Machine Learning integration with Azure Synapse Analytics provides easy access to the Apache Spark framework, which enables interactive data wrangling in Azure Machine Learning notebooks.

In this quickstart guide, you learn how to perform interactive data wrangling with Azure Machine Learning serverless Spark compute, an Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough.

Prerequisites

Store Azure storage account credentials as secrets in Azure Key Vault

To store Azure storage account credentials as secrets in Azure Key Vault with the Azure portal user interface:

  1. Navigate to your Azure Key Vault in the Azure portal

  2. Select Secrets from the left panel

  3. Select + Generate/Import

    Screenshot that shows the Azure Key Vault Secrets Generate Or Import tab.

  4. At the Create a secret screen, enter a Name for the secret you want to create

  5. Navigate to Azure Blob Storage Account, in the Azure portal, as shown in this image:

    Screenshot that shows the Azure access key and connection string values screen.

  6. Select Access keys from the Azure Blob Storage Account page left panel

  7. Select Show next to Key 1, and then Copy to clipboard to get the storage account access key

    Note

    If you instead use either of these credential types, select the appropriate options to copy their values on the respective user interfaces while you create the Azure Key Vault secrets for them:

    • Azure Blob storage container shared access signature (SAS) tokens
    • Azure Data Lake Storage (ADLS) Gen 2 storage account service principal credentials: tenant ID, client ID, and secret

  8. Navigate back to the Create a secret screen

  9. In the Secret value textbox, enter the access key credential for the Azure storage account, which was copied to the clipboard in the earlier step

  10. Select Create

    Screenshot that shows the Azure secret creation screen.

Tip

Azure CLI and Azure Key Vault secret client library for Python can also create Azure Key Vault secrets.
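
As the tip notes, the Azure Key Vault secret client library for Python can create the same secret programmatically. This is a minimal sketch, assuming hypothetical vault and secret names and requiring the `azure-identity` and `azure-keyvault-secrets` packages; the Azure SDK imports are kept inside the function so the URL helper works without them:

```python
def vault_url(vault_name: str) -> str:
    """Build the vault URL from a Key Vault name."""
    return f"https://{vault_name}.vault.azure.net"


def store_storage_key(vault_name: str, secret_name: str, access_key: str) -> None:
    """Store a storage account access key as an Azure Key Vault secret."""
    # Imported here so vault_url() stays usable without the Azure SDK installed.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # DefaultAzureCredential resolves the logged-in identity
    # (Azure CLI login, managed identity, environment variables, ...).
    client = SecretClient(vault_url=vault_url(vault_name),
                          credential=DefaultAzureCredential())
    client.set_secret(secret_name, access_key)


if __name__ == "__main__":
    # Hypothetical names; replace with your own Key Vault and secret names.
    store_storage_key("my-keyvault", "my-storage-key",
                      "<storage-account-access-key>")
```

The Azure CLI equivalent is `az keyvault secret set` with the vault name, secret name, and value.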

Add role assignments in Azure storage accounts

Before you start interactive data wrangling, ensure that the input and output data paths are accessible. Assign the Reader and Storage Blob Data Reader roles to

  • the user identity of the Notebooks session logged-in user

    or

  • a service principal

These roles provide read-only access to the user identity or service principal. However, in certain scenarios, you might want to write the wrangled data back to the Azure storage account. To enable read and write access, assign the Contributor and Storage Blob Data Contributor roles instead. To assign appropriate roles to the user identity:

  1. Open the Azure portal

  2. Search and select the Storage accounts service

    Expandable screenshot that shows Storage accounts service search and selection in Azure portal.

  3. On the Storage accounts page, select the Azure Data Lake Storage (ADLS) Gen 2 storage account from the list. A page showing the storage account Overview opens

    Expandable screenshot that shows selection of the Azure Data Lake Storage (ADLS) Gen 2 storage account Storage account.

  4. Select Access Control (IAM) from the left panel

  5. Select Add role assignment

    Screenshot that shows the Azure access keys screen.

  6. Find and select role Storage Blob Data Contributor

  7. Select Next

    Screenshot that shows the Azure add role assignment screen.

  8. Select User, group, or service principal

  9. Select + Select members

  10. Below Select, search for the user identity

  11. Select the user identity from the list, so that it shows under Selected members

  12. Select Select to confirm the choice

  13. Select Next

    Screenshot that shows the Azure add role assignment screen Members tab.

  14. Select Review + Assign

    Screenshot showing the Azure add role assignment screen review and assign tab.

  15. Repeat steps 2-13 to assign the Contributor role

Once the user identity has the appropriate roles assigned, data in the Azure storage account should become accessible.
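
The portal steps above can also be scripted. This sketch wraps the Azure CLI `az role assignment create` command from Python; it assumes the Azure CLI is installed and logged in, and the object ID and scope values are hypothetical placeholders:

```python
import subprocess

# Role pairs described above: read-only versus read and write access.
READ_ROLES = ("Reader", "Storage Blob Data Reader")
READ_WRITE_ROLES = ("Contributor", "Storage Blob Data Contributor")


def required_roles(needs_write: bool) -> tuple:
    """Return the Azure roles to assign for the requested access level."""
    return READ_WRITE_ROLES if needs_write else READ_ROLES


def assign_roles(assignee_object_id: str, scope: str,
                 needs_write: bool = False) -> None:
    """Assign the required roles at the given scope via the Azure CLI."""
    for role in required_roles(needs_write):
        subprocess.run(
            ["az", "role", "assignment", "create",
             "--role", role,
             "--assignee-object-id", assignee_object_id,
             "--scope", scope],
            check=True,
        )
```

Here, `scope` is the full ARM resource ID of the storage account; pass `needs_write=True` to assign the read-write role pair instead of the read-only pair.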

Note

If an attached Synapse Spark pool points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, configure a managed private endpoint to the storage account to ensure data access.
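
Once access is in place, data in the ADLS Gen 2 account can be wrangled interactively from a notebook attached to Spark compute. This is a sketch with hypothetical container, account, and file names; inside a notebook session a `SparkSession` named `spark` is predefined, and user identity passthrough handles authentication, so no keys or secrets appear in the notebook:

```python
def abfss_path(container: str, account: str, relative_path: str) -> str:
    """Build an abfss:// URI for a file in an ADLS Gen 2 storage account."""
    return (f"abfss://{container}@{account}.dfs.core.windows.net/"
            f"{relative_path.lstrip('/')}")


# Inside a notebook attached to Spark compute (hypothetical names):
# df = spark.read.option("header", "true").csv(
#     abfss_path("data", "mystorageaccount", "titanic.csv")
# )
# df.show(5)
```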

Ensure resource access for Spark jobs

To access data and other resources, Spark jobs can use either a managed identity or user identity passthrough. The following table summarizes the different mechanisms for resource access while you use Azure Machine Learning serverless Spark compute and attached Synapse Spark pool.

  Serverless Spark compute

    • Supported identities: user identity, user-assigned managed identity attached to the workspace
    • Default identity: user identity

  Attached Synapse Spark pool

    • Supported identities: user identity, user-assigned managed identity attached to the attached Synapse Spark pool, system-assigned managed identity of the attached Synapse Spark pool
    • Default identity: system-assigned managed identity of the attached Synapse Spark pool

If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning serverless Spark compute relies on a user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace with Azure Machine Learning CLI v2, or with ARMClient.
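
The ARMClient route boils down to a PATCH of the workspace resource with an identity payload. This sketch builds that request body in Python, based on the standard ARM managed identity schema; the resource ID in the example is a hypothetical placeholder:

```python
import json


def uai_patch_body(uai_resource_id: str) -> dict:
    """Build the ARM PATCH body that attaches a user-assigned managed
    identity to a workspace while keeping its system-assigned identity."""
    return {
        "identity": {
            "type": "SystemAssigned,UserAssigned",
            "userAssignedIdentities": {uai_resource_id: {}},
        }
    }


if __name__ == "__main__":
    # Hypothetical ARM resource ID of the user-assigned managed identity.
    uai_id = ("/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RG>/providers/"
              "Microsoft.ManagedIdentity/userAssignedIdentities/<UAI_NAME>")
    print(json.dumps(uai_patch_body(uai_id), indent=2))
```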

Next steps