Connect to Hevo Data

Hevo Data is an end-to-end data pipeline platform that allows you to ingest data from 150+ sources, load it into the Databricks lakehouse, then transform it to derive business insights.

You can connect to Hevo Data using a Databricks SQL warehouse (formerly Databricks SQL endpoints) or an Azure Databricks cluster.
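Whichever compute type you choose, Hevo needs the warehouse's or cluster's connection details (server hostname, HTTP path, and a personal access token), which you can copy from the Connection details tab in your Databricks workspace. As a sketch, assuming the standard Databricks JDBC driver URL format (the hostname and HTTP path below are placeholders):

```python
def databricks_jdbc_url(server_hostname: str, http_path: str) -> str:
    """Assemble a Databricks JDBC URL from a warehouse's connection details.

    AuthMech=3 selects token authentication; the personal access token
    itself is supplied separately as the password (UID=token, PWD=<token>).
    """
    return (
        f"jdbc:databricks://{server_hostname}:443;"
        "transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        "AuthMech=3;UID=token"
    )

# Placeholder values; copy the real ones from your workspace.
url = databricks_jdbc_url(
    "adb-1234567890123456.7.azuredatabricks.net",
    "/sql/1.0/warehouses/abcdef1234567890",
)
print(url)
```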

Connect to Hevo Data manually

This section describes how to connect to Hevo Data manually.

To connect to Hevo Data manually, complete the following steps in the Hevo Data documentation:

  1. Create a Hevo Data account or sign in to your existing Hevo account.

  2. Configure Databricks as a Destination.

    Note

    The third-party documentation must be adapted for the Azure China cloud environment. For example, replace endpoints such as "blob.core.windows.net" with "blob.core.chinacloudapi.cn" and "cloudapp.azure.com" with "chinacloudapp.cn", and change any unsupported locations, VM images, VM sizes, SKUs, and resource provider API versions as necessary.
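The endpoint substitutions in the note above can be applied programmatically when adapting URLs for Azure China. A minimal sketch (the storage account name is a placeholder, and only the two endpoint pairs from the note are mapped; extend the mapping for other services as needed):

```python
# Global Azure endpoints and their Azure China equivalents,
# per the note above.
AZURE_CHINA_ENDPOINTS = {
    "blob.core.windows.net": "blob.core.chinacloudapi.cn",
    "cloudapp.azure.com": "chinacloudapp.cn",
}

def to_azure_china(url: str) -> str:
    """Rewrite a global Azure URL to point at the Azure China cloud."""
    for global_suffix, china_suffix in AZURE_CHINA_ENDPOINTS.items():
        url = url.replace(global_suffix, china_suffix)
    return url

print(to_azure_china("https://mystorageaccount.blob.core.windows.net/container"))
# → https://mystorageaccount.blob.core.chinacloudapi.cn/container
```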

Next steps

Follow the steps in the Hevo Data documentation to do the following:

  1. Create a Pipeline to move your data from a source system to the Databricks lakehouse.
  2. Create Models and Workflows to transform your data in the Databricks lakehouse for analysis and reporting.

Additional resources

Explore the following Hevo Data resources: