Compute

Azure Databricks compute refers to the computing resources available in your Azure Databricks workspace for running data engineering, data science, and analytics workloads. Choose serverless compute for on-demand scaling, classic compute for customizable resources, or SQL warehouses for optimized SQL analytics.

You can view and manage compute resources in the Compute section of your workspace.

Classic compute

Provisioned compute resources that you create, configure, and manage for your workloads. A minimal programmatic sketch follows the table below.

Classic compute overview: Overview of who can access and create classic compute resources.
Configure compute: Create and configure compute for interactive data analysis in notebooks or automated workflows with Lakeflow Jobs.
Standard compute: Multi-user compute with shared resources for cost-effective collaboration. Lakeguard provides secure user isolation.
Dedicated compute: Compute resource assigned to a single user or group.
Instance pools: Pre-configured instances that reduce compute startup time and provide cost savings for frequent workloads.
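
The same configuration options are available programmatically. The following is a minimal sketch using the Databricks SDK for Python (the databricks-sdk package), assuming workspace authentication is already configured (for example, through the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables); the cluster name, worker count, and auto-termination value are arbitrary examples, not recommendations.

```python
# Minimal sketch: create a small classic (all-purpose) cluster with the
# Databricks SDK for Python. Assumes `pip install databricks-sdk` and that
# workspace authentication is configured in the environment.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Pick the latest Databricks Runtime version and a small local-disk node type.
spark_version = w.clusters.select_spark_version(latest=True)
node_type = w.clusters.select_node_type(local_disk=True)

cluster = w.clusters.create(
    cluster_name="example-classic-compute",  # arbitrary example name
    spark_version=spark_version,
    node_type_id=node_type,
    num_workers=1,                           # small, cost-conscious example
    autotermination_minutes=30,              # shut down when idle
).result()                                   # block until the cluster is running

print(f"Created cluster {cluster.cluster_id} in state {cluster.state}")
```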

SQL warehouses

Compute resources optimized for SQL analytics workloads. SQL warehouses can be configured as serverless or classic; a provisioning sketch follows the table below.

SQL warehouses: Optimized compute for SQL queries, analytics, and business intelligence workloads, with serverless and classic options.
SQL warehouse types: The differences between serverless and classic SQL warehouses, and how to choose the right type for your workloads.
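
SQL warehouses can also be provisioned programmatically. The sketch below again uses the Databricks SDK for Python; the warehouse name, size, and auto-stop values are arbitrary examples, and the enable_serverless_compute flag toggles between the serverless and classic types (serverless availability depends on your region and workspace settings).

```python
# Minimal sketch: create a SQL warehouse with the Databricks SDK for Python.
# Assumes `pip install databricks-sdk` and configured workspace authentication.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

warehouse = w.warehouses.create(
    name="example-sql-warehouse",    # arbitrary example name
    cluster_size="2X-Small",         # smallest warehouse size
    max_num_clusters=1,              # no scale-out beyond one cluster
    auto_stop_mins=10,               # stop when idle to save cost
    enable_serverless_compute=True,  # set False for a classic SQL warehouse
).result()                           # block until the warehouse is running

print(f"Created SQL warehouse {warehouse.id} ({warehouse.name})")
```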

Additional topics

What is Photon? High-performance query engine that accelerates SQL workloads for faster data processing.
What is Lakeguard? Security framework that provides data governance and access control for compute resources.
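
To see how these features surface on existing compute, the sketch below lists clusters with the Databricks SDK for Python and prints whether Photon is enabled (the runtime_engine field) and which access mode each cluster uses (the data_security_mode field, which corresponds to the standard and dedicated compute described above); this is illustrative only, and the fields may be unset on older clusters.

```python
# Minimal sketch: inspect existing clusters for Photon and access mode using
# the Databricks SDK for Python. Assumes configured workspace authentication.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

for c in w.clusters.list():
    photon = c.runtime_engine           # e.g. PHOTON or STANDARD
    access_mode = c.data_security_mode  # e.g. USER_ISOLATION or SINGLE_USER
    print(f"{c.cluster_name}: state={c.state}, "
          f"runtime_engine={photon}, access_mode={access_mode}")
```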

For information about working with compute using the command line or APIs, see What is the Databricks CLI? and the Databricks REST API reference.
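
The REST API can also be called directly. As a rough sketch, the following uses the requests library to list clusters via the Clusters API; the workspace URL and personal access token are placeholders you supply, shown here read from environment variables.

```python
# Minimal sketch: call the Clusters REST API directly with `requests`.
# DATABRICKS_HOST and DATABRICKS_TOKEN are placeholders you must provide.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. your workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

resp = requests.get(
    f"{host}/api/2.1/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster["cluster_name"], cluster["state"])
```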