These features and Azure Databricks platform improvements were released in September 2025.
Note
The release dates and content listed below correspond only to actual deployments in the Azure Public Cloud, in most cases.
They provide the evolution history of the Azure Databricks service on the Azure Public Cloud for your reference, and may not apply to Azure operated by 21Vianet.
Note
Releases are staged. Your Azure Databricks account might not be updated until a week or more after the initial release date.
Mount Delta shares to an existing shared catalog
September 12, 2025
Delta Sharing recipients can now mount shares received from their Delta Sharing provider to an existing shared catalog. Previously, recipients needed to create a new catalog for each new share. See Create a catalog from a share.
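For context, the pattern previously required creates a new catalog per share with the documented CREATE CATALOG ... USING SHARE syntax. The sketch below is illustrative, with placeholder provider, share, and catalog names; the new option to mount a share into an existing shared catalog is covered in Create a catalog from a share.

```python
# Minimal sketch of the earlier per-share pattern (placeholder names):
# each received share becomes its own catalog.
spark.sql("""
    CREATE CATALOG IF NOT EXISTS sales_share_catalog
    USING SHARE acme_provider.sales_share
""")
```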
Python custom data sources can be used with Lakeflow Declarative Pipelines
September 10, 2025
You can use Python custom data sources and sinks in your pipeline definitions in Lakeflow Declarative Pipelines.
For information about Python custom data sources, see the following:
- Load data from a Python custom data source.
- Create a Lakeflow Declarative Pipelines sink.
- PySpark custom data sources.
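As a minimal sketch, assuming a pipeline source file written in Python: the example below registers a hypothetical PySpark custom data source (FakeEventsDataSource and the fake_events format name are illustrative) and reads from it in a Lakeflow Declarative Pipelines table definition.

```python
import dlt
from pyspark.sql.datasource import DataSource, DataSourceReader


class FakeEventsDataSource(DataSource):
    """Illustrative batch data source that emits a few synthetic rows."""

    @classmethod
    def name(cls):
        # The format name used in spark.read.format(...)
        return "fake_events"

    def schema(self):
        return "event_id int, event_name string"

    def reader(self, schema):
        return FakeEventsReader(schema)


class FakeEventsReader(DataSourceReader):
    def __init__(self, schema):
        self.schema = schema

    def read(self, partition):
        # Yield tuples matching the declared schema.
        for i in range(3):
            yield (i, f"event_{i}")


# Register the custom source with the pipeline's Spark session.
spark.dataSource.register(FakeEventsDataSource)


@dlt.table(comment="Rows loaded from a Python custom data source")
def raw_events():
    return spark.read.format("fake_events").load()
```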
Automatic identity management is generally available
September 10, 2025
Automatic identity management enables you to sync users, service principals, and groups from Microsoft Entra ID into Azure Databricks without configuring an application in Microsoft Entra ID. When enabled, you can directly search in identity federated workspaces for Microsoft Entra ID users, service principals, and groups and add them to your workspace. Databricks uses Microsoft Entra ID as the source of record, so any changes to group memberships are respected in Azure Databricks. Automatic identity management also supports nested groups.
See Sync users and groups automatically from Microsoft Entra ID.
Lakeflow Declarative Pipelines now supports stream progress metrics in Public Preview
September 10, 2025
Lakeflow Declarative Pipelines now supports querying the event log for metrics about the progress of a stream. See Monitor pipeline streaming metrics.
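As a hedged sketch, the pipeline event log can be queried with the event_log() table-valued function; the event type and metric field names used for the new stream progress metrics are assumptions here, so confirm them in Monitor pipeline streaming metrics.

```python
# Query the event log for a pipeline and filter to flow progress events.
# "<pipeline-id>" is a placeholder; the details payload carries the
# streaming metrics (exact field names may differ).
progress = spark.sql("""
    SELECT timestamp, event_type, details
    FROM event_log("<pipeline-id>")
    WHERE event_type = 'flow_progress'
    ORDER BY timestamp DESC
""")
display(progress)
```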
Databricks Runtime maintenance update
September 8, 2025
New maintenance updates are available for supported Databricks Runtime versions. These updates include bug fixes, security patches, and performance improvements. For details, see Databricks Runtime maintenance updates.
Databricks Apps support for Genie resources
September 8, 2025
Databricks Apps now supports adding an AI/BI Genie space as an app resource to enable natural language querying over curated datasets.
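A minimal sketch, under assumptions: the environment variable name below is whatever you map in the app's resource configuration, and the conversation call assumes the Genie API available in recent versions of the Databricks SDK for Python; verify both against the Databricks Apps and SDK documentation.

```python
import os

from databricks.sdk import WorkspaceClient

# Hypothetical: the app's Genie space resource mapped to an environment variable.
space_id = os.environ["GENIE_SPACE_ID"]

w = WorkspaceClient()

# Assumed SDK call: ask a natural language question against the Genie space.
message = w.genie.start_conversation_and_wait(
    space_id, "What were the top products by revenue last month?"
)
print(message.status)
```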
Databricks Online Feature Stores (Public Preview)
September 5, 2025
Databricks Online Feature Stores, powered by Lakebase, provide highly scalable, low-latency access to feature data while maintaining consistency with your offline feature tables. Native integrations with Unity Catalog, MLflow, and Mosaic AI Model Serving help you productionize your model endpoints, agents, and rule engines so they can automatically and securely access features from Online Feature Stores while maintaining high performance.
MLflow metadata is now available in system tables (Public Preview)
September 5, 2025
MLflow metadata is now available in system tables. View metadata managed within the MLflow tracking service across the entire workspace in one central location, and take advantage of the lakehouse tooling Databricks offers, such as custom AI/BI dashboards, SQL alerts, and large-scale data analytics queries.
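As a hedged sketch only: the schema and table name below are assumptions for illustration, not confirmed system table names; check the system tables reference for the tables actually exposed in your workspace.

```python
# Hypothetical table name (system.mlflow.runs) used purely for illustration.
recent_runs = spark.sql("""
    SELECT experiment_id, run_id, status, start_time
    FROM system.mlflow.runs
    ORDER BY start_time DESC
    LIMIT 100
""")
display(recent_runs)
```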
Databricks Assistant Agent Mode: Data Science Agent is in Beta
September 3, 2025
Agent Mode for Databricks Assistant is now in Beta. In Agent Mode, the Assistant can orchestrate multi-step workflows from a single prompt.
The Data Science Agent is custom-built for data science workflows and can build an entire notebook for tasks like EDA, forecasting, and machine learning from scratch. Using your prompt, it can plan a solution, retrieve relevant assets, run code, use cell outputs to improve results, fix errors automatically, and more.
Tables backed by default storage can be Delta shared to any recipient (Beta)
September 2, 2025
Delta Sharing providers can now share tables backed by default storage with any recipient, including both open and Azure Databricks recipients—even if the recipient is using classic compute. Tables with partitioning enabled are an exception.
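For open recipients, reading a table backed by default storage works the same way as for any other Delta share; below is a minimal sketch using the delta-sharing Python library, with placeholder profile, share, schema, and table names.

```python
import delta_sharing

# Path to the credential (profile) file provided by the sharing provider.
profile = "/path/to/config.share"

# Placeholder share, schema, and table names.
table_url = profile + "#my_share.my_schema.my_table"

# Load the shared table into a pandas DataFrame.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```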
Migration of Lakeflow Declarative Pipelines from legacy publishing mode is rolled back to Public Preview
September 2, 2025
Lakeflow Declarative Pipelines includes a legacy publishing mode that limits publishing to a single catalog and schema; the default publishing mode enables publishing to multiple catalogs and schemas. A migration feature, recently released as generally available, helps you move pipelines from the legacy publishing mode to the default publishing mode. Due to an issue found after release, this migration feature has been rolled back to Public Preview status and functionality.
See Enable the default publishing mode in a pipeline.
AI agents: Authorize on-behalf-of-user Public Preview
September 2, 2025
AI agents deployed to Model Serving endpoints can use on-behalf-of-user authorization. This lets an agent act as the Databricks user who runs the query for added security and fine-grained access to sensitive data.
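A minimal sketch of agent code using on-behalf-of-user authorization; the ModelServingUserCredentials credentials strategy follows the Databricks SDK pattern for this feature, but treat the exact import path and names as assumptions and confirm them in the on-behalf-of-user authorization docs.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.credentials_provider import ModelServingUserCredentials

# Build a client that acts as the end user invoking the serving endpoint,
# so Unity Catalog permissions are enforced per user rather than per agent.
user_client = WorkspaceClient(credentials_strategy=ModelServingUserCredentials())

# Example: list only the catalogs the querying user is allowed to see.
for catalog in user_client.catalogs.list():
    print(catalog.name)
```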
SQL Server connector supports SCD type 2
September 1, 2025
The Microsoft SQL Server connector in Lakeflow Connect now supports SCD type 2. This setting, known as history tracking or slowly changing dimensions (SCD), determines how to handle changes in your data over time. With history tracking off (SCD type 1), outdated records are overwritten as they're updated and deleted in the source. With history tracking on (SCD type 2), the connector maintains a history of those changes.
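As a hedged sketch of where this setting lives: the snippet below shows the shape of an ingestion pipeline objects spec with history tracking enabled for one SQL Server table. The field names follow the managed connector spec as an assumption, so verify them in the Lakeflow Connect documentation.

```python
# Illustrative ingestion spec fragment (placeholder catalog/schema/table names).
ingestion_objects = [
    {
        "table": {
            "source_catalog": "sales_db",
            "source_schema": "dbo",
            "source_table": "customers",
            "destination_catalog": "main",
            "destination_schema": "ingest",
            "table_configuration": {
                # SCD_TYPE_2 keeps a history of changes instead of overwriting.
                "scd_type": "SCD_TYPE_2",
            },
        }
    }
]
```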