APPLIES TO:
Azure Data Factory
Azure Synapse Analytics
This article provides guidance for upgrading connectors in Azure Data Factory.
How to receive notifications in the Azure Service Health portal
Regular notifications are sent to help you upgrade the related connectors and to inform you of the key dates for end of support (EOS) and removal. You can find these notifications in the Service Health portal, under the Health advisories tab.
Here are the steps to find the notifications:
- Navigate to the Service Health portal, or select the Service Health icon on your Azure portal dashboard.
- Go to the Health advisories tab to see the notifications related to your connectors in the list. You can also go to the Health history tab to check historical notifications.
To learn more about the Service Health portal, see this article.
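If you prefer to check for these advisories programmatically instead of browsing the portal, you can query them through Azure Resource Graph. The following is a minimal sketch, assuming the Az.ResourceGraph module is installed and you have read access to the subscriptions in scope; the property names follow the ServiceHealthResources schema, and the query doesn't filter specifically for connector-related advisories, so you still need to review the returned titles.

```powershell
# Minimal sketch: list current health advisories visible to the signed-in account.
# Assumes the Az.Accounts and Az.ResourceGraph modules are installed.
Connect-AzAccount | Out-Null

$query = @"
ServiceHealthResources
| where type =~ 'Microsoft.ResourceHealth/events'
| extend eventType = tostring(properties.EventType),
         title     = tostring(properties.Title),
         status    = tostring(properties.Status)
| where eventType == 'HealthAdvisory'
| project trackingId = name, title, status
"@

# Returns health advisories, which include connector upgrade and EOS notices.
Search-AzGraph -Query $query
```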
How to find your impacted objects from the data factory portal
Here are the steps to find the objects that still rely on deprecated connectors or on connectors with an announced end-of-support date. We recommend upgrading those objects to the new connector version before the end-of-support date.
- Open your Azure Data Factory.
- Go to the Manage > Linked services page.
- You should see the linked services that are still on V1, each with an alert next to it.
- Select the number under the 'Related' column to show the related objects that use this linked service.
- To learn more about the upgrade guidance and the comparison between V1 and V2, see the connector upgrade section within each connector's page.
How to find your impacted objects programmatically
You can run a PowerShell script to programmatically extract a list of Azure Data Factory or Synapse linked services that use connector versions that are either out of support or nearing end of support. The script can be customized to query each data factory under a specified tenant or subscription, enumerate the linked services, and inspect configuration properties such as the connection type and connector version. It can then cross-reference these details against known end-of-support timelines, flagging any linked services that use unsupported or soon-to-be-unsupported connector versions. This automated approach lets you proactively identify and remediate outdated components to ensure continued support, security compliance, and service availability.
You can find examples of the script and customize it as needed:
- Script to list all linked services within the specified subscription ID
- Script to list all linked services within the specified tenant ID
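The linked scripts are the reference examples. As a rough illustration of the approach, the sketch below enumerates the linked services in a single subscription and flags those still on the legacy connector version. It's a minimal sketch, assuming the Az.Accounts and Az.DataFactory modules are installed; the $targetConnectors list and the treatment of a missing version property as the legacy version are illustrative assumptions, not an official end-of-support list.

```powershell
# Minimal sketch: flag linked services in one subscription that still use a
# legacy (V1) connector version. Assumes Az.Accounts and Az.DataFactory modules.
Connect-AzAccount | Out-Null

$subscriptionId   = "<subscription-id>"                          # subscription to scan
$targetConnectors = @("Oracle", "Snowflake", "GoogleBigQuery")   # illustrative connector types

Set-AzContext -SubscriptionId $subscriptionId | Out-Null

foreach ($factory in Get-AzDataFactoryV2) {
    # Call the Data Factory REST API directly so the raw JSON, including the
    # optional "version" property on each linked service, is available.
    # (Pagination via nextLink is omitted for brevity.)
    $path = "/subscriptions/$subscriptionId/resourceGroups/$($factory.ResourceGroupName)" +
            "/providers/Microsoft.DataFactory/factories/$($factory.DataFactoryName)" +
            "/linkedservices?api-version=2018-06-01"
    $linkedServices = ((Invoke-AzRestMethod -Method GET -Path $path).Content |
        ConvertFrom-Json).value

    foreach ($ls in $linkedServices) {
        $type    = $ls.properties.type
        $version = $ls.properties.version   # absent typically means the legacy version

        if ($targetConnectors -contains $type -and (-not $version -or $version -eq "1.0")) {
            [pscustomobject]@{
                DataFactory   = $factory.DataFactoryName
                ResourceGroup = $factory.ResourceGroupName
                LinkedService = $ls.name
                ConnectorType = $type
                Version       = if ($version) { $version } else { "1.0 (default)" }
            }
        }
    }
}
```

To scan a whole tenant rather than one subscription, wrap the loop above in a `foreach` over `Get-AzSubscription`, calling `Set-AzContext` for each, and compare the results against the end-of-support dates announced for each connector.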
Automatic connector upgrade
In addition to providing tools and best practices to help users manually upgrade their connectors, the service also provides a more streamlined upgrade process where applicable. This is designed to help users adopt the most reliable and supported connector versions with minimal disruption.
The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it's strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.
In cases where certain scenarios running on the latest GA connector version are fully backward compatible with the previous version, the service will automatically upgrade existing workloads (such as Copy, Lookup, and Script activities) to a compatibility mode that preserves the behavior of the earlier version.
These auto-upgraded workloads aren't affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.
You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgraded information is recorded. The examples below show the upgraded information in various activity outputs.
Example:

Copy activity output

```json
"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
},
"sink": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}
```

Lookup activity output

```json
"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}
```

Script activity output

```json
"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}
```
Note
While compatibility mode offers flexibility, we strongly encourage users to upgrade to the latest GA version as soon as possible to benefit from ongoing improvements, optimizations, and full support.
The following table lists the connectors that are planned for automatic upgrade and the scenarios to which it applies.
| Connector | Scenario |
| --- | --- |
| Amazon RDS for Oracle | Scenarios that don't rely on the following capabilities in Oracle (version 1.0): • Use procedureRetResults, batchFailureReturnsError, truststore, and truststorepassword as connection properties. • Use a query that ends with a semicolon. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| Amazon Redshift | Scenarios that don't rely on the following capabilities in Amazon Redshift (version 1.0): • Linked service that uses the Azure integration runtime. • Use UNLOAD. Automatic upgrade is only applicable when the driver is installed on the machine that hosts the self-hosted integration runtime. For more information, go to Install Amazon Redshift ODBC driver for version 2.0. |
| Google BigQuery | Scenarios that don't rely on the following capabilities in Google BigQuery V1: • Use the trustedCertsPath, additionalProjects, or requestgoogledrivescope connection properties. • Set the useSystemTrustStore connection property to false. • Use STRUCT and ARRAY data types. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above. |
| Greenplum | If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| Hive | Scenarios that don't rely on the following capabilities in Hive (version 1.0): • Authentication type: Username. • Server type: HiveServer1. • Service discovery mode: True. • Use native query: True. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above. |
| Impala | Scenarios that don't rely on the following capabilities in Impala (version 1.0): • Authentication type: SASL Username. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above. |
| MariaDB | If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| MySQL | If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| Oracle | Scenarios that don't rely on the following capabilities in Oracle (version 1.0): • Use procedureRetResults, batchFailureReturnsError, truststore, and truststorepassword as connection properties. • Use the Oracle connector as a sink. • Use a query that ends with a semicolon. • Use PL/SQL commands in the Script activity. • Use script parameters in the Script activity. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| PostgreSQL | If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| Presto | Scenarios that don't rely on the following capabilities in Presto (version 1.0): • Use MAP, ARRAY, or ROW data types. • Use the trustedCertPath or allowSelfSignedServerCert connection properties (will be supported soon). If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.57 or above. |
| Salesforce | Scenarios that don't rely on the following capabilities in Salesforce V1: • SOQL queries that use TYPEOF clauses or compound address/geolocation fields. • All SQL-92 queries. • Report query {call "<report name>"}. • Use of a self-hosted integration runtime (to be supported). |
| Salesforce Service Cloud | Scenarios that don't rely on the following capabilities in Salesforce Service Cloud V1: • SOQL queries that use TYPEOF clauses or compound address/geolocation fields. • All SQL-92 queries. • Report query {call "<report name>"}. • Use of a self-hosted integration runtime (to be supported). |
| Snowflake | Scenarios that don't rely on the following capabilities in Snowflake V1: • Use any of the following properties: connection_timeout, disableocspcheck, enablestaging, on_error, query_tag, quoted_identifiers_ignore_case, skip_header, stage, table, timezone, token, validate_utf8, no_proxy, nonproxyhosts, noproxy. • Use a multi-statement query in the Script activity or Lookup activity. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.56 or above. |
| Spark | Scenarios that don't rely on the following capabilities in Spark (version 1.0): • Authentication type: Username. • Thrift transport protocol: SASL, Binary. • Server type: SharkServer, SharkServer2. If your pipeline runs on a self-hosted integration runtime, it requires SHIR version 5.55 or above. |
| Teradata | Scenarios that don't rely on the following capabilities in Teradata (version 1.0): • Set one of the following values for CharacterSet: BIG5 (TCHBIG5_1R0), EUC (Unix compatible, KANJIEC_0U), GB (SCHGB2312_1T0), IBM Mainframe (KANJIEBCDIC5035_0I), NetworkKorean (HANGULKSC5601_2R4), Shift-JIS (Windows, DOS compatible, KANJISJIS_0S). |
| Vertica | Scenarios that don't rely on the following capabilities in Vertica (version 1.0): • Linked service that uses the Azure integration runtime. Automatic upgrade is only applicable when the driver is installed on the machine that hosts the self-hosted integration runtime (version 5.55 or above). For more information, go to Install Vertica ODBC driver for version 2.0. |