Automatic connector upgrade

In addition to providing tools and best practices to help users manually upgrade their connectors, the service now also provides a more streamlined upgrade process where applicable. This is designed to help users adopt the most reliable and supported connector versions with minimal disruption.

The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it's strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.

In cases where certain scenarios running on the latest GA connector version are fully backward compatible with the previous version, the service will automatically upgrade existing workloads (such as Copy, Lookup, and Script activities) to a compatibility mode that preserves the behavior of the earlier version.

These auto-upgraded workloads aren't affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.

You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgraded information is recorded. The examples below show the upgraded information in various activity outputs.

Example:

Copy activity output

"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
} 

"sink": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}

Lookup activity output

"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}

Script activity output

"source": {
    "type": "AmazonS3",
    "autoUpgrade": "true"
}

Note

While compatibility mode offers flexibility, we strongly encourage users to upgrade to the latest GA version as soon as possible to benefit from ongoing improvements, optimizations, and full support.

Supported automatic upgrade criteria

The connectors planned for automatic upgrade, and the upgrade criteria for each, are listed below.
Amazon RDS for Oracle

Scenarios that don't rely on the following capabilities in Amazon RDS for Oracle (version 1.0):

• Use procedureRetResults, truststore, or truststorepassword as connection properties.
• Set the connection properties batchFailureReturnsError to 0 and enableBulkLoad to 0.

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
Amazon Redshift

Scenarios that don't rely on the following capabilities in Amazon Redshift (version 1.0):

• Linked service that uses the Azure integration runtime.
• Use UNLOAD.

Automatic upgrade applies only when the driver is installed on the machine that hosts the self-hosted integration runtime (version 5.56 or above).

For more information, see Install Amazon Redshift ODBC driver for version 2.0.
Google BigQuery

Scenarios that don't rely on the following capabilities in Google BigQuery V1:

• Use the trustedCertsPath, additionalProjects, or requestgoogledrivescope connection properties.
• Set the useSystemTrustStore connection property to false.
• Use STRUCT and ARRAY data types.

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
Greenplum

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.56 or above.
Hive

Scenarios that don't rely on the following capabilities in Hive (version 1.0):

• Authentication type:
  • Username
• Server type:
  • HiveServer1
• Service discovery mode: True
• Use native query: True

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
Impala

Scenarios that don't rely on the following capability in Impala (version 1.0):

• Authentication type:
  • SASL Username

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
MariaDB

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
MySQL

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
Netezza

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
Oracle

Scenarios that don't rely on the following capabilities in Oracle (version 1.0):

• Use procedureRetResults, truststore, or truststorepassword as connection properties.
• Set the connection properties batchFailureReturnsError to 0 and enableBulkLoad to 0.
• Use PL/SQL commands in the Script activity.
• Use script parameters in the Script activity.

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
PostgreSQL

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
Presto

Scenarios that don't rely on the following capabilities in Presto (version 1.0):

• Use MAP, ARRAY, or ROW data types.
• Use the trustedCertPath or allowSelfSignedServerCert connection properties (support coming soon).

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.57 or above.
Salesforce

Scenarios that don't rely on the following capabilities in Salesforce V1:

• Use the following SOQL query features when your pipeline runs on a self-hosted integration runtime with a version below 5.59:
  • TYPEOF clauses
  • Compound address/geolocation fields
• Use the following SQL-92 query features when your pipeline runs on a self-hosted integration runtime:
  • Timestamp ts keyword
  • Top keyword
  • Comments with -- or /*
  • Group By and Having
• Report query {call "<report name>"}
Salesforce Service Cloud

Scenarios that don't rely on the following capabilities in Salesforce Service Cloud V1:

• Use the following SOQL query features when your pipeline runs on a self-hosted integration runtime with a version below 5.59:
  • TYPEOF clauses
  • Compound address/geolocation fields
• Use the following SQL-92 query features when your pipeline runs on a self-hosted integration runtime:
  • Timestamp ts keyword
  • Top keyword
  • Comments with -- or /*
  • Group By and Having
• Report query {call "<report name>"}
ServiceNow

Scenarios that don't use a custom SQL query in the dataset in ServiceNow V1.

Ensure that you have a role with at least read access to the sys_db_object, sys_db_view, and sys_dictionary tables in ServiceNow. To access views in ServiceNow, you need a role with at least read access to the sys_db_view_table and sys_db_view_table_field tables.

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
Snowflake

Scenarios that don't rely on the following capabilities in Snowflake V1:

• Use any of the following connection properties: connection_timeout, disableocspcheck, enablestaging, on_error, query_tag, quoted_identifiers_ignore_case, skip_header, stage, table, timezone, token, validate_utf8, no_proxy, nonproxyhosts, noproxy.
• Use a multi-statement query in the Script activity or Lookup activity.

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
Spark

Scenarios that don't rely on the following capabilities in Spark (version 1.0):

• Authentication type:
  • Username
• Thrift transport protocol:
  • SASL
  • Binary
• Server type:
  • SharkServer
  • SharkServer2

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.59 or above.
Teradata

Scenarios that don't rely on the following capability in Teradata (version 1.0):

• Set CharacterSet to any of the following values:
  • BIG5 (TCHBIG5_1R0)
  • EUC (Unix compatible, KANJIEC_0U)
  • GB (SCHGB2312_1T0)
  • IBM Mainframe (KANJIEBCDIC5035_0I)
  • NetworkKorean (HANGULKSC5601_2R4)
  • Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)

If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.58 or above.
Vertica

Scenarios that don't rely on the following capability in Vertica (version 1.0):

• Linked service that uses the Azure integration runtime.

Automatic upgrade applies only when the driver is installed on the machine that hosts the self-hosted integration runtime (version 5.56 or above).

For more information, see Install Vertica ODBC driver for version 2.0.
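To estimate whether an existing linked service would qualify, you can screen its configured properties against the blocking lists above. The Python sketch below is illustrative only: the dictionary and function names are hypothetical, the blocklists are transcribed for just two connectors from the criteria above, and exact property-name casing is an assumption.

```python
# Illustrative sketch: check a linked service's typeProperties against the
# connection properties that block automatic upgrade for two connectors.
# The mapping and helper are hypothetical, not part of any official SDK.

BLOCKING_PROPERTIES = {
    "Oracle": {"procedureRetResults", "truststore", "truststorepassword"},
    "GoogleBigQuery": {"trustedCertsPath", "additionalProjects",
                       "requestgoogledrivescope"},
}

def blocking_properties(connector_type, type_properties):
    """Return the configured properties that would block automatic upgrade."""
    blocked = BLOCKING_PROPERTIES.get(connector_type, set())
    return sorted(blocked & set(type_properties))

# A linked service that sets truststore would not be auto-upgraded:
props = {"host": "db.example.com", "truststore": "/etc/certs/ora.jks"}
print(blocking_properties("Oracle", props))  # ['truststore']
```

An empty result means no listed blocking property is configured; it does not by itself guarantee eligibility, since runtime-version and activity-type criteria from the table still apply.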