Copy data from Teradata Vantage by using Azure Data Factory

APPLIES TO: Azure Data Factory, Azure Synapse Analytics

This article outlines how to use the copy activity in Azure Data Factory to copy data from Teradata Vantage. It builds on the copy activity overview.

Supported capabilities

This Teradata connector is supported for the following activities:

  • Copy activity with supported source/sink matrix
  • Lookup activity

You can copy data from Teradata Vantage to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table.

Specifically, this Teradata connector supports:

  • Teradata versions 14.10, 15.0, 15.10, 16.0, 16.10, and 16.20.
  • Copying data by using Basic, Windows, or LDAP authentication.
  • Parallel copying from a Teradata source. See the Parallel copy from Teradata section for details.

Prerequisites

If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a self-hosted integration runtime to connect to it.

Alternatively, if your data store is a managed cloud data service, you can use the Azure integration runtime. If access is restricted to IPs that are approved in the firewall rules, you can add the Azure Integration Runtime IPs to the allow list.

For more information about the network security mechanisms and options supported by Data Factory, see Data access strategies.

If you use the self-hosted integration runtime, note that it provides a built-in Teradata driver starting from version 3.18, so you don't need to manually install any driver. The driver requires Visual C++ Redistributable 2012 Update 4 on the self-hosted integration runtime machine. If you don't have it installed yet, download it from here.

Getting started

To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:

  • The Copy Data tool
  • The Azure portal
  • The .NET SDK
  • The Python SDK
  • Azure PowerShell
  • The REST API
  • The Azure Resource Manager template

The following sections provide details about properties that are used to define Data Factory entities specific to the Teradata connector.

Linked service properties

The Teradata linked service supports the following properties:

| Property | Description | Required |
| --- | --- | --- |
| type | The type property must be set to Teradata. | Yes |
| connectionString | Specifies the information needed to connect to the Teradata instance. Refer to the following samples.<br/>You can also put a password in Azure Key Vault and pull the password configuration out of the connection string. For more details, refer to Store credentials in Azure Key Vault. | Yes |
| username | Specify a user name to connect to Teradata. Applies when you are using Windows authentication. | No |
| password | Specify a password for the user account you specified for the user name. You can also choose to reference a secret stored in Azure Key Vault.<br/>Applies when you are using Windows authentication, or when referencing a password in Key Vault for basic authentication. | No |
| connectVia | The Integration Runtime to be used to connect to the data store. Learn more from the Prerequisites section. If not specified, the default Azure Integration Runtime is used. | No |
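As noted for connectionString above, the password can be kept in Azure Key Vault rather than embedded in the connection string. A minimal sketch of assembling such a linked service payload follows; the password is expressed as an AzureKeyVaultSecret reference per Data Factory's linked service schema, and names such as "AzureKeyVaultLS" are hypothetical placeholders.

```python
import json


def teradata_linked_service_with_key_vault(server, user, akv_linked_service, secret_name):
    """Sketch: Teradata linked service whose password is resolved from
    Azure Key Vault at runtime. Pwd is deliberately omitted from the
    connection string; the names used here are illustrative only."""
    return {
        "name": "TeradataLinkedService",
        "properties": {
            "type": "Teradata",
            "typeProperties": {
                "connectionString": f"DBCName={server};Uid={user}",
                "password": {
                    "type": "AzureKeyVaultSecret",
                    "store": {
                        "referenceName": akv_linked_service,
                        "type": "LinkedServiceReference",
                    },
                    "secretName": secret_name,
                },
            },
        },
    }


payload = teradata_linked_service_with_key_vault(
    "myserver", "myuser", "AzureKeyVaultLS", "teradata-password"
)
print(json.dumps(payload, indent=2))
```

The AzureKeyVaultLS reference must point at an existing Azure Key Vault linked service in the same factory.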

You can set more connection properties in the connection string, depending on your case:

| Property | Description | Default value |
| --- | --- | --- |
| TdmstPortNumber | The number of the port used to access the Teradata database. Do not change this value unless instructed to do so by Technical Support. | 1025 |
| UseDataEncryption | Specifies whether to encrypt all communication with the Teradata database. Allowed values are 0 or 1.<br/><br/>- 0 (disabled, default): Encrypts authentication information only.<br/>- 1 (enabled): Encrypts all data that is passed between the driver and the database. | 0 |
| CharacterSet | The character set to use for the session. For example, CharacterSet=UTF16.<br/><br/>This value can be a user-defined character set, or one of the following pre-defined character sets:<br/>- ASCII<br/>- UTF8<br/>- UTF16<br/>- LATIN1252_0A<br/>- LATIN9_0A<br/>- LATIN1_0A<br/>- Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)<br/>- EUC (Unix compatible, KANJIEC_0U)<br/>- IBM Mainframe (KANJIEBCDIC5035_0I)<br/>- KANJI932_1S0<br/>- BIG5 (TCHBIG5_1R0)<br/>- GB (SCHGB2312_1T0)<br/>- SCHINESE936_6R0<br/>- TCHINESE950_8R0<br/>- NetworkKorean (HANGULKSC5601_2R4)<br/>- HANGUL949_7R0<br/>- ARABIC1256_6A0<br/>- CYRILLIC1251_2A0<br/>- HEBREW1255_5A0<br/>- LATIN1250_1A0<br/>- LATIN1254_7A0<br/>- LATIN1258_8A0<br/>- THAI874_4A0 | ASCII |
| MaxRespSize | The maximum size of the response buffer for SQL requests, in kilobytes (KB). For example, MaxRespSize=10485760.<br/><br/>For Teradata Database version 16.00 or later, the maximum value is 7361536. For connections that use earlier versions, the maximum value is 1048576. | 65536 |
| MechanismName | To use the LDAP protocol to authenticate the connection, specify MechanismName=LDAP. | N/A |
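These extra properties are appended to the same semicolon-delimited connection string as DBCName, Uid, and Pwd. A small sketch of composing such a string (the helper is hypothetical, not part of any SDK; server and credential values are placeholders):

```python
def build_connection_string(server, user, password, **extra):
    """Compose a Teradata connection string from its parts.

    Extra keyword arguments become additional properties such as
    UseDataEncryption or CharacterSet. Illustrative helper only.
    """
    parts = [f"DBCName={server}", f"Uid={user}", f"Pwd={password}"]
    parts += [f"{key}={value}" for key, value in extra.items()]
    return ";".join(parts)


cs = build_connection_string(
    "myserver", "myuser", "mypassword",
    UseDataEncryption=1, CharacterSet="UTF16",
)
print(cs)
# → DBCName=myserver;Uid=myuser;Pwd=mypassword;UseDataEncryption=1;CharacterSet=UTF16
```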

Example using basic authentication

{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "typeProperties": {
            "connectionString": "DBCName=<server>;Uid=<username>;Pwd=<password>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Example using Windows authentication

{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "typeProperties": {
            "connectionString": "DBCName=<server>",
            "username": "<username>",
            "password": "<password>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Example using LDAP authentication

{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "typeProperties": {
            "connectionString": "DBCName=<server>;MechanismName=LDAP;Uid=<username>;Pwd=<password>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Note

The following payload is still supported. Going forward, however, you should use the new one.

Previous payload:

{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "typeProperties": {
            "server": "<server>",
            "authenticationType": "<Basic/Windows>",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Dataset properties

This section provides a list of properties supported by the Teradata dataset. For a full list of sections and properties available for defining datasets, see Datasets.

To copy data from Teradata, the following properties are supported:

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the dataset must be set to TeradataTable. | Yes |
| database | The name of the Teradata instance. | No (if "query" in activity source is specified) |
| table | The name of the table in the Teradata instance. | No (if "query" in activity source is specified) |

Example:

{
    "name": "TeradataDataset",
    "properties": {
        "type": "TeradataTable",
        "typeProperties": {},
        "schema": [],        
        "linkedServiceName": {
            "referenceName": "<Teradata linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}

Note

The RelationalTable type dataset is still supported. However, we recommend that you use the new dataset.

Previous payload:

{
    "name": "TeradataDataset",
    "properties": {
        "type": "RelationalTable",
        "linkedServiceName": {
            "referenceName": "<Teradata linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {}
    }
}

Copy activity properties

This section provides a list of properties supported by the Teradata source. For a full list of sections and properties available for defining activities, see Pipelines.

Teradata as source

Tip

To load data from Teradata efficiently by using data partitioning, learn more from the Parallel copy from Teradata section.

To copy data from Teradata, the following properties are supported in the copy activity source section:

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity source must be set to TeradataSource. | Yes |
| query | Use the custom SQL query to read data. An example is "SELECT * FROM MyTable".<br/>When you enable partitioned load, you need to hook any corresponding built-in partition parameters in your query. For examples, see the Parallel copy from Teradata section. | No (if table in dataset is specified) |
| partitionOptions | Specifies the data partitioning options used to load data from Teradata.<br/>Allowed values are: None (default), Hash, and DynamicRange.<br/>When a partition option is enabled (that is, not None), the degree of parallelism to concurrently load data from Teradata is controlled by the parallelCopies setting on the copy activity. | No |
| partitionSettings | Specify the group of settings for data partitioning.<br/>Apply when the partition option isn't None. | No |
| partitionColumnName | Specify the name of the source column that will be used by range partition or hash partition for parallel copy. If not specified, the primary index of the table is autodetected and used as the partition column.<br/>Apply when the partition option is Hash or DynamicRange. If you use a query to retrieve the source data, hook ?AdfHashPartitionCondition or ?AdfRangePartitionColumnName in the WHERE clause. See the example in the Parallel copy from Teradata section. | No |
| partitionUpperBound | The maximum value of the partition column to copy data out.<br/>Apply when the partition option is DynamicRange. If you use a query to retrieve the source data, hook ?AdfRangePartitionUpbound in the WHERE clause. For an example, see the Parallel copy from Teradata section. | No |
| partitionLowerBound | The minimum value of the partition column to copy data out.<br/>Apply when the partition option is DynamicRange. If you use a query to retrieve the source data, hook ?AdfRangePartitionLowbound in the WHERE clause. For an example, see the Parallel copy from Teradata section. | No |

Note

The RelationalSource type copy source is still supported, but it doesn't support the new built-in parallel load from Teradata (partition options). However, we recommend that you use the new TeradataSource type going forward.

Example: copy data by using a basic query without partition

"activities":[
    {
        "name": "CopyFromTeradata",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Teradata input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "TeradataSource",
                "query": "SELECT * FROM MyTable"
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]

Parallel copy from Teradata

The Data Factory Teradata connector provides built-in data partitioning to copy data from Teradata in parallel. You can find the data partitioning options on the Source tab of the copy activity.

(Screenshot of partition options)

When you enable partitioned copy, Data Factory runs parallel queries against your Teradata source to load data by partitions. The parallel degree is controlled by the parallelCopies setting on the copy activity. For example, if you set parallelCopies to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from Teradata.

We suggest that you enable parallel copy with data partitioning, especially when you load large amounts of data from Teradata. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.

| Scenario | Suggested settings |
| --- | --- |
| Full load from a large table. | Partition option: Hash.<br/><br/>During execution, Data Factory automatically detects the primary index column, applies a hash against it, and copies data by partitions. |
| Load a large amount of data by using a custom query. | Partition option: Hash.<br/>Query: SELECT * FROM <TABLENAME> WHERE ?AdfHashPartitionCondition AND <your_additional_where_clause>.<br/>Partition column: Specify the column used to apply the hash partition. If not specified, Data Factory automatically detects the PK column of the table you specified in the Teradata dataset.<br/><br/>During execution, Data Factory replaces ?AdfHashPartitionCondition with the hash partition logic and sends it to Teradata. |
| Load a large amount of data by using a custom query, having an integer column with evenly distributed values for range partitioning. | Partition option: Dynamic range partition.<br/>Query: SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>.<br/>Partition column: Specify the column used to partition data. You can partition against a column with an integer data type.<br/>Partition upper bound and partition lower bound: Specify if you want to filter against the partition column to retrieve data only between the lower and upper range.<br/><br/>During execution, Data Factory replaces ?AdfRangePartitionColumnName, ?AdfRangePartitionUpbound, and ?AdfRangePartitionLowbound with the actual column name and value ranges for each partition, and sends them to Teradata.<br/>For example, if your partition column "ID" is set with a lower bound of 1 and an upper bound of 80, with parallel copy set to 4, Data Factory retrieves data in 4 partitions. Their IDs are between [1, 20], [21, 40], [41, 60], and [61, 80], respectively. |
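The range split in the example above can be sketched as follows. Data Factory's exact internal split algorithm isn't specified here, so this assumes an even division of the [lower, upper] range across parallelCopies:

```python
def dynamic_range_partitions(lower, upper, parallel_copies):
    """Split [lower, upper] into parallel_copies contiguous sub-ranges.

    Sketch only: assumes an even split, which matches the documented
    example; Data Factory's internal algorithm may differ.
    """
    total = upper - lower + 1
    size = -(-total // parallel_copies)  # ceiling division
    ranges = []
    start = lower
    while start <= upper:
        end = min(start + size - 1, upper)
        ranges.append((start, end))
        start = end + 1
    return ranges


# The documented example: ID bounds 1..80 with parallelCopies = 4
print(dynamic_range_partitions(1, 80, 4))
# → [(1, 20), (21, 40), (41, 60), (61, 80)]
```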

Example: query with hash partition

"source": {
    "type": "TeradataSource",
    "query": "SELECT * FROM <TABLENAME> WHERE ?AdfHashPartitionCondition AND <your_additional_where_clause>",
    "partitionOption": "Hash",
    "partitionSettings": {
        "partitionColumnName": "<hash_partition_column_name>"
    }
}

Example: query with dynamic range partition

"source": {
    "type": "TeradataSource",
    "query": "SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "<dynamic_range_partition_column_name>",
        "partitionUpperBound": "<upper_value_of_partition_column>",
        "partitionLowerBound": "<lower_value_of_partition_column>"
    }
}

Data type mapping for Teradata

When you copy data from Teradata, the following mappings apply. To learn about how the copy activity maps the source schema and data type to the sink, see Schema and data type mappings.

| Teradata data type | Data Factory interim data type |
| --- | --- |
| BigInt | Int64 |
| Blob | Byte[] |
| Byte | Byte[] |
| ByteInt | Int16 |
| Char | String |
| Clob | String |
| Date | DateTime |
| Decimal | Decimal |
| Double | Double |
| Graphic | Not supported. Apply explicit cast in source query. |
| Integer | Int32 |
| Interval Day | Not supported. Apply explicit cast in source query. |
| Interval Day To Hour | Not supported. Apply explicit cast in source query. |
| Interval Day To Minute | Not supported. Apply explicit cast in source query. |
| Interval Day To Second | Not supported. Apply explicit cast in source query. |
| Interval Hour | Not supported. Apply explicit cast in source query. |
| Interval Hour To Minute | Not supported. Apply explicit cast in source query. |
| Interval Hour To Second | Not supported. Apply explicit cast in source query. |
| Interval Minute | Not supported. Apply explicit cast in source query. |
| Interval Minute To Second | Not supported. Apply explicit cast in source query. |
| Interval Month | Not supported. Apply explicit cast in source query. |
| Interval Second | Not supported. Apply explicit cast in source query. |
| Interval Year | Not supported. Apply explicit cast in source query. |
| Interval Year To Month | Not supported. Apply explicit cast in source query. |
| Number | Double |
| Period (Date) | Not supported. Apply explicit cast in source query. |
| Period (Time) | Not supported. Apply explicit cast in source query. |
| Period (Time With Time Zone) | Not supported. Apply explicit cast in source query. |
| Period (Timestamp) | Not supported. Apply explicit cast in source query. |
| Period (Timestamp With Time Zone) | Not supported. Apply explicit cast in source query. |
| SmallInt | Int16 |
| Time | TimeSpan |
| Time With Time Zone | TimeSpan |
| Timestamp | DateTime |
| Timestamp With Time Zone | DateTime |
| VarByte | Byte[] |
| VarChar | String |
| VarGraphic | Not supported. Apply explicit cast in source query. |
| Xml | Not supported. Apply explicit cast in source query. |
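For the unsupported types above, the workaround is an explicit cast in the source query so that the column arrives as a supported type (typically a string). A hedged sketch of generating such a query; the helper is hypothetical, not part of any SDK, and VARCHAR(4000) is an arbitrary width you should size to your data:

```python
def select_with_casts(table, columns, cast_to_varchar):
    """Build a SELECT that casts unsupported columns to VARCHAR.

    Illustrative helper only: columns named in cast_to_varchar are
    wrapped in CAST(... AS VARCHAR(4000)); the rest pass through.
    """
    parts = []
    for col in columns:
        if col in cast_to_varchar:
            parts.append(f"CAST({col} AS VARCHAR(4000)) AS {col}")
        else:
            parts.append(col)
    return f"SELECT {', '.join(parts)} FROM {table}"


print(select_with_casts("MyTable", ["id", "doc"], {"doc"}))
# → SELECT id, CAST(doc AS VARCHAR(4000)) AS doc FROM MyTable
```

The resulting string goes into the query property of the TeradataSource.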

Lookup activity properties

To learn details about the properties, check Lookup activity.

Next steps

For a list of data stores supported as sources and sinks by the copy activity in Data Factory, see Supported data stores.