Fault tolerance of copy activity in Azure Data Factory

Applies to: Azure Data Factory and Azure Synapse Analytics

When you copy data from a source to a destination store, the Azure Data Factory copy activity provides a certain level of fault tolerance to prevent failures in the middle of data movement from interrupting the copy. For example, suppose you are copying millions of rows from a source to a destination store, where a primary key has been created in the destination database but the source database has no primary keys defined. If duplicated rows happen to be copied from the source to the destination, you will hit a PK violation failure on the destination database. The copy activity currently offers two ways to handle such errors:

  • You can abort the copy activity once any failure is encountered.
  • You can continue to copy the rest by enabling fault tolerance to skip the incompatible data; in this example, the duplicated rows are skipped. In addition, you can log the skipped data by enabling the session log within the copy activity. See Session log in copy activity for more details.

Copying binary files

ADF supports the following fault tolerance scenarios when copying binary files. For each scenario, you can choose to abort the copy activity or continue to copy the rest:

  1. The files to be copied by ADF are deleted by other applications at the same time.
  2. Some folders or files do not allow ADF access because their ACLs require a higher permission level than the connection information configured in ADF provides.
  3. One or more files are not verified to be consistent between the source and destination store when the data consistency verification setting is enabled in ADF.

Configuration

When you copy binary files between storage stores, you can enable fault tolerance as follows:

"typeProperties": { 
    "source": { 
        "type": "BinarySource", 
        "storeSettings": { 
            "type": "AzureDataLakeStoreReadSettings", 
            "recursive": true 
            } 
    }, 
    "sink": { 
        "type": "BinarySink", 
        "storeSettings": { 
            "type": "AzureDataLakeStoreWriteSettings" 
        } 
    }, 
    "skipErrorFile": { 
        "fileMissing": true, 
        "fileForbidden": true, 
        "dataInconsistency": true 
    }, 
    "validateDataConsistency": true, 
    "logSettings": {
        "enableCopyActivityLog": true,
        "copyActivityLogSettings": {            
            "logLevel": "Warning",
            "enableReliableLogging": false
        },
        "logLocationSettings": {
            "linkedServiceName": {
               "referenceName": "ADLSGen2",
               "type": "LinkedServiceReference"
            },
            "path": "sessionlog/"
        }
    }
} 
Property | Description | Allowed values | Required
-------- | ----------- | -------------- | --------
skipErrorFile | A group of properties to specify the types of failures you want to skip during the data movement. | | No
fileMissing | One of the key-value pairs within the skipErrorFile property bag, which determines whether to skip files that are deleted by other applications while ADF is copying them. True: copy the rest by skipping the files being deleted by other applications. False: abort the copy activity once any files are deleted from the source store in the middle of data movement. Note that this property is set to true by default. | True (default), False | No
fileForbidden | One of the key-value pairs within the skipErrorFile property bag, which determines whether to skip particular files when the ACLs of those files or folders require a higher permission level than the connection configured in ADF provides. True: copy the rest by skipping the files. False: abort the copy activity once a permission issue occurs on folders or files. | True, False (default) | No
dataInconsistency | One of the key-value pairs within the skipErrorFile property bag, which determines whether to skip data that is inconsistent between the source and destination store. True: copy the rest by skipping the inconsistent data. False: abort the copy activity once inconsistent data is found. Note that this property is only valid when validateDataConsistency is set to True. | True, False (default) | No
logSettings | A group of properties that can be specified when you want to log the skipped object names. | | No
linkedServiceName | The linked service of Azure Blob Storage or Azure Data Lake Storage Gen2 to store the session log files. | The name of an AzureBlobStorage or AzureBlobFS type linked service, which refers to the instance that you use to store the log files. | No
path | The path of the log files. | Specify the path that you use to store the log files. If you do not provide a path, the service creates a container for you. | No
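For orientation, the snippet above is only the typeProperties portion of a copy activity. A minimal sketch of how it sits inside a complete activity definition follows; the activity and dataset names (CopyBinaryWithFaultTolerance, SourceBinaryDataset, SinkBinaryDataset) are hypothetical placeholders:

{
    "name": "CopyBinaryWithFaultTolerance",
    "type": "Copy",
    "inputs": [
        { "referenceName": "SourceBinaryDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "SinkBinaryDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": {
                "type": "AzureDataLakeStoreReadSettings",
                "recursive": true
            }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": {
                "type": "AzureDataLakeStoreWriteSettings"
            }
        },
        "skipErrorFile": {
            "fileMissing": true,
            "fileForbidden": true
        }
    }
}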

Note

The following are the prerequisites for enabling fault tolerance in the copy activity when copying binary files. To skip particular files when they are being deleted from the source store:

  • The source dataset and sink dataset must be in binary format, and the compression type cannot be specified.
  • The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen2, Azure File Storage, File System, FTP, SFTP, Amazon S3, Google Cloud Storage, and HDFS.
  • The copy activity can skip particular error files only when you specify multiple files in the source dataset, which can be a folder, a wildcard, or a list of files. If a single file is specified in the source dataset to be copied to the destination, the copy activity fails if any error occurs.

To skip particular files when access to them is forbidden from the source store:

  • The source dataset and sink dataset must be in binary format, and the compression type cannot be specified.
  • The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen2, Azure File Storage, SFTP, Amazon S3, and HDFS.
  • The copy activity can skip particular error files only when you specify multiple files in the source dataset, which can be a folder, a wildcard, or a list of files. If a single file is specified in the source dataset to be copied to the destination, the copy activity fails if any error occurs.

To skip particular files when they are verified to be inconsistent between the source and destination store:

  • You can get more details from the data consistency verification documentation.

Monitoring

Output from copy activity

You can get the number of files read, written, and skipped via the output of each copy activity run.

"output": {
            "dataRead": 695,
            "dataWritten": 186,
            "filesRead": 3,  
            "filesWritten": 1, 
            "filesSkipped": 2, 
            "throughput": 297,
            "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
            "dataConsistencyVerification": 
           { 
                "VerificationResult": "Verified", 
                "InconsistentData": "Skipped" 
           } 
        }
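Downstream activities can react to these counters through the pipeline expression language. The following is a sketch of an If Condition activity that branches when any file was skipped; the upstream activity name CopyBinaryFiles and the placeholder Wait activity are hypothetical:

{
    "name": "CheckSkippedFiles",
    "type": "IfCondition",
    "dependsOn": [
        { "activity": "CopyBinaryFiles", "dependencyConditions": [ "Succeeded" ] }
    ],
    "typeProperties": {
        "expression": {
            "value": "@greater(activity('CopyBinaryFiles').output.filesSkipped, 0)",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "name": "HandleSkippedFiles",
                "type": "Wait",
                "typeProperties": { "waitTimeInSeconds": 1 }
            }
        ]
    }
}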

Session log from copy activity

If you configure to log the skipped file names, you can find the log file at this path: https://[your-blob-account].blob.core.chinacloudapi.cn/[path-if-configured]/copyactivity-logs/[copy-activity-name]/[copy-activity-run-id]/[auto-generated-GUID].csv.

The log files are csv files. The schema of a log file is as follows:

Column | Description
------ | -----------
Timestamp | The timestamp when ADF skips the file.
Level | The log level of this item. It is 'Warning' for an item showing a file skip.
OperationName | The ADF copy activity operational behavior on each file. 'FileSkip' specifies that the file was skipped.
OperationItem | The name of the file that was skipped.
Message | More information to illustrate why the file was skipped.

An example of a log file is as follows:

Timestamp,Level,OperationName,OperationItem,Message 
2020-03-24 05:35:41.0209942,Warning,FileSkip,"bigfile.csv","File is skipped after read 322961408 bytes: ErrorCode=UserErrorSourceBlobNotExist,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The required Blob is missing. ContainerName: https://transferserviceonebox.blob.core.chinacloudapi.cn/skipfaultyfile, path: bigfile.csv.,Source=Microsoft.DataTransfer.ClientLibrary,'." 
2020-03-24 05:38:41.2595989,Warning,FileSkip,"3_nopermission.txt","File is skipped after read 0 bytes: ErrorCode=AdlsGen2OperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ADLS Gen2 operation failed for: Operation returned an invalid status code 'Forbidden'. Account: 'adlsgen2perfsource'. FileSystem: 'skipfaultyfilesforbidden'. Path: '3_nopermission.txt'. ErrorCode: 'AuthorizationPermissionMismatch'. Message: 'This request is not authorized to perform this operation using this permission.'. RequestId: '35089f5d-101f-008c-489e-01cce4000000'..,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Operation returned an invalid status code 'Forbidden',Source=,''Type=Microsoft.Azure.Storage.Data.Models.ErrorSchemaException,Message='Type=Microsoft.Azure.Storage.Data.Models.ErrorSchemaException,Message=Operation returned an invalid status code 'Forbidden',Source=Microsoft.DataTransfer.ClientLibrary,',Source=Microsoft.DataTransfer.ClientLibrary,'." 

From the log above, you can see that bigfile.csv was skipped because another application deleted the file while ADF was copying it, and that 3_nopermission.txt was skipped because ADF was not allowed to access it due to a permission issue.

Copying tabular data

Supported scenarios

Copy activity supports three scenarios for detecting, skipping, and logging incompatible tabular data:

  • Incompatibility between the source data type and the sink native type.

    For example: Copy data from a CSV file in Blob storage to a SQL database with a schema definition that contains three INT type columns. The CSV file rows that contain numeric data, such as 123,456,789, are copied successfully to the sink store. However, the rows that contain non-numeric values, such as 123,456,abc, are detected as incompatible and are skipped.

  • Mismatch in the number of columns between the source and the sink.

    For example: Copy data from a CSV file in Blob storage to a SQL database with a schema definition that contains six columns. The CSV file rows that contain six columns are copied successfully to the sink store. The CSV file rows that contain more than six columns are detected as incompatible and are skipped.

  • Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB.

    For example: Copy data from a SQL Server instance to an Azure SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL Server. The duplicated rows that exist in the source cannot be copied to the sink. The copy activity copies only the first row of the source data into the sink; the subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.

Note

  • To load data into Azure Synapse Analytics using PolyBase, configure PolyBase's native fault tolerance settings by specifying reject policies via "polyBaseSettings" in the copy activity (see the sketch after this list). You can still enable redirecting PolyBase incompatible rows to Blob or ADLS as normal, as shown below.
  • This feature doesn't apply when the copy activity is configured to invoke Amazon Redshift Unload.
  • This feature doesn't apply when the copy activity is configured to invoke a stored procedure from a SQL sink.
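As an illustration of the first note, a sketch of a PolyBase sink with reject policies follows; the reject thresholds are arbitrary example values, and the property names follow the Azure Synapse Analytics (formerly SQL Data Warehouse) sink connector:

"sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
        "rejectType": "percentage",
        "rejectValue": 10.0,
        "rejectSampleValue": 100,
        "useTypeDefault": true
    }
}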

Configuration

The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:

"typeProperties": { 
    "source": { 
        "type": "AzureSqlSource" 
    }, 
    "sink": { 
        "type": "AzureSqlSink" 
    }, 
    "enableSkipIncompatibleRow": true, 
    "logSettings": {
        "enableCopyActivityLog": true,
        "copyActivityLogSettings": {            
            "logLevel": "Warning",
            "enableReliableLogging": false
        },
        "logLocationSettings": {
            "linkedServiceName": {
               "referenceName": "ADLSGen2",
               "type": "LinkedServiceReference"
            },
            "path": "sessionlog/"
        }
    } 
}, 
Property | Description | Allowed values | Required
-------- | ----------- | -------------- | --------
enableSkipIncompatibleRow | Specifies whether to skip incompatible rows during copy or not. | True, False (default) | No
logSettings | A group of properties that can be specified when you want to log the incompatible rows. | | No
linkedServiceName | The linked service of Azure Blob Storage or Azure Data Lake Storage Gen2 to store the log that contains the skipped rows. | The name of an AzureBlobStorage or AzureBlobFS type linked service, which refers to the instance that you use to store the log file. | No
path | The path of the log files that contain the skipped rows. | Specify the path that you want to use to log the incompatible data. If you do not provide a path, the service creates a container for you. | No
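The Warning level shown above logs only the skipped rows. If you also want the session log to record the successfully copied objects, the level can be raised to Info, and reliable logging can be enabled at some cost to copy throughput. A sketch, reusing the ADLSGen2 linked service from the example above:

"logSettings": {
    "enableCopyActivityLog": true,
    "copyActivityLogSettings": {
        "logLevel": "Info",
        "enableReliableLogging": true
    },
    "logLocationSettings": {
        "linkedServiceName": {
            "referenceName": "ADLSGen2",
            "type": "LinkedServiceReference"
        },
        "path": "sessionlog/"
    }
}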

Monitor skipped rows

After the copy activity run completes, you can see the number of skipped rows in the output of the copy activity:

"output": {
            "dataRead": 95,
            "dataWritten": 186,
            "rowsCopied": 9,
            "rowsSkipped": 2,
            "copyDuration": 16,
            "throughput": 0.01,
            "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
            "errors": []
        },
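As with the file counters for binary copies, these values are addressable from later activities. Below is a sketch of a Set Variable activity that captures rowsSkipped for later use; the upstream activity name CopyTabular and the pipeline variable skippedRows are hypothetical placeholders, and the variable must be declared on the pipeline:

{
    "name": "RecordSkippedRows",
    "type": "SetVariable",
    "dependsOn": [
        { "activity": "CopyTabular", "dependencyConditions": [ "Succeeded" ] }
    ],
    "typeProperties": {
        "variableName": "skippedRows",
        "value": "@string(activity('CopyTabular').output.rowsSkipped)"
    }
}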

If you configure to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.chinacloudapi.cn/[path-if-configured]/copyactivity-logs/[copy-activity-name]/[copy-activity-run-id]/[auto-generated-GUID].csv.

The log files are csv files. The schema of a log file is as follows:

Column | Description
------ | -----------
Timestamp | The timestamp when ADF skips the incompatible row.
Level | The log level of this item. It is 'Warning' for an item showing a skipped row.
OperationName | The ADF copy activity operational behavior on each row. 'TabularRowSkip' specifies that the particular incompatible row was skipped.
OperationItem | The skipped row from the source data store.
Message | More information to illustrate the incompatibility of this particular row.

An example of the log file content is as follows:

Timestamp, Level, OperationName, OperationItem, Message
2020-02-26 06:22:32.2586581, Warning, TabularRowSkip, """data1"", ""data2"", ""data3""," "Column 'Prop_2' contains an invalid value 'data3'. Cannot convert 'data3' to type 'DateTime'." 
2020-02-26 06:22:33.2586351, Warning, TabularRowSkip, """data4"", ""data5"", ""data6"",", "Violation of PRIMARY KEY constraint 'PK_tblintstrdatetimewithpk'. Cannot insert duplicate key in object 'dbo.tblintstrdatetimewithpk'. The duplicate key value is (data4)." 

From the sample log file above, you can see that one row, "data1, data2, data3", was skipped due to a type conversion issue from the source to the destination store, and that another row, "data4, data5, data6", was skipped due to a PK violation issue.

Copying tabular data (legacy)

The following is the legacy way to enable fault tolerance, which applies to copying tabular data only. If you are creating new pipelines or activities, you are encouraged to use the non-legacy approach described above instead.

Configuration

The following example provides a JSON definition to configure skipping the incompatible rows in copy activity:

"typeProperties": {
    "source": {
        "type": "BlobSource"
    },
    "sink": {
        "type": "SqlSink",
    },
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
         "linkedServiceName": {
              "referenceName": "<Azure Storage linked service>",
              "type": "LinkedServiceReference"
            },
            "path": "redirectcontainer/erroroutput"
     }
}
Property | Description | Allowed values | Required
-------- | ----------- | -------------- | --------
enableSkipIncompatibleRow | Specifies whether to skip incompatible rows during copy or not. | True, False (default) | No
redirectIncompatibleRowSettings | A group of properties that can be specified when you want to log the incompatible rows. | | No
linkedServiceName | The linked service of Azure Storage to store the log that contains the skipped rows. | The name of an AzureStorage or AzureDataLakeStore type linked service, which refers to the instance that you want to use to store the log file. | No
path | The path of the log file that contains the skipped rows. | Specify the path that you want to use to log the incompatible data. If you do not provide a path, the service creates a container for you. | No

Monitor skipped rows

After the copy activity run completes, you can see the number of skipped rows in the output of the copy activity:

"output": {
            "dataRead": 95,
            "dataWritten": 186,
            "rowsCopied": 9,
            "rowsSkipped": 2,
            "copyDuration": 16,
            "throughput": 0.01,
            "redirectRowPath": "https://myblobstorage.blob.core.chinacloudapi.cn//myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
            "errors": []
        },

If you configure to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.chinacloudapi.cn/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.

The log files can only be csv files. The original data being skipped is logged with a comma as the column delimiter when needed. In addition to the original source data, the log file contains two more columns, "ErrorCode" and "ErrorMessage", where you can see the root cause of the incompatibility. The ErrorCode and ErrorMessage values are quoted with double quotes.

An example of the log file content is as follows:

data1, data2, data3, "UserErrorInvalidDataValue", "Column 'Prop_2' contains an invalid value 'data3'. Cannot convert 'data3' to type 'DateTime'."
data4, data5, data6, "2627", "Violation of PRIMARY KEY constraint 'PK_tblintstrdatetimewithpk'. Cannot insert duplicate key in object 'dbo.tblintstrdatetimewithpk'. The duplicate key value is (data4)."

Next steps

See the other copy activity articles: