Avro format in Azure Data Factory

APPLIES TO: Azure Data Factory, Azure Synapse Analytics

Follow this article when you want to parse Avro files or write data in Avro format.

The Avro format is supported for the following connectors: Amazon S3, Azure Blob, Azure Data Lake Storage Gen2, Azure File Storage, File System, FTP, Google Cloud Storage, HDFS, HTTP, and SFTP.

Dataset properties

For a full list of sections and properties available for defining datasets, see the Datasets article. This section provides a list of properties supported by the Avro dataset.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the dataset must be set to **Avro**. | Yes |
| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. See details in the connector article's "Dataset properties" section. | Yes |
| avroCompressionCodec | The compression codec to use when writing to Avro files. When reading from Avro files, Data Factory automatically determines the compression codec based on the file metadata. Supported values are "none" (default), "deflate", and "snappy". Note that the Copy activity currently doesn't support Snappy when reading or writing Avro files. | No |

Note

White space in column names is not supported for Avro files.

Below is an example of an Avro dataset on Azure Blob Storage:

{
    "name": "AvroDataset",
    "properties": {
        "type": "Avro",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [ < physical schema, optional, retrievable during authoring > ],
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "containername",
                "folderPath": "folder/subfolder"
            },
            "avroCompressionCodec": "snappy"
        }
    }
}

Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the Avro source and sink.

Avro as source

The following properties are supported in the copy activity **source** section.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity source must be set to **AvroSource**. | Yes |
| storeSettings | A group of properties on how to read data from a data store. Each file-based connector has its own supported read settings under `storeSettings`. See details in the connector article's "Copy activity properties" section. | No |
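As an illustration, a copy activity source section reading Avro files from Azure Blob Storage might look like the following sketch. The `storeSettings` values shown (recursive read with a wildcard file name) are placeholders; use the read settings your connector supports:

```json
"source": {
    "type": "AvroSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true,
        "wildcardFileName": "*.avro"
    }
}
```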

Avro as sink

The following properties are supported in the copy activity **sink** section.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity sink must be set to **AvroSink**. | Yes |
| formatSettings | A group of properties. Refer to the Avro write settings table below. | No |
| storeSettings | A group of properties on how to write data to a data store. Each file-based connector has its own supported write settings under `storeSettings`. See details in the connector article's "Copy activity properties" section. | No |

Supported Avro write settings under `formatSettings`:

| Property | Description | Required |
| --- | --- | --- |
| type | The type of `formatSettings` must be set to **AvroWriteSettings**. | Yes |
| maxRowsPerFile | When writing data into a folder, you can choose to write to multiple files and specify the maximum rows per file. | No |
| fileNamePrefix | Applicable when `maxRowsPerFile` is configured. Specifies the file name prefix when writing data to multiple files, resulting in this pattern: `<fileNamePrefix>_00000.<fileExtension>`. If not specified, a file name prefix is auto-generated. This property does not apply when the source is a file-based store or a partition-option-enabled data store. | No |
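Putting the sink properties together, a copy activity sink section that splits output into files of at most one million rows might look like this sketch (the row limit and prefix are example values, and the write settings shown assume an Azure Blob Storage store):

```json
"sink": {
    "type": "AvroSink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings"
    },
    "formatSettings": {
        "type": "AvroWriteSettings",
        "maxRowsPerFile": 1000000,
        "fileNamePrefix": "output"
    }
}
```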

Mapping data flow properties

In mapping data flows, you can read and write to Avro format in the following data stores: Azure Blob Storage and Azure Data Lake Storage Gen2.

Source properties

The table below lists the properties supported by an Avro source. You can edit these properties in the Source options tab.

| Name | Description | Required | Allowed values | Data flow script property |
| --- | --- | --- | --- | --- |
| Wildcard paths | All files matching the wildcard path will be processed. Overrides the folder and file path set in the dataset. | no | String[] | wildcardPaths |
| Partition root path | For file data that is partitioned, you can enter a partition root path in order to read partitioned folders as columns. | no | String | partitionRootPath |
| List of files | Whether your source is pointing to a text file that lists the files to process. | no | true or false | fileList |
| Column to store file name | Create a new column with the source file name and path. | no | String | rowUrlColumn |
| After completion | Delete or move the files after processing. The file path starts from the container root. | no | Delete: true or false<br>Move: ['<from>', '<to>'] | purgeFiles<br>moveFiles |
| Filter by last modified | Choose to filter files based upon when they were last altered. | no | Timestamp | modifiedAfter<br>modifiedBefore |
| Allow no files found | If true, an error is not thrown if no files are found. | no | true or false | ignoreNoFilesFound |
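For reference, the properties above appear in data flow script roughly as in the following sketch of an Avro source with a wildcard path. The stream name and path are placeholders:

```
source(
    allowSchemaDrift: true,
    validateSchema: false,
    format: 'avro',
    wildcardPaths: ['container/folder/*.avro'],
    ignoreNoFilesFound: false) ~> AvroSourceStream
```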

Sink properties

The table below lists the properties supported by an Avro sink. You can edit these properties in the Settings tab.

| Name | Description | Required | Allowed values | Data flow script property |
| --- | --- | --- | --- | --- |
| Clear the folder | Whether the destination folder is cleared prior to the write. | no | true or false | truncate |
| File name option | The naming format of the data written. By default, one file per partition in the format `part-#####-tid-<guid>`. | no | Pattern: String<br>Per partition: String[]<br>As data in column: String<br>Output to single file: ['<fileName>'] | filePattern<br>partitionFileNames<br>rowUrlColumn<br>partitionFileNames |
| Quote all | Enclose all values in quotes. | no | true or false | quoteAll |
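The sink properties map to data flow script as in the following sketch, which writes the incoming stream out as Avro and clears the destination folder first. The stream names and file pattern are placeholders:

```
AvroSourceStream sink(
    allowSchemaDrift: true,
    validateSchema: false,
    format: 'avro',
    filePattern: 'output[n].avro',
    truncate: true) ~> AvroSinkStream
```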

Data type support

Copy activity

Avro complex data types (records, enums, arrays, maps, unions, and fixed) are not supported in the Copy activity.

Data flows

When working with Avro files in data flows, you can read and write complex data types, but be sure to clear the physical schema from the dataset first. In data flows, you can set your logical projection and derive columns that are complex structures, then auto-map those fields to an Avro file.
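As an illustrative sketch, a derived column transformation in data flow script can use the `@(...)` structure syntax to build a complex column from flat columns before writing to Avro. The stream and column names here are hypothetical:

```
AvroSourceStream derive(
    address = @(street = streetCol, city = cityCol, zip = zipCol)
) ~> DeriveComplexColumn
```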

Next steps