Parquet format in Azure Data Factory

APPLIES TO: Azure Data Factory · Azure Synapse Analytics (Preview)

Follow this article when you want to parse Parquet files or write data into Parquet format.

Parquet format is supported for the following connectors: Amazon S3, Azure Blob, Azure Data Lake Storage Gen2, Azure File Storage, File System, FTP, Google Cloud Storage, HDFS, HTTP, and SFTP.

Dataset properties

For a full list of sections and properties available for defining datasets, see the Datasets article. This section provides a list of properties supported by the Parquet dataset.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the dataset must be set to **Parquet**. | Yes |
| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. See details in the connector article's "Dataset properties" section. | Yes |
| compressionCodec | The compression codec to use when writing to Parquet files. When reading from Parquet files, Data Factory automatically determines the compression codec based on the file metadata. Supported types are "none", "gzip", "snappy" (default), and "lzo". Note that the Copy activity currently doesn't support LZO when reading from or writing to Parquet files. | No |

Note

White space in column names is not supported for Parquet files.
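Because of this restriction, column names containing spaces need to be normalized before the data reaches the Parquet dataset. A minimal, illustrative Python sketch of the idea (the helper name is hypothetical; apply an equivalent step in whatever tool produces your source data):

```python
def sanitize_column_names(names):
    """Replace whitespace in column names so a Parquet dataset can accept them.

    Illustrative helper only, not part of ADF; run an equivalent renaming
    step upstream of the copy activity.
    """
    return [name.strip().replace(" ", "_") for name in names]

print(sanitize_column_names(["order id", "customer name", "total"]))
# → ['order_id', 'customer_name', 'total']
```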

Below is an example of a Parquet dataset on Azure Blob Storage:

{
    "name": "ParquetDataset",
    "properties": {
        "type": "Parquet",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [ < physical schema, optional, retrievable during authoring > ],
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "containername",
                "folderPath": "folder/subfolder"
            },
            "compressionCodec": "snappy"
        }
    }
}

Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the Parquet source and sink.

Parquet as source

The following properties are supported in the copy activity *source* section.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity source must be set to **ParquetSource**. | Yes |
| storeSettings | A group of properties on how to read data from a data store. Each file-based connector has its own supported read settings under `storeSettings`. See details in the connector article's "Copy activity properties" section. | No |
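Put together, a copy activity source section might look like the following sketch. The `storeSettings` shown assume an Azure Blob Storage source; the wildcard values are illustrative, not required:

```json
"source": {
    "type": "ParquetSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true,
        "wildcardFolderPath": "folder*",
        "wildcardFileName": "*.parquet"
    }
}
```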

Parquet as sink

The following properties are supported in the copy activity *sink* section.

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity sink must be set to **ParquetSink**. | Yes |
| storeSettings | A group of properties on how to write data to a data store. Each file-based connector has its own supported write settings under `storeSettings`. See details in the connector article's "Copy activity properties" section. | No |
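Likewise, a copy activity sink section might look like the following sketch; the `storeSettings` shown assume an Azure Blob Storage destination and are illustrative:

```json
"sink": {
    "type": "ParquetSink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings",
        "copyBehavior": "PreserveHierarchy"
    }
}
```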

Data type support

Parquet complex data types (for example, MAP, LIST, and STRUCT) are currently not supported.
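This means nested records generally need to be flattened before the copy activity reads them. A purely illustrative Python sketch of the flattening idea (not ADF code; run an equivalent step in whatever produces the source data):

```python
def flatten_record(record, parent_key="", sep="_"):
    """Flatten nested dicts into one level, e.g. {"a": {"b": 1}} -> {"a_b": 1}.

    Illustrative only: the copy activity cannot project STRUCT-like columns,
    so a step like this has to happen upstream.
    """
    flat = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_record(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

print(flatten_record({"order": {"id": 7, "customer": {"name": "Ann"}}, "total": 9.5}))
# → {'order_id': 7, 'order_customer_name': 'Ann', 'total': 9.5}
```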

Using Self-hosted Integration Runtime

Important

For copy empowered by the Self-hosted Integration Runtime (for example, between on-premises and cloud data stores), if you are not copying Parquet files as-is, you need to install the 64-bit JRE 8 (Java Runtime Environment) or OpenJDK, and the Microsoft Visual C++ 2010 Redistributable Package, on your IR machine. See the following paragraph for more details.

For copy running on a Self-hosted IR with Parquet file serialization/deserialization, ADF locates the Java runtime by first checking the registry key SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome for the JRE; if that is not found, it then checks the system variable JAVA_HOME for OpenJDK.
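The lookup order above can be sketched as a simple fallback. The Python below is an illustration of that order only, not ADF's actual implementation; both lookup functions are caller-supplied stand-ins:

```python
def locate_java_runtime(read_registry_java_home, get_env):
    """Mimic the documented lookup order: JRE registry key first, then JAVA_HOME.

    Both arguments are hypothetical lookup callables returning a path or None;
    this sketches the fallback order, nothing more.
    """
    # First: registry key SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome
    jre_home = read_registry_java_home()
    if jre_home:
        return jre_home
    # Second: fall back to OpenJDK located via the JAVA_HOME system variable
    return get_env("JAVA_HOME")

print(locate_java_runtime(lambda: None, lambda name: "C:\\openjdk"))
# → C:\openjdk
```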

  • To use JRE: The 64-bit IR requires a 64-bit JRE. You can find it here.
  • To use OpenJDK: It's supported since IR version 3.13. Package jvm.dll with all other required assemblies of OpenJDK onto the Self-hosted IR machine, and set the system environment variable JAVA_HOME accordingly.
  • To install the Visual C++ 2010 Redistributable Package: It is not installed with Self-hosted IR installations. You can find it here.

Tip

If you copy data to or from Parquet format using the Self-hosted Integration Runtime and hit an error saying "An error occurred when invoking java, message: java.lang.OutOfMemoryError: Java heap space", you can add an environment variable _JAVA_OPTIONS on the machine that hosts the Self-hosted IR to adjust the min/max heap size for the JVM, then rerun the pipeline.

Set the JVM heap size on the Self-hosted IR

Example: set the variable _JAVA_OPTIONS with the value -Xms256m -Xmx16g. The flag Xms specifies the initial memory allocation pool for the Java Virtual Machine (JVM), while Xmx specifies the maximum memory allocation pool. This means the JVM starts with Xms amount of memory and can use at most Xmx amount of memory. By default, ADF uses a minimum of 64 MB and a maximum of 1 GB.

Next steps