ORC format in Azure Data Factory

APPLIES TO: Azure Data Factory, Azure Synapse Analytics (Preview)

Follow this article when you want to parse ORC files or write data into ORC format.

ORC format is supported for the following connectors: Amazon S3, Azure Blob, Azure Data Lake Storage Gen2, Azure File Storage, File System, FTP, Google Cloud Storage, HDFS, HTTP, and SFTP.

Dataset properties

For a full list of sections and properties available for defining datasets, see the Datasets article. This section provides a list of properties supported by the ORC dataset.

| Property | Description | Required |
| -------- | ----------- | -------- |
| type | The type property of the dataset must be set to **Orc**. | Yes |
| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. See details in the connector article, Dataset properties section. | Yes |

Below is an example of an ORC dataset on Azure Blob Storage:

{
    "name": "ORCDataset",
    "properties": {
        "type": "Orc",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [ < physical schema, optional, retrievable during authoring > ],
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "containername",
                "folderPath": "folder/subfolder"
            }
        }
    }
}

Note the following points:

  • Complex data types are not supported (STRUCT, MAP, LIST, UNION).
  • White space in a column name is not supported.
  • ORC files have three compression-related options: NONE, ZLIB, and SNAPPY. Data Factory supports reading data from an ORC file in any of these compressed formats. It uses the compression codec in the metadata to read the data. However, when writing to an ORC file, Data Factory chooses ZLIB, which is the default for ORC. Currently, there is no option to override this behavior.

Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the ORC source and sink.

ORC as source

The following properties are supported in the copy activity *source* section.

| Property | Description | Required |
| -------- | ----------- | -------- |
| type | The type property of the copy activity source must be set to **OrcSource**. | Yes |
| storeSettings | A group of properties on how to read data from a data store. Each file-based connector has its own supported read settings under `storeSettings`. See details in the connector article, Copy activity properties section. | No |
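As an illustration, a copy activity source using the earlier Blob-backed dataset might look like the following sketch. The `AzureBlobStorageReadSettings` store type and its `recursive` and `wildcardFileName` properties are assumptions taken from the Azure Blob Storage connector; substitute the read settings of your own connector:

```json
"activities": [
    {
        "name": "CopyFromORC",
        "type": "Copy",
        "inputs": [ { "referenceName": "ORCDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "<output dataset name>", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": {
                "type": "OrcSource",
                "storeSettings": {
                    "type": "AzureBlobStorageReadSettings",
                    "recursive": true,
                    "wildcardFileName": "*.orc"
                }
            },
            "sink": { < sink settings for your destination store > }
        }
    }
]
```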

ORC as sink

The following properties are supported in the copy activity *sink* section.

| Property | Description | Required |
| -------- | ----------- | -------- |
| type | The type property of the copy activity sink must be set to **OrcSink**. | Yes |
| formatSettings | A group of properties. Refer to the ORC write settings table below. | No |
| storeSettings | A group of properties on how to write data to a data store. Each file-based connector has its own supported write settings under `storeSettings`. See details in the connector article, Copy activity properties section. | No |

Supported ORC write settings under `formatSettings`:

| Property | Description | Required |
| -------- | ----------- | -------- |
| type | The type of `formatSettings` must be set to **OrcWriteSettings**. | Yes |
| maxRowsPerFile | When writing data into a folder, you can choose to write to multiple files and specify the max rows per file. | No |
| fileNamePrefix | Applicable when `maxRowsPerFile` is configured. Specify the file name prefix when writing data to multiple files, resulting in this pattern: `<fileNamePrefix>_00000.<fileExtension>`. If not specified, the file name prefix is auto-generated. This property does not apply when the source is a file-based store or a partition-option-enabled data store. | No |
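For illustration, a copy activity *sink* section combining the sink properties and write settings might look like the following sketch. The row limit, the `part` prefix, and the `AzureBlobStorageWriteSettings` store type are assumed example values for a Blob destination, not requirements:

```json
"sink": {
    "type": "OrcSink",
    "formatSettings": {
        "type": "OrcWriteSettings",
        "maxRowsPerFile": 1000000,
        "fileNamePrefix": "part"
    },
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings"
    }
}
```

With `maxRowsPerFile` set as above, rows are split across output files named following the documented pattern, such as `part_00000.orc`, `part_00001.orc`, and so on.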

Using Self-hosted Integration Runtime

Important

For copy empowered by Self-hosted Integration Runtime, e.g. between on-premises and cloud data stores, if you are not copying ORC files as-is, you need to install the 64-bit JRE 8 (Java Runtime Environment) or OpenJDK, and the Microsoft Visual C++ 2010 Redistributable Package, on your IR machine. Check the following paragraph for more details.

For copy running on a Self-hosted IR with ORC file serialization/deserialization, ADF locates the Java runtime by first checking the registry key (SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome) for JRE; if that is not found, it then checks the system variable JAVA_HOME for OpenJDK.

  • To use JRE: The 64-bit IR requires 64-bit JRE. You can find it from here.
  • To use OpenJDK: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK onto the Self-hosted IR machine, and set the system environment variable JAVA_HOME accordingly.
  • To install Visual C++ 2010 Redistributable Package: the Visual C++ 2010 Redistributable Package is not installed with self-hosted IR installations. You can find it from here.

Tip

If you copy data to/from ORC format using the Self-hosted Integration Runtime and hit an error saying "An error occurred when invoking java, message: java.lang.OutOfMemoryError: Java heap space", you can add an environment variable _JAVA_OPTIONS on the machine that hosts the Self-hosted IR to adjust the min/max heap size for the JVM to support such copies, then rerun the pipeline.

Set the JVM heap size on the Self-hosted IR

Example: set variable _JAVA_OPTIONS with value -Xms256m -Xmx16g. The flag Xms specifies the initial memory allocation pool for a Java Virtual Machine (JVM), while Xmx specifies the maximum memory allocation pool. This means that the JVM will be started with Xms amount of memory and will be able to use a maximum of Xmx amount of memory. By default, ADF uses a minimum of 64 MB and a maximum of 1 GB.

Next steps