Copy data from Spark using Azure Data Factory

Applies to: Azure Data Factory, Azure Synapse Analytics (Preview)

This article outlines how to use the Copy Activity in Azure Data Factory to copy data from Spark. It builds on the copy activity overview article, which presents a general overview of the copy activity.

Supported capabilities

This Spark connector is supported for the following activities:

- Copy activity with supported source/sink matrix
- Lookup activity

You can copy data from Spark to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table.

Azure Data Factory provides a built-in driver to enable connectivity, so you don't need to manually install any driver to use this connector.

Prerequisites

If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to set up a self-hosted integration runtime to connect to it.

If your data store is a managed cloud data service, you can use the Azure integration runtime. If the access is restricted to IPs that are whitelisted in the firewall rules, you can choose to add Azure Integration Runtime IPs to the allow list.

For more information about the network security mechanisms and options supported by Data Factory, see Data access strategies.

Getting started

To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:

- The Copy Data tool
- The Azure portal
- The .NET SDK
- The Python SDK
- Azure PowerShell
- The REST API
- The Azure Resource Manager template

The following sections provide details about properties that are used to define Data Factory entities specific to the Spark connector.

Linked service properties

The following properties are supported for the Spark linked service:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to: Spark | Yes |
| host | IP address or host name of the Spark server | Yes |
| port | The TCP port that the Spark server uses to listen for client connections. If you connect to Azure HDInsight, specify port 443. | Yes |
| serverType | The type of Spark server. Allowed values are: SharkServer, SharkServer2, SparkThriftServer | No |
| thriftTransportProtocol | The transport protocol to use in the Thrift layer. Allowed values are: Binary, SASL, HTTP | No |
| authenticationType | The authentication method used to access the Spark server. Allowed values are: Anonymous, Username, UsernameAndPassword, WindowsAzureHDInsightService | Yes |
| username | The user name that you use to access the Spark server. | No |
| password | The password corresponding to the user. Mark this field as a SecureString to store it securely in Data Factory, or reference a secret stored in Azure Key Vault. | No |
| httpPath | The partial URL corresponding to the Spark server. | No |
| enableSsl | Specifies whether the connections to the server are encrypted using TLS. The default value is false. | No |
| trustedCertPath | The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over TLS. This property can only be set when using TLS on a self-hosted IR. The default value is the cacerts.pem file installed with the IR. | No |
| useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false. | No |
| allowHostNameCNMismatch | Specifies whether to require a CA-issued TLS/SSL certificate name to match the host name of the server when connecting over TLS. The default value is false. | No |
| allowSelfSignedServerCert | Specifies whether to allow self-signed certificates from the server. The default value is false. | No |
| connectVia | The Integration Runtime to be used to connect to the data store. Learn more from the Prerequisites section. If not specified, it uses the default Azure Integration Runtime. | No |

Example:

{
    "name": "SparkLinkedService",
    "properties": {
        "type": "Spark",
        "typeProperties": {
            "host" : "<cluster>.azurehdinsight.cn",
            "port" : "<port>",
            "authenticationType" : "WindowsAzureHDInsightService",
            "username" : "<username>",
            "password": {
                 "type": "SecureString",
                 "value": "<password>"
            }
        }
    }
}
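
If you connect over TLS through a self-hosted integration runtime, the linked service might look like the following sketch. This is illustrative only; the server type, transport protocol, runtime name, Key Vault references, and certificate path shown here are placeholders, not values from this article:

{
    "name": "SparkLinkedServiceOverTls",
    "properties": {
        "type": "Spark",
        "typeProperties": {
            "host" : "<host>",
            "port" : "<port>",
            "serverType" : "SparkThriftServer",
            "thriftTransportProtocol" : "Binary",
            "authenticationType" : "UsernameAndPassword",
            "username" : "<username>",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secret name>"
            },
            "enableSsl": true,
            "trustedCertPath": "<full path to the .pem file>"
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}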

Dataset properties

For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by the Spark dataset.

To copy data from Spark, set the type property of the dataset to SparkObject. The following properties are supported:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property of the dataset must be set to: SparkObject | Yes |
| schema | Name of the schema. | No (if "query" in activity source is specified) |
| table | Name of the table. | No (if "query" in activity source is specified) |
| tableName | Name of the table with schema. This property is supported for backward compatibility. Use schema and table for new workloads. | No (if "query" in activity source is specified) |

Example

{
    "name": "SparkDataset",
    "properties": {
        "type": "SparkObject",
        "typeProperties": {},
        "schema": [],
        "linkedServiceName": {
            "referenceName": "<Spark linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}
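
To read a specific table rather than a custom query, you can name it in the dataset instead; a minimal sketch, assuming hypothetical schema and table placeholders:

{
    "name": "SparkDataset",
    "properties": {
        "type": "SparkObject",
        "typeProperties": {
            "schema": "<schema name>",
            "table": "<table name>"
        },
        "schema": [],
        "linkedServiceName": {
            "referenceName": "<Spark linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}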

Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the Spark source.

Spark as source

To copy data from Spark, set the source type in the copy activity to SparkSource. The following properties are supported in the copy activity source section:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property of the copy activity source must be set to: SparkSource | Yes |
| query | Use the custom SQL query to read data. For example: "SELECT * FROM MyTable". | No (if "tableName" in dataset is specified) |

Example:

"activities":[
    {
        "name": "CopyFromSpark",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Spark input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "SparkSource",
                "query": "SELECT * FROM MyTable"
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]

Lookup activity properties

To learn details about the properties, check Lookup activity.
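
For illustration, a Lookup activity can reuse the Spark dataset and source type shown above; a minimal sketch, with a hypothetical activity name and query:

{
    "name": "LookupFromSpark",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "SparkSource",
            "query": "SELECT COUNT(*) AS row_count FROM MyTable"
        },
        "dataset": {
            "referenceName": "<Spark dataset name>",
            "type": "DatasetReference"
        },
        "firstRowOnly": true
    }
}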

Next steps

For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores.