Copy data from Azure Blob to Azure SQL Database using Azure Data Factory

Applies to: Azure Data Factory and Azure Synapse Analytics

In this tutorial, you create a Data Factory pipeline that copies data from Azure Blob Storage to Azure SQL Database. The configuration pattern in this tutorial applies to copying from a file-based data store to a relational data store. For a list of data stores supported as sources and sinks, see supported data stores and formats.

You take the following steps in this tutorial:

  • Create a data factory.
  • Create Azure Storage and Azure SQL Database linked services.
  • Create Azure Blob and Azure SQL Database datasets.
  • Create a pipeline that contains a copy activity.
  • Start a pipeline run.
  • Monitor the pipeline and activity runs.

This tutorial uses the .NET SDK. You can use other mechanisms to interact with Azure Data Factory; refer to the samples under Quickstarts.

If you don't have an Azure subscription, create a trial account before you begin.

Prerequisites

Create a blob and a SQL table

Now, prepare your Azure Blob storage and Azure SQL Database for the tutorial by creating a source blob and a sink SQL table.

Create a source blob

First, create a source blob by creating a container and uploading an input text file to it:

  1. Open Notepad. Copy the following text and save it locally to a file named inputEmp.txt.

    John|Doe
    Jane|Doe
    
  2. Use a tool such as Azure Storage Explorer to create the adfv2tutorial container, and to upload the inputEmp.txt file to the container. (If you'd rather script this step, see the sketch after this list.)
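
If you prefer to create the container and upload the file from code, the following is a minimal sketch that uses the WindowsAzure.Storage client library, a separate NuGet package that this tutorial doesn't otherwise install; the account name and key are placeholders, and you should verify the API shapes against your package version.

// Minimal sketch: create the adfv2tutorial container and upload inputEmp.txt
// with the WindowsAzure.Storage client library (Install-Package WindowsAzure.Storage).
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class UploadInputBlob
{
    static void Main()
    {
        // Replace <accountName> and <accountKey> with your own values.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<accountName>;" +
            "AccountKey=<accountKey>;EndpointSuffix=core.chinacloudapi.cn");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("adfv2tutorial");
        container.CreateIfNotExists();  // no-op when the container already exists

        CloudBlockBlob blob = container.GetBlockBlobReference("inputEmp.txt");
        blob.UploadText("John|Doe\nJane|Doe\n");  // same content as the file above
    }
}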

Create a sink SQL table

Next, create a sink SQL table:

  1. Use the following SQL script to create the dbo.emp table in your Azure SQL Database.

    CREATE TABLE dbo.emp
    (
        ID int IDENTITY(1,1) NOT NULL,
        FirstName varchar(50),
        LastName varchar(50)
    )
    GO
    
    CREATE CLUSTERED INDEX IX_emp_ID ON dbo.emp (ID);
    
  2. Allow Azure services to access SQL Database. Ensure that access to Azure services is allowed for your server so that the Data Factory service can write data to SQL Database. To verify and turn on this setting, follow these steps (a programmatic alternative is sketched after this list):

    1. Go to the Azure portal to manage your SQL server. Search for and select SQL servers.

    2. Select your server.

    3. Under the SQL server menu's Security heading, select Firewalls and virtual networks.

    4. On the Firewalls and virtual networks page, under Allow Azure services and resources to access this server, select ON.
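
If you'd rather turn on this setting from code, the sketch below uses the Microsoft.Azure.Management.Sql NuGet package (which this tutorial doesn't otherwise install) to create the 0.0.0.0 to 0.0.0.0 firewall rule that the portal toggle corresponds to. The client setup mirrors the authentication code shown later in this tutorial; treat the exact model and method shapes as assumptions to verify against your package version.

// Hedged sketch: enable "Allow Azure services and resources to access this
// server" by creating the special 0.0.0.0-0.0.0.0 firewall rule.
// Requires: Install-Package Microsoft.Azure.Management.Sql
using System;
using Microsoft.Azure.Management.Sql;
using Microsoft.Azure.Management.Sql.Models;

// `cred` is a ServiceClientCredentials object, built exactly as in the
// "Create a data factory client" section later in this tutorial.
var sqlClient = new SqlManagementClient(cred)
{
    SubscriptionId = "<your subscription ID>",
    BaseUri = new Uri("https://management.chinacloudapi.cn/")
};

// Start = end = 0.0.0.0 is the ARM convention for the Azure-services toggle;
// AllowAllWindowsAzureIps is the rule name the portal itself uses.
sqlClient.FirewallRules.CreateOrUpdate(
    "<your resource group>",
    "<your server name>",
    "AllowAllWindowsAzureIps",
    new FirewallRule { StartIpAddress = "0.0.0.0", EndIpAddress = "0.0.0.0" }
);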

Create a Visual Studio project

Using Visual Studio, create a C# .NET console application.

  1. Open Visual Studio.
  2. In the Start window, select Create a new project.
  3. In the Create a new project window, choose the C# version of Console App (.NET Framework) from the list of project types. Then select Next.
  4. In the Configure your new project window, enter ADFv2Tutorial for the Project name. For Location, browse to (or create) the directory to save the project in. Then select Create. The new project appears in the Visual Studio IDE.

Install NuGet packages

Next, install the required library packages by using the NuGet package manager.

  1. In the menu bar, choose Tools > NuGet Package Manager > Package Manager Console.

  2. In the Package Manager Console pane, run the following commands to install the packages. For information about the Azure Data Factory NuGet package, see Microsoft.Azure.Management.DataFactory.

    Install-Package Microsoft.Azure.Management.DataFactory
    Install-Package Microsoft.Azure.Management.ResourceManager -IncludePrerelease
    Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
    

Create a data factory client

Follow these steps to create a data factory client.

  1. Open Program.cs, then overwrite the existing using statements with the following code to add references to namespaces.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Rest;
    using Microsoft.Rest.Serialization;
    using Microsoft.Azure.Management.ResourceManager;
    using Microsoft.Azure.Management.DataFactory;
    using Microsoft.Azure.Management.DataFactory.Models;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    
  2. Add the following code to the Main method that sets the variables. Replace the 14 placeholders with your own values.

    To see the list of Azure regions in which Data Factory is currently available, see Products available by region. Under the Products drop-down list, choose Browse > Analytics > Data Factory. Then in the Regions drop-down list, choose the regions that interest you. A grid appears with the availability status of Data Factory products for your selected regions.

    Note

    Data stores, such as Azure Storage and Azure SQL Database, and computes, such as HDInsight, that Data Factory uses can be in regions other than the one you choose for Data Factory.

    // Set variables
    string tenantID = "<your tenant ID>";
    string applicationId = "<your application ID>";
    string authenticationKey = "<your authentication key for the application>";
    string subscriptionId = "<your subscription ID to create the factory>";
    string resourceGroup = "<your resource group to create the factory>";
    
    string region = "<location to create the data factory in, such as China East 2>";
    string dataFactoryName = "<name of data factory to create (must be globally unique)>";
    
    // Specify the source Azure Blob information
    string storageAccount = "<your storage account name to copy data>";
    string storageKey = "<your storage account key>";
    string inputBlobPath = "adfv2tutorial/";
    string inputBlobName = "inputEmp.txt";
    
    // Specify the sink Azure SQL Database information
    string azureSqlConnString =
        "Server=tcp:<your server name>.database.chinacloudapi.cn,1433;" +
        "Database=<your database name>;" +
        "User ID=<your username>@<your server name>;" +
        "Password=<your password>;" +
        "Trusted_Connection=False;Encrypt=True;Connection Timeout=30";
    string azureSqlTableName = "dbo.emp";
    
    string storageLinkedServiceName = "AzureStorageLinkedService";
    string sqlDbLinkedServiceName = "AzureSqlDbLinkedService";
    string blobDatasetName = "BlobDataset";
    string sqlDatasetName = "SqlDataset";
    string pipelineName = "Adfv2TutorialBlobToSqlCopy";
    
  3. Main 方法中添加用于创建 DataFactoryManagementClient 类的实例的以下代码。Add the following code to the Main method that creates an instance of DataFactoryManagementClient class. 将使用此对象来创建数据工厂、链接服务、数据集和管道。You use this object to create a data factory, linked service, datasets, and pipeline. 还将使用此对象来监视管道运行详细信息。You also use this object to monitor the pipeline run details.

    // Authenticate and create a data factory management client
    var context = new AuthenticationContext("https://login.chinacloudapi.cn/" + tenantID);
    ClientCredential cc = new ClientCredential(applicationId, authenticationKey);
    AuthenticationResult result = context.AcquireTokenAsync(
        "https://management.chinacloudapi.cn/", cc
    ).Result;
    ServiceClientCredentials cred = new TokenCredentials(result.AccessToken);
    var client = new DataFactoryManagementClient(cred)
    {
        SubscriptionId = subscriptionId,
        BaseUri = new Uri("https://management.chinacloudapi.cn/")
    };
    

Create a data factory

Main 方法中添加用于创建数据工厂的以下代码。Add the following code to the Main method that creates a data factory.

// Create a data factory
Console.WriteLine("Creating a data factory " + dataFactoryName + "...");
Factory dataFactory = new Factory
{
    Location = region,
    Identity = new FactoryIdentity()
};

client.Factories.CreateOrUpdate(resourceGroup, dataFactoryName, dataFactory);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(dataFactory, client.SerializationSettings)
);

while (
    client.Factories.Get(
        resourceGroup, dataFactoryName
    ).ProvisioningState == "PendingCreation"
)
{
    System.Threading.Thread.Sleep(1000);
}

Create linked services

In this tutorial, you create two linked services, one for the source and one for the sink.

Create an Azure Storage linked service

Main 方法中添加用于创建 Azure 存储链接服务 的以下代码。Add the following code to the Main method that creates an Azure Storage linked service. 有关支持的属性和信息,请参阅 Azure Blob 链接服务属性For information about supported properties and details, see Azure Blob linked service properties.

// Create an Azure Storage linked service
Console.WriteLine("Creating linked service " + storageLinkedServiceName + "...");

LinkedServiceResource storageLinkedService = new LinkedServiceResource(
    new AzureStorageLinkedService
    {
        ConnectionString = new SecureString(
            "DefaultEndpointsProtocol=https;AccountName=" + storageAccount +
            ";AccountKey=" + storageKey +
            ";EndpointSuffix=core.chinacloudapi.cn"
        )
    }
);

client.LinkedServices.CreateOrUpdate(
    resourceGroup, dataFactoryName, storageLinkedServiceName, storageLinkedService
);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(storageLinkedService, client.SerializationSettings)
);

Create an Azure SQL Database linked service

Main 方法中添加用于创建 Azure SQL 数据库链接服务 的以下代码。Add the following code to the Main method that creates an Azure SQL Database linked service. 有关支持的属性和信息,请参阅 Azure SQL 数据库链接服务属性For information about supported properties and details, see Azure SQL Database linked service properties.

// Create an Azure SQL Database linked service
Console.WriteLine("Creating linked service " + sqlDbLinkedServiceName + "...");

LinkedServiceResource sqlDbLinkedService = new LinkedServiceResource(
    new AzureSqlDatabaseLinkedService
    {
        ConnectionString = new SecureString(azureSqlConnString)
    }
);

client.LinkedServices.CreateOrUpdate(
    resourceGroup, dataFactoryName, sqlDbLinkedServiceName, sqlDbLinkedService
);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(sqlDbLinkedService, client.SerializationSettings)
);

Create datasets

In this section, you create two datasets: one for the source, the other for the sink.

Create a dataset for the source Azure Blob

Main 方法中添加用于创建 Azure blob 数据集 的以下代码。Add the following code to the Main method that creates an Azure blob dataset. 有关支持的属性和信息,请参阅 Azure Blob 数据集属性For information about supported properties and details, see Azure Blob dataset properties.

You define a dataset that represents the source data in Azure Blob. This Blob dataset refers to the Azure Storage linked service you created in the previous step, and describes:

  • The location of the blob to copy from: FolderPath and FileName
  • The blob format, indicating how to parse the content: TextFormat and its settings, such as column delimiter
  • The data structure, including column names and data types, which map in this example to the sink SQL table (the ID column is omitted because its IDENTITY property generates values automatically)

// Create an Azure Blob dataset
Console.WriteLine("Creating dataset " + blobDatasetName + "...");
DatasetResource blobDataset = new DatasetResource(
    new AzureBlobDataset
    {
        LinkedServiceName = new LinkedServiceReference {
            ReferenceName = storageLinkedServiceName
        },
        FolderPath = inputBlobPath,
        FileName = inputBlobName,
        Format = new TextFormat { ColumnDelimiter = "|" },
        Structure = new List<DatasetDataElement>
        {
            new DatasetDataElement { Name = "FirstName", Type = "String" },
            new DatasetDataElement { Name = "LastName", Type = "String" }
        }
    }
);

client.Datasets.CreateOrUpdate(
    resourceGroup, dataFactoryName, blobDatasetName, blobDataset
);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(blobDataset, client.SerializationSettings)
);

Create a dataset for the sink Azure SQL Database

Main 方法中添加用于创建 Azure SQL 数据库数据集 的以下代码。Add the following code to the Main method that creates an Azure SQL Database dataset. 有关支持的属性和信息,请参阅 Azure SQL 数据库数据集属性For information about supported properties and details, see Azure SQL Database dataset properties.

You define a dataset that represents the sink data in Azure SQL Database. This dataset refers to the Azure SQL Database linked service you created in the previous step. It also specifies the SQL table that holds the copied data.

// Create an Azure SQL Database dataset
Console.WriteLine("Creating dataset " + sqlDatasetName + "...");
DatasetResource sqlDataset = new DatasetResource(
    new AzureSqlTableDataset
    {
        LinkedServiceName = new LinkedServiceReference
        {
            ReferenceName = sqlDbLinkedServiceName
        },
        TableName = azureSqlTableName
    }
);

client.Datasets.CreateOrUpdate(
    resourceGroup, dataFactoryName, sqlDatasetName, sqlDataset
);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(sqlDataset, client.SerializationSettings)
);

Create a pipeline

Main 方法中添加用于创建 包含复制活动的管道 的以下代码。Add the following code to the Main method that creates a pipeline with a copy activity. 在本教程中,此管道包含一个活动:CopyActivity,它接受 Blob 数据集作为源,接受 SQL 数据集作为接收器。In this tutorial, this pipeline contains one activity: CopyActivity, which takes in the Blob dataset as source and the SQL dataset as sink. 若要了解复制活动详情,请参阅 Azure 数据工厂中的复制活动For information about copy activity details, see Copy activity in Azure Data Factory.

// Create a pipeline with copy activity
Console.WriteLine("Creating pipeline " + pipelineName + "...");
PipelineResource pipeline = new PipelineResource
{
    Activities = new List<Activity>
    {
        new CopyActivity
        {
            Name = "CopyFromBlobToSQL",
            Inputs = new List<DatasetReference>
            {
                new DatasetReference() { ReferenceName = blobDatasetName }
            },
            Outputs = new List<DatasetReference>
            {
                new DatasetReference { ReferenceName = sqlDatasetName }
            },
            Source = new BlobSource { },
            Sink = new SqlSink { }
        }
    }
};

client.Pipelines.CreateOrUpdate(resourceGroup, dataFactoryName, pipelineName, pipeline);
Console.WriteLine(
    SafeJsonConvert.SerializeObject(pipeline, client.SerializationSettings)
);

Create a pipeline run

Main 方法中添加用于触发管道运行的以下代码。Add the following code to the Main method that triggers a pipeline run.

// Create a pipeline run
Console.WriteLine("Creating pipeline run...");
CreateRunResponse runResponse = client.Pipelines.CreateRunWithHttpMessagesAsync(
    resourceGroup, dataFactoryName, pipelineName
).Result.Body;
Console.WriteLine("Pipeline run ID: " + runResponse.RunId);

Monitor a pipeline run

Now insert the code to check the pipeline run status and to get details about the copy activity run.

  1. Main 方法中添加以下代码用于持续检查管道运行状态,直到它完成数据复制为止。Add the following code to the Main method to continuously check the statuses of the pipeline run until it finishes copying the data.

    // Monitor the pipeline run
    Console.WriteLine("Checking pipeline run status...");
    PipelineRun pipelineRun;
    while (true)
    {
        pipelineRun = client.PipelineRuns.Get(
            resourceGroup, dataFactoryName, runResponse.RunId
        );
        Console.WriteLine("Status: " + pipelineRun.Status);
        if (pipelineRun.Status == "InProgress")
            System.Threading.Thread.Sleep(15000);
        else
            break;
    }
    
  2. Main 方法中添加以下代码用于检索复制活动运行详细信息,例如,读取或写入的数据的大小。Add the following code to the Main method that retrieves copy activity run details, such as the size of the data that was read or written.

    // Check the copy activity run details
    Console.WriteLine("Checking copy activity run details...");
    
    RunFilterParameters filterParams = new RunFilterParameters(
        DateTime.UtcNow.AddMinutes(-10), DateTime.UtcNow.AddMinutes(10)
    );
    
    ActivityRunsQueryResponse queryResponse = client.ActivityRuns.QueryByPipelineRun(
        resourceGroup, dataFactoryName, runResponse.RunId, filterParams
    );
    
    if (pipelineRun.Status == "Succeeded")
    {
        Console.WriteLine(queryResponse.Value.First().Output);
    }
    else
    {
        Console.WriteLine(queryResponse.Value.First().Error);
    }
    
    Console.WriteLine("\nPress any key to exit...");
    Console.ReadKey();
    

Run the code

Build the application by choosing Build > Build Solution. Then start the application by choosing Debug > Start Debugging, and verify the pipeline execution.

The console prints the progress of creating the data factory, linked services, datasets, pipeline, and pipeline run. It then checks the pipeline run status. Wait until you see the copy activity run details with the size of the data read and written. Then, use tools such as SQL Server Management Studio (SSMS) or Visual Studio to connect to your destination Azure SQL Database and check whether the destination table you specified contains the copied data; a minimal programmatic check is sketched below.
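
For example, assuming you add using System.Data.SqlClient; to the top of Program.cs (the namespace ships with the .NET Framework, so no extra package is needed), a minimal check appended to the end of Main could look like this:

// Optional verification sketch: count the rows copied into dbo.emp.
// Reuses the azureSqlConnString variable defined earlier in this tutorial.
using (SqlConnection conn = new SqlConnection(azureSqlConnString))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.emp", conn))
    {
        // COUNT(*) comes back from ExecuteScalar as a boxed int
        int rowCount = (int)cmd.ExecuteScalar();
        Console.WriteLine("dbo.emp now contains " + rowCount + " rows.");
    }
}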

Sample output

Creating a data factory AdfV2Tutorial...
{
  "identity": {
    "type": "SystemAssigned"
  },
  "location": "China East 2"
}
Creating linked service AzureStorageLinkedService...
{
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.chinacloudapi.cn"
      }
    }
  }
}
Creating linked service AzureSqlDbLinkedService...
{
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "Server=tcp:<servername>.database.chinacloudapi.cn,1433;Database=<databasename>;User ID=<username>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
      }
    }
  }
}
Creating dataset BlobDataset...
{
  "properties": {
    "type": "AzureBlob",
    "typeProperties": {
      "folderPath": "adfv2tutorial/",
      "fileName": "inputEmp.txt",
      "format": {
        "type": "TextFormat",
        "columnDelimiter": "|"
      }
    },
    "structure": [
      {
        "name": "FirstName",
        "type": "String"
      },
      {
        "name": "LastName",
        "type": "String"
      }
    ],
    "linkedServiceName": {
      "type": "LinkedServiceReference",
      "referenceName": "AzureStorageLinkedService"
    }
  }
}
Creating dataset SqlDataset...
{
  "properties": {
    "type": "AzureSqlTable",
    "typeProperties": {
      "tableName": "dbo.emp"
    },
    "linkedServiceName": {
      "type": "LinkedServiceReference",
      "referenceName": "AzureSqlDbLinkedService"
    }
  }
}
Creating pipeline Adfv2TutorialBlobToSqlCopy...
{
  "properties": {
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "BlobSource"
          },
          "sink": {
            "type": "SqlSink"
          }
        },
        "inputs": [
          {
            "type": "DatasetReference",
            "referenceName": "BlobDataset"
          }
        ],
        "outputs": [
          {
            "type": "DatasetReference",
            "referenceName": "SqlDataset"
          }
        ],
        "name": "CopyFromBlobToSQL"
      }
    ]
  }
}
Creating pipeline run...
Pipeline run ID: 1cd03653-88a0-4c90-aabc-ae12d843e252
Checking pipeline run status...
Status: InProgress
Status: InProgress
Status: Succeeded
Checking copy activity run details...
{
  "dataRead": 18,
  "dataWritten": 28,
  "rowsCopied": 2,
  "copyDuration": 2,
  "throughput": 0.01,
  "errors": [],
  "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (China East 2)",
  "usedDataIntegrationUnits": 2,
  "billedDuration": 2
}

Press any key to exit...

Next steps

The pipeline in this sample copies data from Azure Blob storage to Azure SQL Database. You learned how to:

  • Create a data factory.
  • Create Azure Storage and Azure SQL Database linked services.
  • Create Azure Blob and Azure SQL Database datasets.
  • Create a pipeline containing a copy activity.
  • Start a pipeline run.
  • Monitor the pipeline and activity runs.

Advance to the following tutorial to learn how to copy data from on-premises to the cloud: