Tutorial: Migrate PostgreSQL to Azure DB for PostgreSQL online using DMS via the Azure CLI

You can use Azure Database Migration Service to migrate databases from an on-premises PostgreSQL instance to Azure Database for PostgreSQL with minimal downtime. In other words, the migration can be completed with only a brief interruption to the application. In this tutorial, you migrate the DVD Rental sample database from an on-premises instance of PostgreSQL 9.6 to Azure Database for PostgreSQL by using the online migration activity in Azure Database Migration Service.

In this tutorial, you learn how to:

  • Migrate the sample schema by using the pg_dump utility.
  • Create an instance of Azure Database Migration Service.
  • Create a migration project by using Azure Database Migration Service.
  • Run the migration.
  • Monitor the migration.

Note

Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier. The disk is encrypted to prevent data theft during the migration process.

Important

For an optimal migration experience, Azure recommends creating the instance of Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow the migration process and introduce errors.

Prerequisites

To complete this tutorial, you need to:

  • Download and install PostgreSQL community edition 9.5, 9.6, or 10. The source PostgreSQL server version must be 9.5.11, 9.6.7, 10, or later. For more information, see the article Supported PostgreSQL Database Versions.

    Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.

  • Create an instance of Azure Database for PostgreSQL.

  • Create an Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers through either ExpressRoute or VPN. For more information about creating a virtual network, see the Virtual Network documentation, especially the quickstart articles with step-by-step details.

    Note

    During virtual network setup, if you use ExpressRoute with network peering to Azure, add the following service endpoints to the subnet in which the service will be provisioned:

    • Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
    • Storage endpoint
    • Service Bus endpoint

    This configuration is necessary because Azure Database Migration Service lacks internet connectivity.

  • Ensure that your virtual network network security group (NSG) rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article Filter network traffic with network security groups.

  • Configure your Windows Firewall for database engine access.

  • Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL server, which by default listens on TCP port 5432.

  • When using a firewall appliance in front of your source databases, you may need to add firewall rules to allow Azure Database Migration Service to access the source databases for migration.

  • Create a server-level firewall rule for Azure Database for PostgreSQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
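
    As a sketch, such a rule might be created as follows. The resource group and server names reuse the examples later in this tutorial, and the DMS subnet range of 10.0.0.0/24 is purely an assumption; substitute your own values. The command is built as a string so you can review it before running it with eval:

```shell
# Hypothetical names: resource group PostgresDemo, target server builddemotarget.
# The DMS subnet range 10.0.0.0/24 is an assumption; substitute your own range.
CMD="az postgres server firewall-rule create \
  --resource-group PostgresDemo \
  --server-name builddemotarget \
  --name AllowDmsSubnet \
  --start-ip-address 10.0.0.0 \
  --end-ip-address 10.0.0.255"
echo "$CMD"   # review the command, then run it with: eval "$CMD"
```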

  • Use the following method to invoke the CLI:

    • Install and run the CLI locally. CLI 2.0 is the command-line tool for managing Azure resources.

      To download the CLI, follow the instructions in the article Install Azure CLI 2.0. The article also lists the platforms that support CLI 2.0.

      To set up Windows Subsystem for Linux (WSL), follow the instructions in the Windows 10 Installation Guide.

  • Enable logical replication in the postgresql.conf file, and set the following parameters:

    • wal_level = logical
    • max_replication_slots = [number of slots]; recommend setting to five slots
    • max_wal_senders = [number of concurrent tasks] - the max_wal_senders parameter sets the number of concurrent tasks that can run; recommend setting to 10 tasks
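
    Concretely, using the recommended values above, the relevant postgresql.conf entries would look like the following (note that changing wal_level requires a server restart to take effect):

```ini
wal_level = logical
max_replication_slots = 5    # recommended: at least 5 slots
max_wal_senders = 10         # recommended: 10 concurrent tasks
```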

Migrate the sample schema

To bring over all the database objects, such as table schemas, indexes, and stored procedures, you need to extract the schema from the source database and apply it to the target database.

  1. Use the pg_dump -s command to create a schema dump file for a database.

    pg_dump -o -h hostname -U db_username -d db_name -s > your_schema.sql
    

    For example, to dump a schema file for the dvdrental database:

    pg_dump -o -h localhost -U postgres -d dvdrental -s  > dvdrentalSchema.sql
    

    For more information about using the pg_dump utility, see the examples in the pg_dump tutorial.

  2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.

    For details on how to connect and create a database, see the article Create an Azure Database for PostgreSQL server in the Azure portal.

  3. Import the schema into the target database you created by restoring the schema dump file.

    psql -h hostname -U db_username -d db_name < your_schema.sql 
    

    For example:

    psql -h mypgserver-20170401.postgres.database.chinacloudapi.cn  -U postgres -d dvdrental < dvdrentalSchema.sql
    
  4. If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail. Execute the following script in pgAdmin or psql to extract the drop foreign key script and the add foreign key script at the destination (Azure Database for PostgreSQL).

    SELECT Queries.tablename,
           concat('alter table ', Queries.tablename, ' ',
                  STRING_AGG(concat('DROP CONSTRAINT ', Queries.foreignkey), ',')) AS DropQuery,
           concat('alter table ', Queries.tablename, ' ',
                  STRING_AGG(concat('ADD CONSTRAINT ', Queries.foreignkey,
                                    ' FOREIGN KEY (', column_name, ')',
                                    ' REFERENCES ', foreign_table_name, '(', foreign_column_name, ')'), ',')) AS AddQuery
    FROM
        (SELECT
            tc.table_schema,
            tc.constraint_name AS foreignkey,
            tc.table_name AS tablename,
            kcu.column_name,
            ccu.table_schema AS foreign_table_schema,
            ccu.table_name AS foreign_table_name,
            ccu.column_name AS foreign_column_name
         FROM information_schema.table_constraints AS tc
         JOIN information_schema.key_column_usage AS kcu
           ON tc.constraint_name = kcu.constraint_name
          AND tc.table_schema = kcu.table_schema
         JOIN information_schema.constraint_column_usage AS ccu
           ON ccu.constraint_name = tc.constraint_name
          AND ccu.table_schema = tc.table_schema
         WHERE constraint_type = 'FOREIGN KEY') Queries
    GROUP BY Queries.tablename;
    

    Run the drop foreign key statements (the second column) in the query result, and keep the add foreign key statements (the third column) to run after the migration completes.
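
    One way to script this is sketched below, assuming you export the query's result as tab-separated values. A fabricated one-row fk.tsv stands in for real output so the snippet is self-contained; the commented psql invocations use placeholder hosts and credentials:

```shell
# In practice, produce fk.tsv from the query in the previous step, for example:
#   psql -h <source> -U postgres -d dvdrental -A -F $'\t' -t -f fk_query.sql > fk.tsv
# Fabricated sample row (tablename, DropQuery, AddQuery) for illustration:
printf 'orders\talter table orders DROP CONSTRAINT orders_customer_fk\talter table orders ADD CONSTRAINT orders_customer_fk FOREIGN KEY (customer_id) REFERENCES customer(id)\n' > fk.tsv

cut -f2 fk.tsv | sed 's/$/;/' > drop_fk.sql   # run against the target before migration
cut -f3 fk.tsv | sed 's/$/;/' > add_fk.sql    # run against the target after cutover

cat drop_fk.sql
# Apply with, for example:
#   psql -h <target>.postgres.database.chinacloudapi.cn -U <user> -d dvdrental -f drop_fk.sql
```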

  5. Triggers in the data (insert or update triggers) will enforce data integrity in the target ahead of the data replicated from the source. It's recommended that you disable triggers in all tables at the target during the migration, and then re-enable them after the migration is complete.

    To disable triggers in the target database, use the following command:

    select concat ('alter table ', event_object_table, ' disable trigger ', trigger_name)
    from information_schema.triggers;
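
    A sketch of one way to capture both the disable statements and the matching re-enable statements ahead of time is shown below; the psql calls are commented out and use placeholder connection details:

```shell
# Generator query: produces one DISABLE statement per trigger; a sed pass
# derives the matching ENABLE statements so triggers can be restored later.
cat > gen_disable.sql <<'SQL'
select concat('alter table ', event_object_table, ' disable trigger ', trigger_name, ';')
from information_schema.triggers;
SQL
sed 's/disable trigger/enable trigger/' gen_disable.sql > gen_enable.sql

# Then, against the target (placeholder connection details):
#   psql -h <target> -U <user> -d <db> -t -f gen_disable.sql > disable_triggers.sql
#   psql -h <target> -U <user> -d <db> -f disable_triggers.sql
# After the migration completes, repeat the same two steps with gen_enable.sql.
```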
    
  6. If there are ENUM data types in any tables, it's recommended that you temporarily update them to the 'character varying' data type in the target table. After data replication is done, revert the data type to ENUM.
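
    As a hypothetical example based on the dvdrental sample, where the film.rating column uses the mpaa_rating ENUM type, the widen-then-revert sequence on the target might look like:

```sql
-- Before migration: widen the ENUM column to character varying
ALTER TABLE film ALTER COLUMN rating TYPE character varying;

-- After replication completes: restore the original ENUM type
-- ALTER TABLE film ALTER COLUMN rating TYPE mpaa_rating USING rating::mpaa_rating;
```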

Provisioning an instance of DMS using the CLI

  1. Install the dms sync extension:

    • Sign in to Azure by running the following command:

      az cloud set --name AzureChinaCloud
      az login
      
    • When prompted, open a web browser and enter the code to authenticate your device. Follow the instructions as listed.

    • Add the dms extension:

      • To list the available extensions, run the following command:

        az extension list-available -otable
        
      • To install the extension, run the following command:

        az extension add -n dms-preview
        
    • To verify that the dms extension installed correctly, run the following command:

      az extension list -otable
      

      You should see the following output:

      ExtensionType    Name
      ---------------  ------
      whl              dms
      

      Important

      Make sure that your extension version is above 0.11.0.

    • At any time, view all commands supported in DMS by running the following command:

      az dms -h
      
    • If you have multiple Azure subscriptions, run the following command to set the subscription that you want to use to provision an instance of the DMS service.

      az account set -s 97181df2-909d-420b-ab93-1bff15acb6b7
      
  2. Provision an instance of DMS by running the following command:

    az dms create -l <location> -n <newServiceName> -g <yourResourceGroupName> --sku-name Premium_4vCores --subnet /subscriptions/{vnet subscription id}/resourceGroups/{vnet resource group}/providers/Microsoft.Network/virtualNetworks/{vnet name}/subnets/{subnet name} --tags tagName1=tagValue1 tagWithNoValue
    

    例如,以下命令会创建一项具有下述参数的服务:For example the following command will create a service in:

    • Location: ChinaEast2
    • Subscription: 97181df2-909d-420b-ab93-1bff15acb6b7
    • Resource group name: PostgresDemo
    • DMS service name: PostgresCLI
    az dms create -l chinaeast2 -g PostgresDemo -n PostgresCLI --subnet /subscriptions/97181df2-909d-420b-ab93-1bff15acb6b7/resourceGroups/ERNetwork/providers/Microsoft.Network/virtualNetworks/AzureDMS-CORP-USC-VNET-5044/subnets/Subnet-1 --sku-name Premium_4vCores
    

    It takes about 10-12 minutes to create the instance of the DMS service.

  3. To identify the IP address of the DMS agent so that you can add it to the Postgres pg_hba.conf file, run the following command:

    az network nic list -g <ResourceGroupName> --query '[].ipConfigurations | [].privateIpAddress'
    

    For example:

    az network nic list -g PostgresDemo --query '[].ipConfigurations | [].privateIpAddress'
    

    You should get a result similar to the following address:

    [
      "172.16.136.18"
    ]
    
  4. Add the IP address of the DMS agent to the Postgres pg_hba.conf file.

    • Take note of the DMS IP address after you finish provisioning in DMS.

    • Add the IP address to the pg_hba.conf file on the source, similar to the following entries:

      host    all     all     172.16.136.18/10    md5
      host    replication     postgres    172.16.136.18/10    md5
      
  5. Next, create a PostgreSQL migration project by running the following command:

    az dms project create -l <location> -g <ResourceGroupName> --service-name <yourServiceName> --source-platform PostgreSQL --target-platform AzureDbForPostgreSql -n <newProjectName>
    

    For example, the following command creates a project using these parameters:

    • Location: ChinaEast2
    • Resource group name: PostgresDemo
    • Service name: PostgresCLI
    • Project name: PGMigration
    • Source platform: PostgreSQL
    • Target platform: AzureDbForPostgreSql

      az dms project create -l chinaeast2 -n PGMigration -g PostgresDemo --service-name PostgresCLI --source-platform PostgreSQL --target-platform AzureDbForPostgreSql
      
  6. Create a PostgreSQL migration task by using the following steps.

    This step includes using the source IP, user ID, and password; the destination IP, user ID, and password; and the task type to establish connectivity.

    • To see a full list of options, run the following command:

      az dms project task create -h
      

      For both the source and target connections, the input parameter refers to a JSON file that contains the object list.

      The format of the connection JSON object for PostgreSQL connections is shown below:

      {
          "userName": "user name",         // if this is missing or null, you will be prompted
          "password": null,                // if this is missing or null (highly recommended), you will be prompted
          "serverName": "server name",
          "databaseName": "database name", // if this is missing, it will default to the 'postgres' database
          "port": 5432                     // if this is missing, it will default to 5432
      }
      
    • There's also a database options JSON file that lists the JSON objects. For PostgreSQL, the format of the database options JSON object is shown below:

      [
          {
              "name": "source database",
              "target_database_name": "target database"
          },
          ...n
      ]
      
    • Create a JSON file with Notepad, copy the following content into it, and then save the file as C:\DMS\source.json.

      {
          "userName": "postgres",
          "password": null,
          "serverName": "13.51.14.222",
          "databaseName": "dvdrental",
          "port": 5432
      }
      
    • Create another file named target.json and save it as C:\DMS\target.json. Include the following content:

      {
          "userName": "dms@builddemotarget",
          "password": null,
          "serverName": "builddemotarget.postgres.database.chinacloudapi.cn",
          "databaseName": "inventory",
          "port": 5432
      }
      
    • Create a database options JSON file that lists the database to migrate, and save it as C:\DMS\option.json:

      [
          {
              "name": "dvdrental",
              "target_database_name": "dvdrental"
          }
      ]
      
    • Run the following command, which takes in the source connection, target connection, and database options JSON files.

      az dms project task create -g PostgresDemo --project-name PGMigration --source-platform postgresql --target-platform azuredbforpostgresql --source-connection-json c:\DMS\source.json --database-options-json C:\DMS\option.json --service-name PostgresCLI --target-connection-json c:\DMS\target.json --task-type OnlineMigration -n Runnowtask
      

      At this point, you've successfully submitted a migration task.

  7. To show the progress of the task, run the following command:

    az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
    

    Or:

    az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask --expand output
    
  8. You can also query for the migrationState from the expanded output:

    az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask --expand output --query 'properties.output[].migrationState | [0]'

    "READY_TO_COMPLETE"
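
    If you script the wait for cutover readiness, a polling loop along the following lines can help. The az call is stubbed out here so the sketch is self-contained; in practice, get_migration_state would run the az dms project task show query above:

```shell
get_migration_state() {
  # Real call (resource names as in the examples above):
  #   az dms project task show --service-name PostgresCLI --project-name PGMigration \
  #     --resource-group PostgresDemo --name Runnowtask --expand output \
  #     --query 'properties.output[].migrationState | [0]' -o tsv
  echo "READY_TO_COMPLETE"   # stubbed return value for this sketch
}

until [ "$(get_migration_state)" = "READY_TO_COMPLETE" ]; do
  echo "waiting for full load to finish..."
  sleep 30
done
echo "database is ready for cutover"
```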
    

Understanding migration task status

In the output file, there are several parameters that indicate the progress of the migration. For example, see the output file below:

  "output": [                                 // Database-level output
        {
          "appliedChanges": 0,        //Total incremental changes applied after full load
          "cdcDeleteCounter": 0,      //Total delete operations applied after full load
          "cdcInsertCounter": 0,      //Total insert operations applied after full load
          "cdcUpdateCounter": 0,      //Total update operations applied after full load
          "databaseName": "inventory",
          "endedOn": null,
          "fullLoadCompletedTables": 2,   //Number of tables completed full load
          "fullLoadErroredTables": 0, //Number of tables that contain migration error
          "fullLoadLoadingTables": 0, //Number of tables that are in loading status
          "fullLoadQueuedTables": 0,  //Number of tables that are in queued status
          "id": "db|inventory",
          "incomingChanges": 0,       //Number of changes after full load
          "initializationCompleted": true,
          "latency": 0,
          "migrationState": "READY_TO_COMPLETE",  //Status of migration task. READY_TO_COMPLETE means the database is ready for cutover
          "resultType": "DatabaseLevelOutput",
          "startedOn": "2018-07-05T23:36:02.27839+00:00"
        },
        {
          "databaseCount": 1,
          "endedOn": null,
          "id": "dd27aa3a-ed71-4bff-ab34-77db4261101c",
          "resultType": "MigrationLevelOutput",
          "sourceServer": "138.91.123.10",
          "sourceVersion": "PostgreSQL",
          "startedOn": "2018-07-05T23:36:02.27839+00:00",
          "state": "PENDING",
          "targetServer": "builddemotarget.postgres.database.chinacloudapi.cn",
          "targetVersion": "Azure Database for PostgreSQL"
        },
        {                                     // Table 1
          "cdcDeleteCounter": 0,
          "cdcInsertCounter": 0,
          "cdcUpdateCounter": 0,
          "dataErrorsCount": 0,
          "databaseName": "inventory",
          "fullLoadEndedOn": "2018-07-05T23:36:20.740701+00:00",  //Full load completed time
          "fullLoadEstFinishTime": "1970-01-01T00:00:00+00:00",
          "fullLoadStartedOn": "2018-07-05T23:36:15.864552+00:00",    //Full load started time
          "fullLoadTotalRows": 10,                    //Number of rows loaded in full load
          "fullLoadTotalVolumeBytes": 7056,               //Volume in Bytes in full load
          "id": "or|inventory|public|actor",          
          "lastModifiedTime": "2018-07-05T23:36:16.880174+00:00",
          "resultType": "TableLevelOutput",
          "state": "COMPLETED",                   //State of migration for this table
          "tableName": "public.catalog",              //Table name
          "totalChangesApplied": 0                //Total sync changes that applied after full load
        },
        {                                     // Table 2
          "cdcDeleteCounter": 0,
          "cdcInsertCounter": 50,
          "cdcUpdateCounter": 0,
          "dataErrorsCount": 0,
          "databaseName": "inventory",
          "fullLoadEndedOn": "2018-07-05T23:36:23.963138+00:00",
          "fullLoadEstFinishTime": "1970-01-01T00:00:00+00:00",
          "fullLoadStartedOn": "2018-07-05T23:36:19.302013+00:00",
          "fullLoadTotalRows": 112,
          "fullLoadTotalVolumeBytes": 46592,
          "id": "or|inventory|public|address",
          "lastModifiedTime": "2018-07-05T23:36:20.308646+00:00",
          "resultType": "TableLevelOutput",
          "state": "COMPLETED",
          "tableName": "public.orders",
          "totalChangesApplied": 0
        }
      ],                          // DMS migration task state
      "state": "Running", //Migration task state; Running means it is still listening for any changes that might come in
      "taskType": null
    },
    "resourceGroup": "PostgresDemo",
    "type": "Microsoft.DataMigration/services/projects/tasks"

Cutover migration task

The database is ready for cutover when the full load is complete. Depending on how busy the source server is with new incoming transactions, the DMS task might still be applying changes after the full load completes.

To ensure that all data is caught up, validate row counts between the source and target databases. For example, check the following fields in the task output:

    "migrationState": "READY_TO_COMPLETE", //Status of migration task. READY_TO_COMPLETE means the database is ready for cutover
    "incomingChanges": 0,      //continue to check for 5-10 minutes to make sure no new incoming changes need to be applied to the target server
    "fullLoadTotalRows": 10,   //full load for table 1
    "cdcDeleteCounter": 0,     //delete, insert, and update counters on incremental sync after full load
    "cdcInsertCounter": 50,
    "cdcUpdateCounter": 0,
    "fullLoadTotalRows": 112,  //full load for table 2
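
To go beyond the counters above, you can also compare actual row counts per table. A sketch is shown below; the psql calls are commented out and use placeholder hosts and credentials, with stand-in values keeping the snippet self-contained:

```shell
TABLE=public.orders
# SRC=$(psql -h <source> -U postgres -d dvdrental -t -A -c "select count(*) from $TABLE")
# TGT=$(psql -h builddemotarget.postgres.database.chinacloudapi.cn -U dms@builddemotarget \
#       -d dvdrental -t -A -c "select count(*) from $TABLE")
SRC=112; TGT=112   # stand-in values for this sketch

if [ "$SRC" -eq "$TGT" ]; then
  echo "$TABLE: in sync ($SRC rows)"
else
  echo "$TABLE: mismatch (source=$SRC target=$TGT)"
fi
```

Repeat per table, or drive the loop from pg_stat_user_tables on the source.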
  1. Perform the cutover database migration task by using the following command:

    az dms project task cutover -h
    

    For example:

    az dms project task cutover --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask  --object-name Inventory
    
  2. To monitor the cutover progress, run the following command:

    az dms project task show --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
    

Service, project, task cleanup

If you need to cancel or delete any DMS task, project, or service, perform the cancellation in the following sequence:

  • Cancel any running task
  • Delete the task
  • Delete the project
  • Delete the DMS service
  1. To cancel a running task, use the following command:

    az dms project task cancel --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
    
  2. To delete a running task, use the following command:

    az dms project task delete --service-name PostgresCLI --project-name PGMigration --resource-group PostgresDemo --name Runnowtask
    
  3. To cancel a running project, cancel its running task by using the following command:

    az dms project task cancel -n runnowtask --project-name PGMigration -g PostgresDemo --service-name PostgresCLI
    
  4. To delete the project, use the following command:

    az dms project delete -n PGMigration -g PostgresDemo --service-name PostgresCLI
    
  5. To delete the DMS service, use the following command:

    az dms delete -g PostgresDemo -n PostgresCLI
    

Next steps