Upgrade data management to SDK v2

In V1, an Azure ML dataset can be a FileDataset or a TabularDataset. In V2, an Azure ML data asset can be of type uri_folder, uri_file, or mltable. Conceptually, FileDataset maps to uri_folder and uri_file, while TabularDataset maps to mltable.

  • URI (uri_folder, uri_file) - a Uniform Resource Identifier, that is, a reference to a storage location on your local machine or in the cloud, which makes it easy to access data in jobs.
  • MLTable - a way to abstract the schema definition of tabular data so that consumers of the data can more easily materialize the table into a Pandas/Dask/Spark dataframe.
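
As a rough mental model, that type mapping can be written down as a simple lookup. The dictionary below is purely illustrative and not part of any Azure ML SDK:

```python
# Illustrative only: a rough v1 -> v2 type mapping, not an Azure ML SDK API.
V1_TO_V2_TYPES = {
    "FileDataset": ["uri_folder", "uri_file"],
    "TabularDataset": ["mltable"],
}

def v2_candidates(v1_type: str) -> list:
    """Return the v2 asset types that roughly replace a v1 dataset type."""
    return V1_TO_V2_TYPES.get(v1_type, [])
```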

This article compares data scenarios in SDK v1 and SDK v2.

Create a FileDataset / URI-type data asset

  • SDK v1 - Create a FileDataset

    from azureml.core import Workspace, Datastore, Dataset
    
    # get the existing workspace and the datastore that holds the files
    workspace = Workspace.from_config()
    datastore = Datastore.get(workspace, 'your datastore name')
    
    # create a FileDataset pointing to files in 'animals' folder and its subfolders recursively
    datastore_paths = [(datastore, 'animals')]
    animal_ds = Dataset.File.from_files(path=datastore_paths)
    
    # create a FileDataset from image and label files behind public web urls
    web_paths = ['https://azureopendatastorage.blob.core.chinacloudapi.cn/mnist/train-images-idx3-ubyte.gz',
                 'https://azureopendatastorage.blob.core.chinacloudapi.cn/mnist/train-labels-idx1-ubyte.gz']
    mnist_ds = Dataset.File.from_files(path=web_paths)
    
  • SDK v2

    • Create a URI_FOLDER-type data asset

      from azure.ai.ml.entities import Data
      from azure.ai.ml.constants import AssetTypes
      
      # Supported paths include:
      # local: './<path>'
      # blob:  'https://<account_name>.blob.core.chinacloudapi.cn/<container_name>/<path>'
      # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.chinacloudapi.cn/<path>/'
      # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
      
      my_path = '<path>'
      
      my_data = Data(
          path=my_path,
          type=AssetTypes.URI_FOLDER,
          description="<description>",
          name="<name>",
          version='<version>'
      )
      
      ml_client.data.create_or_update(my_data)
      
    • Create a URI_FILE-type data asset.

      from azure.ai.ml.entities import Data
      from azure.ai.ml.constants import AssetTypes
      
      # Supported paths include:
      # local: './<path>/<file>'
      # blob:  'https://<account_name>.blob.core.chinacloudapi.cn/<container_name>/<path>/<file>'
      # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.chinacloudapi.cn/<path>/<file>'
      # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
      my_path = '<path>'
      
      my_data = Data(
          path=my_path,
          type=AssetTypes.URI_FILE,
          description="<description>",
          name="<name>",
          version="<version>"
      )
      
      ml_client.data.create_or_update(my_data)
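
The supported path formats listed in the comments above follow recognizable shapes. A small, SDK-independent helper like the following sketch (not part of azure-ai-ml) can sanity-check which form a given path takes before you register it:

```python
import re

# Illustrative helper, not part of azure-ai-ml: classify a data path by the
# supported forms listed in the comments above.
_PATTERNS = {
    "blob": re.compile(r"^https://[^/]+\.blob\.core\.chinacloudapi\.cn/"),
    "adls_gen2": re.compile(r"^abfss://[^@]+@[^/]+\.dfs\.core\.chinacloudapi\.cn/"),
    "datastore": re.compile(r"^azureml://datastores/[^/]+/paths/"),
}

def classify_path(path: str) -> str:
    """Return 'blob', 'adls_gen2', 'datastore', or 'local' for a data path."""
    for kind, pattern in _PATTERNS.items():
        if pattern.match(path):
            return kind
    return "local"  # anything else is treated as a local path
```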
      

Create a tabular dataset / data asset

  • SDK v1

    from azureml.core import Workspace, Datastore, Dataset
    
    datastore_name = 'your datastore name'
    
    # get existing workspace
    workspace = Workspace.from_config()
    
    # retrieve an existing datastore in the workspace by name
    datastore = Datastore.get(workspace, datastore_name)
    
    # create a TabularDataset from 3 file paths in datastore
    datastore_paths = [(datastore, 'weather/2018/11.csv'),
                       (datastore, 'weather/2018/12.csv'),
                       (datastore, 'weather/2019/*.csv')]
    
    weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
    
  • SDK v2 - Create an mltable data asset from a yaml definition

    type: mltable
    
    paths:
      - pattern: ./*.txt
    transformations:
      - read_delimited:
          delimiter: ','
          encoding: ascii
          header: all_files_same_headers
    
    Then register a data asset whose path points at the folder containing that MLTable file:
    
    from azure.ai.ml.entities import Data
    from azure.ai.ml.constants import AssetTypes
    
    # my_path must point to folder containing MLTable artifact (MLTable file + data)
    # Supported paths include:
    # local: './<path>'
    # blob:  'https://<account_name>.blob.core.chinacloudapi.cn/<container_name>/<path>'
    # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.chinacloudapi.cn/<path>/'
    # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
    
    my_path = '<path>'
    
    my_data = Data(
        path=my_path,
        type=AssetTypes.MLTABLE,
        description="<description>",
        name="<name>",
        version='<version>'
    )
    
    ml_client.data.create_or_update(my_data)
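
Because `my_path` must point at a folder containing a file literally named `MLTable`, it can help to materialize that file locally before registering. The sketch below is plain Python (no Azure dependency) and reuses the yaml definition shown above:

```python
from pathlib import Path

# Sketch: write an MLTable folder locally before registering it as a data asset.
# The yaml content mirrors the definition shown above.
MLTABLE_YAML = """\
type: mltable

paths:
  - pattern: ./*.txt
transformations:
  - read_delimited:
      delimiter: ','
      encoding: ascii
      header: all_files_same_headers
"""

def write_mltable_folder(folder: str) -> Path:
    """Create <folder>/MLTable containing the yaml definition and return its path."""
    mltable_file = Path(folder) / "MLTable"  # the file name must be exactly 'MLTable'
    mltable_file.parent.mkdir(parents=True, exist_ok=True)
    mltable_file.write_text(MLTABLE_YAML, encoding="utf-8")
    return mltable_file
```

After writing the folder, pass its path as `my_path` when creating the `Data` asset.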
    

Consume data in an experiment/job

  • SDK v1

    from azureml.core import ScriptRunConfig
    
    src = ScriptRunConfig(source_directory=script_folder,
                          script='train_titanic.py',
                          # pass dataset as an input with friendly name 'titanic'
                          arguments=['--input-data', titanic_ds.as_named_input('titanic')],
                          compute_target=compute_target,
                          environment=myenv)
    
    # Submit the run configuration for your training run
    run = experiment.submit(src)
    run.wait_for_completion(show_output=True)
    
  • SDK v2

    from azure.ai.ml import command
    from azure.ai.ml.entities import Data
    from azure.ai.ml import Input, Output
    from azure.ai.ml.constants import AssetTypes
    
    # Possible Asset Types for Data:
    # AssetTypes.URI_FILE
    # AssetTypes.URI_FOLDER
    # AssetTypes.MLTABLE
    
    # Possible Paths for Data:
    # Blob: https://<account_name>.blob.core.chinacloudapi.cn/<container_name>/<folder>/<file>
    # Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
    # Data Asset: azureml:<my_data>:<version>
    
    my_job_inputs = {
        "raw_data": Input(type=AssetTypes.URI_FOLDER, path="<path>")
    }
    
    my_job_outputs = {
        "prep_data": Output(type=AssetTypes.URI_FOLDER, path="<path>")
    }
    
    job = command(
        code="./src",  # local path where the code is stored
        command="python process_data.py --raw_data ${{inputs.raw_data}} --prep_data ${{outputs.prep_data}}",
        inputs=my_job_inputs,
        outputs=my_job_outputs,
        environment="<environment_name>:<version>",
        compute="cpu-cluster",
    )
    
    # submit the command
    returned_job = ml_client.create_or_update(job)
    # get a URL for the status of the job
    returned_job.services["Studio"].endpoint
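
At submission time, Azure ML replaces the `${{inputs.<name>}}` / `${{outputs.<name>}}` placeholders in the `command` string with the concrete paths it mounts or downloads onto the compute. The substitution can be sketched in plain Python (illustrative only, not how the service is implemented):

```python
import re

# Illustrative sketch of how ${{inputs.<name>}} / ${{outputs.<name>}} placeholders
# in a command string resolve to concrete paths on the compute target.
def resolve_command(command: str, inputs: dict, outputs: dict) -> str:
    def substitute(match):
        kind, name = match.group(1), match.group(2)
        mapping = inputs if kind == "inputs" else outputs
        return mapping[name]
    return re.sub(r"\$\{\{(inputs|outputs)\.(\w+)\}\}", substitute, command)
```

For example, resolving the command above with `inputs={"raw_data": "/mnt/data/raw"}` and `outputs={"prep_data": "/mnt/data/prep"}` splices those mount paths into the script arguments.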
    

Mapping of key functionality between SDK v1 and SDK v2

Functionality in SDK v1 | Rough mapping in SDK v2
Methods/APIs in SDK v1  | Methods/APIs in SDK v2

Next steps

For more information, see the following articles: