Attaching a GPU to an Ubuntu Linux VM on Azure Stack HCI

Applies to: Windows Server 2019

This topic provides step-by-step instructions on how to install and configure an NVIDIA graphics processing unit (GPU) with Azure Stack HCI using Discrete Device Assignment (DDA) technology for an Ubuntu virtual machine (VM). This document assumes you have an Azure Stack HCI cluster deployed and VMs installed.

Install the GPU and then dismount it in PowerShell

  1. Install the NVIDIA GPU(s) physically into the appropriate server(s), following OEM instructions and BIOS recommendations.
  2. Power on each server.
  3. Sign in to the server(s) with the NVIDIA GPU installed, using an account with administrative privileges.
  4. Open Device Manager and navigate to the Other devices section. You should see a device listed as "3D Video Controller."
  5. Right-click "3D Video Controller" to bring up the Properties page. Click Details. From the dropdown under Property, select "Location paths."
  6. Note the value containing the string PCIRoot, as highlighted in the screenshot below. Right-click the value and copy/save it. You can also retrieve the location path from PowerShell, as sketched below. (Screenshot: location paths)
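    If you prefer PowerShell over Device Manager, a minimal sketch for retrieving the location path follows; the FriendlyName filter is an assumption based on how the device appears in step 4 and may need adjusting for your hardware:

    # Find the GPU that currently shows up as "3D Video Controller" (no driver bound yet)
    $gpu = Get-PnpDevice -PresentOnly | Where-Object { $_.FriendlyName -like "*3D Video Controller*" }

    # Read its location path (the value beginning with PCIROOT)
    ($gpu | Get-PnpDeviceProperty -KeyName "DEVPKEY_Device_LocationPaths").Data[0]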
  7. Open Windows PowerShell with elevated privileges and run the Dismount-VMHostAssignableDevice cmdlet to dismount the GPU device so it can be assigned to a VM with DDA. Replace the LocationPath value with the value you obtained for your device in step 6.
    Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(16)#PCI(0000)#PCI(0000)" -force
    
  8. Confirm the device is listed under System devices in Device Manager as Dismounted. You can also confirm this from PowerShell, as sketched below. (Screenshot: dismounted device)
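    To double-check from PowerShell that the device is dismounted and available for assignment, the following cmdlet (a sketch, assuming the Hyper-V PowerShell module is installed) should list the GPU by its location path:

    # Devices dismounted from the host and available for DDA
    Get-VMHostAssignableDevice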

Create and configure an Ubuntu virtual machine

  1. Download the Ubuntu desktop release 18.04.2 ISO.

  2. Open Hyper-V Manager on the node of the system with the GPU installed.

    Note

    DDA doesn't support failover. This is a virtual machine limitation with DDA. Therefore, we recommend using Hyper-V Manager rather than Failover Cluster Manager to deploy the VM on the node. Using Failover Cluster Manager with DDA fails with an error message indicating that the VM has a device that doesn't support high availability.

  3. Using the Ubuntu ISO downloaded in step 1, use the New Virtual Machine Wizard in Hyper-V Manager to create an Ubuntu Generation 1 VM with 2GB of memory and a network card attached. The same can be scripted in PowerShell, as sketched below.
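    If you prefer to script the VM creation instead of using the wizard, a minimal PowerShell sketch is below; the VM name (Ubuntu, matching the later steps), virtual switch name, and file paths are assumptions to adjust for your environment:

    # Create a Generation 1 VM with 2GB of memory and a NIC on an existing virtual switch
    New-VM -Name Ubuntu -Generation 1 -MemoryStartupBytes 2GB `
        -NewVHDPath "C:\VMs\Ubuntu\Ubuntu.vhdx" -NewVHDSizeBytes 40GB `
        -SwitchName "ExternalSwitch"

    # Attach the Ubuntu ISO downloaded in step 1
    Add-VMDvdDrive -VMName Ubuntu -Path "C:\ISO\ubuntu-18.04.2-desktop-amd64.iso"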

  4. In PowerShell, assign the dismounted GPU device to the VM using the cmdlets below, replacing the LocationPath value with the value for your device.

    # Confirm that there are no DDA devices assigned to the VM
    Get-VMAssignableDevice -VMName Ubuntu
    
    # Assign the GPU to the VM
    Add-VMAssignableDevice -LocationPath "PCIROOT(16)#PCI(0000)#PCI(0000)" -VMName Ubuntu
    
    # Confirm that the GPU is assigned to the VM
    Get-VMAssignableDevice -VMName Ubuntu
    

    After the GPU is successfully assigned to the VM, the output will be similar to the following: (Screenshot: GPU assignment)

    Configure additional values by following the GPU documentation here (replace VMName in the commands below with the name of your VM, for example Ubuntu):

     # Enable Write-Combining on the CPU
     Set-VM -GuestControlledCacheTypes $true -VMName VMName
    
     # Configure the 32 bit MMIO space
     Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName
    
     # Configure greater than 32 bit MMIO space
     Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
    

    Note

    The value 33280Mb should suffice for most GPUs, but it should be replaced with a value greater than your GPU memory; see the example below.
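    For example, for a hypothetical GPU with 24 GB (24576 MB) of memory, pick a value comfortably above that:

     # Example only: 26000Mb leaves headroom above a 24 GB GPU
     Set-VM -HighMemoryMappedIoSpace 26000Mb -VMName VMName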

  5. Using Hyper-V Manager, connect to the VM and start the Ubuntu OS install. Choose the defaults to install the Ubuntu OS on the VM.

  6. After the install is complete, use Hyper-V Manager to shut down the VM and configure the VM's Automatic Stop Action to shut down the guest operating system, as in the following screenshot: (Screenshot: guest OS shutdown setting)

  7. Log in to Ubuntu and open the terminal to install SSH:

     $ sudo apt install openssh-server
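    The openssh-server package normally enables and starts the SSH service automatically on Ubuntu 18.04; if it doesn't, these standard commands (an assumption, not part of the original steps) start it and check its status:

     $ sudo systemctl enable --now ssh
     $ systemctl status ssh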
    
  8. Find the TCP/IP address of the Ubuntu installation using the ifconfig command and copy the IP address of the eth0 interface (see also the sketch below).
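    If ifconfig isn't available (it ships in the net-tools package), the ip command reports the same information; a minimal sketch:

     $ ip -4 addr show eth0
     $ sudo apt install net-tools   # only needed if you prefer ifconfig
     $ ifconfig eth0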

  9. Use an SSH client such as PuTTY to connect to the Ubuntu VM for further configuration.

  10. Once logged in through the SSH client, issue the lspci command and validate that the NVIDIA GPU is listed as "3D controller" (a filtered check is sketched below).
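    A quick way to filter the lspci output for the NVIDIA device:

     $ lspci | grep -i nvidia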

    Important

    If the NVIDIA GPU is not seen as "3D controller," do not proceed further. Ensure that the steps above have been followed before proceeding.

  11. Within the VM, search for and open Software & Updates. Navigate to Additional Drivers, then choose the latest NVIDIA GPU driver listed. Complete the driver install by clicking the Apply Changes button. (Screenshot: driver installation)

  12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the nvidia-smi command to verify that the NVIDIA GPU driver installation completed successfully; a check sequence is sketched after this step. The output should be similar to the following screenshot: (Screenshot: nvidia-smi output)
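    A minimal post-reboot check sequence (the lsmod check is an extra sanity check, not part of the original steps):

     $ sudo reboot
     # after reconnecting over SSH:
     $ nvidia-smi
     $ lsmod | grep nvidia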

  13. Using the SSH client, set up the repository and install the Docker CE Engine:

    $ sudo apt-get update
    $ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
    

    Add Docker's official GPG key:

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    

    Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88 by searching for the last eight characters of the fingerprint:

    $ sudo apt-key fingerprint 0EBFCD88
    

    Your output should look similar to this:

    pub   rsa4096 2017-02-22 [SCEA]
    9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
    sub   rsa4096 2017-02-22 [S]
    

    Set up the stable repository for the Ubuntu AMD64 architecture:

    $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    

    Update packages and install Docker CE:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io
    

    Verify the Docker CE install:

    $ sudo docker run hello-world
    

Configure Azure IoT Edge

To prepare for this configuration, review the FAQ in the NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano GitHub repo, which explains why Docker needs to be installed instead of Moby. After reviewing it, proceed to the steps below.

Install NVIDIA Docker

  1. From the SSH client, add the package repositories:

    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
    sudo apt-key add -
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    
    sudo apt-get update
    
  2. Install nvidia-docker2 and reload the Docker daemon configuration:

    sudo apt-get install -y nvidia-docker2
    sudo pkill -SIGHUP dockerd
    
  3. Reboot the VM:

    sudo /sbin/shutdown -r now
    
  4. Upon reboot, verify successful installation of NVIDIA Docker:

    sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
    

    If the installation is successful, the output will be similar to the following screenshot: (Screenshot: successful NVIDIA Docker install)

  5. Following the instructions here, proceed to install Azure IoT Edge, skipping the runtime install:

    curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
    
    sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
    
    curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
    sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
    sudo apt-get update
    
    sudo apt-get install iotedge
    

    Note

    After installing Azure IoT Edge, verify that config.yaml is present on the Ubuntu VM at /etc/iotedge/config.yaml (a quick check is sketched below).
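    A quick check, for convenience:

    sudo ls -l /etc/iotedge/config.yaml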

  6. Create an IoT Edge device identity in the Azure portal by following the guidance here. Next, copy the device connection string for the newly created IoT Edge device.

  7. Using the SSH client, update the device connection string in config.yaml on the Ubuntu VM:

    sudo nano /etc/iotedge/config.yaml
    

    Find the provisioning configurations section of the file and uncomment the "Manual provisioning configuration" section. Update the value of device_connection_string with the connection string from your IoT Edge device. Make sure any other provisioning sections are commented out. Make sure that the provisioning: line has no preceding whitespace and that nested items are indented by two spaces:

    (Screenshot: manual provisioning configuration)
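    For reference, the edited section should look roughly like this; the connection string shown is a placeholder, not a real value:

    # Manual provisioning configuration
    provisioning:
      source: "manual"
      device_connection_string: "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<your-key>"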

    To paste clipboard contents into nano, hold Shift and right-click, or press Shift+Insert. Save and close the file (Ctrl+X, Y, Enter).

  8. Using the SSH client, restart the IoT Edge daemon:

    sudo systemctl restart iotedge
    

    Verify the installation and check the status of the IoT Edge daemon:

    systemctl status iotedge
    
    journalctl -u iotedge --no-pager --no-full
    
  9. Using the SSH client, create the following directory structure on the Ubuntu VM:

    cd /var
    sudo mkdir deepstream
    sudo mkdir ./deepstream/custom_configs
    cd /var/deepstream
    sudo mkdir custom_streams
    sudo chmod -R 777 /var/deepstream
    cd ./custom_streams
    
  10. Ensure your working directory is /var/deepstream/custom_streams, then download the demo video archive by executing the following command in the SSH client:

    wget -O cars-streams.tar.gz --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588371&authkey=AAavgrxG95v9gu0"
    

    Uncompress the video files:

    tar -xzvf cars-streams.tar.gz
    

    The contents of the /var/deepstream/custom_streams directory should be similar to the following screenshot:

    (Screenshot: custom streams directory)

  11. Create a new file called test5_config_file_src_infer_azure_iotedge_edited.txt in the /var/deepstream/custom_configs directory. Using a text editor, open the file, paste in the following code, and then save and close the file.

    # Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
    #
    # NVIDIA Corporation and its licensors retain all intellectual property
    # and proprietary rights in and to this software, related documentation
    # and any modifications thereto.  Any use, reproduction, disclosure or
    # distribution of this software and related documentation without an express
    # license agreement from NVIDIA Corporation is strictly prohibited.
    
    [application]
    enable-perf-measurement=1
    perf-measurement-interval-sec=5
    #gie-kitti-output-dir=streamscl
    
    [tiled-display]
    enable=1
    rows=2
    columns=2
    width=1280
    height=720
    gpu-id=0
    #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
    #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
    #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
    #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
    #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
    nvbuf-memory-type=0
    
    [source0]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=3
    uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [source1]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=3
    uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [sink0]
    enable=0
    
    [sink3]
    enable=1
    #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
    type=4
    #1=h264 2=h265
    codec=1
    sync=0
    bitrate=4000000
    # set below properties in case of RTSPStreaming
    rtsp-port=8554
    udp-port=5400
    
    [sink1]
    enable=1
    #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
    type=6
    msg-conv-config=../configs/dstest5_msgconv_sample_config.txt
    #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
    #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
    #(256): PAYLOAD_RESERVED - Reserved type
    #(257): PAYLOAD_CUSTOM   - Custom schema payload
    msg-conv-payload-type=1
    msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so
    topic=mytopic
    #Optional:
    #msg-broker-config=../../../../libs/azure_protocol_adaptor/module_client/cfg_azure.txt
    
    [sink2]
    enable=0
    type=3
    #1=mp4 2=mkv
    container=1
    #1=h264 2=h265 3=mpeg4
    ## only SW mpeg4 is supported right now.
    codec=3
    sync=1
    bitrate=2000000
    output-file=out.mp4
    source-id=0
    
    [osd]
    enable=1
    gpu-id=0
    border-width=1
    text-size=15
    text-color=1;1;1;1;
    text-bg-color=0.3;0.3;0.3;1
    font=Arial
    show-clock=0
    clock-x-offset=800
    clock-y-offset=820
    clock-text-size=12
    clock-color=1;0;0;0
    nvbuf-memory-type=0
    
    [streammux]
    gpu-id=0
    ##Boolean property to inform muxer that sources are live
    live-source=0
    batch-size=4
    ##time out in usec, to wait after the first buffer is available
    ##to push the batch even if the complete batch is not formed
    batched-push-timeout=40000
    ## Set muxer output width and height
    width=1920
    height=1080
    ##Enable to maintain aspect ratio wrt source, and allow black borders, works
    ##along with width, height properties
    enable-padding=0
    nvbuf-memory-type=0
    
    [primary-gie]
    enable=1
    gpu-id=0
    batch-size=4
    ## 0=FP32, 1=INT8, 2=FP16 mode
    bbox-border-color0=1;0;0;1
    bbox-border-color1=0;1;1;1
    bbox-border-color2=0;1;1;1
    bbox-border-color3=0;1;0;1
    nvbuf-memory-type=0
    interval=0
    gie-unique-id=1
    model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_int8.engine
    labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
    config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
    #infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
    
    [tracker]
    enable=1
    tracker-width=600
    tracker-height=300
    ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
    #ll-config-file required for DCF/IOU only
    #ll-config-file=tracker_config.yml
    #ll-config-file=iou_config.txt
    gpu-id=0
    #enable-batch-process applicable to DCF only
    enable-batch-process=0
    
    [tests]
    file-loop=1
    
  12. Navigate to the Azure portal. Select the IoT Hub you provisioned, click Automatic Device Management, and then click IoT Edge:

    (Screenshot: Automatic Device Management)

  13. In the right-hand pane, select the device identity whose device connection string was used above. Click Set modules:

    (Screenshot: Set modules)

  14. Under IoT Edge Modules, choose Marketplace Module:

    (Screenshot: Marketplace Module)

  15. Search for NVIDIA and choose DeepStream SDK, as shown below:

    (Screenshot: DeepStream SDK)

  16. Ensure the NvidiaDeepStreamSDK module is listed under IoT Edge Modules:

    (Screenshot: IoT Edge Modules)

  17. Click the "NVIDIADeepStreamSDK" module and choose "Container Create Options." The default configuration is shown here:

    (Screenshot: Container Create Options)

    Replace the configuration above with the configuration below:

    {
      "ExposedPorts": {
        "8554/tcp": {}
      },
      "Entrypoint": [
        "/usr/bin/deepstream-test5-app",
        "-c",
        "test5_config_file_src_infer_azure_iotedge_edited.txt",
        "-p",
        "1",
        "-m",
        "1"
      ],
      "HostConfig": {
        "runtime": "nvidia",
        "Binds": [
          "/var/deepstream/custom_configs:/root/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test5/custom_configs/",
          "/var/deepstream/custom_streams:/root/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test5/custom_streams/"
        ],
        "PortBindings": {
          "8554/tcp": [
            {
              "HostPort": "8554"
            }
          ]
        }
      },
      "WorkingDir": "/root/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test5/custom_configs/"
    }
    
  18. Click Review and Create, and on the next page click Create. You should now see the following three modules listed for your IoT Edge device in the Azure portal:

    (Screenshot: modules and IoT Edge hub connections)

  19. Connect to the Ubuntu VM using the SSH client and verify that the correct modules are running:

    sudo iotedge list
    

    (Screenshot: iotedge list output)

    nvidia-smi
    

    (Screenshot: nvidia-smi output)

    Note

    It will take a few minutes for the NVIDIA DeepStream container to be downloaded. You can validate the download by using the command "journalctl -u iotedge --no-pager --no-full" to look at the iotedge daemon logs.

  20. Confirm that the NvidiaDeepStreamSDK container is operational. The command output in the screenshots below indicates success.

    sudo iotedge list
    

    (Screenshot: iotedge list output)

    sudo iotedge logs -f NVIDIADeepStreamSDK
    

    (Screenshot: iotedge logs output)

    nvidia-smi
    

    (Screenshot: nvidia-smi output)

  21. Confirm the TCP/IP address of your Ubuntu VM using the ifconfig command, and look for the TCP/IP address next to the eth0 interface.

  22. Install VLC media player on your workstation. In VLC, click Media -> Open Network Stream, and type in the address using this format:

    rtsp://ipaddress:8554/ds-test

    where ipaddress is the TCP/IP address of your VM.
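    Alternatively, assuming VLC is installed and on your PATH, you can open the same stream from a command line (the IP address below is only an example):

    vlc rtsp://192.168.1.100:8554/ds-test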

    (Screenshot: VLC media player)

Next steps

For more on GPUs and DDA, see also: