'NC' sub-family GPU-accelerated VM size series
Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
The 'NC' sub-family of VM size series is one of Azure's GPU-optimized VM families. These sizes are designed for compute-intensive workloads, such as AI and machine learning model training, high-performance computing (HPC), and graphics-intensive applications. Equipped with powerful NVIDIA GPUs, NC-series VMs offer substantial acceleration for processes that require heavy computational power, including deep learning, scientific simulations, and 3D rendering. This makes them particularly well-suited for industries such as technology research, entertainment, and engineering, where rendering and processing speed are critical to productivity and innovation.
Workloads and use cases
AI and Machine Learning: NC-series VMs are ideal for training complex machine learning models and running AI applications. The NVIDIA GPUs provide significant acceleration for computations typically involved in deep learning and other intensive training tasks (see the short GPU check sketched after this list).
High-Performance Computing (HPC): These VMs are suitable for scientific simulations, rendering, and other HPC workloads that can be accelerated by GPUs. Fields like engineering, medical research, and financial modeling often use NC-series VMs to handle their computational needs efficiently.
Graphics Rendering: NC-series VMs are also used for graphics-intensive applications, including video editing, 3D rendering, and real-time graphics processing. They are particularly useful in industries such as game development and movie production.
Remote Visualization: For applications requiring high-end visualization capabilities, such as CAD and visual effects, NC-series VMs can provide the necessary GPU power remotely, allowing users to work on complex graphical tasks without needing powerful local hardware.
Simulation and Analysis: These VMs are also suitable for detailed simulations and analyses in areas like automotive crash testing, computational fluid dynamics, and weather modeling, where GPU capabilities can significantly speed up processing times.
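As a quick sanity check after provisioning an NC-series VM for these workloads, you can confirm that the NVIDIA GPUs are visible to your deep learning framework. The following is a minimal sketch assuming PyTorch with CUDA support and the NVIDIA driver are already installed; adapt it to whichever framework you use.

```python
# Minimal sketch: confirm the NVIDIA GPUs on an NC-series VM are visible to a
# CUDA-enabled framework. Assumes PyTorch with CUDA support is installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible - check that the NVIDIA driver and CUDA toolkit are installed.")
```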
Series in family
NCv3-series
NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs, which can provide 1.5x the computational performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. The NC24rs_v3 configuration provides a low-latency, high-throughput network interface optimized for tightly coupled parallel computing workloads. In addition to the GPUs, NCv3-series VMs are powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.
View the full NCv3-series page.
| Part | Quantity (count, units) | Specs (SKU ID, performance units, etc.) |
|---|---|---|
| Processor | 6 - 24 vCPUs | Intel Xeon E5-2690 v4 (Broadwell) [x86-64] |
| Memory | 112 - 448 GiB | |
| Local Storage | 1 Disk | 736 - 2948 GiB |
| Remote Storage | 12 - 32 Disks | 20000 - 80000 IOPS, 200 - 800 MBps |
| Network | 4 - 8 NICs | |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla V100 GPU (16 GB) |
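If you want to confirm which NCv3 sizes are offered in a particular region before deploying, you can enumerate them programmatically. The following is a minimal sketch assuming the azure-identity and azure-mgmt-compute packages, an authenticated Azure credential, and placeholder values for the subscription ID and region; the name filter is only an illustrative heuristic.

```python
# Minimal sketch: list NC-series v3 sizes offered in a region with the Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, "<subscription-id>")  # placeholder subscription ID

# NCv3 sizes include Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, and Standard_NC24rs_v3.
for size in compute_client.virtual_machine_sizes.list(location="eastus"):
    if size.name.startswith("Standard_NC") and size.name.endswith("s_v3"):
        print(f"{size.name}: {size.number_of_cores} vCPUs, {size.memory_in_mb // 1024} GiB")
```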
NCasT4_v3-series
The NCasT4_v3-series virtual machines are powered by NVIDIA Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores (base frequency of 2.45 GHz, all-cores peak frequency of 3.1 GHz, and single-core peak frequency of 3.3 GHz), and 440 GiB of system memory. These virtual machines are ideal for deploying AI services, such as real-time inferencing of user-generated requests, or for interactive graphics and visualization workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based around CUDA, TensorRT, Caffe, ONNX, and other frameworks, or GPU-accelerated graphical applications based on OpenGL and DirectX, can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
View the full NCasT4_v3-series page.
| Part | Quantity (count, units) | Specs (SKU ID, performance units, etc.) |
|---|---|---|
| Processor | 4 - 64 vCPUs | AMD EPYC 7V12 (Rome) [x86-64] |
| Memory | 28 - 440 GiB | |
| Local Storage | 1 Disk | 176 - 2816 GiB |
| Remote Storage | 8 - 32 Disks | |
| Network | 2 - 8 NICs | 8000 - 32000 Mbps |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla T4 GPU (16 GB) |
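Because NCasT4_v3 VMs are often used for real-time inferencing, a common pattern is to run an ONNX model on the T4 GPU through ONNX Runtime's CUDA execution provider. The following is a minimal sketch assuming the onnxruntime-gpu package is installed; the model file name and input shape are placeholders for your own model.

```python
# Minimal sketch: GPU-accelerated inference with ONNX Runtime on an NCasT4_v3 VM.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder path to your exported model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder image-shaped input
outputs = session.run(None, {input_name: batch})

print("Active providers:", session.get_providers())
print("Output shape:", outputs[0].shape)
```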
Previous-generation NC family series
For older sizes, see previous generation sizes.
Other size information
List of all available sizes: Sizes
Pricing Calculator: Pricing Calculator
Information on Disk Types: Disk Types
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.
Check out Azure Dedicated Hosts for physical servers able to host one or more virtual machines assigned to one Azure subscription.
Learn how to Monitor Azure virtual machines.