Running NVIDIA GRID in Azure

Lately I’ve been involved in some projects running CAD-based workloads on Citrix in Azure.
Azure offers multiple GPU-based virtual machine instances. When Microsoft first introduced GPU instances with the NV and NC series, they suffered from one big flaw: the lack of SSD-backed disks, which meant you only got about 500 IOPS / 60 MBps of throughput from each disk.

The NV series is based upon virtual instances with NVIDIA Tesla M60 cards, and the NC series is based upon the Tesla K80 card. All GPU cards are presented to the virtual machines using a Hyper-V technology called DDA (Discrete Device Assignment), which is similar to GPU passthrough on other hypervisor products.

Later last year Microsoft introduced additional instance types with newer GPU cards, and earlier this year during Ignite it announced a preview of a new NV variant called NVv2, which uses the same GPUs as the NV series but is now backed by SSD disks. (You can sign up for the preview program here; it is currently only available in the US datacentres –> https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRwVddQxr2xFMmvIUls9ArudUQjRBNkQwNlVTVVRJQ1hVSTZZNkkxNVI0Vi4u)

NOTE: For some of the instance types you are assigned a quota of zero by default, so you need to request a vCPU quota increase to get access –> https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request
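If you want to check your current quota for a region before filing a request, the Azure CLI can list vCPU usage against the limits (assuming you have the CLI installed and are logged in; the region here is just an example):

# List current vCPU usage and quota limits for a region
az vm list-usage --location westeurope --output table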


Here you can find the different GPU-based instances in Microsoft Azure –> https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu and here are the operating systems and cloud vendors supported by NVIDIA –> https://docs.nvidia.com/grid/cloud-service-support.html
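As a minimal sketch of deploying one of these instances with the Azure CLI (the resource group, VM name, region and image are placeholders, and Standard_NV6 is just one of the available GPU sizes):

# Create a Windows Server 2016 VM on an NV6 instance (names and region are placeholders)
az vm create `
  --resource-group myResourceGroup `
  --name myGpuVM `
  --image Win2016Datacenter `
  --size Standard_NV6 `
  --location westeurope `
  --admin-username azureuser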

Now, in order to use a GPU with a remoting protocol in Azure, you need a product that supports Azure and GPUs presented through DDA.
If you are using Windows Server 2012 or Linux you can use Citrix (HDX), VMware (Blast) or Teradici (PCoIP). If you are using Windows Server 2016 or 2019 you can use regular RDP, since it then supports GPU passthrough using RemoteFX.

To use GPU capabilities in a terminal server setup with Citrix or VMware, there are two things you need to enable so that sessions can use the GPU cards with DirectX/OpenGL in a remote session.

1: Enable the hardware graphics adapter in Group Policy – Computer Configuration – Administrative Templates – Windows Components – Remote Desktop Services – Remote Desktop Session Host – Remote Session Environment – Use hardware graphics adapters for all Remote Desktop Services sessions – and from here you need to enable this setting.
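If you would rather script this than click through the GPO editor, the policy is commonly reported to map to the bEnumerateHWBeforeSW value under the Terminal Services policy key; treat the value name as an assumption and verify it against the GPO in your own environment:

# Hedged sketch: registry value this GPO is commonly reported to map to (verify before relying on it)
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -Name 'bEnumerateHWBeforeSW' -Value 1 -Type DWord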


2: Make sure you have GRID licensing enabled – If you are using the NV series (M60 cards) in Azure, the driver that Microsoft provides comes with a GRID license attached, licensed as NVIDIA GRID Virtual Workstation. This license gives you the flexibility to use an NV instance as a virtual workstation for a single user, or to let up to 25 concurrent users connect to the VM in a virtual application scenario. (You can find that driver here –> https://docs.microsoft.com/en-us/azure/virtual-machines/windows/n-series-driver-setup) If you plan to use other GPU instance types, you need to get a license from NVIDIA and set up an NVIDIA GRID license server, or you can get a 60-day trial (from here –> https://www.nvidia.com/object/vgpu-evaluation.html)
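Once the driver is installed you can check the licensing state from inside the VM; the full nvidia-smi query output includes a license section on GRID drivers (the exact field names vary between driver versions):

# Run from the NVIDIA installation path (e.g. C:\Program Files\NVIDIA Corporation\NVSMI)
# On GRID drivers the full query output includes the license status
nvidia-smi -q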

NOTE: You can also install the NVIDIA driver using an Azure VM extension now, which will install the correct driver based upon what kind of NVIDIA instance you are running.

az vm extension set `
  --resource-group myResourceGroup `
  --vm-name myVM `
  --name NvidiaGpuDriverWindows `
  --publisher Microsoft.HpcCompute `
  --version 1.2 `
  --settings '{ }'
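If you prefer Azure PowerShell over the CLI, the equivalent call (with the same placeholder names, and the location being an assumption) looks roughly like this:

Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
  -VMName "myVM" `
  -Name "NvidiaGpuDriverWindows" `
  -Publisher "Microsoft.HpcCompute" `
  -ExtensionType "NvidiaGpuDriverWindows" `
  -TypeHandlerVersion "1.2" `
  -Location "westeurope"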

Now, by default, if you just install an NVIDIA driver on a Tesla card it will be running in a mode called TCC (Tesla Compute Cluster), and while it is in TCC mode it cannot be used for remote display purposes. You can see which mode it is running in by using the nvidia-smi tool from NVIDIA, which is located in the NVIDIA installation path.
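A quick way to see the mode without reading through the full output is to query the driver model directly (these query fields are only available in the Windows builds of nvidia-smi):

# Show the current and pending driver model (WDDM or TCC) for each GPU
nvidia-smi --query-gpu=name,driver_model.current,driver_model.pending --format=csv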

Current drivers require a GRID license to enable WDDM on Tesla devices. Once that is in place, either through a trial license or a license server, and WDDM mode is enabled, you are good to go.
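If a card still reports TCC after the license is in place, the driver model can be switched with nvidia-smi; a sketch, assuming GPU index 0 and noting that it must be run elevated and needs a reboot to take effect:

# Switch GPU 0 to WDDM (0 = WDDM, 1 = TCC); run as administrator and reboot afterwards
nvidia-smi -g 0 -dm 0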
