So I’ve been fortunate enough to get to test-drive the N-series virtual machine instances in Microsoft Azure. For those who are not aware, the N-series is a GPU-enabled series of virtual machines, which can be set up with either an M60 (the NV-series) or a K80 (the NC-series).
You can find the sizing and the specs here –> https://azure.microsoft.com/nb-no/documentation/articles/virtual-machines-windows-sizes/#n-series-preview
But as of now in the preview I only have access to the entry sizes, which are the NV6 and the NC6 (basically 1x M60 or 1x K80). Since I’m part of the public preview, my subscription has been enabled for access, so just by going into the Azure Portal I have access to the N-series from the gallery.
The virtual machine instances are only available using ARM. As of now, the N-series is supported on Windows Server 2012 R2, Windows Server 2016 TP5 and Ubuntu 16.04 LTS. This feature uses Discrete Device Assignment, which allows for PCI passthrough from the hypervisor to the virtual machine instance.
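As a rough sketch, provisioning an NV6 instance through ARM could look something like this with the Azure CLI. The resource group, VM name and username below are placeholders, and exact CLI syntax may differ depending on your CLI version; the script is guarded so it only calls the CLI when it is actually installed:

```shell
# Hypothetical sketch: create an N-series (NV6) VM through ARM with the Azure CLI.
# Resource group, VM name and username are placeholders, not real resources.
SIZE="Standard_NV6"
if command -v az >/dev/null 2>&1; then
  az vm create \
    --resource-group gpu-rg \
    --name gpu-vm01 \
    --image Win2012R2Datacenter \
    --size "$SIZE" \
    --admin-username azureuser
else
  # CLI not present; just show what would be created.
  echo "would create VM of size $SIZE"
fi
```

Swap `Standard_NV6` for `Standard_NC6` if you want the K80-backed NC size instead.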
Now after the virtual machine is spun up we need to add a driver to it, since it still deploys the same virtual machine image from the gallery.
So after the installation is complete, it should look like this in Device Manager.
I just got word from the PM that the NVIDIA drivers will also come as a VM extension, which will allow for an easier driver installation after the VM has been provisioned. The driver installation also includes nvidia-smi (the NVIDIA System Management Interface program), which lets us inspect the GPU.
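nvidia-smi can also emit machine-readable output, which is handy for quick checks. A small sketch of pulling the utilization out of it; since this obviously needs the card and driver present, the script falls back to an illustrative stand-in line when nvidia-smi is not available:

```shell
# Sketch: query GPU name and utilization via nvidia-smi's CSV output.
# The fallback line is an illustrative placeholder for machines without the card.
query_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,utilization.gpu --format=csv,noheader
  else
    echo "Tesla M60, 35 %"   # placeholder sample output
  fi
}
util=$(query_gpu | awk -F', ' '{print $2}')
echo "GPU utilization: $util"
```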
So far so good. The only issue for us in EMEA as of now is the placement of the GPU instances, which is only in the South Central US datacenter. That is about 140 – 160 ms latency, and if we combine this with some jitter because of congestion, it is not going to be a viable solution for customers here yet! But I’m guessing we have only seen the start of GPU instances in Azure, and that they are going to appear in datacenters elsewhere as well.
NOTE: Just got word from the PM that the N-series is also coming to the EMEA datacenters as well, before the end of the year.
Now I would also like to get more monitoring metrics available in Performance Monitor, so I could leverage OMS to do performance monitoring on the GPU instance directly from OMS. Note that NV comes with a perfmon counter installed, but NC does not yet.
NOTE: The drivers for NV are custom as of now but will be standard at GA; the drivers for NC are standard and can be downloaded from NVIDIA today.
If you are part of the preview and are having issues, head out to the forum –> https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=AzureGPU
and don’t forget to follow this guy on Twitter –> https://twitter.com/Karan_Batta who is the PM for the N-series
So you want to use the N-series to deliver high-end graphics to your users? Well, there are some limitations at the moment… First off, you have the latency issue to think about, depending on where you are located. Also, the current drivers give about 90 – 95% of bare-metal performance.
Citrix:
We could set up a Citrix infrastructure and leverage the PCI passthrough support it has for XenServer and ESXi; since this is a typical PCI passthrough feature being leveraged, it should work on paper….
But as of now it seems like the VDA does not properly detect the GPU (it seems to be confused by the first adapter, which is tagged “Microsoft Hyper-V”). Since this is a Windows Server 2016 feature, I’m guessing it should be supported by Citrix when they come with Windows Server 2016 support.
My other guess is that we will be able to use Windows Server 2012 R2 and 2016 as guest OSes when running the vNext version of XenDesktop; time will tell. But when support for the N-series comes from Citrix, we can either leverage Citrix Cloud and the NetScaler Gateway Service, which is a Windows-based NetScaler component, or a NetScaler appliance. From Citrix Cloud we can now also leverage the Azure Resource Manager support to do power management against our instances.
Microsoft:
So what other options do we have? If we run native RDP on Windows Server 2012 R2 we are out of luck, since the RDP engine will not be able to leverage the GPU properly; it is only with RemoteFX vGPU that it can use the GPU properly.
NOTE: When setting up any virtual machine in Azure where you want to leverage RDP with full effect, you need to open UDP as well as TCP in the network security group (RDP over TCP is added by default).
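As a sketch, opening UDP 3389 on the network security group could look like this with the Azure CLI. The resource group and NSG names are placeholders, and flag names can vary between CLI versions, so treat this as an outline rather than a copy-paste command; it is guarded so it only calls the CLI when installed:

```shell
# Hypothetical sketch: allow RDP over UDP 3389 in the VM's network security group.
# Resource group and NSG name are placeholders; verify flags against your CLI version.
NSG="gpu-vm-nsg"
if command -v az >/dev/null 2>&1; then
  az network nsg rule create \
    --resource-group gpu-rg \
    --nsg-name "$NSG" \
    --name Allow-RDP-UDP \
    --priority 1010 \
    --protocol Udp \
    --destination-port-ranges 3389 \
    --access Allow
else
  echo "would open UDP 3389 on $NSG"
fi
```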
So I’ve set up a sample 2016 TP5 instance with an M60 card and ran a benchmark test. Of course, the issue isn’t whether the GPU can process everything properly; the benchmark stayed at about 99 – 120 FPS for the entire benchmarking session. The issue at the moment is that the protocol struggles to transport the rendered output down to the client.
It is also important to note that running applications that require a lot of rendering, with a lot of changes that need to be transferred down to the client, will generate a lot of bandwidth usage. Not that this is going to be accurate for designer workloads, but it is something you need to think about.
So while an N-series GPU instance has a cost per minute, you will also need to think about the bandwidth usage (traffic that goes out of Azure’s datacenters).
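As a back-of-the-envelope example of why that matters: the 20 Mbps average stream rate below is purely an assumption (real remoting bandwidth varies hugely with the content being rendered), but it shows how fast egress adds up over a working day:

```shell
# Back-of-the-envelope: egress volume for a sustained remoting stream.
# 20 Mbps is an assumed average, not a measured figure; adjust for your workload.
mbps=20
hours=8
# Mbps -> GB: rate * seconds / 8 (bits per byte) / 1000 (MB per GB)
gb=$(awk -v r="$mbps" -v h="$hours" 'BEGIN { printf "%.1f", r*h*3600/8/1000 }')
echo "$mbps Mbps for $hours hours ~= $gb GB of egress"
```

At that assumed rate a single user generates roughly 72 GB of outbound traffic per 8-hour day, which is billed on top of the per-minute instance cost.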
I hate to be such an ass, but is this a blogpost about how to launch nvidia drivers setup package?
Hah! Thanks for telling me! No… Windows Live Writer “skipped” a section I had there; sorted now.
I’m getting 10 fps on NC6. I installed the drivers; do I need to do anything else to make it switch to the GPU?
Have you defined the group policy to use the hardware adapter?