Test run of Teradici Cloud Access Software on Azure N-series

Earlier this year, Microsoft announced that they were partnering with Teradici on their N-series virtual machine instances in Azure, which I’ve blogged about previously here –> http://msandbu.org/n-series-testing-in-microsoft-azure-with-nvidia-with-k80/

Teradici is the creator of the PCoIP protocol, which is often leveraged in VMware Horizon View and supported by multiple thin client vendors. In most cases these thin clients also use a Teradici SoC (System on a Chip), which provides hardware decoding of the pixel stream, enabling faster frame rates along with highly secure, simple-to-manage updates.

Now, I have previously tested PCoIP against, for instance, Blast Extreme –> http://msandbu.org/remote-protocols-benchmarking-citrix-vmware-and-rdppart-one-pcoip-vs-blast-extreme/ comparing Blast, which runs over TCP by default, with PCoIP, which is based on UDP. The downside of UDP is that it provides no reliable transport of its own. That means Teradici has to provide reliability within the PCoIP protocol itself, which might translate into “artifacts” when working on the workstation. Also, most UDP-based remote protocols use an MTU of 1200 to ensure that packets are not fragmented during transmission. PCoIP has some good rules for deciding what to send reliably: USB packets are always reliable; dropped image packets are resent only if they have not been overwritten by a subsequent display update (so the Heaven benchmark, which is part of the video clip further down and updates the screen constantly, almost never triggers a resend); and audio packets are too latency sensitive to retransmit, so PCoIP uses forward error correction (FEC) to correct missing audio packets and never retransmits dropped audio.
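To make that policy a bit more concrete, here is a small toy model in Python of the per-traffic-type rules. This is of course not Teradici’s actual implementation, and the packet fields and function names are my own; it is just a sketch of the logic described above.

```python
# Toy model of the per-traffic-type reliability policy described above.
# Not Teradici's implementation, just an illustration of the rules:
# USB is always resent, image packets only if their screen region has not
# already been repainted, and audio is never resent (FEC covers it instead).

from dataclasses import dataclass


@dataclass
class LostPacket:
    kind: str           # "usb", "image" or "audio"
    region_id: int = 0  # display region the packet covered (image only)
    seq: int = 0        # sequence number when the packet was sent


def should_retransmit(pkt: LostPacket, latest_update_for_region: dict) -> bool:
    """Decide whether a dropped packet is worth resending."""
    if pkt.kind == "usb":
        return True   # device traffic must always be reliable
    if pkt.kind == "image":
        # Resend only if no newer display update already covered that region.
        return latest_update_for_region.get(pkt.region_id, -1) <= pkt.seq
    if pkt.kind == "audio":
        return False  # too latency-sensitive; rely on forward error correction
    raise ValueError(f"unknown traffic type: {pkt.kind}")


# Example: region 3 was repainted at seq 120, so an image packet from seq 100
# is stale and not worth resending, while USB traffic always is.
updates = {3: 120}
print(should_retransmit(LostPacket("image", region_id=3, seq=100), updates))  # False
print(should_retransmit(LostPacket("usb"), updates))                          # True
```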

So I was curious to see how it worked on Azure with the N-series and support for Windows Server 2016. Since the Azure datacenters running the N-series are not located anywhere close to the Nordics, it is crucial to have a remote display protocol that is able to leverage the GPU and deliver the best end-user experience without causing any huge overhead on the server, and of course to be able to use it without a large Azure infrastructure.

NOTE: In order for the client to connect to the agent, the following ports need to be open on the VM in Azure:

- TCP 443
- TCP 4172
- UDP 4172

The way to fix this is to go into the Network Security Group of the VM and adjust the inbound rules with the additional ports (RDP is the default rule).
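If you prefer to script this instead of clicking through the portal, something along these lines should work with the Azure CLI; the resource group and NSG names below are placeholders for your own deployment.

```python
# Sketch: open the PCoIP ports on the VM's Network Security Group using the
# Azure CLI. The resource group and NSG names are placeholders; adjust them,
# or simply create the same inbound rules in the portal as described above.

import subprocess

RESOURCE_GROUP = "teradici-rg"   # placeholder
NSG_NAME = "teradici-vm-nsg"     # placeholder

rules = [
    ("Allow-PCoIP-TCP-443",  "Tcp", "443",  1010),
    ("Allow-PCoIP-TCP-4172", "Tcp", "4172", 1020),
    ("Allow-PCoIP-UDP-4172", "Udp", "4172", 1030),
]

for name, protocol, port, priority in rules:
    subprocess.run([
        "az", "network", "nsg", "rule", "create",
        "--resource-group", RESOURCE_GROUP,
        "--nsg-name", NSG_NAME,
        "--name", name,
        "--priority", str(priority),
        "--direction", "Inbound",
        "--access", "Allow",
        "--protocol", protocol,
        "--destination-port-ranges", port,
    ], check=True)
```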

The setup of Teradici Cloud Access is pretty simple: it consists of an endpoint client which can run on Mac or Windows, or on mobile platforms (Android, Chrome and iOS). It is also embedded in a range of zero clients, but in this case I used a regular Windows 10 endpoint. We also need an agent installed on the Windows Server we want to connect to. With this release Teradici supports Windows Server 2016, which is going to be my test platform on the N-series in Azure.


NOTE: The agent installation process is going to be a lot more streamlined when Azure is fully supported later this year; it will then be available as an extension when deploying virtual machines, which can easily be leveraged when deploying using Azure ARM templates.


After installing the agent, you need to reboot the virtual machine in Azure. If we want to do any customizations to the Teradici agent, we need to import the ADM files, which are stored locally on the virtual machine under C:\Program Files (x86)\Teradici\PCoIP Agent\configuration.
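A quick way to see which templates the agent actually ships with, before adding them in gpedit.msc (Administrative Templates -> Add/Remove Templates), is a few lines like these run on the VM, assuming the default install path mentioned above:

```python
# List the ADM templates the PCoIP agent ships with, so you know which files
# to add under gpedit.msc. The path is the default install location mentioned
# above; adjust it if the agent was installed somewhere else.

from pathlib import Path

config_dir = Path(r"C:\Program Files (x86)\Teradici\PCoIP Agent\configuration")

for template in sorted(config_dir.glob("*.adm")):
    print(template.name)
```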

NOTE: The N-series can currently only be used with a single 1920 display, and we also need the NVIDIA GRID drivers installed; they can be downloaded from the NVIDIA driver support page.


For instance, the initial bandwidth estimate is set to 10 Mbps, which makes PCoIP appear sluggish to begin with, but it ramps up, and I’ve been told that this will be fixed in the Q1 2017 release from Teradici.

After the agent is installed and running, we can connect to the virtual machine using the public IP/hostname set for it in Azure. (All virtual machines get a public IP address from Azure; it is dynamic by default, so it should be changed to a static address.)
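If you want to script that change rather than doing it in the portal, a sketch along these lines should do it with the Azure CLI; the resource group and public IP resource names are placeholders.

```python
# Sketch: make the VM's public IP static so the address you give the PCoIP
# client does not change when the VM is deallocated, then print the address.
# Resource group and public IP names are placeholders for your own deployment.

import subprocess

RESOURCE_GROUP = "teradici-rg"     # placeholder
PUBLIC_IP_NAME = "teradici-vm-ip"  # placeholder

subprocess.run([
    "az", "network", "public-ip", "update",
    "--resource-group", RESOURCE_GROUP,
    "--name", PUBLIC_IP_NAME,
    "--allocation-method", "Static",
], check=True)

# Look up the address to hand out to the Teradici client.
subprocess.run([
    "az", "network", "public-ip", "show",
    "--resource-group", RESOURCE_GROUP,
    "--name", PUBLIC_IP_NAME,
    "--query", "ipAddress", "--output", "tsv",
], check=True)
```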


After you have authenticated using domain credentials, you have the option to specify the view mode, which can be either windowed or full-screen. The client can also store the connection settings.


Because of the high image quality, the bandwidth usage might be high at times, but this is to ensure an optimal user experience. You can also see in the graph that it ramps up slowly and steadily because of the 10 Mbps initial bandwidth estimate; this can be changed using the group policy templates. I ran two tests of 60 seconds each, which is why the graph drops and then starts climbing again. We could also reduce the image quality to reduce the bandwidth usage.
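For reference, the settings exposed by the ADM templates end up as registry values under the Teradici policy key, so they can also be set directly on the VM. The key path and value name in this sketch are my assumptions based on Teradici’s session-variable documentation; verify them against the ADM template on your agent before relying on it.

```python
# Hedged sketch: write a PCoIP session variable directly to the registry, as
# the group policy template would. The key path and value name below are
# assumptions; double-check them against the ADM template before using this.

import winreg

KEY_PATH = r"SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin"  # assumed path


def set_pcoip_dword(name: str, value: int) -> None:
    """Create or overwrite a DWORD session variable under the PCoIP policy key."""
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)


# Example (value name assumed): lower the maximum initial image quality a bit
# to save bandwidth. A new PCoIP session is needed for the change to apply.
set_pcoip_dword("pcoip.maximum_initial_image_quality", 80)
```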


The video below shows the user experience from my Windows 10 client to my Azure N-series instance, which is running in the US Central datacenter where the N-series is currently available. Keep in mind that I have about 150 ms of latency from my location to the VM instance.
