Monthly Archives: January 2015

Publishing vWorkspace HTML 5 connector behind Citrix Netscaler

Since the release of vWorkspace 8.5 I’ve been wanting to try out the HTML 5 connector properly. We have a lab environment where it is deployed, and it works amazingly fast inside the local network.

But… I also want it available from outside our local network, so I decided to publish it using our Netscaler. The HTML 5 connector from Dell is similar to the one in Citrix StoreFront: it runs on top of the Web Access server, and we can use that as a proxy to access applications and desktops.

Initially I wanted to publish the connector using SSL offloading, meaning that users would access the HTML 5 connector on an SSL-enabled vServer, the Netscaler would do the SSL processing, and the Web Access server would receive unencrypted traffic on port 80. But… when I got this up and running, all I got was error messages.


I didn’t see a lot of useful info in the logs either that could point me to the cause:

2015-01-20 08:59:45.078 – 844 – RdpProxy – ERROR – Server exception.

System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host

   at Freezer.Common.Utils.readAll(Socket socket, Byte[]& data) in d:Build349vWorkspaceElblingSourcesSRCFreezerIISFreezerCommonUtils.cs:line 121

   at Freezer.Common.SocketStateObject.handleSocket(Object o) in d:Build349vWorkspaceElblingSourcesSRCFreezerIISFreezerCommonRdpServer.cs:line 160

2015-01-20 08:59:45.078 – 4780 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG –

2015-01-20 08:59:45.078 – 1000 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG –

2015-01-20 08:59:45.078 – 1692 – UserState – DEBUG – RDP ProcessExited for: [id_1421740273901]

2015-01-20 08:59:45.078 – 1692 – UserState – DEBUG – RDP ProcessExited: Cleaning up for [id_1421740273901]

2015-01-20 09:00:14.828 – 144 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG – Message received: AS00000704:      handle_print_cache( 00DEE778 )

2015-01-20 09:00:14.828 – 144 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG – 00000704:     ignoring an UPDATE PRINTER event

What I did see, on the other hand, was that my browser (which runs the JavaScript client) tried to open a connection directly on port 443:

clientSide: wss://demossoproxy.dsg-iam.com/vWorkspace/Freezer/api/Image?sessionId=id_1421175921207 (wss is an SSL-based WebSocket connection)

but since my Web Access server was only listening on port 80, that obviously failed. Therefore I changed the setup a bit: instead of SSL offloading I tried SSL bridging, moving the encryption back to the Web Access server so the Netscaler just passes the encrypted traffic straight through, and that actually worked!
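For reference, a minimal SSL bridging setup on the Netscaler CLI could look something like this (the names and addresses are just examples; the 10.x address is the external VIP and the 192.168.x address is the Web Access server):

# backend Web Access server, reached over SSL on 443
add service svc_webaccess_ssl 192.168.10.50 SSL_BRIDGE 443
# external vServer of type SSL_BRIDGE; no certificate binding needed since the Netscaler does not terminate SSL
add lb vserver vs_html5_bridge SSL_BRIDGE 10.0.0.100 443
bind lb vserver vs_html5_bridge svc_webaccess_ssl

Since the traffic is only passed through, features that need to inspect HTTP (rewrite, URL-based content switching and so on) are not available on that vServer.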

I’m guessing that the websocket connection requires the same port externally and internally, but I didn’t troubleshoot it any further. So here is a little clip of how fast the HTML5 connector for Dell vWorkspace is.

Participate in the Project VRC “State of the VDI and SBC union 2015” survey!

The independent R&D project ‘Virtual Reality Check’ (VRC) (www.projectvrc.com) was started in early 2009 by Ruben Spruijt (@rspruijt) and Jeroen van de Kamp (@thejeroen) and focuses on research in the desktop and application virtualization market. Several white papers with Login VSI (www.loginvsi.com) test results were published about the performance and best practices of different hypervisors, Microsoft Office versions, application virtualization solutions, Windows Operating Systems in server hosted desktop solutions and the impact of antivirus.

In 2013 and early 2014, Project VRC released the annual ‘State of the VDI and SBC union’ community survey (download for free at www.projectvrc.com/white-papers). Over 1300 people participated. The results of this independent and truly unique survey have provided many new insights into the usage of desktop virtualization around the world.


This year Project VRC would like to repeat this survey to see how our industry has changed and to take a look at the future of Virtual Desktop Infrastructures and Server Based Computing in 2015. To do this they need your help again. Everyone who is involved in building or maintaining VDI or SBC environments is invited to participate in this survey, even if you participated in the previous two editions.


The questions of this survey are both functional and technical and range from “What are the most important design goals set for this environment”, to “Which storage is used”, to “How are the VMs configured”. The 2015 VRC survey will only take 10 minutes of your time.


The success of the survey will be determined by the number of responses, but also by the quality of those responses. This led Project VRC to the conclusion that they should stay away from giving away iPads or other prize draws for survey participants. Instead, they opted for the following strategy: only survey participants will receive the exclusive overview report with all results immediately after the survey closes.


The survey closes on February 15th this year. I really hope you will participate in and enjoy the official Project VRC “State of the VDI and SBC union 2015” survey!


Visit www.projectvrc.com/blog/23-project-vrc-state-of-the-vdi-and-sbc-union-2015-survey to fill out the Project Virtual Reality Check “State of the VDI and SBC Union 2015” survey.

Test driving CloudPhysics

Having heard the buzz about CloudPhysics I decided to take it for a test drive; they have a free edition with a limited feature set, but it lets me see how the software works. CloudPhysics is almost a pure SaaS solution: we first download a virtual appliance that communicates with vCenter (4.5 and later), and it reports all the data back to CloudPhysics, which runs all the diagnostics and reporting.

CloudPhysics has features like:
* Capacity planning
* Performance troubleshooting
* Health checks
* Alerting (and so on..)

So how do you get started? Sign up for a free edition here –> http://www.cloudphysics.com/get-cloudphysics/
Then download the OVA, or point vCenter directly at the OVF files in the portal.

On the vCenter side, you just have to import the machine and enter network information during the setup.
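If you prefer to script the import instead of using the vSphere client, a rough PowerCLI equivalent could look like this (the vCenter name, host, datastore and OVA path are just placeholders, and it assumes a PowerCLI build that supports OVA import):

# connect to vCenter and deploy the downloaded appliance (file name is an example)
Connect-VIServer vcenter.lab.local
Import-VApp -Source "C:\Temp\cloudphysics-observer.ova" -Name "CloudPhysics-Observer" -VMHost (Get-VMHost "esx01.lab.local") -Datastore (Get-Datastore "datastore1")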

After that you just have to wait for it to finish deploying. Then start it and configure the last parts: we just need to enter the vCenter information and the user ID that is tied to the CloudPhysics account.

After that is done, it takes about 30 seconds before information starts flowing into the CloudPhysics service. Now, CloudPhysics has a concept called “Cards”, which are different reports, features and so on. For instance, one of the cards is “Snapshots gone wild”.

There are a bunch more of these reports as well, but you get the picture.

This is a golden example of how we can use SaaS for reporting and monitoring purposes. CloudPhysics also has cost calculators for Amazon, Azure and VMware vCloud Air, which let us see how much it would cost to move our VMs to one of those providers, but this is only available for premium customers.

Azure G-series released and tested!

Today Microsoft released their G-series instances in Azure. This new instance type uses a newer Intel Xeon based CPU and comes with local SSD disk.

“G-Series VM Sizes availability

Today, we’re announcing the General Availability release of a new series of VM sizes for Azure Virtual Machines called the G-series. G-series sizes provide the most memory, the highest processing power and the largest amount of local SSD of any Virtual Machine size currently available in the public cloud. This extraordinary performance will allow customers to deploy very large and demanding enterprise applications.” (http://azure.microsoft.com/blog/2015/01/08/azure-is-now-bigger-faster-more-open-and-more-secure/) On top of that, we can have up to 64 data disks as well.

Create VM
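By the way, if you want to spin one of these up from the classic Azure PowerShell module instead of the portal, it would look roughly like this (the image filter, names, credentials and region are just examples; the G-series sizes are Standard_G1 through Standard_G5):

# pick a Windows Server 2012 R2 image from the gallery (filter is an example)
$img = (Get-AzureVMImage | Where-Object { $_.Label -like "*Windows Server 2012 R2 Datacenter*" } | Select-Object -First 1).ImageName
# build the VM config with a G-series instance size
$vm = New-AzureVMConfig -Name "gtest01" -InstanceSize "Standard_G5" -ImageName $img | Add-AzureProvisioningConfig -Windows -AdminUsername "labadmin" -Password "SomeP@ssw0rd!"
# create the cloud service and the VM in a region where the G-series is available
New-AzureVM -ServiceName "gtest-svc01" -Location "West US" -VMs $vm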

So, with a local SSD drive and an Intel Xeon CPU, how does it perform compared to a regular A-series instance?

I did some basic disk benchmarking with HD Tune.

Read benchmarking test on G-series SSD based instance


Read benchmarking test on A-series HDD based instance


Comparing the two, we can see that the CPU usage is lower on the Intel based instance because it is much more efficient than the AMD based one. We can also see that the SSD has better performance than a regular HDD. If we do a similar test on an attached data disk on both instances:

G-series instance data disk READ


A-series instance data disk READ


We can see that the results are almost the same, but the CPU usage is again lower. Even though this instance can have up to 64 data disks, don’t think about using it with Storage Spaces just yet; wait until Premium Storage is available.

Netscaler with multiple packet engines and WHY you should size properly

Working with Netscaler, I often stumble across people who don’t size the packet engines properly on their Netscaler VPXs.

By default, a Netscaler VPX is deployed with 2 vCPUs and 2 GB of memory. Of those 2 vCPUs, one is used for management purposes (CPU 0 is the management core) and the second is used as a packet engine, handling load balancing, compression, content switching and so on.

So how can we see the utilization of these CPUs? (And no, we cannot use regular Unix tools like top; they will not display it properly. Since the packet engine core is always polling for work, it will be reported as fully utilized even when there isn’t any work for it. That’s why we need to use stat system.)

We can use the commands stat cpu and stat system.
On a regular VPX we will only see one packet engine CPU, since there are only two vCPUs.

For a regular VPX 1000 we can have a maximum of 3 packet engines, meaning a total of 4 vCPUs (which also means we need to add more memory to the VM). You can see the chart from Citrix here –> http://support.citrix.com/article/CTX139485
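Since adding packet engines on a VPX is just a matter of giving the VM more vCPUs and memory, the resize itself can be scripted with PowerCLI on vSphere. A minimal sketch (the VM name is an example, and the VPX should be shut down first):

# give the VPX VM 4 vCPUs and 8 GB of memory
Set-VM -VM "NSVPX-1000" -NumCpu 4 -MemoryGB 8 -Confirm:$false

After powering it back on, stat cpu should show the additional packet engines.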

So let’s do a quick comparison to see if these changes improve our performance. The first run below is on a VPX 1000 with 2 vCPUs and 2 GB of memory. The second, further down, is on a VPX 1000 with 4 vCPUs and 8 GB of memory.

(NOTE: Multiple packet engines are not available on Hyper-V, only VMware and XenServer. Also note that this is CPU dependent as well; the better the CPU, the better the SSL performance.)

In order to test this I used a benchmarking tool from Apache called ab (short for ApacheBench).
It fires a large number of concurrent requests against a load-balanced vServer; in this case the benchmark runs against a plain HTTP vServer.

ab -n 50000 -c 1000 http://192.168.10.32/index.html (this does a benchmark using HTTP GET, with 50000 requests in total and 1000 concurrent requests against the vServer address)
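If the vServer you want to stress is an SSL vServer, you can point ab at it over HTTPS as well (assuming your ab build was compiled with SSL support), which also exercises the SSL handling on the packet engines:

ab -n 50000 -c 1000 https://192.168.10.32/index.html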

Notice that in this first run the packet engine CPU is over 90%; a bit more traffic and my Netscaler would be unable to process the packets.

When I ran the same test with 4 vCPUs (where 3 are packet engines) I got a much more evenly distributed load (here I just used the stat cpu command to see the load on each individual PE).

So remember, scale your packet engines accordingly! If you are unsure whether you need to scale out, take a look at your current environment with stat cpu during the busiest part of the day.

Lync 2013 setup with Citrix Netscaler

I’m getting a lot of search hits on my blog for terms like “Lync and Netscaler setup”, “load balancing Lync”, “Lync and HA Netscaler” and “Lync and reverse proxy”, probably because I have a lot of content around Netscaler. So, to answer the question: can we use Netscaler to do all these things? Load balancing, high availability and reverse proxy for Lync 2013?

Sure we can, I even recommend it.

Citrix Netscaler is supported by Microsoft as a load balancer for Lync (http://technet.microsoft.com/en-us/office/dn788945), both as a hardware appliance and as a pure virtual software appliance. Citrix has also made a deployment guide which shows how we can deploy Lync behind Netscaler: http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/microsoft-lync-2013-and-citrix-netscaler-deployment-guide.pdf

You can also read more about it in this datasheet here –> https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/citrix-netscaler-datasheet-microsoft-lync-2013.pdf

So why should you use Netscaler for Lync? According to Gartner, it is one of the few ADCs recognized as a leading product.

Remember to use different TCP profiles for external traffic and internal traffic, since this can drastically improve network performance (if you are using TCP for SIP traffic, and for other TCP-based connections).
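As a rough sketch, this can be done with the built-in TCP profiles on the Netscaler CLI (the vServer names are placeholders; nstcp_default_tcp_lfp is tuned for WAN/long fat pipe links and nstcp_default_tcp_lan for the internal LAN):

# external, WAN-facing vServer
set lb vserver vs_lync_external -tcpProfileName nstcp_default_tcp_lfp
# internal vServer towards the Front End pool
set lb vserver vs_lync_frontend_int -tcpProfileName nstcp_default_tcp_lan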

Trouble discovering vWorkspace database with Foglight running SQL Server 2012 R2

Had some issues today when setting up vWorkspace Foglight against SQL Server 2012 R2; whatever I tried, I always got the error message:

Invalid Object name “trace_xe_action_map”

The problem is that SQL Server 2012 R2 moved trace_xe_action_map from the dbo schema to the sys schema. To fix this we need to run a script on the vWorkspace database (or rather, create an alias for it).

Run the query:

CREATE SYNONYM dbo.trace_xe_action_map FOR sys.trace_xe_action_map;
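If you want to verify that the synonym is in place afterwards, you can query sys.synonyms on the vWorkspace database:

-- check that the synonym exists and what it points to
SELECT name, base_object_name FROM sys.synonyms WHERE name = 'trace_xe_action_map';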


And voila!


Unidesk Tech Preview for Hyper-V

Already 2015! Can’t believe how time flies…
Well, anyway, back to blogging, and I’m starting the year by trying out the Unidesk Tech Preview for Hyper-V. For those who aren’t familiar with Unidesk, they focus on desktop layering for VDI. So what does that mean?

* Desktop provisioning
* Application Layering
* Image Management
* Persistent Personalization
* Storage Optimization

They have supported VMware since they started the company, but now have a tech preview out for Microsoft Hyper-V, which I’m going to walk through in this blog post.
So let’s start with the concept of a regular VDI machine: it is typically either persistent (static) or non-persistent. Some vendors, like Citrix with Personal vDisk (PvD), offer a layering option that lets us maintain a single computer image while still giving the user a persistent experience. Unidesk takes this a step further, letting us keep a single OS image and deliver applications and user personalization on top of it using layering technology.

So, for instance, inside a VDI virtual machine the C: drive and the applications might consist of many different layers: Unidesk has a filter driver which blends all the different layers into a virtual C: drive that is actually built up of many virtual hard disks.

Dynamically built from layers

(From http://blog.unidesk.com/how-does-unidesk-layering-work-part-1-file-system)

In my test environment I have a Windows Server 2012 R2 server running the latest tech preview from Unidesk.
The tech preview contains, in essence, three components:

* Hyper-V installer (which contains the management appliance and the CachePoint appliance)
* Hyper-V broker agent
* Gold image tools

So we start by running the installer

Then just keep going through the install wizard; it’s pretty straightforward. First we need the management appliance installed. This is used for all management purposes and runs as a CentOS VM. Note that it needs DHCP.

When the appliance has been created you will get info about its IP address, username and password.

And oh yeah, the management console uses Silverlight (still, it looks pretty cool!).

Before we do anything else we need to add the CachePoint appliance. This appliance maintains the master copy of all of the operating system and application layers in the environment; in Hyper-V, the layers are stored as VHDX files. So go into settings and configuration, choose CachePoint Settings, and point to the appliance template that is part of the installation.

Give the appliance a name. 

Select a storage location for each storage tier.

Choose which virtual network it should be connected to, and whether we want DHCP or static addresses.

And voila! The CachePoint is created.

After we are done with the basic setup we need to create a gold image that we can use to deploy our VDI collection. In my setup I’m using a simple Windows 7 VM, basically just following the steps listed by Unidesk here: http://www.unidesk.com/support/learn/2.5.5/deploy/create_an_os_layer/deploy_os_gold_image_prepare#Prepare (the important difference is that instead of VMware Tools we need to install the latest Hyper-V Integration Services). Then run the Unidesk setup and connect the gold image to the management appliance.
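For the Integration Services part, one way to do it on a 2012 R2 host is to mount the built-in vmguest.iso in the gold image VM and run the installer from there (the VM name is an example, and this assumes the VM already has a DVD drive):

# attach the Integration Services setup disk to the gold image VM
Set-VMDvdDrive -VMName "Win7-Gold" -Path "C:\Windows\System32\vmguest.iso"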

 

And remember to take a snapshot (checkpoint) of the gold image before running the Unidesk setup in it; if you don’t, you will have to recreate the gold image whenever you need to update it.
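On a Windows Server 2012 R2 Hyper-V host that is a single PowerShell cmdlet (the VM and checkpoint names are just examples):

# take a checkpoint of the gold image VM before the Unidesk tools go in
Checkpoint-VM -Name "Win7-Gold" -SnapshotName "Before Unidesk setup"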

Next we need to create an OS layer: give it a name and a version, and point it to the gold image that was just created. After that is done we go ahead and create a desktop collection based on that OS layer.

Give it a name (I skipped the broker config since I didn’t want to connect it to another environment just yet).

Define whether you want persistent or non-persistent desktops.

Then we need to assign it to the OS layer we created earlier (which was created from the gold image).

After that is done we can create a desktop for a user.

 

Then just finish the wizard (I didn’t have any applications assigned yet, and I set the user data disk to 8 GB for now). When the VM had been created I saw that the C: drive was only 8 GB, and in Disk Management I saw that the VM consisted of multiple hard disks, which Unidesk uses for its layering technology.

So far so good; stay tuned for more on application layering as well.