Monthly Archives: March 2016

Remote protocols benchmarking, Citrix, VMware and RDP – Part 2: Blast Extreme vs ThinWire

Yesterday I released the first post of my protocol benchmarking, stating that I would do a follow-up in part two covering packet-loss scenarios. Well, I lied a bit… After some feedback to my email I decided to alter it. Instead of looking at PCoIP vs Blast Extreme, I decided to compare Blast Extreme (TCP) vs Citrix ThinWire (TCP), since they are both TCP-based protocols and might have more in common than a UDP-based protocol compared with a TCP-based one.

Important to remember, though, that Blast Extreme can also be configured to use UDP. It isn't enabled by default; you need to change it using Group Policy after downloading the GPO bundle from VMware.


Now, the testing scenario is the same as in the previous post:

I'm starting with no added packet loss or latency; the latency is between 5 and 10 ms in the initial test. This is Citrix XenDesktop 7.8 on 2012 R2 RDS, and the same with VMware Horizon 7 and the latest View clients on 2012 R2 RDS hosts. No special configuration whatsoever; these are plain default settings.

Again, if you have any comments/feedback/changes, please send them to 
Another detail: these posts only test the display protocols in certain scenarios and do not conclude that one is better than the other.

Citrix ThinWire

Bandwidth total 95.9 MB, maximum 449 KBps (which was during the YouTube testing)


CPU Usage ctxgfx process (Maximum 18.5% CPU Utilization)


On average, the CPU usage of the process was only about 7.8%


Memory usage BrokerAgent.exe (Maximum usage)


Memory usage ctxgfx.exe (Maximum usage)


As you can see, the total RAM usage for the Citrix processes during the session averaged about 358 MB


I also decided to rerun the Horizon View tests to see if there were any deviations from yesterday's results.

Pretty much the same results

VMware Blast Extreme

Bandwidth total 243 MB, with a maximum of 1.6 MBps (which is the same maximum usage I noticed yesterday as well)


CPU Maximum


CPU Average


RAM Average


RAM Maximum


RAM total across services (245 MB)


You can notice here that VMware uses a lot less RAM on average (about 100 MB) per agent than Citrix does. Blast Extreme by default also uses more CPU than ThinWire; this might be because it tries to maximize bandwidth usage, which was especially noticeable during the YouTube part. Don't pay too much attention to the CPU maximum, since it can also be a temporary spike in the agent.
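To put that bandwidth gap in numbers (assuming the ThinWire total above is 95.9 MB), Blast Extreme moved roughly two and a half times as much data over the session:

```shell
# Total session bandwidth: Blast Extreme (243 MB) divided by ThinWire (95.9 MB)
awk 'BEGIN { printf "%.1f\n", 243 / 95.9 }'   # -> 2.5
```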

Now, after adjusting the maximum session bandwidth in the Group Policy for VMware Blast, I managed to lower the CPU usage on the Blast endpoint. But from what I can see from the initial testing, ThinWire still does a better job compressing the data before moving it across the wire.

Stay tuned for part 3

Remote protocols benchmarking, Citrix, VMware and RDP – Part One: PCoIP vs Blast Extreme

So I have been eagerly awaiting the VMware Horizon 7 release, and with it a new display protocol called Blast Extreme, which is a huge improvement over the former protocol, PCoIP. PCoIP is still available as an option when setting up remote desktop pools in Horizon 7 as well. Now, since I have been eagerly testing the former protocol (PCoIP), I found that it was missing the edge that the other protocols had.

Now, remote display protocols all have the same goal; it's like driving a car. You need to get from one location to another, the data needs to arrive safely, it needs to handle traffic (congestion), it needs to be safe (encryption, authentication), and as a driver we want comfort, which enhances the session (remote display extensions).

But another important factor for a display protocol is that it needs to ADAPT! The most common shortcoming among the protocols is the lack of a good way to adapt to network conditions.

Good for cross-country driving where the road is bumpy and so on; a bit slower, but you know it will get there (TCP)

Good for the open road, where there are no bumps and little traffic: high speed! (UDP)

But anyways, back to the main point of this post, which is to show the differences between the different display protocols. A lot of benchmarks focus on stats, which I will also do, but I will add comments on each test to TELL you about the user experience under the different scenarios, which is what it is all about!

So to give some context on what I'm going to test initially: I'm setting up plain Citrix XenDesktop 7.8, VMware Horizon 7 and Windows Server 2012 R2 RDS without any modifications. The Citrix environment is placed behind a NetScaler Gateway, the RDS environment behind an RDS Gateway, and lastly the Horizon environment behind a Security Server. All stats will be gathered with uberAgent, which I use to get information about the different processes running on the instances (CPU/memory/disk I/O and so on). This Visio drawing sums it up.


The first simple test was setting up an RDS instance on each platform with nothing else running on each host (just the required infrastructure, so it couldn't affect the results). Each test was also performed separately to ensure nothing would interfere, and of course all virtual machines and ESXi hosts are configured in a similar manner, with no less/more hardware for one or another. Take note that on the ESXi host for Horizon I had two virtual machines (one for Blast Extreme, another for PCoIP).

However, take note: this publication is more for my personal fun of finding this stuff out, and not to tell you which display protocol is better (since I cannot possibly test every different scenario!).

Another important fact is that all tests, from the end-user point of view, were performed from a physical Windows 10 computer on a separate 100 Mbps broadband connection, using an Ethernet cable and with about 5 ms latency to the dedicated lab environment.

The tests include the following user workload:

  • Logging in and waiting 1 minute for uberAgent to gather data and for the session to get up and ready.
  • Opening a PDF file and scrolling up and down for 1 minute. (The PDF is located locally on the VM to exclude network I/O.)
  • Connecting to a webpage (a Norwegian newspaper which contains a lot of different objects and heavy graphics) and scrolling up and down for 1 minute.
  • Opening Microsoft Word and typing randomly for 1 minute.
  • Last but not least, our favorite: opening the Captain America Civil War trailer in fullscreen using Chrome for the full duration of 2 minutes.

This allows us to see which workloads generate how much bandwidth, CPU, and RAM usage with each of the different protocols.

To collect and analyze the data, we used the following tools:

· Splunk – uberAgent (gets info we didn't even think was possible!)

· Netbalancer (Show bandwidth, set packet loss, define bandwidth limits and define latency)

Blast Extreme

First test (5 MS latency, no packet loss) Blast Extreme

Bandwidth usage (248 MB total, maximum usage 1.6 MBps)


CPU Usage (Splunk, UberAgent) VMBlastW.exe (About 8.2%)


RAM Usage, Average Usage


RAM, Maximum Usage



Now, the interesting thing about PCoIP is that it consistently sends packets of 1198 bytes. It never goes beyond that point, and it looks like it buffers data until it reaches that byte size before sending it across the wire.
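One way to confirm this is to export frame lengths from a capture and count how often each size occurs. A minimal sketch (the capture file, port filter and file names are assumptions; PCoIP normally runs over port 4172):

```shell
# Export one frame length per line from a capture, e.g.:
#   tshark -r pcoip.pcap -T fields -e frame.len > lengths.txt
# Here a few sample values stand in for a real export.
printf '1198\n1198\n1198\n415\n1198\n' > lengths.txt

# Count the occurrences of each frame length, most frequent first
sort lengths.txt | uniq -c | sort -rn
```

With a real PCoIP capture, the 1198-byte size should dominate the top of this list.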

First test (5 MS latency, no packet loss) PCoIP

Bandwidth usage (184 MB total, Maximum usage 999 KBPS)


CPU Usage (Splunk, UberAgent) PCoIP (About 24.2%)


RAM Usage , Average Usage


RAM, Maximum Usage


Conclusion of test 1: Blast Extreme had a much better user experience and also uses a lot LESS resources on the endpoint. Note however that Blast Extreme uses more bandwidth. This could be TCP adjusting for packet loss, but I think Blast Extreme does some initial tests to gauge bandwidth capacity and then tries to maximize usage of whatever it has available. This clearly showed on the YouTube clip, where Blast Extreme delivered a nearly crisp-perfect picture for the entire session.
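The bandwidth difference from the totals above (248 MB for Blast Extreme vs 184 MB for PCoIP) works out to roughly a third more data on the wire:

```shell
# Extra bandwidth Blast Extreme used compared with PCoIP, as a percentage
awk 'BEGIN { printf "%.0f%%\n", (248 - 184) / 184 * 100 }'   # -> 35%
```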

So let's do a test with some added latency to see how they compare.

Blast Extreme 200 MS

First test (200 MS latency, no packet loss) Blast Extreme

(Bandwidth usage 43 MB, maximum bandwidth 201 KBps.) Latency has a really bad effect on TCP, which is similar to what I see when testing ThinWire on Citrix. I can also see from Wireshark that the packet length is still fixed at a set size, but it uses smaller packets whenever the buffer is low or empty.


But during the YouTube part it tries to maximize the byte size in each packet.


Memory maximum (Same goes for memory)


PCoIP 200 MS

First test (200 MS latency, no packet loss) PCoIP

(Bandwidth usage 118 MB, Maximum bandwidth 689 KBPS)


CPU Average (it looks like it measures the bandwidth usage, which then causes the CPU to do a lot less work on the endpoint)


RAM Usage is about the same


Summary part one:

With Blast Extreme, VMware has done a great job adding a new protocol to the Horizon suite. It provides great quality with even lower resource usage on the endpoints than PCoIP does, even though it uses TCP instead of UDP.

In part two, I'm going to take a closer look at different packet-loss scenarios and add RDP to the mix to see how they differ in terms of performance. Since RDP is built into the OS, the resources it uses are minimal, but Blast also does a great job at using few resources, and it looks like VMware has done some work on the agent, since the resource usage went down from Horizon 6.2 to Horizon 7.

New eBook project status – NetScaler Gateway deep-dive

As mentioned before, I am currently working on a new eBook on NetScaler Gateway, based upon feedback by mail and some random polls on Twitter. The eBook is well underway and is as of now about 100+ pages. I don't know yet how long the book is going to be, but again, it's a free eBook and I'm the one responsible for it, so I don't have any pressure on the maximum length.

But there is one thing I wanted to ask people about. Below is the current index. Take a quick look at it and see if there is anything missing! If so, please notify me at, I would love to get some feedback on it as of now (which again would make it even better!)

  • NetScaler Gateway basics   
    Licensing and editions
    When to use what?
    NetScaler and traffic flow   
    General settings for NetScaler   
    External authentication for administrators   
    Setting up ICA-proxy   
    SSL Settings   
    Published Applications   
    Citrix Receiver policy   
    Citrix Receiver for Web Policy   
    ICA Proxy with two armed   
    Framehawk and Audio over DTLS   
    Double-hop configuration   
    RDP Proxy   
    VPN and Endpoint analysis   
    Full VPN with endpoint scanning   
    Preauthentication policy   
    Session policy   
    Split tunneling   
    Client IP pools   
    Clientless Access   
    Adding resources   
    SSO to internal resources   
    Unified Gateway   
    Group Based Access and control   
    Integrating with XenMobile   
    Integrating with ShareFile   
    High availability and clustering   
    Portal customization   
    Security settings   
    Other design examples   
    Multitenant ICA-Proxy   
    Multiple Active Directory Domains   
    Monitoring and AppFlow   
    Command Center   
    Appendix A: Expressions   
    Community blogs:   

Direct Restore to Microsoft Azure from Veeam!

Today Veeam announced a new cool feature for all of us Microsoft Azure friends!

So, Veeam Direct Restore is a free virtual appliance from Veeam which delivers cloud restore options for Veeam Backup & Replication and also for Veeam Endpoint Backup. (Now remember, even though it is free, you still need to pay Microsoft for the compute resources!) But anyways, this allows us to restore our on-premises Windows-based VMs, physical servers and endpoints directly to Azure!

About Veeam Direct Restore to Microsoft Azure

This is based upon a marketplace template in Azure, which can be found here –>  (and no, you cannot provision a marketplace VM with a free subscription)

After the machine has been provisioned and you have logged in, you have two options available:


Veeam B&R and Azure Recovery. Clicking on Azure Recovery will put the entire screen into a kind of advanced start-menu view, and from there we are given different options. (Whoa! Green!) this is just meant


So let's set up the configuration.


It's pretty straightforward: you use a publishsettings file to connect to the Azure subscription.


Now, in order to actually use this, we need to ship a Veeam backup file up there in some way. There are multiple ways to do this: we can use the Veeam FastSCP for Azure feature (which is the recommended way), or we can use the SMB 3 share option which was just released in Azure (which I have blogged about here –>


So from the Veeam backup server I just added the network share, which points to a storage account in the same region as my virtual appliance, and started uploading (it took about 15 minutes).

Now, when you have copied the file to the share and then done another copy to the appliance itself, we can do a restore.


So now we specify the Azure-specific restore options, such as the region and subscription it should use.


Specify a Virtual machine size


And we are good to go!


And if I now go into my Azure Account, I can see my virtual machine


So this is indeed interesting: if you are already using Veeam with Cloud Connect today, you can actually ask your service provider to restore your virtual machines to an Azure subscription of your choice!

This can also be used as a neat way to move virtual machines from an on-premises backup (physical, VMware, Hyper-V) and just spin up virtual machines directly in Azure from the backup. So great job, Veeam; hopefully this feature will be part of B&R eventually!

Some important notes, as of now, which you should take note of as well:

  • Restored virtual machines are created in classic mode!
  • Only Windows-based virtual machines can be restored.
  • The maximum disk size in Azure is 1023 GB (hence virtual machines with larger disks cannot be restored).
  • If restoring virtual machines from Hyper-V, only Generation 1 is supported.

A bit for my norwegian followers! Club Exclusive

So, just a bit of advertisement for our upcoming event at Exclusive Networks here in Norway!

On the 27th of May we are having a whole-day event with topics about the software-defined datacenter, deep-dive cybersecurity and datacenter networking. Interested? You can read more about it here –>

We have a lot of the coolest vendors out there attending as well: Nutanix, Fortinet, Arista, VMTurbo, Avi Networks, Rubrik and so on.

And it's a free event, so what are you waiting for?

Closer look at Liquidware labs and FlexApp

After I initially published my blog post on application layering vs application virtualization, I got more in-depth with the different technologies out there. Sure, I didn't cover all of them; I just focused on the ones I found the most interesting. Now, at the time I was working with this type of technology, I stumbled across Liquidware Labs, just when they announced their Micro Isolation feature on Twitter.

My initial thought was "huh…" That was my first mistake, since I assumed that Liquidware Labs was only doing UEM, which I haven't done a lot of work on. But boy, was I wrong…

After digging into the technology over the course of the last month, I noticed they have a lot more to offer besides UEM functionality. One of the core products, ProfileUnity, is their UEM offering; as part of this they have the FlexApp feature, which is their layering feature.

NOTE: Like other layering technologies, FlexApp uses filter drivers to control the merging of the different layers into the operating system.


Now, FlexApp supports two distinct types of layering capabilities:

  • FlexApp DIA (Applications that are prepackaged using the packaging console, which can then be delivered to a VDI/RDSH environment or to a clean physical desktop.) Using the packaging console we can easily make changes to the package, clone it, add scripts and so on. DIA can be attached to a desktop either using a VHD file or as a VMDK, also known as a FlexDisk.


  • FlexApp UIA (Allows end users to install their own applications on a custom VHD or to a persistent disk without requiring administrators to prepackage applications for them.)

Now, these features are configured from the ProfileUnity server and can be assigned to an AD group or user, and can also be configured using the filter engine that Liquidware Labs has.


Using the filter management option in ProfileUnity, I can pretty much make a filter based upon anything I want, which allows me to customize my configuration and FlexApp deployments even further.


Now, on top of this they have for instance the Micro Isolation feature, which addresses an issue some of the other layering products have.

So what does the Micro Isolation feature do?
Let us say that we have two application layers, and within each layer we have one single application. Both of these applications require the same DLL file, but they require different versions of it.

If we then deploy both these layers to an endpoint, we get a conflict where, for instance, the last layer wins. With micro isolation, what happens is that the application's request for the DLL file is redirected back to within its own container. This feature essentially makes sure that we do not get any application conflicts, which has been one of the key selling points for application virtualization.
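A toy illustration of the "last layer wins" problem, with directories standing in for layers and plain text files standing in for the two DLL versions (this sketches the merge semantics only, not how the actual filter driver works):

```shell
# Two layers, each shipping its own version of the same shared DLL
mkdir -p layer_app1 layer_app2 merged
echo "dll v1" > layer_app1/shared.dll
echo "dll v2" > layer_app2/shared.dll

# Naive merge: layers are applied in order, so the last one wins
cp layer_app1/shared.dll merged/
cp layer_app2/shared.dll merged/
cat merged/shared.dll        # both applications now see "dll v2"

# Micro isolation, conceptually: each application's request for the DLL
# is redirected back to its own layer, so each keeps its own version
cat layer_app1/shared.dll    # app1 resolves "dll v1"
cat layer_app2/shared.dll    # app2 resolves "dll v2"
```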

Now, this has been a quick glimpse of Liquidware Labs and their application layering feature. I do believe that having application layering as part of a UEM stack makes a lot of sense, since it is all part of delivering context to an end-user session. At the end of the day we need a solution that empowers users to work flexibly, while making it easier for us IT admins to stay in control and keep management easy.

Need for speed–Benchmarking Citrix NetScaler

Citrix has announced the move to more NFV and is pushing the limits of NetScaler towards up to 100 Gbps on a single VPX (yet to be seen), but since this was a hot topic a week back, I decided to benchmark a VPX to see what it can actually deliver in terms of performance over SSL.

Now, I love the VPX; it is a good piece of software and is supported on most hypervisors. Therefore I wanted to give this a try using the Apache benchmarking tool abs (the s is for secure).

Now from the Citrix datasheet, the VPX has the following attributes


The most important one is that it can handle up to 750 SSL transactions/sec with 2K certificates. My main objective was to test the number of transactions it could handle, to see if it could live up to the claim and whether we could do anything to tweak the results.

Now, when setting this up, I decided to do an isolated test within an ESXi host, to make sure I was not losing any performance to our crappy network cards. The host itself was not constrained in any way on CPU/memory or storage, hence it is not included as part of the test.

The environment looked something like this (NetScaler VPX 1000):


To begin with, I started some tests against a simple vServer running with a self-signed certificate with only a 512-bit key, against the default setup of the NetScaler, which has 2 GB RAM and 2 vCPUs (where one is used for packet processing).
NOTE: During testing I always ran the same test twice to make sure there weren't any anomalies.

I ran a simple abs.exe -n 50000 (number of requests) -f TLS1.2 (SSL protocol to use) -c 100 (concurrent connections) https://host/samlefile.htm from the two hosts concurrently.

The packet CPU wasn’t really having a hard time with it


And the test finished rather quickly as well, after about 65 seconds on both targets.


I could also see that the NetScaler was processing about 1600 SSL transactions per second.
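That number is consistent with the test parameters: two clients, 50,000 requests each, both finishing in roughly 65 seconds:

```shell
# Average SSL transactions per second across both clients
awk 'BEGIN { printf "%.0f\n", 2 * 50000 / 65 }'   # -> 1538
```

Which is in the same ballpark as the ~1600 transactions/sec the NetScaler reported.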


Then I replaced my certificate with a trusted certificate with a 2048-bit key, and it became an entirely different story… The packet CPU was hitting the roof at 100%.


I could only get about 170 transactions per second on that single vCPU.


And the test took almost 600 seconds to complete using the same testing parameters


Now, apparently the single packet CPU is the limit here. Therefore, according to the packet engine CPU doc from Citrix, I could adjust this to 4 vCPUs (and have 3 packet processing CPUs); I just needed to adjust the RAM as well. Then I did another test using the same parameters.
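As a rough expectation before the rerun: if 2K-cert SSL termination scaled linearly with the number of packet engines, about 170 transactions/sec on one engine should give roughly three times that on three:

```shell
# Linear-scaling estimate for three packet engines
awk 'BEGIN { print 170 * 3 }'   # -> 510
```

So anything around 500 transactions/sec would indicate near-linear scaling.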


After adjusting the CPU and memory, I could use the stat cpu command to see if my packet engines were available.


And I was good to go! After spinning up the test I could see that the load was balanced nicely across the different packet engines.


I could also see that the number of SSL transactions per second on the NetScaler increased as well.


And the test took almost half the time the first one did.


Still not anywhere near what I want in SSL transactions, though, hence I started to do some modifications, since there had to be a limit somewhere.

  • Adjusted the TCP profile (had no effect since TCP is not the limitation; I got the same result. This would likely have made a difference outside the LAN.)
  • Changed the protocol from TLS 1.2 to SSL 3 to see if that had any effect (no change at all).
  • Changed the SSL quantum size from 8 to 16 KB (no change).
  • Decided to add two more nodes to the abs testing cluster and run 4x parallel tests.

This gave the CPU engine a lot of work to do, and the SSL transactions went up to about 400.



Then I tried to adjust the number of concurrent connections to a higher number, but still got no more than about 400 SSL transactions per second.

So this was part one of SSL performance benchmarking on NetScaler, stay tuned for part two!

HCI – The new kids on the block

So, with Cisco and HP now accepting that HCI is here to stay, with Cisco announcing their solution based upon Springpath last week and HP coming soon (already targeting Nutanix), and let us not forget the newly announced VCE VxRail as well, it seems like everyone is jumping on the hyperconverged infrastructure bandwagon these days.

So why are they doing this? Why is everybody now jumping on the HCI train? Well, there is a reason why companies like SimpliVity, Atlantis and Nutanix exist today… they had something to give to the market, and now the big guys are done losing money to these companies and want to be able to sell HCI to their customers as well. That is one perspective on it. Another is of course that, for instance, HP wants to be a full-stack vendor (if a customer wants components from A to Z, then HP can deliver the entire stack), which again allows the big guns to make more money…

So what is the future going to look like for all these companies competing in the HCI marketspace? Let’s look at the hardware vendors first

  • Cisco: Bought a former HCI company, Springpath. They didn't have much success with Whiptail when they bought it, but on the other hand Cisco has built up a solid reputation with their UCS portfolio. Their roots are still in networking, and now they are of course pitching ACI as well. It is important to remember that Cisco also has a partnership with SimpliVity; how is that going to play out?
  • HP: I don't know what their HCI stack is going to look like, but HP has been pretty silent in this space. They were part of the EVO:RAIL partnership but looked like they dropped out of that agreement pretty early. They have launched some solutions based upon StoreVirtual, but I have no idea how that turned out.
  • Dell (which now has the GO from the EU for the merger between Dell and EMC, which owns VCE and VMware) is going to be a powerhouse of different hyperconverged solutions. Dell has pretty much a partnership with every HCI vendor out there (Atlantis, SimpliVity, Nutanix (XC series)).
  • Lenovo: Initially had a partnership with SimpliVity at the start of 2015, but eventually signed a deal with Nutanix which started in 2016. Did they have a change of heart? From what I'm hearing, Lenovo is focusing hard on Nutanix these days.

Now, let's think about this for a minute. HP Enterprise (which is now a separate company), once they get an HCI solution, can pretty much sell you anything you want inside a datacenter. But their primary focus is not going to be HCI; it will be a good mix of their entire stack. How can we trust that their HCI solution is going to be good enough, or that HP is going to invest properly in it so it can be a good solution in the long term?

And what about Cisco, which has existing partnerships with VCE and SimpliVity and is now launching its own HCI solution? What's going to happen there? Can they continue with their existing partnerships and still focus on their own HCI platform?

Dell, which is going to be the largest storage company with the purchase of EMC, is going to have some hard choices to make, since they can't have the largest restaurant menu in the world of different storage options and OEM partnerships with all the different vendors. How can they focus on HCI? Most likely they won't, and will just be a reseller through the OEM partnerships.

Now, let's move away from the hardware vendors and look at the software vendors, VMware and Microsoft. VMware is gaining more traction with the release of VSAN 6.2 (VSAN is also used in VxRail), which is of course tightly integrated into the VMware ecosystem. On the other hand, we have Microsoft with their upcoming solution in Server 2016, which I know is going to be a cost-effective way to get HCI: if you have Windows Server 2016 Datacenter, you have the features to do HCI with Microsoft.

Let us not forget that VMware and Microsoft are also competing in the virtualization space, each with their SDS solution tightly integrated into their own stack, which is going to be interesting to watch. From the looks of it, Springpath and the upcoming HP stuff support only VMware, which is going to help VMware keep a steady market share for the time being, and SimpliVity also supports only VMware.

Now, again, think for a minute: if I needed an HCI vendor, should I look at my hardware vendor and see what kind of solution they would give me? It's important to remember that it is not their key focus area; it is "just" another solution in their portfolio. Should I let the hypervisor I use decide which HCI vendor I pick? Again, it would be tempting, since in most cases we can then maybe reuse existing equipment (VMware and Microsoft do it all in software as long as you follow the HCL), and the solution will be tightly integrated with the hypervisor.

But then we also have the other guys in this marketspace, Atlantis, SimpliVity and Nutanix, whose core focus is HCI. The only reason they exist is their one product (or strategy) in this emerging market.

Now, whatever happens, these are indeed interesting times, and it looks like this is going to drive the HCI market even further in the time to come.

Create a custom resource in the AzureStack marketplace – example XenDesktop

After dabbling with AzureStack for some time, I decided to see how easy it was to add custom resources to the AzureStack marketplace so users can provision virtual machines themselves. As an example I chose XenDesktop (an empty site with a pre-installed SQL server).

There are a few steps needed to actually do this.

1: Download Azure Gallery Packaging Tool, here –>

2: Open the downloaded file, copy the "SimpleVMTemplate" folder into another folder and rename it to something else.

3: AzureStack uses Resource Manager, so you need a finished template which you can use to set this up. I still need to complete my template for XenDesktop (I will then place it on GitHub), but until then we can use the sample azuredeploy-101-simple-windows-vm JSON file.

4: Make changes to the files. There are some things to take note of; there are 3 folders:

DeploymentTemplates —> JSON ARM files go here

Icons –> Icons which should be visible in the AzureStack portal are placed here. There are 4 different sizes: Large (115×115), Medium (90×90), Small (40×40) and Wide (255×115), and all the images should be PNG.

Strings –> Here we have the resources JSON file, where we define the different strings that should appear in the AzureStack portal. An example:

  "displayName": "XenDesktop",
  "publisherDisplayName": "Citrix",
  "summary": "Sets up a fully empty XenDesktop 7.8 site",
  "longSummary": "Sets up a fully empty XenDesktop 7.8 site with a local SQL Express installed into it",
  "description": "<p>This is just a sample of the type of description you could create for your gallery item!</p><p>This is a second paragraph.</p>",
  "documentationLink": "Documentation"

We also have


The manifest.json file is what ties it all together:

{
    "$schema": "",
    "name": "XenDesktop_empty_site",
    "publisher": "Citrix_Systems",
    "version": "1.0.0",

    "displayName": "ms-resource:displayName",
    "publisherDisplayName": "ms-resource:publisherDisplayName",
    "publisherLegalName": "ms-resource:publisherDisplayName",
    "summary": "ms-resource:summary",
    "longSummary": "ms-resource:longSummary",
    "description": "ms-resource:description",
    "longDescription": "ms-resource:description",
    "links": [
        { "displayName": "ms-resource:documentationLink", "uri": "" }
    ],
    "artifacts": [
        {
            "name": "xendesktop",
            "type": "Template",
            "path": "DeploymentTemplates\\xendesktop_empty_site.json",
            "isDefault": true
        }
    ],
    "icons": {
        "Small": "Icons\\Small.png",
        "Medium": "Icons\\Medium.png",
        "Large": "Icons\\Large.png",
        "Wide": "Icons\\Wide.png"
    },
    "categories": [ "Desktop brokers" ],
    "uiDefinition": {
        "path": "UIDefinition.json"
    }
}
So I have added the name of the template, the publisher and the version, and changed the artifacts section to reference the JSON template to use. I also defined another category, called Desktop Brokers.

After you have successfully changed everything, we need to convert it to an azpkg. This can be done using AzureGalleryPackager.exe (which is part of the download),

Using the following parameters

AzureGalleryPackager.exe -m c:\something\xendesktoptemplate\manifest.json -o c:\output

Make sure there are no spaces in the version or publisher; that will generate an error when creating the azpkg. Now we need to import this into AzureStack. The way to do this is to RDP into the ClientVM, open up PowerShell, and use this script to connect to your local AzureStack resources. Remember to change your AadTenantId.

# Add the Microsoft Azure Stack environment

# Configure the environment with the Add-AzureRmEnvironment cmdlet
Add-AzureRmEnvironment -Name 'Azure Stack' `
    -ActiveDirectoryEndpoint ("$AadTenantId/") `
    -ActiveDirectoryServiceEndpointResourceId "" `
    -ResourceManagerEndpoint ("https://api.azurestack.local/") `
    -GalleryEndpoint ("https://gallery.azurestack.local/") `
    -GraphEndpoint ""

# Authenticate a user to the environment (you will be prompted during authentication)
$privateEnv = Get-AzureRmEnvironment 'Azure Stack'
$privateAzure = Add-AzureRmAccount -Environment $privateEnv -Verbose
Select-AzureRmProfile -Profile $privateAzure

# Select an existing subscription where the deployment will take place
Get-AzureRmSubscription -SubscriptionName "Default Provider Subscription" | Select-AzureRmSubscription

Using the command Get-AzureRmSubscription, we can check that we are connected; take note of the SubscriptionId. Next we use the command:

Add-AzureRMGalleryItem -SubscriptionId XXXXXXXXXXXXXXXX -ResourceGroup system -Name Citrix_Systems.XenDesktop_empty_site.1.0.0 -Path C:\Azurestack\Citrix_Systems.XenDesktop_empty_site.1.0.0.azpkg -Apiversion "2015-04-01" -Verbose

Note that the NAME needs to match what we have defined in the manifest.json file

  "name": "XenDesktop_empty_site",
  "publisher": "Citrix_Systems",
  "version": "1.0.0",
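In other words, the -Name argument is just the publisher, name, and version from manifest.json joined with dots. A quick, illustrative way to compose it in PowerShell (not part of the Azure Stack tooling, just a convenience sketch):

```powershell
# Read manifest.json and compose the gallery item name that
# Add-AzureRMGalleryItem expects: <publisher>.<name>.<version>
$manifest = Get-Content .\manifest.json -Raw | ConvertFrom-Json
$itemName = "{0}.{1}.{2}" -f $manifest.publisher, $manifest.name, $manifest.version
$itemName   # Citrix_Systems.XenDesktop_empty_site.1.0.0
```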

And there we go, we can see the category and the template resource we just created.


Setting up SAML authentication for NetScaler and StoreFront with SSO

After dabbling with ways to get SSO working between SAML and Citrix, I had pretty much been banging my head against the wall trying out a bunch of different solutions. Then out of nowhere this came along!


Yay! Finally a solution to what I have been trying to do for some time! So let's deep-dive into it. I'm also going to use my own deployment as an example to show how this actually works. Note that this only works for Receiver for Web, not for a native Receiver setup. In this example, which is just for demonstration, I used a NetScaler SAML iDP set up in another site, configured with an AD trust in the backend to keep it simple. Now, when a user tries to log on to the NetScaler Gateway vServer, they are redirected to the SAML iDP based on the SAML authentication policy. The iDP vServer has a policy which triggers an AD auth policy and allows for LDAP authentication against the remote Active Directory.


After auth is successful, the SAML assertion is returned to the NetScaler Gateway, which takes the token, applies the session policy, and does SSO to StoreFront. StoreFront is configured only with NetScaler Gateway pass-through and will see the SAML assertion as a form of smart card. Thanks to the User Credential Service, StoreFront is able to map the SAML identity assertion and convert it into a network virtual smart card logon against Active Directory.
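To make the "assertion as a form of smart card" idea concrete: the field StoreFront ultimately needs out of the assertion is the user identity (the NameID, normally the UPN). A toy illustration in PowerShell, using a heavily trimmed, made-up assertion rather than real NetScaler output:

```powershell
# Made-up, heavily trimmed SAML assertion for illustration only
[xml]$assertion = @"
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>user1@test.local</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"@

# The UPN below is what the User Credential Service maps to a
# virtual smart card logon against Active Directory
$assertion.Assertion.Subject.NameID
```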

So this solution is highly dependent on an Active Directory Certificate Services deployment internally, and using NetScaler as the SAML iDP also requires a lot of certificates to set up.

So let's just go through a quick setup of this. (Note: I have a wildcard certificate which handles the signing for the SP and for the AAA vServer.)

1: NetScaler SAML iDP policy on the iDP (samlidp.test.local). Note that nsgw2.test.local is my NetScaler Gateway vServer, which acts as the SAML SP.

Then I have a policy expression which says that if the request URL contains (saml), it should trigger the samlIDP policy, which has the action SAMLIDP.
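For those who prefer the CLI, that policy can be sketched along these lines. This is an approximation only; the policy/action names are from this lab, and the exact command set and syntax depend on your firmware version:

```
add authentication samlIdPPolicy samlidp_pol -rule "HTTP.REQ.URL.CONTAINS(\"saml\")" -action SAMLIDP
bind authentication vserver samlidp_vsrv -policy samlidp_pol -priority 100
```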


I also have an LDAP policy bound to this SAML iDP vServer with the expression ns_true, because I want the SAML iDP to do LDAP-based auth in the backend and then return a SAML assertion to the SAML SP (which is the NetScaler Gateway).

The SAML Policy on the Gateway vServer


Then we just need a regular session policy which does SSO to web applications, with the StoreFront Receiver for Web URL added as well.
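A minimal sketch of what that session profile/policy pair might look like from the CLI. The hostnames and store path here are examples, not values from this lab, so adjust them to your own environment:

```
add vpn sessionAction sf_sso_prof -transparentInterception OFF -SSO ON -ssoCredential PRIMARY -wihome "https://storefront.test.local/Citrix/StoreWeb"
add vpn sessionPolicy sf_sso_pol ns_true sf_sso_prof
bind vpn vserver nsgw2.test.local -policy sf_sso_pol -priority 100
```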

In my case I installed both of these components on my StoreFront server.


The other MSI components need to be installed on the VDA (requires 7.8!). You also need to configure the domain controllers to be ready for smart card authentication (which is described here –> )

Configure password validation to be done on the NetScaler Gateway, and define the store for pass-through via NetScaler Gateway.


After installing this on the StoreFront server, you need to get the policy ADMX file from the installation and place it in your central store repository. It can be found here –>


Then after setup, open Group Policy Management and change the default domain policy (it is only needed for the StoreFront and VDA agents, which need to see the UCS DNS server).


Configure the User Credential Service and Virtual Smartcards settings. Note that under User Credential Service you need to specify the DNS name of the server that runs the UCS service! In my case it is the StoreFront server.

After that you need to run a gpupdate before you continue the process. Next you need to run the Citrix User Credential Service configuration.


I have already done these steps (hence the green color), but what it does is basically publish some certificate templates to AD CS (the PKI environment). On the third task you need to go to AD CS and click Issue certificate (since the UCS service will ask for a certificate from the CA).

After this is done all three parts will have a green light. If we go into roles we can adjust which StoreFront servers, users, and VDAs are allowed to use the service.


So the traffic flow will look like this.

1: User goes to the NSGW

2: NSGW SAML Policy redirects logon to SAMLiDP

3: User enters username & password, SAMLiDP has AD auth

4: On successful login, the SAML iDP redirects the SAML assertion to the SAML SP, which is the NSGW

5: NSGW forwards the SAML assertion to StoreFront

6: StoreFront takes the SAML assertion and communicates with the UCS service

7: UCS service generates a user cert from the CA

8: Storefront presents the cert to the VDA agent

9: Good to go!

However there are some gotchas you need to be aware of as of now.

BUG0606299 — When logging in to a Windows Server (2012 R2) VDA, the “Smart card service” must be running on the client machine.

BUG0608266 — Non-Windows Receiver clients cannot log into a Windows Server (2012 R2) VDA.

BUG0606512 — StoreFront requires that users have an explicit userPrincipalName name attribute set in Active Directory to use the User Credential Service.

BUG0612372 — UCS needs to create inbound firewall rule to allow connections during installation.

BUG0610430 — UCS logon occasionally gets stuck at the Welcome screen after authentication is completed.
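Regarding BUG0606512, a quick way to spot accounts that would hit it is to query AD for users without a UPN. A sketch using the ActiveDirectory PowerShell module (assumes RSAT / the AD module is available where you run it):

```powershell
# List enabled users that have no userPrincipalName set; these
# accounts will not be able to use the User Credential Service
Import-Module ActiveDirectory
Get-ADUser -Filter { (userPrincipalName -notlike "*") -and (Enabled -eq $true) } |
    Select-Object SamAccountName, DistinguishedName
```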