Monthly Archives: January 2016

Azure Stack and the rest of the story

Now for part two of my Azure Stack and infrastructure story. Microsoft is making a big leap with the Azure Stack release. With this release, the setup moves towards more use of the software-defined solutions included in Windows Server 2016. This includes features like micro-segmentation, load balancing, VXLAN and Storage Spaces Direct (which is a hyperconverged configuration of Storage Spaces).

We also have ARM, which handles provisioning using APIs, and DSC for custom setup of virtual machine instances.

More details on the PaaS features will come during the upcoming weeks, and in the first release only Web Apps will be added.

So Microsoft is now providing features which previously were often delivered by a third party, and unlike Azure Pack this does not require any System Center components and runs natively on Hyper-V.


Now what else is missing in this picture? If we want to run this at a larger scale we need to think about the larger picture in a datacenter; using VXLAN will also require some custom setup.

Also, using Storage Spaces Direct in an Azure Stack fabric will require an RDMA networking infrastructure.

(NOTE: Storage Spaces Direct has a limit in terms of max nodes)

(“Networking hardware: Storage Spaces Direct relies on a network to communicate between hosts. For production deployments, it is required to have an RDMA-capable NIC (or a pair of NIC ports)”) ref

This will also allow use of the latest networking capability, called SET (Switch Embedded Teaming).

So in both cases you need an RDMA-based infrastructure. Remember that! You need to rethink the physical networking. Another piece of the puzzle is backup. Since Azure Stack delivers the management/provisioning layer and some fundamental services, we need backup of our tenants’ data. Storage Spaces Direct delivers resiliency, but not backup. For instance, Arista has some good options in terms of RDMA, and they also support OMI, which will allow for automation.

We need a backup solution which can integrate with Hyper-V and has a REST API, which can then allow us to build custom solutions into Azure Stack.

Also, a monitoring solution needs to be in place; Azure Stack adds a lot of extra complexity in terms of infrastructure and a lot of new services which are crucial, especially in the networking/storage provider space. As of now I’m guessing that System Center will be the first monitoring solution to support Azure Stack monitoring.

Another thing is load balancing: since we now have more web-based services for the different purposes (for instance the portal web, the ARM interface and so on) and not MMC-based consoles like we have in System Center, we need load balancing to deliver high availability.

So in my ideal world, the Azure Stack drawing would look like this.


Running Azure Stack nested on VMware Workstation

Well, since the release of the Azure Stack preview earlier today, I’ve been quite the busy bee… The only problem is that I didn’t have the adequate hardware to play around with it… Or so I thought… I set up a virtual Windows Server 2016 machine on my VMware Workstation.

I added some SATA-based disks, since I know this is the “recommended hardware” as part of the PoC.


Also remember to set it to Hyper-V (Unsupported)


After that I had to change some parameters in a couple of the scripts, since there is a PowerShell script which checks whether the host has enough memory installed. I changed this within Invoke-AzureStackDeploymentPreCheck.ps1.


Now when you run the first AzureDeploy script it will mount the PoC install as a read-only VHD, and since Invoke-AzureStackDeploymentPreCheck.ps1 is stored on that read-only VHD you cannot make any changes to it. So you first need to change the DeployAzureStack script to mount the disk as read/write.
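As a sketch, the change boils down to mounting the PoC VHD with write access instead of read-only; the actual path and variable names in the DeployAzureStack script differ, so treat this as illustrative only:

```powershell
# Illustrative only -- the real path and variable names in the DeployAzureStack
# script are different. The key change is requesting ReadWrite access when the
# PoC VHD is mounted, so that Invoke-AzureStackDeploymentPreCheck.ps1 stored on
# the VHD can be edited.
$vhdPath = 'C:\AzureStack\PoCImage.vhdx'   # hypothetical path
Mount-DiskImage -ImagePath $vhdPath -Access ReadWrite
```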


You should also change PoCFabric.xml, which is located under AzureStackInstaller\PoCFabricInstaller, and adjust the CPU and memory settings, or else you won’t be able to complete the setup.
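For reference, a hedged sketch of what the sizing entries in PoCFabric.xml might look like; the element names below are placeholders and may not match your build, so check your own copy of the file:

```xml
<!-- Hypothetical fragment: element names may differ in your copy of PoCFabric.xml.
     The point is to lower the per-VM processor count and memory so the PoC
     fits on a nested VMware Workstation host. -->
<VMConfig>
  <ProcessorCount>2</ProcessorCount>
  <Memory>4096</Memory>
</VMConfig>
```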


After that, just look at it go!


Windows Azure Stack – What about the infrastructure story?

There is no denying that Microsoft Azure is a success story; it has gone from being the lame Silverlight portal with limited capabilities that it once was to become a global force to be reckoned with in the cloud marketplace.

Later today Microsoft is releasing the first tech preview of Azure Stack, which allows us to bring the power of the Azure platform to our own datacenters. It brings the same consistent UI and feature set of Azure Resource Manager, which allows us to use the same tools and resources we have used in Azure against our own local cloud.

This will of course allow large customers and hosting providers to deliver the Azure platform from their own datacenters. The idea seems pretty good, though. But what actually is Azure Stack? It only delivers half of the promise of a cloud-like infrastructure, so I would place Azure Stack within the category of cloud management platforms, since it gives us the framework and portal experience.

Now when we eventually have this set up and configured, we are given some of the benefits of the cloud:

  • Automation
  • Self-Service
  • A common framework and platform to work with

Now if we look at the picture above, there are some important things we need to think about in terms of fitting within the cloud aspect: the compute fabric, network fabric and storage fabric, which are missing from the Microsoft story. Of course Microsoft is a software company, but they are moving forward with their CPS solution with Dell and moving a bit towards the hardware space, though nowhere close yet.

When I think about Azure I also think about the resources underneath: they are always available, non-siloed and can scale up and down as I need them to. If we think about the way Microsoft has built their own datacenters, there is no SAN architecture at all, just a bunch of single machines with local storage, using software to connect all this storage and compute into a large pool of resources. This is the way it should be, since a SAN architecture just cannot fit into a full cloud solution, and it is also the way it should be for an on-premises solution. If we were to deploy Azure Stack to deliver the benefits of a cloud solution, the infrastructure should reflect that. As of right now Microsoft cannot give a good enough storage/compute solution with Storage Spaces in 2012 R2, since there are limits to the scale, and points of failure which a public cloud does not have.

Now Nutanix is one of the few providers which delivers support for Hyper-V and SMB 3.0, does not have any scale limits, and has the same properties as a public cloud solution. It aggregates all storage on local drives within each node into a pool of storage, with redundancy in all layers, including a REST API which can easily integrate into Azure Stack. I can easily see that as the best way to deliver an on-premises cloud solution, and a killer combination.

Setting up Veeam Managed backup portal in Azure

Veeam now has a new managed backup portal available in the Azure Marketplace, which will make it easier to do on-boarding, monitoring and multi-tenancy.

Integrated with Veeam Cloud Connect for Service Providers and available in the Microsoft Azure Marketplace, Veeam Managed Backup Portal for Service Providers makes it easy to acquire new customers and build new revenue streams through the following capabilities:

  • Simplified customer on-boarding: With a service provider administration portal, creating new customer accounts, provisioning services, and even managing customer billing and invoicing is easier than ever 
  • Streamlined remote monitoring and remote management: Daily monitoring and management of customers’ jobs is made simple and convenient, and can be done securely through a single port over SSL/TLS (no VPN required)
  • Multi-tenant customer portal: Clients remain engaged with a customer portal where they can set up users and locations, easily monitor backup health, review cloud repository consumption and manage monthly billing statements.

As of now this is a tech preview, available from the Azure Marketplace.


It can be deployed using either Resource Manager or classic mode. After the deployment is done, you should do one last configuration, which is to add a custom endpoint to allow managing the setup externally over HTTPS. This can be done under the security group endpoint settings.
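If the appliance was deployed through Resource Manager, a sketch of opening HTTPS with Azure PowerShell could look like this; the resource group, NSG and rule names here are placeholders:

```powershell
# Sketch: open TCP/443 inbound on the network security group of the portal VM.
# 'myResourceGroup' and 'veeam-nsg' are placeholder names.
$nsg = Get-AzureRmNetworkSecurityGroup -Name 'veeam-nsg' -ResourceGroupName 'myResourceGroup'
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name 'Allow-HTTPS' `
    -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 443 -Access Allow
$nsg | Set-AzureRmNetworkSecurityGroup
```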


NOTE: Before managing anything from the portal you need to add a license to the Veeam console; you can get a trial license here –> (Then connect to the virtual machine using RDP)

NOTE: The Cloud Connect setup is already enabled, and the ports are also set up.

After adding the firewall rule (destination port 443, source any) we can configure the portal using the public IP address and port 443. From there we log in with our machine username and password, which were provisioned using the Azure portal.


After logging in to the portal I am greeted with the configuration wizard.


So we can start by creating a new customer


Then we go through the settings like a regular setup and choose a subscription plan.


The next time I log out and log in again, I have a new portal dashboard which gives me a quick overview.


We can also see that there is a new user created with description Veeam portal


Now after we add a cloud gateway on the Azure machine, we can connect to it using an existing Veeam infrastructure.


Then we configure a backup copy job and start doing copies to Azure. The end customer has their own portal (website) that they can access to see their status. They need to log in using companyname\username and their password on the same portal.


This is just a small post on what is to come!

Speaking at NIC 2015

In a couple of weeks I am lucky enough to be presenting two sessions at NIC 2015, here in Norway.
The first session I have is about application virtualization vs application layering, where I will go a bit in-depth on the differences between the two technologies and discuss their different strengths/weaknesses. For instance, I will cover App-V, ThinApp, and layering technologies such as VMware App Volumes, Unidesk and Citrix AppDisks.

The second session is about delivering Office 365 in a terminal server environment, where I will cover things like delivery options for Office, optimizing Skype/Outlook, and bandwidth and IP requirements. I will also cover a bit more about the Citrix Optimization Pack for Skype, which now has much better support for Office 365, and lastly, troubleshooting slow Office, which is a common thing…

Besides that I will be standing in the Nutanix booth, so if you have time come and say hi!

Why is VMware NSX so cool?

Yeah, I’ve been silent for a while but that is because I have been attending a training course on NSX this week, and ohh boy it has been a great learning experience.

The first real reason why I wanted to attend the NSX training course was because of the microsegmentation capabilities, but as I quickly understood this was only a small portion of the feature set that is available in the product.

So why is it so cool? First, let’s look at a traditional network setup using firewalls, routers, switches and so on. We have our different VLANs configured on each of the switches and on the VDS. We have our firewall further up the chain, which is checking all the traffic going back and forth between the different zones; this might just as well be ACLs on the switches. This is a simple drawing of something that is of course much more complex in a datacenter environment.


Let us say that we have an e-commerce website which consists of a two-tier setup: a web tier and a DB tier, placed on different subnets. External traffic goes via the external load balancer, which load balances between the different web servers. When there is a query from the web servers, it goes back across the wire to the firewall and then to the other subnet, the DB tier. In this case that means traffic between two virtual machines on the same host needs to go out on the wire, be processed by the firewall, and then come back to the same host to another virtual machine. This generates a lot of north-south traffic, and it isn’t really scalable for the firewall, which in many cases is a large box with a lot of throughput that has to process everything for each VLAN/subnet.

So how would this look in an NSX environment?

Well, to sum it up, NSX consists of different modules, such as the Distributed Logical Router (which is an in-kernel module). What this does is allow for in-kernel routing between logical networks (running VXLAN); so, for instance, traffic between virtual machines on Logical Network 5000 and 5001 might be processed on the SAME ESXi host, meaning that the traffic does not need to go out to the physical network.


The same goes for the firewall, which is a distributed firewall on each ESXi host; rules are processed before a virtual machine is allowed to send traffic and before it is received by another virtual machine on the same ESXi host. Using VXLAN we can also stretch L2 across different networks without doing much on the physical network (no VLAN configuration required), so in essence we can have a flat L2 network. Distributed load balancing is also a feature which is coming later, but as we can see, more and more features which have earlier required a dedicated appliance can now be processed directly in the VMkernel, which also allows for a pure scale-out architecture.

NOTE: Load balancing is still in tech preview

Does it cost a lot of money? Sure it does! But you have to think about what you are actually saving here in terms of overhead.

  • Routing traffic going north/south handled by the DLR, which might be on the same host.
  • Spanning L2 traffic over an L3 network, without having to define “anything” on the switches in terms of VLANs, for instance when moving a VM using vMotion to another location.
  • Distributed firewall which scales with each ESXi host; all policies are handled by each ESXi host before traffic enters and leaves the host (aka microsegmentation).
  • Load balancing using HAProxy.
  • SSL VPN and L2VPN.
  • Service Composer, which allows us to create security groups and security policies which can be triggered based upon different criteria, moving objects between different groups.
  • A REST API to automate anything!
  • Integration with third-party vendors to get access to features like IDS/IPS, antivirus, DLP, vulnerability scanning and more advanced capabilities.
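As a small illustration of the REST API point above: most NSX-v objects can be read and changed over plain HTTPS calls to the NSX Manager. The host name and credentials below are placeholders, and the exact endpoint path may vary between versions, so treat this as a sketch:

```shell
# Placeholder host and credentials; authenticate as the NSX Manager admin.
# Read the distributed firewall configuration (NSX-v style endpoint):
curl -k -u admin:password \
  https://nsx-manager.example.local/api/4.0/firewall/globalroot-0/config
```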

And there is more to come!
Distributed load balancing within the kernel module is in preview –>

And also moving forward, VMware is expanding the NSX capabilities into different cloud providers as well


Which will in essence allow L2 networking across any cloud provider, managed from the same solution.

Leverage OMS to analyze, predict and protect your Linux workloads

Powered by Microsoft’s Azure Cloud Platform, Operations Management Suite (OMS) provides more predictive & analytical capabilities to keep your data center healthy. Savision’s newest whitepaper is written by Microsoft MVP Janaka Rangama and is entitled: ‘Born in the Cloud: Monitoring Linux Workloads with OMS’. It provides insights on how organizations can combine existing System Center Operations Manager (SCOM) environments with OMS to gain control over modern hybrid clouds.

Savision’s newest whitepaper focuses on monitoring Linux workloads with OMS for analytics, proactive monitoring and resource utilization in your heterogeneous data center environment. In addition, MVP Janaka shows you how you can extend OMS with business service management information to ensure you can directly know how all that detailed log data impacts your business and service levels.

With this whitepaper, you will learn:

  • What Microsoft Operations Management Suite is and how it can simplify data center management.
  • How to leverage OMS Log Analytics to analyze, predict and protect your Linux workloads.
  • How to integrate System Center Operations Manager with OMS for extended monitoring.
  • How to harness the power of Business Service Monitoring of Savision Live Maps Unity using Microsoft OMS.

You can download the whitepaper here:

Azure RemoteApp vs RDS Azure IaaS vs Citrix XenDesktop

This is a question that keeps appearing again and again: if I want an easy way to deliver apps to my customers, what should I choose if they are interested in using Azure? I’ve seen so many fail to grasp what each of these solutions actually delivers, hence this blog post.

So first off, let’s explore what Azure RemoteApp is. This is a feature which allows us to deliver applications using RDP. You use a custom client from Microsoft on top of the regular MSTSC client, which in essence wraps in Azure AD authentication and resources on top.

It comes in four flavours: Basic, Standard, Premium and Premium Plus. One thing to be aware of is that for the Basic and Standard tiers, there is a minimum requirement of 20 users for each app collection. For Premium and Premium Plus, the minimum requirement is 5 users for each app collection.
So if we choose Basic and only have one user, we will still be billed for 20 users; the same goes for Premium, where the minimum is 5 users. Other than that we do not need any other licenses, and the subscription model is simply a price per user per month.

Another thing to think about is that with RemoteApp all users are given 50 GB of personal storage using Microsoft’s own user profile disk. There is another reason for that: Azure RemoteApp consists of dynamic machines, so if we need to update the base image, or Microsoft decides to do maintenance or update the OS, the machines running the RemoteApp service for our customers might be taken down and recreated. This makes it hard to use Azure RemoteApp with services which require static data, such as a database service.

We can of course change this by setting up a hybrid Azure RemoteApp and integrating it with another Azure IaaS setup or an on-premises setup. Another issue is that it can only publish applications, not full desktops, and even though it leverages Microsoft RDP it does so without UDP, just TCP. If you are getting up to about 80-100 ms latency to the Azure datacenter and services, this might affect the experience for the end users. Still, RemoteApp delivers a simple and in most cases cheap application delivery system, and it enables single-image management.

On the other hand we have the use of regular RDS within Azure, so what does this give us?

With regular IaaS we can set this up as a “regular” RDS solution; we can also leverage other Azure features such as ARM, using templates to automatically provision more resources/RDS servers as needed and publish endpoints.
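To make the ARM angle concrete, here is a minimal sketch of the copy construct an RDS session host loop could use in a template; the names, parameters and the truncated properties section are placeholders, not a deployable resource:

```json
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[concat('rdsh-', copyIndex())]",
  "copy": { "name": "rdshLoop", "count": "[parameters('rdshCount')]" },
  "location": "[resourceGroup().location]",
  "properties": {
    "hardwareProfile": { "vmSize": "[parameters('rdshVmSize')]" }
  }
}
```

The `copy` element stamps out `rdshCount` identical session hosts, so scaling out is just a parameter change at deployment time.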


We can also define different server sizes to choose from in the templates. This is in most cases like a VM template feature, even though it extends outside of the IaaS feature in Azure, but it does not help us with patch management and single-image management.

But there are many different sizes and editions we can choose from, which allows us to easily provision resources on demand.

Another upside to using regular RDS is that we can also leverage SQL-based applications, and with the upcoming release of the N-series we can leverage RemoteFX vGPU features, which allow usage of OpenGL and DirectX based applications. With IaaS in Azure we can also shut down resources when we are not using the compute power and avoid paying for it, which can be automated using Azure Automation.

Also, if we are planning on setting up Azure IaaS with RDS, we can leverage OMS for simple log and network analysis. This is free for up to 500 MB and can for instance be used in an IaaS environment to see how much traffic is going back and forth, from which service, and so on. This is now supported on Azure RemoteApp as well.


Using regular IaaS we can also leverage UDP when setting up endpoints for each resource, which allows us to use the RemoteFX features available for RDS.
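In classic (Service Management) mode, adding a UDP endpoint alongside the TCP one can be sketched with the old Azure PowerShell module; the service and VM names below are placeholders:

```powershell
# Sketch using the classic Azure PowerShell module (Service Management mode).
# 'mysvc' and 'rdsh01' are placeholder names.
Get-AzureVM -ServiceName 'mysvc' -Name 'rdsh01' |
    Add-AzureEndpoint -Name 'RDP-UDP' -Protocol udp -LocalPort 3389 -PublicPort 3389 |
    Update-AzureVM
```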


Now since we already have these options why should we even consider Citrix in Azure?

With the release of XenDesktop 7.7, Citrix has introduced a lot of new features, including integration with Azure in terms of provisioning.

Some important details around this.

  • Only supports MCS
  • Only available against Azure Service Management (classic), not ARM resources

Which allows for simple provisioning using Citrix Studio

On the other hand, Citrix has another feature which can be easily integrated with Azure: Workspace Cloud. So instead of using ARM to do the provisioning pieces of Azure, we can use Workspace Cloud Lifecycle Management to do the provisioning.

Citrix has created a finished blueprint which allows for a full deployment of Citrix in Azure.


But that is still just the provisioning part of the deployment. Another cool feature is the different protocols that we can use with Citrix. For instance, we can set up use of ThinWire and Framehawk against Azure; the only issue is that we cannot use them through NetScaler, since the NetScaler in the Azure Marketplace is still on a custom 10.5 build, while Framehawk is supported from NetScaler Gateway 11.0 build 62.10.

But still, the protocols are much more efficient with Citrix, which will allow for a much better user experience against Azure. And with the continuous development happening at Citrix, I am also guessing that support for the GPU N-series using GPU passthrough will allow for HDX 3D Pro support as well.

Ref ThinWire / Framehawk vs RDS

But in the end, both RDS on IaaS and Citrix running on Azure IaaS will create different costs, since this involves other components in Azure:

  • Compute
  • Storage
  • Storage Transactions
  • Bandwidth
  • VPN Gateway (Optional)

So before thinking about setting up Citrix/RDS or RemoteApp, know about the limitations that are in place, and get an overview of the associated costs and your requirements for a solution.

The integrations in place from Citrix’s point of view are still lacking in terms of support for the latest features in Azure, but they are moving forward. Microsoft is also investing a lot of development in Azure RemoteApp, which will soon include a lot of new features, but it is still lacking the features needed for larger businesses.

Storefront 3.1 Technical Preview and configuration options

With the release of StoreFront 3.1, Citrix made a lot of options which were earlier only available in PowerShell or a config file available in the GUI, which makes a lot more sense, since Web Interface has always had a lot of options available in the GUI. Now I was a bit dazzled by the numerous options that are available, so what do they all mean?? Hence this post, which explains what the different options do, and even what error messages might appear because of them.
First off, let’s explore the store options in StoreFront.

Store Options

User Subscription (This defines whether users are allowed to subscribe to applications or whether applications are mandatory)


For instance Self-service store (GUI Changes to this)


Mandatory Store (GUI Changes to this)


Kerberos Delegation (Allows us to use Kerberos Constrained Delegation from StoreFront to the Controllers)


Optimal HDX Routing (Defines whether ICA traffic should be routed through a NetScaler Gateway even if users are going directly to StoreFront.) We can define a gateway and attach it to a farm/controller, so if we have multiple controllers in different geographic regions we can specify multiple gateways and attach each to the correct Delivery Controller.

We can also define Direct Access (which we can enable for each optimal gateway), which defines whether users who authenticate internally, directly against StoreFront, will also have their traffic redirected through the gateway.

We can also define an optimal gateway and attach it to stores which are part of XD 7.7.


Citrix Online Integration (Defines if GoTo applications should appear in the Store)


Advertise Store (Defines whether the store should be available to select from the Citrix Receiver client; if we choose to hide the store, the only way to access it is to set it up manually or by using a provisioning file)


Advanced Settings (Address Resolution Type: defines which type of address the XML service will return to StoreFront; by default it is a DNS-based return, or we can change this to IPv4)

Allow font smoothing: Defines if font smoothing should be enabled in the ICA session

Allow Session Reconnect: Also known as workspace control, which defines whether users can reconnect to existing sessions without restarting applications

Allow special folder redirection: Defines whether Documents & Desktop on the local computer should be used in the redirected session. By default the server profile’s Documents and Desktop folders are used

Time-out: Defines how long before the connection times out.

Enable Desktop Viewer: Defines if the Desktop Viewer should be visible in the connection

Enable Enhanced Enumeration: If we have StoreFront configured with multiple stores, StoreFront will contact these stores sequentially, so if there are a lot of resources this might take some time. With enhanced enumeration, StoreFront will contact these stores in parallel

Maximum Concurrent enumerations: How many concurrent enumeration connections to the Store resources, by default this is 0 which means unlimited

Override ICA client name: Overrides the default ICA client name

Require token consistency: Validates authentication attempts on the NetScaler Gateway and on the StoreFront server; this must be enabled if we want to use SmartAccess. This is typically disabled if we want to disable authentication on the NetScaler and do authentication directly against the StoreFront server


Server Communication attempts: How many times StoreFront should try to communicate with a Controller before it marks it as down (default: 1)

Next we also have the Receiver for Web site configuration in StoreFront.

Receiver Experience (Whether we should use the regular green-bubble theme or the unified experience.) Disabling the classic experience will also give other options, such as configuring appearance.


Authentication methods (Defines what kind of authentications we can use against Storefront)


Website Shortcuts


If you wish to add StoreFront to another web portal, for instance as an iFrame,
you need to enter the URL which is allowed to embed StoreFront in the Website Shortcuts.
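For the embedding side, the portal page would include something along these lines; the URL and dimensions are placeholders, and the portal’s own URL must be listed in the Website Shortcuts for the frame to load:

```html
<!-- Placeholder URL: the portal page embedding Receiver for Web. -->
<iframe src="https://storefront.example.com/Citrix/StoreWeb"
        width="1024" height="768" frameborder="0"></iframe>
```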

Deploy Citrix Receiver (what kind of Receiver should Storefront offer to the authenticated user)


And if we choose install locally we have a number of options



Session settings (How long a session is active before it times out against Storefront)


Workspace Control (What should happen if a client is inactive or logs out. Here we can define that if a user moves from one device to another, the user should reconnect to their existing session.)


Client interface settings (Here we can define certain options, such as whether a desktop should be auto-launched, whether Desktop Viewer should be enabled, whether users are allowed to download the Receiver configuration from within Receiver for Web, and also which panes should be default and shown within Receiver for Web)


Advanced settings


Enable Fiddler tracing: Enables use of Fiddler between Receiver for Web and other StoreFront services. Loopback must also be disabled.

Enable Folder view: If folders should be used in Receiver for web

Enable loopback communication: StoreFront uses the loopback adapter for communication between Receiver for Web and other StoreFront services

Enable protocol handler: Enables use of client detection in Google Chrome

Enable strict transport security: Enables the use of HSTS

ICA file cache expiry: The number of seconds an ICA file should be stored in memory

Icon resolution: Default pixel size of an application

Loopback port when using HTTP: Which port should be used for communication with the loopback adapter for other StoreFront services

Prompt for untrusted shortcuts: Prompt the user for permission to launch app shortcuts from sites that have not been directly set up as trusted.

Resource details:

Strict transport security policy duration: Time policy for HSTS

Now last but not least, there are some new interesting features on the authentication side. First off there is the password expiration option under Password Options.



When a user logs in it will look like this.


Another new option is the password validation feature. In some scenarios we might not have StoreFront in the same domain as the XenApp or XenDesktop services, and we might not always be able to set up Active Directory trusts; instead we need to set up XML service-based authentication, which allows StoreFront to communicate via XML instead of Active Directory and leave the authentication process to the DDCs. This is typically the case if we have multi-tenant environments.


Another option we have when defining gateways in StoreFront is that we can now define whether a gateway should have the role of HDX routing only, authentication only, or both. If we choose HDX routing only, we cannot use this gateway for remote access to the store.


As we see here (it does not show), the reason for that is that if we want a regular ICA proxy setup to work with Receiver for Web and the regular Receiver, we need to configure authentication at the gateway, which means that we must define authentication at the gateway to be able to use it for remote access against the store.


The latest COOL feature which is now part of the StoreFront GUI is the ability to do user-to-farm mapping, which in essence is used to assign a group of users to a selection of sites/farms. So if we have multiple farms we can define a certain group of users which should be mapped to a given farm. This is done in the controller settings.


Then choose map users to controllers


Define AD group


Then define which controllers it should contact to display resources.


And voila! A lot of cool new features in the TP, which I hope make it to GA soon!
There are some bugs in the GUI, but I think we have a full WI replacement!

Multi-tenant setup guide for StoreFront and NetScaler with ICA proxy

This is something I have been working on for quite some time… In fact it has been quite a pain in the ass to set up, but I think I finally managed to solve it properly. If anyone sees any issues or something that I haven’t addressed, please leave me a comment either below the post or on Twitter @msandbu.
Some of the issues with trying to set up NetScaler and StoreFront in a multi-tenant environment are:

  • The amount of authentication policies needed to hit all the specific domains in a multi-tenant environment
  • Theme customization; this is by default set at the vServer level, which means that we need a vServer per customer if we want customization
  • We could solve this with multiple Gateway vServers, but multiple vServers also means that we need many IP addresses, which we might not have
  • Multiple customer domains

Now it is possible to bypass NetScaler authentication and set up the Gateway vServer to act just as an ICA proxy, so authentication happens at StoreFront, but this setup does not work for Receiver, since in a NetScaler Gateway setup the Receiver needs to authenticate against the gateway first.

NOTE: This might not be a supported configuration from Citrix, but it works, and it requires a regular NetScaler to function (not a Gateway VPX).

So from an overview, how does it work?

  • We publish StoreFront as an LB vServer behind the NetScaler (meaning that StoreFront is accessible from the external network)
  • We configure a Gateway vServer, which will handle the ICA traffic
  • We use Responder and Rewrite policies to handle the redirect to the correct URLs
  • We configure optimal gateway routing with direct access on StoreFront (which basically means that all ICA traffic, regardless of beacons, will be redirected through a gateway; this feature is not new, but with the StoreFront 3.1 tech preview it is available in the GUI). We also define that the gateways are used for HDX routing only; all other auth will happen on StoreFront
  • We have one or multiple StoreFront stores, depending on the requirements for the backend setup, for instance if we have multiple isolated Active Directory domains and have defined password verification against the DDCs instead of Active Directory. This might vary from deployment to deployment, but it is important to remember which settings are store-specific and which are Receiver for Web-specific
  • We can have multiple Gateway vServers to handle communication, but customers still need only one URL for the StoreFront setup

So if we look at the screenshot below, this is a test deployment I did. When a user starts Receiver for the first time and tries to configure it, Receiver communicates directly with the Storefront endpoint and configures properly. Depending on what kind of Store the user is accessing, this might be done using DDC validation or using Active Directory. The same goes for Receiver for Web: the user connects, and by typing his customer name he is redirected to the customer website on Storefront. When the user tries to start an application or desktop session, the session will generate an ICA file containing the Optimal Gateway setting, which means that even though the session is in theory labeled as internal because of the setup, it will be routed through the Gateway.


So how to set this up?

  • First, set up a load-balanced vServer containing the Storefront servers, using HTTPS/443
  • Now, I can’t address all possible Storefront configurations with stores and such, so I am going to set up a generic Storefront deployment where we have Storefront in an untrusted domain using XML-based auth against the DDC, one simple store, and two customer URLs
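As a sketch, the load-balanced Storefront vServer could be created from the Netscaler CLI roughly like this (all server names, IP addresses and the certificate name are hypothetical examples, not values from this deployment):

```
# Backend Storefront servers (example IPs)
add server sf01 10.0.0.11
add server sf02 10.0.0.12

# SSL service group containing both Storefront servers
add serviceGroup svcgrp_storefront SSL
bind serviceGroup svcgrp_storefront sf01 443
bind serviceGroup svcgrp_storefront sf02 443

# The SSL LB vServer on the shared VIP, with the wildcard cert bound
add lb vserver lb_storefront SSL 192.168.1.10 443 -persistenceType SOURCEIP
bind lb vserver lb_storefront svcgrp_storefront
bind ssl vserver lb_storefront -certkeyName wildcard_cert
```

Persistence and monitor settings will of course vary with the deployment; this is just the minimal shape of the configuration.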

First of all, the base URL should not contain any customer-specific reference; it should just be an indicator of the service. This is not something the end user will see unless he, for instance, opens the Receiver configuration file.

In my case it is just a generic service URL. (Set up a wildcard cert on the Storefront server, or use a SAN- or SNI-based cert; in my case I have a wildcard cert for the domain.)


Create a Storefront Store with internal access only and leave everything at default for now. Create the Receiver for Web sites needed for the end customers.

NOTE: I did some changes on the different websites to show how this works from the end-user experience

Note: We can alter what we want for each website; portal customization can be done under C:\inetpub\wwwroot\Citrix\(nameofwebsite) or using the Storefront GUI.

Next we define the Gateway that this store is going to use; this can be done by going into the Store settings –> Optimal HDX Routing


Specify HDX routing usage only and add the external FQDN of the Gateway. (And no, the Storefront does not need to be able to communicate with the Gateway, since auth is done completely at Storefront.) After you have added the gateway, click Direct Access and define which controllers should be used with the optimal gateway.


So after this is setup, we need to add rewrite rules and URL transformation for each customer to their website on the Storefront.

Rewrite rules: these are pretty simple; they just replace the URL so the site path is appended at the end


Then I have an expression that looks at the host name and specifies that the URL must be at the root for the policy to trigger
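A sketch of such a rewrite action and policy from the Netscaler CLI (the customer hostname, site path and vServer name here are hypothetical examples, one pair per customer):

```
# Replace the root URL with the customer's Receiver for Web site path
add rewrite action rw_act_customer1 replace HTTP.REQ.URL "\"/Citrix/Customer1Web\""

# Only fire when the request hostname matches the customer and the URL is the root
add rewrite policy rw_pol_customer1 "HTTP.REQ.HOSTNAME.EQ(\"customer1.example.com\") && HTTP.REQ.URL.EQ(\"/\")" rw_act_customer1

# Bind the policy to the Storefront LB vServer
bind lb vserver lb_storefront -policyName rw_pol_customer1 -priority 100 -type REQUEST
```

Each additional customer gets its own action/policy pair with a different hostname expression and site path, bound at a unique priority.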


These policies need to be bound to the Storefront LB vServer. Next, we also need URL transformation policies to handle the HTTP-to-HTTPS redirects.

Simplest is to add a Netscaler URL transformation profile and add the different URLs


When creating the URL Transformation Profile, the simplest way is to use the HTTP.REQ.IS_VALID expression, since this policy is only applied once to a Storefront vServer, before the end users are redirected to the HTTPS version
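A sketch of the transformation profile, action and policy (the hostnames are hypothetical; add one action per customer URL):

```
# Profile holding the URL transformation rules
add transform profile prof_http_to_https

# Action (priority 100) rewriting the HTTP URL to its HTTPS equivalent
add transform action act_customer1 prof_http_to_https 100
set transform action act_customer1 -reqUrlFrom "http://customer1.example.com/*" -reqUrlInto "https://customer1.example.com/*"

# Policy using HTTP.REQ.IS_VALID, as described above
add transform policy pol_http_to_https HTTP.REQ.IS_VALID prof_http_to_https
```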


Set up an HTTP vServer for Storefront on the same VIP as the SSL-based vServer and add the transformation policy. This means that when a user logs in over HTTP, the HTTP vServer will respond and redirect the user to the HTTPS vServer, and the rewrite will add the /Citrix/(website) URL at the end.
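A minimal sketch of that HTTP vServer with the transformation policy bound (the VIP and names are examples; note that an LB vServer normally needs a bound service to be UP, so a dummy HTTP service may be required depending on your setup):

```
# HTTP vServer on the same VIP as the SSL vServer, port 80
add lb vserver lb_storefront_http HTTP 192.168.1.10 80

# Bind the URL transformation policy so HTTP requests get rewritten to HTTPS
bind lb vserver lb_storefront_http -policyName pol_http_to_https -priority 100 -type REQUEST
```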

After this is set up we can verify that Receiver for Web is working



NOTE: (I’ll come back to how to make the URL much prettier… )

Now that we have verified that this works, we need to configure the Gateway which we described earlier. Go into Netscaler Gateway and set up a new vServer with a VIP which responds on the FQDN that we used in Storefront.

Now you need to define an ICA-only vServer, with an SSL certificate and an STA server. No need for session policies. When we log into Storefront and try to start an ICA session, we can see the following:
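From the CLI, an ICA-only Gateway vServer could be sketched like this (the VIP, certificate name and STA URL are hypothetical examples):

```
# Gateway vServer in ICA-only mode (no session/auth policies needed)
add vpn vserver gw_customers SSL 192.168.1.20 443 -icaOnly ON

# Bind the SSL certificate and the Secure Ticket Authority (a DDC)
bind ssl vserver gw_customers -certkeyName wildcard_cert
bind vpn vserver gw_customers -staServer "http://ddc01.example.com"
```

The FQDN resolving to this VIP must match the Gateway FQDN configured under Optimal HDX Routing in Storefront, and the STA server list must match the one Storefront uses.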

InitialProgram=#TS $S1-1

So we can see that the ICA file indicates that we are going through the Netscaler Gateway. Problem solved!

So what are we losing in this setup?

  • All auth happens on Storefront, so if we need to have two-factor that has to be integrated with Storefront directly.
  • We are using the Netscaler Gateway only for routing purposes, which means that full VPN functionality goes away.