Monthly Archives: April 2016

Networking SIG on My CUGC

So as of late I have been involved in setting up and creating a SIG (Special Interest Group) under My CUGC aimed at networking (it can be found here –> . Now you might think, “Err, does that mean Citrix networking?” No! Our goal is to create an open community around all types of networking content:

  • Network virtualization
  • Network Security
  • Cloud networking
  • Datacenter networking

And of course Citrix Networking is going to be a big part of it, since many of the users there are already familiar with and used to working with Citrix networking.

Another objective is to bridge the gap between Citrix and the community, so we will have Citrix employees in direct contact to help us create content, answer questions, do webinars, and so on. For instance, if someone asks a question that we aren’t able to answer, we might get an official answer from Citrix on the forums.

Our hope is that this is going to be an “active” community where we have fresh content published, and a bit exclusive compared to the other places where NetScaler content gets published.

My part is going to be organizing events, content, and so on. If you have an idea for content you want to publish, or things you would like to learn or hear more about, please reach out to me. Also, I will start to publish more NetScaler-related blogs there, so this blog might be a bit quiet on the NetScaler front.

Other than that, I hope you join our community!

Running RDS 2016 TP5 Connection Broker Database in Azure SQL

As part of RDS 2012, Microsoft introduced support for high availability for the Connection Broker role; this, however, required an external SQL database for the connection brokers to share the connection tables between them.
Yesterday Microsoft released a new Technical Preview of Windows Server 2016, which introduces a new capability for the RDS Connection Broker: the ability to use an Azure SQL Database as the database source for the Connection Broker.

To set this up we need to pre-create a new SQL database in Azure, which can easily be located in the Marketplace.


When defining the parameters of the SQL database you can choose the Basic pricing tier, which costs about $4 per month.


Now we just have to wait for the deployment to finish in the Azure portal. After the deployment is done, you can find the SQL database inside the created resource group.


From here, click on “Show database connection strings” and save the ODBC line.
Now, when we want to create a high-availability connection broker, we first need to create a deployment. From within Server Manager, right-click on the Connection Broker.


And click on “Configure High Availability”. From the wizard, choose Shared Database.


NOTE: You need to have the SQL Native Client installed on the Connection Broker server for this feature to work; you can download it from here –> . It needs to be installed before you finish the wizard, or the wizard will not be able to connect.

You might also need to configure firewall rules on the SQL database to allow access from the connection brokers.
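The firewall opening can also be scripted; a minimal sketch with the ARM PowerShell cmdlets of the time, assuming hypothetical resource group, server name, and broker egress IP:

```powershell
# Hedged sketch: allow the connection brokers' outbound IP through the
# Azure SQL server firewall (all names and the IP below are hypothetical).
New-AzureRmSqlServerFirewallRule -ResourceGroupName "RDSRG" `
    -ServerName "rdscbsql" `
    -FirewallRuleName "AllowBrokers" `
    -StartIpAddress "40.112.10.10" -EndIpAddress "40.112.10.10"
```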


After this is done you will be able to complete the wizard



If you are running into issues, pay attention to the Terminal Services Connection Broker event log.


Load balancing features in Azure

Deploying distributed applications and services within Azure can be cumbersome, at least when you are not familiar with the different options/features you have available. Therefore I wanted to use this post to explain the different options we have for load balancing within Azure, both for internal services and external-facing services.

There are three load balancing features available directly in Azure:

  • Azure Load Balancer
  • Application Gateway
  • Traffic Manager

There are also third-party solutions from different vendors available in the Marketplace which have a lot more features, but these in most cases require additional licenses and compute resources. I’ll come back to those later in the blog post.

Now there are some distinct differences between the different load balancing features in Azure and the third-party vendors.

| Service | Azure Load Balancer | Application Gateway | Traffic Manager | Third party (NetScaler, etc.) |
|---|---|---|---|---|
| Layer | Layer 4 | Layer 7 | DNS | Layer 4 to layer 7, DNS, etc. |
| Protocol support | TCP, UDP (any application based upon these protocols) | HTTP, HTTPS | All services which use DNS | Most protocols |
| Endpoints | Azure VMs and Cloud Services instances | Azure VMs and Cloud Services instances | Azure endpoints, on-premises endpoints | Azure VM endpoints, on-premises, internal instances |
| Internal & external support | Internal and external | Internal and external | External only | Internal and external |
| Health monitoring | Probes: TCP or HTTP | Probes: HTTP or HTTPS | Probes: HTTP or HTTPS | Custom: TCP, HTTP, HTTPS, inline HTTP, ping, UDP, etc. |
| Pricing | Free | $0.07 per gateway-hour per instance (Medium size, which is the default) | $0.54 per million queries (cheaper above 1 billion queries) | Depends on the vendor; regular compute instance + license from the vendor |

Depending on our requirements, it is also possible to combine the different services, and in some cases a combination might even be more cost effective. For instance, we can have Traffic Manager do geo-based load balancing between different regions, with a NetScaler HA pair set up on each site to deliver local load balancing capabilities in each region.

Azure Load Balancing

NOTE: Azure Load Balancing requires that we have an availability set in place for the virtual machines we want to load balance. If you have existing virtual machines created using ARM that you want to add to an availability set, you cannot do this yet.

As mentioned, this can be created either for external or internal purposes. For this example I’m going to use an external load balancing service in front of a couple of web servers in an ARM deployment.


We have two IIS web servers, deployed in the same availability set and in the same virtual network. They are placed in different network security groups, which allows us to handle ACLs per vNIC.

To create an Azure Load Balancer, we can just create a new Azure Load Balancer resource from within the UI.

First we need to specify a probe, which is used to check the health of the backend pool. Here we just specify the protocol we want to use for the health test, and the port and path to check against the backend pool.


When setting up the Azure Load Balancer we also have to specify a backend pool, which consists of the virtual machines we want to load balance.


Then we need to create the load balancing rules, which tie all the different settings together. Here we specify the backend pool, probe, and backend port (for reverse-proxy-style setups we can, for instance, load balance services externally on port 8080 while the servers listen on port 80 internally).
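As a sketch, the same probe and rule could also be built with the ARM PowerShell cmdlets; all names here are hypothetical, and $frontendIP/$backendPool are assumed to have been created beforehand:

```powershell
# Hedged sketch: an HTTP probe plus a rule mapping external port 8080
# to backend port 80 (hypothetical names, pre-existing $frontendIP/$backendPool).
$probe = New-AzureRmLoadBalancerProbeConfig -Name "webprobe" -Protocol Http `
    -Port 80 -RequestPath "/" -IntervalInSeconds 15 -ProbeCount 2

$rule = New-AzureRmLoadBalancerRuleConfig -Name "webrule" -Protocol Tcp `
    -FrontendPort 8080 -BackendPort 80 `
    -FrontendIpConfiguration $frontendIP -BackendAddressPool $backendPool `
    -Probe $probe
```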


And we define the session persistency. NOTE: We cannot define the load balancing method here, for instance based upon load or number of connections. This can be done using PowerShell to alter the distribution mode –>
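For reference, a hedged sketch of changing the distribution mode on an ARM load balancer via PowerShell (the load balancer and resource group names are hypothetical):

```powershell
# Hedged sketch: switch the rule's distribution mode to source-IP affinity.
$lb = Get-AzureRmLoadBalancer -Name "weblb" -ResourceGroupName "WebRG"
$lb.LoadBalancingRules[0].LoadDistribution = "SourceIP"   # Default | SourceIP | SourceIPProtocol
Set-AzureRmLoadBalancer -LoadBalancer $lb
```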

SSL offloading is also not supported with Azure Load Balancing, and it is missing important features like content switching and rewrite.

Application Gateway

Application Gateway is a layer 7 HTTP load balancing solution. Important to note, however, is that this feature is built upon IIS/ARR in Windows Server.

As of now it is only available using the latest Azure PowerShell, but moving forward it will become available in the portal: New-AzureApplicationGateway –Name AppGW –Subnets –vNetName vNet01. Or we can use an ARM template from here –>

And we can now see that the AppGW is created

NOTE: The default value for InstanceCount is 2, with a maximum value of 10. The default value for GatewaySize is Medium. You can choose between Small, Medium and Large.


Next we need to do the configuration. This is done using an XML file where we declare all the specifics, like external ports, what kind of protocol, and whether, for instance, cookie-based persistency should be enabled.

The XML file should look like this

<?xml version="1.0" encoding="utf-8"?>
<ApplicationGatewayConfiguration xmlns:i="" xmlns="">

If we want to change the config file we can use the command Set-AzureApplicationGatewayConfig -Name AppGwTest -ConfigFile "D:\config.xml"

You can also create custom probes against each backend –>

Traffic Manager

Traffic Manager is a DNS-based load balancing solution. It is as simple as this: when a client requests a particular resource, Azure will respond with one or another backend resource. Even though it is an Azure feature, it can also be used for on-premises load balancing.

The logic behind it is pretty simple: we create a Traffic Manager resource in Azure, which will be bound to an FQDN. When someone asks for this particular resource, the Traffic Manager object will look at its available endpoints to determine which are healthy and which is the preferred endpoint for this client. The endpoints are just other FQDNs, which could be a resource in an Azure datacenter or an on-premises web service that is available externally.


Traffic Manager can also be used to direct clients to their closest datacenter, using the default performance-based routing method.


So how do we use this externally, since we can’t actually use that URL directly for our external users? The way to do this is to point a CNAME alias at the official FQDN. So for instance, our public name would be a CNAME alias to the Traffic Manager name, which would again resolve to one of our endpoints, which might be another address entirely, depending on the routing method and availability.
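As an illustration with purely hypothetical names, the DNS record would look something like this:

```
; www.example.com is our public name, myapp.trafficmanager.net the
; Traffic Manager FQDN (both names are hypothetical)
www.example.com.    IN  CNAME   myapp.trafficmanager.net.
```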

So now that we have gone through the different load balancing features in Azure, can we combine and mix them to achieve even more uptime and better performance? Yes!

A good example is using Traffic Manager in combination with Azure Load Balancing: think of an e-commerce solution where we have multiple instances deployed across different Azure regions.


Here we can, for instance, have Traffic Manager load balance between regions, pointing the end user to the closest location, and from there have Azure Load Balancing distribute traffic between resources inside each region. This is the combination of DNS + TCP based load balancing.

Now, since we have all these nifty features in Azure, why would we even need third-party load balancing at all?

  • Sure Connect (meaning that the load balancer will queue incoming requests that the backend resources are unable to process)
  • More granular load balancing (for instance load balancing with custom probes and monitors, SIP traffic, database load balancing, group-based load balancing on multiple ports, different load balancing methods)
  • Content switching (allows us to load balance requests based upon, for instance, which URL an end user is going to)
  • Web Application Firewall capabilities (Azure only has network security group ACLs and has no deep insight into what kind of traffic is hitting the external services; most ADCs have their own WAF feature which can be used to mitigate attacks)
  • Rewrite and responder policies (being able to alter HTTP requests inline before they hit the services, for instance when moving from one site to another without needing to change the external FQDN, or changing HTTP headers; we can also use responder policies to respond directly to blocked endpoints)
  • HTTP caching and compression (most ADCs can cache often-accessed data, and can also compress outbound data in order to optimize traffic flow)
  • Web optimization (in most cases these solutions can also optimize the web traffic going outbound to further tune the traffic)
  • GSLB (most ADCs also have their own Global Server Load Balancing feature, which does the same thing as Traffic Manager)

Most of these solutions are available from the Azure Marketplace, and in most cases require their own license as well, but they all run as a virtual machine instance and have the same limitations as one. So how would they fit in?

In most cases it will be a combination of many elements. Since the ADCs run as virtual appliances, they have the same limitations: they need to be placed in their own availability set, and in most cases we cannot use their built-in HA feature because of Azure networking limitations around failover. Therefore we need to use Azure Load Balancing to distribute traffic between them, with the probe checking whether they are online or not. Also, in this scenario we have all web servers and the NetScalers in the same virtual network, but only the NetScalers are bound to the Azure Load Balancer as the backend pool, since the NetScaler handles the internal communication with the web servers.


Now we have a three-way load balancing setup, so an example traffic flow would look like this:

1: The user requests the public URL
2: Traffic Manager responds with a public IP from the Azure region closest to the end user
3: The user initiates a connection to the IP in the closest Azure region
4: The Azure Load Balancing feature which serves the VIP looks at the connection and forwards it to one of the active NetScalers
5: The NetScaler looks at the request and traverses any policies for load balancing, content switching, URL rewrites, or WAF policies
6: The traffic is served from one of the web servers back to the NetScaler and back to the client

So now we have a combination of DNS, TCP, and HTTP/SSL load balancing to make sure that the content is optimized when delivered.

AVINetworks – Architecture

I have previously blogged about Avi Networks –>
There I wrote briefly about how their scale-out architecture differs from their competitors': most ADC vendors have a single virtual appliance which contains all the features plus management. Avi Networks instead has a Cloud Controller, which is the management layer where we do all management, either using the UI or the REST APIs.

The example below shows a VMware architecture. The Avi Controller can either be a stand-alone virtual appliance or be configured as a three-node controller cluster.


The Controller nodes can be set up with a VMware integration with either read or read/write privileges. With read/write privileges the Avi Controller can discover VMs, datacenters, networks, and hosts. This also allows for automatic deployment of the Service Engines.

When we configure a virtual service, for instance a load-balanced service for our web servers, Avi will automatically deploy two SEs (Service Engines) based upon an OVA file. After the deployment, one Service Engine will be active and responsible for serving requests on the VIP, while the other is passive.

Now the Service Engines also report server health, client connection statistics, and client-request logs to the Controllers, which share the processing of the logs and analytics information.

The cool thing with Avi is the auto-scaling feature! Let’s say our Service Engines run out of resources, which might be CPU, memory, or packets per second. Avi can then move the virtual service to an unused Service Engine, or scale the service out across multiple Service Engines.

By default the primary Service Engine responds to all ARP requests for the VIP. If the service needs to scale out, the primary SE moves the overflow traffic to a secondary SE at layer 2, where the secondary Service Engine terminates the TCP connection, processes it, and responds back to the end client.

If a primary SE fails, a secondary SE will respond using GARP to take over. Any connections owned by the primary SE will need to be recreated, since they are not in the secondary’s session table.

I can also alter the default behaviour for the SEs in a deployment.


For instance, the default memory for a Service Engine is 2048 MB. SSL connections consume a lot more memory than regular L4 connections, so we might want to increase the memory if we have it available. NOTE: The memory limit can be between 1 GB and 128 GB per Service Engine.

From the service window I can also migrate the service to another Service Engine, or scale out or scale in.


During a migrate or scale-in, the target SE handles all new connections, while any older connections are forwarded to the secondary SE.

We can easily see the benefits of this type of architecture: a distributed management layer and a distributed data plane, unlike the other ADCs, which have effectively moved their existing appliance model to a virtual appliance, where they suffer from the limited resources they have available.

Introduction to KEMP ADC

So I’ve been busy testing out different load balancing (ADC) vendors over the last month (later it will become clear why), but anyway, I want to write a quick introduction to KEMP Technologies, which is one of the vendors I have worked with lately.
For those that aren’t aware of KEMP, they are in the same street as Citrix NetScaler, F5 BIG-IP, and so on. They do ADCs, and that’s their focus and passion.

For someone that has spent a lot of time working with Citrix NetScaler, it took some time getting used to the navigation and how they have built up their product.


Some key aspects of KEMP which I found interesting:

  • Their support for most cloud vendors (Amazon, Azure, and even vCloud Air)
  • Their support for most hypervisors (VMware, Hyper-V, XenServer, Oracle VirtualBox)
  • Bare-metal support (use an x86 server as a fully loaded ADC)
  • Deployment templates (for nearly all products, like Exchange, SQL, Lync, ADFS)
  • KEMP Condor –>
  • Integration with the VMware operations suite

Just the deployment templates alone make this a lot easier. Templates can be found on their website –> and they actually already have one for Exchange 2016, which shows that they are early adopters of new products.


Using templates makes it easier for IT admins to deploy load balancing services for different products/services without hassling with all the different options.

For instance, if I wanted to deploy a virtual server based upon the RDS template, I just had to create a new virtual server.


Setting up services and specifying features was pretty simple using the web management UI. This also included WAF (Web Application Firewall), SSO, GSLB, content switching, and so on.

I will do some more testing on the KEMP VLM, which is the virtual appliance, to see how it performs compared to the alternatives, how it operates, and what I feel it is lacking from an ADC point of view.

But still, they have good support for almost all platforms, and a simple UI combined with finished templates makes it easy to get started.

Auto-provisioning NetScaler Virtual Appliance on Hyper-V

Did you know that Citrix just included support for auto-provisioning of NetScaler on Hyper-V? Well, me neither, until I scrolled through the entire release notes on the Citrix website when they released the latest build. The auto-provisioning feature allows us to preconfigure the initial IP setup on the NetScaler.

So instead of going into the CLI and configuring the NSIP, subnet, and gateway, we can have a preconfigured file which sets this for us at boot, and when that is done we can continue with the rest of the configuration using the NITRO APIs.

NOTE: This is still only supported on Hyper-V as far as I am aware, after speaking with Citrix. So I’m guessing that XenServer and VMware ESX support is coming eventually.

The setup to do this is simple:

1: Create a file called userdata.xml (which contains this information; we can alter the different values here)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Environment xmlns:oe="">
<PropertySection>
<Property oe:key="com.citrix.netscaler.ovf.version" oe:value="1.0"/>
<Property oe:key="com.citrix.netscaler.platform" oe:value="NS1000V"/>
<Property oe:key="com.citrix.netscaler.orch_env" oe:value="cisco-orch-env"/>
<Property oe:key="com.citrix.netscaler.mgmt.ip" oe:value=""/>
<Property oe:key="com.citrix.netscaler.mgmt.netmask" oe:value=""/>
<Property oe:key="com.citrix.netscaler.mgmt.gateway" oe:value=""/>
</PropertySection>
</Environment>

2: Burn this file to an ISO using, for instance, a free tool like ImgBurn, then mount the ISO file under the IDE controller of the NetScaler VPX.
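Once the appliance boots with the preconfigured NSIP, the rest of the setup can be pushed over the NITRO REST API, as mentioned above. A minimal hedged sketch in PowerShell (IP addresses and credentials below are hypothetical):

```powershell
# Hedged sketch: log in to NITRO, then add a SNIP through the same session.
$ns = "http://192.168.1.10"

$login = @{ login = @{ username = "nsroot"; password = "nsroot" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/login" -Method Post `
    -Body $login -ContentType "application/json" -SessionVariable nssession

$snip = @{ nsip = @{ ipaddress = "192.168.1.11"; netmask = "255.255.255.0"; type = "SNIP" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/nsip" -Method Post `
    -Body $snip -ContentType "application/json" -WebSession $nssession
```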


And we are good to go!

NetScaler Use of Rewrite, Responder and URL transformation

When I started working with NetScaler, I was always wondering what the hell the differences were between the Rewrite, Responder, and URL Transformation features, which appear as different options under the NetScaler AppExpert node. After using these features for some time and scrolling through the discussion forums, I notice the same question being asked over and over again.

What are the differences, and when should I use one over the other? The purpose of this post is to try to explain the differences, using different scenarios and some creative Visio drawings.


The responder feature can be used to redirect URL requests to another page, or to respond back with static text, for instance when doing maintenance. As an example, based upon the expression we configure, users from a particular IP segment will automatically be redirected to a particular URL.
NOTE: Responder only looks at HTTP traffic, so it can only be used for those types of services.

The responder feature acts only on incoming requests, so it does not change anything inline to the backend resource. For instance, if the end user goes to the virtual server and it has a responder policy set to redirect to another URL, the NetScaler will reply to the HTTP request with an HTTP 302 status code, and the client will then establish a new request to the new URL.


Another option we have is to serve back static content, for instance if we have maintenance and want to display some content to the end users, served directly from the NetScaler.


We can also use this with, for instance, blocked IP addresses if we have a pattern set, so when blocked IP addresses try to connect to our site they will be shown a static HTML page.


Think of the responder feature as a raw feature: it consumes little CPU and only handles incoming requests. It cannot touch response traffic, but it allows simple redirects to other sites using HTTP 302, and can be used to display static content.
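A hedged sketch of what the two responder variants above could look like in the NetScaler CLI (object names, the subnet, and the URL are all hypothetical):

```
# 302 redirect for clients in a given subnet:
add responder action act_redirect redirect "\"http://maintenance.example.com\""
add responder policy pol_redirect "CLIENT.IP.SRC.IN_SUBNET(10.10.0.0/16)" act_redirect
bind lb vserver vs_web -policyName pol_redirect -priority 100 -gotoPriorityExpression END -type REQUEST

# Static maintenance page served directly from the NetScaler:
add responder action act_maint respondwith "\"HTTP/1.1 200 OK\r\n\r\nSite down for maintenance\""
add responder policy pol_maint "true" act_maint
```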


Rewrite is a much more powerful feature which can be used for a lot of things; besides HTTP it can also be used for SIP and DNS, for instance. Rewrite is an inline feature, which allows it to change more of the content passing through, beyond just looking at the URL a user wants to go to. It is much more complex, and because of that it requires more resources from the NetScaler compared to responder. But it can also dig deeper into the HTTP stack: for instance, it can alter the HTTP headers coming back from the web server. As an example, we can have a rewrite policy bound to a virtual server on the response side to remove some data from the HTTP header.


This, for instance, will remove the server information from the HTTP header coming from the backend IIS web server.
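A hedged sketch of such a response-side rewrite in the NetScaler CLI (the virtual server name is hypothetical):

```
# Strip the Server header from responses before they reach the client.
add rewrite action act_del_server delete_http_header Server
add rewrite policy pol_del_server "HTTP.RES.HEADER(\"Server\").EXISTS" act_del_server
bind lb vserver vs_web -policyName pol_del_server -priority 100 -type RESPONSE
```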

Which used to look like this to an end user:


Will now look like this


It can also be used to change HTTP data coming back from the server. For instance, if we have an HTTP transaction containing the field “password”, we can have it replace that data with something else. So it is much more powerful, since it can handle both requests and responses, and also HTTP headers.


URL Transformation

URL Transformation is a more blunt feature again, where the purpose is simply to do URL transformation on both requests and responses. Think of the following: you have a particular web service which only handles requests to a particular hostname, something like http://webapp2.domain.local. If you want to publish this externally you have a problem, since that is an internal-only DNS hostname. This is where URL Transformation comes in: we can tell the NetScaler to change the incoming URL before it is sent to the backend web server.


This can simply be changed for external access using URL transformation. When a user enters the external URL, which points to a VIP on the NetScaler, the URL transformation policy alters the URL to the internal one before it is sent to the backend server, so the backend server sees the request with its original hostname. It will return the data, but since the .local URL is not available externally, the NetScaler needs to transform the URL again before the response goes back to the end user.
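A hedged sketch of what this could look like in the NetScaler CLI (hostnames and object names are hypothetical):

```
# Rewrite the external hostname to the internal .local hostname on requests,
# and back again on responses.
add transform profile prof_webapp
add transform action act_webapp prof_webapp 100
set transform action act_webapp -reqUrlFrom "http://webapp.example.com/(.*)" -reqUrlInto "http://webapp2.domain.local/$1" -resUrlFrom "http://webapp2.domain.local/(.*)" -resUrlInto "http://webapp.example.com/$1"
add transform policy pol_webapp "HTTP.REQ.IS_VALID" prof_webapp
bind lb vserver vs_web -policyName pol_webapp -priority 100
```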

This is also great for migration scenarios, if we want to redirect users to another site on another server but still use the same external URL.

Client Certificate authentication against XenDesktop using Storefront and NetScaler Gateway

So, this is a question I was asked the other day, and to be honest I wasn’t quite sure that this would work. I know that smart cards and so on work against a XenDesktop environment, but just plain client certificates? Not the same.

The purpose was that some admins wanted a simple way to start Citrix without the need for authentication. I started by setting up a certificate policy and defining the client certificate authentication feature in the SSL profile. This gave me full authentication against NetScaler and through to StoreFront. The issue was when I tried to start an application: I would get SSL errors which I have never seen before, and again, not much information on Google about them. Therefore I needed another approach. Since client certificate authentication worked internally, maybe there was an issue with the NetScaler doing the authentication validation, which seems to break the authentication against the VDA agents.


NOTE: If someone else has this working I would love to know about it!

But anyway, I decided on another approach, where I published StoreFront through the NetScaler using pure SSL_BRIDGE. Since StoreFront was only going to be used as an authentication point anyway, I decided to give it a try.

From there it was just a matter of setting up certificates on StoreFront and on the user device, in this case a user certificate.

First, enable smart card authentication on the StoreFront store.


And then specify this on the Receiver Web Site as well.


NOTE: This solution only works for Receiver for Web, since the Citrix Receiver self-service client cannot authenticate using client certificates.

Specify a NetScaler Gateway, which will be used for remote access only.


Then go back to the store settings and specify the gateway appliance under the optimal gateway routing feature.


So what will happen is that when a user authenticates to StoreFront and clicks an application or desktop, it will trigger an ICA file where NSGW.TEST.LOCAL is defined as the remote proxy for all traffic.

From there we only need to create a NetScaler Gateway virtual server, which only needs:

  • An IP
  • STA servers defined
  • A certificate

It needs no authentication policy, since authentication has already been done via StoreFront. We do not need a session policy either, beyond enabling ICA proxy.

So now when I go to the StoreFront web URL I get presented with this screen.


Then, when I click Log On, I need to select a user certificate which is placed on my end-user device.


In this case I need to have a certificate from the same root CA as the certificate issued to my StoreFront server, and voilà! I’m in.


Security settings – NetScaler Gateway

NOTE: This is content from my eBook, but based upon the number of queries I get, I decided to publish it on my blog to make it easier to search.

Security settings

When setting up a NetScaler Gateway, it will in most cases be open externally for remote access, to deliver Citrix to remote workers. By exposing a service externally, you also open yourself up to attacks. There are many possible attack vectors:

· Brute-force attacks

· DDoS

· Protocol weakness

· Security exploits

Therefore, it is important to think about this when setting up a NetScaler Gateway virtual server. When setting up a SmartAccess server and allowing full VPN access for your endpoints, you need to take extra care when setting up your policies. This section is therefore separated into different groups, which list different settings we can configure to achieve a higher level of security on our virtual server.

General settings:
Under NetScaler Gateway → Global Settings → Change authentication AAA settings, define Max Login Attempts and Failed Login Timeout. This helps to avoid dictionary attacks by locking out authentication attempts after a certain number of tries.

Here we also have the enhanced authentication feedback setting, which helps end users by notifying them what is wrong when they try to log in, but it can also expose critical information to malicious attackers.


This setting can be defined either globally or per virtual server, but if we are using multiple virtual servers, the best approach is to configure it globally so it affects all virtual servers.
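A hedged sketch of these global AAA settings in the NetScaler CLI (the threshold values are just examples):

```
# Lock out after 3 failed attempts for 10 minutes, and disable the
# enhanced feedback so attackers learn less from failed logins.
set aaa parameter -maxLoginAttempts 3 -failedLoginTimeout 10
set aaa parameter -enableEnhancedAuthFeedback NO
```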

Session Policies:

If we are implementing a full VPN solution, we can also specify multiple settings depending on what we want. The best practice is to not give full access, but to use split tunneling and specify intranet applications for those applications the end users need access to. This way, only traffic destined for those applications will be processed by the NetScaler Gateway plug-in.

In most cases, an end user might not require access for a really long period of time and might forget to disconnect the session. In that case we can set up a timeout which decides when a session should be forcefully disconnected. This is done under Session Policies → Network Configuration → Advanced Settings.


It is also useful to have more specific session policies depending on what type of device is trying to connect. For instance, we can have a session policy using OPSWAT expressions to prevent non-healthy endpoints from connecting to our environment.

For instance, a session policy with OPSWAT rules to determine if the endpoint is running an authentic antivirus solution


If the endpoint does not match the requirements, it will not get any access to the Citrix environment. The problem with this is that it happens after authentication has occurred; we can also use preauthentication policies to do health checks before authentication, but then we cannot filter based upon AAA groups and users, for instance.

In addition, we can use these settings in conjunction with SmartAccess to control access to the Citrix environment and which group policies should be processed.

We can also specify idle-timeout values in the session profile, together with split tunneling and session timeout.


Again, an issue is that if an attacker has an end user’s username and password, and even has access to the end user’s device, then the attacker will be able to access the environment. When possible, try to add two-factor authentication to minimize these types of attacks.

That way, even if an attacker has the end user’s username and password, they will not be able to log in to the environment.

In addition, if we are not using split tunneling, we should configure authorization rules, which we can bind to the NetScaler Gateway to define ALLOW/DENY rules for internal resources using client expressions, which are then bound to AAA users or groups.

If this is not possible, define ACL rules based upon the intranet IP range that is defined as part of the NetScaler Gateway.
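A minimal sketch of such an authorization rule from the CLI, using a classic expression. The policy name, group name and subnet are examples only:

```
# Allow HTTPS to one internal subnet, then bind the policy to an AAA group
add authorization policy allow_intranet_web "REQ.IP.DESTIP == 10.10.10.0 -netmask 255.255.255.0 && REQ.TCP.DESTPORT == 443" ALLOW
bind aaa group VPN-Users -policy allow_intranet_web -priority 100
```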

A lot of people focus on the SSL/TLS configuration of the virtual server. While that is important, it should be seen as part of a bigger picture, since it only addresses protocol exploits of SSL/TLS, which might allow a malicious attacker to decrypt the secure connection and then perform a man-in-the-middle attack; theoretically possible, but not easily achieved.

By default, when configuring SSL/TLS settings on NetScaler, we can either use SSL profiles or SSL parameters for each virtual server. If we use profiles, we cannot configure SSL parameters on the virtual server, and vice versa.

NOTE: We also have the option to enable a global default SSL profile, which will be attached to all SSL-based virtual servers. This will use the ns_default_ssl_profile_frontend profile for front-end facing virtual servers. It can be enabled under Traffic Management -> SSL -> Change advanced SSL settings -> Enable default SSL profile. Note that once you enable it, you cannot disable it.

The different SSL profiles can be viewed under System -> Profiles -> SSL Profile. By default there are two profiles: one for front-end connections (for instance virtual servers), and one for back-end connections (services and service groups).

There are three main factors that affect the security of the TLS/SSL protocol:

· Certificate (Private Key size, what does the certificate support?)

· Protocol Use (SSL or TLS?)

· Ciphers (define how strong an algorithm should be used for encryption, and which algorithms should be used for authenticity and authentication). Ciphers are attached to an SSL profile as well.

NOTE: There is a well-known website commonly used for testing the SSL/TLS security level of web services, where the score goes from F to A+, A+ being the best possible score. This can only be achieved on the Gateway virtual server if it uses only the more secure protocols and ciphers, which give a high level of encryption, and if we have a valid certificate. Again, I have to emphasize that this only addresses protocol weaknesses.

For our virtual server to score A+ on the test, there are some modifications that need to be made, either against the SSL profile or using SSL parameters.

· Bind the entire certificate chain to the virtual server, meaning the server certificate, any intermediate certificates and the root certificate

· Deny SSL renegotiation (renegotiation lets a client renegotiate which protocol to use, which attackers might abuse to downgrade a session from TLS 1.2 to an SSL version with lower security). Setting it to FRONTEND_CLIENTSERVER will disallow renegotiation.


· Make sure that SSL3 is disabled (this is disabled by default in the default profiles and should be reflected in the front-end profile)


· Specify a supported cipher group that ensures a high level of encryption; this is added under the SSL profile as well. A cipher group specifies which SSL/TLS protocol should be used and which type of encryption.
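The profile-related settings above can be sketched from the CLI roughly as follows. The profile and certificate key names are examples, and the parameters should be verified against your firmware version:

```
# Harden the front-end SSL profile (or set the same SSL parameters per virtual server)
set ssl profile ns_default_ssl_profile_frontend -ssl3 DISABLED -denySSLReneg FRONTEND_CLIENTSERVER

# Link the server certificate to its intermediate so the whole chain is served
link ssl certKey gw_cert intermediate_cert
```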

Another thing to be aware of is that some options are only available for front-end connections, not for back-end connections. Also, not all ciphers are available on the VPX editions; if you try to create a cipher group with ciphers that are not supported on the VPX, you will get an error message.

· The simplest way is to create a cipher group using CLI:
VPX Example:
add ssl cipher vpx-ciphers
bind ssl cipher vpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-256-CBC-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-128-CBC-SHA

· MPX Example:
add ssl cipher mpx-ciphers
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA
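Once the cipher group exists, it needs to be bound to the Gateway virtual server (the virtual server name here is hypothetical). Removing the DEFAULT group first ensures that only our chosen ciphers are offered:

```
# Replace the built-in DEFAULT cipher group with our custom group on the Gateway vserver
unbind ssl vserver gw_vserver -cipherName DEFAULT
bind ssl vserver gw_vserver -cipherName mpx-ciphers
```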

· Implement HSTS and HTTP -> HTTPS redirection

One of the last things we need to configure is HSTS (HTTP Strict Transport Security), a security mechanism that protects websites against protocol downgrade attacks and cookie hijacking. It allows the NetScaler to notify web browsers that they should only interact with its services using HTTPS. Google first implemented this feature in Chrome, but other browsers such as Firefox and Internet Explorer now support it as well. Configuring it involves multiple steps.

· Have a valid certificate on the web service (the root, any intermediate certificates and the server certificate)

· Redirect all traffic from HTTP to HTTPS

· Serve an HSTS header on the base domain for HTTPS requests with header
Strict-Transport-Security: max-age=10886400; includeSubDomains; preload

· After this is done we can submit the site to the Google Chrome preload list here ->

To implement the HTTP to HTTPS redirect, the simplest way is to set up a simple load-balancing virtual server on HTTP port 80, using the same IP as the NetScaler Gateway virtual server, and then configure a redirect.

NOTE: If you use the NetScaler Gateway wizard in NetScaler to configure NetScaler Gateway it uses this setup to configure HTTP to HTTPS redirect.

Go into Traffic Management -> Load Balancing -> Virtual Servers. Click Add, give it a descriptive name, enter the same IP address as the NetScaler Gateway virtual server, and use HTTP as the protocol and 80 as the port.


Click OK; when asked to bind a service to the virtual server, click Continue. Click on the Protection pane on the right side, and there, under Redirect URL, enter the FQDN of the NetScaler Gateway virtual server using HTTPS.


After that click OK and we are done.
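The same redirect can be sketched as a single CLI command; the virtual server name, IP and FQDN are placeholders for your own values:

```
# HTTP vserver on the Gateway IP that redirects everything to HTTPS
add lb vserver gw_http_redirect HTTP 192.168.1.50 80 -redirectURL "https://gateway.example.com"
```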
Then we need to implement an HTTP rewrite policy that can insert the HSTS header. Go into AppExpert -> Rewrite -> go into Actions first and click Add.

Give it a name like INSERT_HSTS_HEADER, under Type choose INSERT_HTTP_HEADER, under Header Name enter Strict-Transport-Security, under Expression enter "max-age=157680000" and then click Create.


Then go back to the Rewrite menu. Go into Policies and then click Add. Give it a name, IMPLEMENT_HSTS_HEADER for instance; under Action choose the rewrite action we created, and under Expression use the expression true.


Then click Add. After we are done with this, we need to bind the rewrite policy to the NetScaler Gateway virtual server. Go into NetScaler Gateway -> Virtual Servers -> choose the existing virtual server and click Edit -> Policies, choose Rewrite and choose Response.


Then bind the existing rewrite policy we created and click OK, and we are done with the HSTS configuration.
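The GUI steps above can be sketched from the CLI roughly like this (the Gateway virtual server name is hypothetical, and the priority/goto values are typical defaults rather than requirements):

```
# Rewrite action and policy that insert the HSTS header on responses
add rewrite action INSERT_HSTS_HEADER insert_http_header Strict-Transport-Security "\"max-age=157680000\""
add rewrite policy IMPLEMENT_HSTS_HEADER true INSERT_HSTS_HEADER

# Bind the policy to the Gateway virtual server as a response-time rewrite
bind vpn vserver gw_vserver -policy IMPLEMENT_HSTS_HEADER -priority 100 -gotoPriorityExpression END -type RESPONSE
```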

The simplest way to confirm that the HSTS settings and ciphers are properly set up is either to run a test against an online SSL checker, or to use the developer tools in Internet Explorer. These can be accessed by pressing F12 within Internet Explorer; look at the HTTP headers when connecting to the NetScaler Gateway virtual server.




NOTE: The simplest way to test cipher groups when configuring NetScaler Gateway is to use OpenSSL; more info on this in the blog post here ->
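For example, the NetScaler cipher TLS1-ECDHE-RSA-AES128-SHA corresponds to OpenSSL's ECDHE-RSA-AES128-SHA name, so a quick check against a live Gateway (the hostname below is a placeholder) could look like this; the handshake only succeeds if the virtual server accepts that cipher:

```
openssl s_client -connect gateway.example.com:443 -cipher ECDHE-RSA-AES128-SHA
```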

Splunk and NetScaler together

Someone reached out to me a while back and asked if Splunk and NetScaler work together. To be honest, I hadn't tried that combination yet, so last night we decided to give it a try using our regular Splunk setup.

In order to set up Splunk with NetScaler, we need an IPFIX collector on the Splunk server, and this is possible using this Splunk add-on for IPFIX, which can be found here –>

This allows us to gather data into Splunk using IPFIX. For those not aware, Citrix AppFlow is basically IPFIX, which carries raw binary-encoded data. In order for the IPFIX collector to interpret this data, the IPFIX sender needs to send its templates across to the collector. So when we first set up Splunk and NetScaler, we will notice that the data is not immediately interpreted, because the collector does not yet have the templates available, and records will be listed as

TimeStamp=”2014-07-16T21:00:04″; Template=”264″; Observer=”1″; Address=”″; Port=”2203″; ParseError=”Template not known (yet).”;

We can specify in the AppFlow settings on the NetScaler how often it should send the template definitions across, and we can also specify which fields we want in the AppFlow data that is exported to the collector.
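The template refresh interval can be changed with a single command; note that the value is in seconds, so the one-minute interval mentioned below looks like this:

```
# Send AppFlow templates every 60 seconds instead of the default 600 (10 minutes)
set appflow param -templateRefresh 60
```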


The template refresh interval is set to 10 minutes by default (I've changed it down to 1 minute), but don't worry, you do not have to change the default value, since Splunk will buffer the templates; after this we are good to go. Note that while Citrix Insight, for instance, only collects information related to ICA sessions or web sessions, the IPFIX flow from NetScaler actually delivers a lot more useful information.

First add Splunk as an AppFlow Collector


Configure an AppFlow Action which is bound to the collector


Lastly, define a policy which decides which action to trigger when the NetScaler should generate an IPFIX flow for a session.


We can use the general expression true, which will in essence generate IPFIX traffic for everything: load balancing, AppFirewall, syslog, ICA sessions and so on. If we want to filter what the NetScaler sends to the collector, we can use general HTTP expressions (URL, User-Agent and so on), like we typically use for session policies, to filter on Citrix ICA sessions for instance.

NOTE: After we have created the policy we have to bind it to a gateway virtual server or globally.
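The collector, action, policy and binding steps above can be sketched from the CLI as follows. The IP, port and names are examples; 4739 is the IANA default port for IPFIX, but verify which port the Splunk add-on's data input is actually listening on:

```
# Point AppFlow at the Splunk server's IPFIX collector
add appflow collector splunk_collector -IPAddress 10.0.0.100 -port 4739

# Action referencing the collector, and a catch-all policy using the true expression
add appflow action splunk_action -collectors splunk_collector
add appflow policy splunk_policy true splunk_action

# Bind the policy to the Gateway virtual server (or bind it globally instead)
bind vpn vserver gw_vserver -policy splunk_policy -priority 100 -type REQUEST
```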

After that is done, we have to do the Splunk part. Log in to the Splunk console, go into the Apps menu and choose Install app from file. From there, point to the IPFIX file that can be found in the link I listed earlier.

When that is configured, you should notice that there is an IPFIX data input by going into Settings --> Data Inputs.


If there isn't one there, just click Add New, enter the default settings and give it a name. Now, when you see that AppFlow records are being generated on the NetScaler, which can easily be verified using the command

stat appflow


they should also appear in Splunk. Go back to the main menu and choose the Search & Reporting option.


In the search field we can use the prefix source=”NSIPOFNETSCALER:*” to see the data that has come from the NetScaler.


Notice there is a lot of data here, since I chose the true expression in the AppFlow policy, but I can easily sort between the different fields. So let's say I want to see all users who have accessed Citrix NetScaler Gateway:

source = “*” | stats count by netscalerAaaUsername


What Citrix Receiver versions are connecting?


There are endless possibilities with this module, with the ability to instantly search data, and it can search syslog data as well, for instance to see if changes have been made to the NetScaler.