Monthly Archives: April 2016

Networking SIG on My CUGC

So as of late I have been involved in setting up and creating a SIG (Special Interest Group) under MyCUGC aimed at networking (it can be found here –> https://www.mycugc.org/p/bl/bl/blogid=104). Now you might think “err, does that mean Citrix Networking?” No! Our goal is to create an open community around all types of networking content

  • Network virtualization
  • Network Security
  • Cloud networking
  • Datacenter networking

And of course Citrix Networking is going to be a big part of it, since it is natural that many of the users there are familiar and used to working with Citrix networking.

Now one of the objectives as well is to bridge the gap between Citrix and the community, so we will have Citrix employees in direct contact to help us create content, answer questions, do webinars and so on. For instance, if someone asks a question and we aren’t able to answer it, we might get an official answer from Citrix on the forums.

Now our hope is that this is going to be an “active” community where we have fresh content published, and a bit exclusive compared to the other places where NetScaler content gets published.

My part is going to be organizing events, content and so on. If you have an idea for content, something you want to publish, or things you would like to learn or hear more about, please reach out to me. I will also start to publish more NetScaler related blogs there, so this blog might be a bit quiet on the NetScaler front.

Other than that, I hope you join our community!

Running RDS 2016 TP5 Connection Broker Database in Azure SQL

As part of RDS in Windows Server 2012, Microsoft introduced support for high availability for the connection broker role; this, however, required an external SQL database for the connection brokers to share the connection tables between them.
Yesterday Microsoft released a new Technical Preview of Windows Server 2016, which introduces a new feature for the RDS Connection Broker: the ability to use an Azure SQL Database as the database source for the Connection Broker.

To set this up we need to precreate a new SQL database in Azure, which can be easily located from the Marketplace

image

When defining the parameters of the SQL database you can choose the Basic pricing tier, which costs about $4 per month.

image

Now we just have to wait for the deployment to finish from the Azure portal. After the deployment is done you can find the SQL database inside the created resource group

image

From here, click on “Show database connection strings” and save the ODBC line.
Now, when we want to create a high availability connection broker, we first need to create a deployment. From within Server Manager, right-click on the connection broker

image

And click on “Configure high availability”. From the wizard, choose Shared Database

image

NOTE: You need to have the SQL Native Client installed on the connection broker server to make this feature work; you can download it from here –> http://go.microsoft.com/fwlink/?LinkID=239648&clcid=0x409. It needs to be installed before you can finish the wizard, or else it will not connect.

You might also need to configure firewall rules on the SQL database to allow access from the connection brokers.

image

After this is done you will be able to complete the wizard

image

image
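If you prefer scripting over the wizard, the same high availability configuration can also be set up with the RemoteDesktop PowerShell module. A minimal sketch, assuming placeholder server names and a placeholder Azure SQL ODBC connection string (replace these with your own values):

Import-Module RemoteDesktop

# ODBC connection string copied from the Azure portal (placeholder values)
$connectionString = "DRIVER=SQL Server Native Client 11.0;SERVER=tcp:mysqlserver.database.windows.net,1433;DATABASE=RDCB;UID=rdsadmin@mysqlserver;PWD=<password>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"

# Configure the connection broker for high availability against the shared database
Set-RDConnectionBrokerHighAvailability -ConnectionBroker "rdcb01.test.local" -DatabaseConnectionString $connectionString -ClientAccessName "rds.test.local"

# Verify the resulting configuration
Get-RDConnectionBrokerHighAvailability -ConnectionBroker "rdcb01.test.local"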

If you are getting issues, pay attention to the Terminal Services Connection Broker event log

image

Load balancing features in Azure

Deploying distributed applications and services within Azure can be cumbersome, at least when you are not familiar with the different options/features you have available. Therefore I wanted to use this post to explain the different options we have for load balancing within Azure, both for internal services and external-facing services.

Now there are three different load balancing features available directly in Azure

  • Azure Load Balancer
  • Application Gateway
  • Traffic Manager

There are also third-party solutions from different vendors available in the Marketplace which have a lot more features, but these in most cases require additional licenses and compute resources; I’ll come back to those later in the blog post.

Now there are some distinct differences between the different load balancing features in Azure and the third-party vendors.

| Service | Azure Load Balancer | Application Gateway | Traffic Manager | Third party (NetScaler, etc.) |
|---|---|---|---|---|
| Layer | Layer 4 | Layer 7 | DNS | Layer 4 to Layer 7, DNS, etc. |
| Protocol support | TCP, UDP (any application based on these protocols) | HTTP, HTTPS | Any service that uses DNS | Most protocols |
| Endpoints | Azure VMs and Cloud Services instances | Azure VMs and Cloud Services instances | Azure endpoints, on-premises endpoints | Azure VM endpoints, on-premises, internal instances |
| Internal & external support | Internal and external | Internal and external | External | Internal and external |
| Health monitoring | Probes: TCP or HTTP | Probes: HTTP or HTTPS | Probes: HTTP or HTTPS | Custom: TCP, HTTP, HTTPS, inline HTTP, ping, UDP, etc. |
| Pricing | Free | $0.07 per gateway-hour per instance (Medium, the default size) | $0.54 per million queries (cheaper above 1 billion queries) | Depends on the vendor: regular compute instance + vendor license |

Depending on the requirements we have, it is also possible to do a combination of the different services, and in some cases it might also be more cost effective to do so. For instance, we can have Traffic Manager do geo-based load balancing between different regions, and have a NetScaler HA pair set up in each region to deliver local load balancing capabilities there.

Azure Load Balancing

NOTE: Azure Load Balancer requires that we have an availability set in place for the virtual machines that we want to load balance. If you have existing virtual machines created using ARM that you want to add to an availability set, you cannot do this yet.

As mentioned, this can be created either for external or internal purposes. For this example I’m going to use an external load balancer in front of a couple of web servers in an ARM deployment.

image

We have two IIS web servers, deployed in the same availability set and in the same virtual network. They are placed in different network security groups, which allows us to handle ACLs per vNIC.

To create an Azure Load Balancer, we can just select and create a new Azure Load Balancer resource from within the UI.

First we need to specify a probe, which is used to check the health of the backend pool. Here we just specify the protocol we want to use for the health test, and the port and path to check against the backend pool.

image

Also when setting up the Azure Load balancer we have to specify a backend pool which consists of the virtual machines we want to load balance.

image

Then we need to create the load balancing rules which tie all the different settings together. Here we specify the backend pool, probe and backend port (for reverse proxy scenarios we can for instance load balance services externally on port 8080, while the servers are listening on port 80 internally).

image

And we define the session persistency. NOTE: We cannot define the load balancing technique here, for instance based upon load or number of connections. This can be done using PowerShell to alter the distribution mode –> https://azure.microsoft.com/nb-no/documentation/articles/load-balancer-distribution-mode/
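As a reference, here is a minimal sketch of the same pieces (probe, backend pool, rule and distribution mode) using the AzureRM PowerShell cmdlets; the resource names, resource group and location are placeholders and not taken from the deployment above:

$rg = "rg-web"
$loc = "westeurope"

# Public IP and frontend configuration for the external load balancer
$pip = New-AzureRmPublicIpAddress -Name "lb-pip" -ResourceGroupName $rg -Location $loc -AllocationMethod Static
$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip

# Backend pool and HTTP health probe
$bepool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "web-backend"
$probe = New-AzureRmLoadBalancerProbeConfig -Name "http-probe" -Protocol Http -Port 80 -RequestPath "/" -IntervalInSeconds 15 -ProbeCount 2

# Load balancing rule; -LoadDistribution is the distribution mode (Default, SourceIP or SourceIPProtocol)
$rule = New-AzureRmLoadBalancerRuleConfig -Name "http-rule" -FrontendIpConfiguration $frontend -BackendAddressPool $bepool -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80 -LoadDistribution SourceIP

New-AzureRmLoadBalancer -Name "web-lb" -ResourceGroupName $rg -Location $loc -FrontendIpConfiguration $frontend -BackendAddressPool $bepool -Probe $probe -LoadBalancingRule $rule

The NICs of the web servers then have to be added to the backend pool afterwards.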

SSL offloading is also not supported by the Azure Load Balancer, and it is missing important features like content switching and rewrite.

Application Gateway

Application Gateway is a layer 7 HTTP load balancing solution. Important to note, however, is that this feature is built upon IIS/ARR (Application Request Routing) in Windows Server.

As of now it is only available using the latest Azure PowerShell, but moving forward it will become available in the portal. We can create it using New-AzureApplicationGateway –Name AppGW –Subnets 10.0.0.0/24 –VnetName vNet01, or we can use an ARM template from here –> https://azure.microsoft.com/en-us/services/application-gateway/

And we can now see that the AppGW is created

NOTE: The default value for InstanceCount is 2, with a maximum value of 10. The default value for GatewaySize is Medium. You can choose between Small, Medium and Large.

image

Next we need to do the configuration. This is done using an XML file where we declare all the specifics like external ports, what kind of protocol to use and whether, for instance, cookie-based persistence should be enabled.

The XML file should look like this

<?xml version="1.0" encoding="utf-8"?>
<ApplicationGatewayConfiguration xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
    <FrontendPorts>
        <FrontendPort>
            <Name>FrontendPort1</Name>
            <Port>80</Port>
        </FrontendPort>
    </FrontendPorts>
    <BackendAddressPools>
        <BackendAddressPool>
            <Name>BackendServers1</Name>
            <IPAddresses>
                <IPAddress>10.0.0.5</IPAddress>
                <IPAddress>10.0.0.6</IPAddress>
            </IPAddresses>
        </BackendAddressPool>
    </BackendAddressPools>
    <BackendHttpSettingsList>
        <BackendHttpSettings>
            <Name>BackendSetting1</Name>
            <Port>80</Port>
            <Protocol>Http</Protocol>
            <CookieBasedAffinity>Enabled</CookieBasedAffinity>
        </BackendHttpSettings>
    </BackendHttpSettingsList>
    <HttpListeners>
        <HttpListener>
            <Name>HTTPListener1</Name>
            <FrontendPort>FrontendPort1</FrontendPort>
            <Protocol>Http</Protocol>
        </HttpListener>
    </HttpListeners>
    <HttpLoadBalancingRules>
        <HttpLoadBalancingRule>
            <Name>HttpLBRule1</Name>
            <Type>basic</Type>
            <BackendHttpSettings>BackendSetting1</BackendHttpSettings>
            <Listener>HTTPListener1</Listener>
            <BackendAddressPool>BackendServers1</BackendAddressPool>
        </HttpLoadBalancingRule>
    </HttpLoadBalancingRules>
</ApplicationGatewayConfiguration>

If we want to change the config file we can use the command Set-AzureApplicationGatewayConfig -Name AppGW -ConfigFile "D:\config.xml"
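To sum it up, a rough end-to-end sketch with the classic Azure PowerShell cmdlets could look like this (the gateway and vNet names follow the example above; the config path is just an example):

# Create the Application Gateway in an existing vNet/subnet
New-AzureApplicationGateway -Name AppGW -VnetName vNet01 -Subnets "10.0.0.0/24"

# Upload the XML configuration shown earlier
Set-AzureApplicationGatewayConfig -Name AppGW -ConfigFile "D:\config.xml"

# The gateway must be started before it begins serving traffic
Start-AzureApplicationGateway -Name AppGW

# Check the state, instance count and gateway size
Get-AzureApplicationGateway -Name AppGW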

You can also create custom probes against each backend –> https://azure.microsoft.com/nb-no/documentation/articles/application-gateway-create-probe-classic-ps/

Traffic Manager

Traffic Manager is a DNS-based load balancing solution. It is as simple as this: when a client requests a particular resource, Azure responds with one or another backend resource. Even though it is an Azure feature, it can also be used for on-premises load balancing.

The logic behind it is pretty simple: we create a Traffic Manager resource in Azure, which is bound to an FQDN (resourcename.trafficmanager.net). When someone asks for this particular resource, the Traffic Manager object will look at the available endpoints it has, determine which are healthy and which are not, and decide which is the preferred endpoint for this client. The endpoints are just other FQDNs, which could be resources in an Azure datacenter or an on-premises web service that is available externally.

image

Traffic Manager can be used to direct clients to their closest datacenter as well using the default performance based routing method

image

So how do we use this externally? Since we can’t actually hand out a trafficmanager.net URL to our external users, the way to do this is to make the official FQDN a CNAME alias for the trafficmanager.net name. So for instance www.msandbu.org would be a CNAME alias to iis.trafficmanager.net, which would again resolve to one of our endpoints, which might be another address entirely, depending on the routing method and availability.
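As a sketch, a performance-routed Traffic Manager profile with two Azure endpoints could be created like this with the AzureRM cmdlets (the profile name, resource group and the $pip variables are placeholders):

# Traffic Manager profile using performance-based routing and an HTTP monitor
$profile = New-AzureRmTrafficManagerProfile -Name "iis" -ResourceGroupName "rg-web" -ProfileStatus Enabled -TrafficRoutingMethod Performance -RelativeDnsName "iis" -Ttl 30 -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

# Add one endpoint per region (each pointing to a public IP resource, or an external FQDN for on-premises)
Add-AzureRmTrafficManagerEndpointConfig -EndpointName "westeurope" -TrafficManagerProfile $profile -Type AzureEndpoints -TargetResourceId $pipWestEurope.Id -EndpointStatus Enabled
Add-AzureRmTrafficManagerEndpointConfig -EndpointName "northeurope" -TrafficManagerProfile $profile -Type AzureEndpoints -TargetResourceId $pipNorthEurope.Id -EndpointStatus Enabled

# Commit the endpoint changes to the profile
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile

The official FQDN is then pointed at iis.trafficmanager.net with a CNAME record, as described above.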

So now we have gone through all the different load balancing features in Azure. Can we combine and mix the different load balancing features to achieve even more uptime and better performance? Yes!

A good example is using Traffic Manager in combination with the Azure Load Balancer; think about having an e-commerce solution where we have multiple instances deployed across different Azure regions.

image

Here we can for instance have Traffic Manager load balance between the different regions, which will point the end-user to the closest location, and from there we have the Azure Load Balancer to load balance between resources inside each region. This is a combination of DNS + TCP based load balancing.

Now, since we have all these nifty features in Azure, why would we even need third-party load balancing features at all?

  • Sure Connect (meaning that the load balancer will queue incoming requests that the backend resources are unable to process)
  • More granular load balancing (for instance load balancing with custom probes and monitors, SIP traffic, database load balancing, group-based load balancing on multiple ports, and different load balancing methods)
  • Content switching (allows us to load balance requests based upon, for instance, which URL an end-user is going to)
  • Web Application Firewall capabilities (Azure only has Network Security Group ACLs, but it has no deep insight into what kind of traffic is hitting the external services; most ADCs have their own WAF feature which can be used to mitigate attacks)
  • Rewrite and Responder policies (being able to alter HTTP requests inline before they hit the services, for instance when moving from one site to another without needing to change the external FQDN, or changing HTTP headers; we can also use Responder policies to respond directly to blocked endpoints)
  • HTTP caching and compression (most ADCs can cache often-accessed data and compress outbound data in order to optimize traffic flow)
  • Web optimization (in most cases these solutions can also optimize the web traffic going outbound)
  • GSLB (most ADCs also have their own Global Server Load Balancing feature, which provides the same functionality as Traffic Manager)

Now most of these solutions are available from the Azure Marketplace, and in most cases they require their own license as well, but they all run as a virtual machine instance and have the same limitations as any other VM. So how would they fit in?

In most cases it will be a combination of many elements. Since the ADC runs as a virtual appliance it has the same limitations as any other VM, and it needs to be placed in its own availability set. In most cases we cannot use the built-in HA feature because of the Azure networking limitations for failover; therefore we need to use the Azure Load Balancer to distribute traffic between the appliances, and use the probe to check whether they are online or not. Also, in this scenario we have all web servers and the NetScalers in the same virtual network, but only the NetScalers will be bound to the Azure Load Balancer as the backend pool, since the NetScaler will handle internal communication with the web servers.

image

Now we have a three-way load balancing setup, so an example traffic flow would look like this.

1: The user requests test.msandbu.org
2: Traffic Manager responds with a public IP from the Azure region which is closest to the end-user
3: The user initiates a connection with the IP in the closest Azure region
4: The Azure Load Balancer which serves the VIP looks at the connection and forwards it to one of the active NetScalers
5: The NetScaler looks at the request and traverses any policies for load balancing, content switching, URL rewrites or WAF policies
6: The traffic is served from one of the web servers back to the NetScaler and back to the client.

So now we have a combination of DNS, TCP, and HTTP/SSL load balancing to make sure that the content is optimized when delivered.

AVINetworks – Architecture

I have previously blogged about Avi Networks –> https://msandbu.wordpress.com/2016/02/19/a-closer-look-at-avi-networks/
There I wrote briefly about how their scale-out architecture differs from their competitors, where most ADC vendors have a single virtual appliance which contains all the features + management. Avi Networks consists of a Cloud Controller, which is the management layer where we do management either using the UI or using REST APIs.

The example below shows a VMware architecture. The Avi Controller can either be a stand-alone virtual appliance or be configured in a three-node controller cluster.

image

The Controller nodes can be set up with an integration with VMware with either read or read/write privileges. With read/write privileges the Avi Controller can read and discover VMs, data centers, networks, and hosts. This also allows for automatic deployment of the Service Engines.

When we configure a virtual service, for instance a load balanced service for our web servers, Avi will automatically deploy two SEs (Service Engines) based upon an OVA file. After the deployment there will be one active Service Engine, which will be responsible for serving the requests on the VIP, while the other is passive.

Now the Service Engines also report server health, client connection statistics, and client-request logs to the Controllers, which share the processing of the logs and analytics information.

The cool thing with Avi is the auto-scaling feature! Let’s say our Service Engines run out of resources; this might be CPU, memory or the amount of packets per second. Avi can then for instance move the virtual service to an unused Service Engine or scale out the service across multiple Service Engines.

By default the primary Service Engine responds to all ARP requests to the VIP. If the service needs to scale out, the primary SE will move the overflow traffic to a secondary SE on layer two, where the secondary Service Engine will terminate the TCP connection, process the connection and respond back to the end-client.

Now, if a primary SE fails, a secondary SE will respond using GARP to take over existing connections. Any existing connections handled by the primary SE will need to be recreated in the session table.

I can also alter the default behaviour for the SEs in a deployment.

image

For instance, the default memory for a Service Engine is 2048 MB. SSL connections consume a lot more memory than regular L4 connections, and therefore we might want to increase the memory if we have it available. NOTE: The memory limit can be between 1 GB and 128 GB per Service Engine.

I can also, from the service window, migrate the service to another Service Engine, or scale out or scale in.

image

During a migrate or scale-in, all new connections are handled directly, while any older connections are forwarded to the secondary SE.

Now we can easily see the benefits of this type of architecture, where we have a distributed management layer and a distributed data plane, unlike the other ADCs which have effectively moved their existing appliance model to a virtual appliance, where they suffer from the limited resources they have available.

Introduction to KEMP ADC

So I’ve been busy testing out different load balancing (ADC) vendors over the last month; later it will become clear why I have done so, but anyway I want to write a quick introduction to KEMP Technologies, which is one of the vendors I have worked with lately.
For those that aren’t aware of KEMP, they are in the same space as Citrix NetScaler, F5 BIG-IP and so on. They do ADCs and that’s their focus and passion.

Now, for someone that has spent a lot of time working with Citrix NetScaler, it took some time getting used to the navigation and how they have built up their product.

image

Now some key aspects to KEMP which I found interesting

  • Their support for most cloud vendors (Amazon, Azure and even vCloud Air)
  • Their support for most hypervisors (VMware, Hyper-V, XenServer, Oracle VirtualBox)
  • Bare metal support (use an x86 server as a fully loaded ADC)
  • Deployment templates (for most products like Exchange, SQL, Lync, ADFS)
  • KEMP Condor –> http://mwne.ws/1rvgYJ6
  • Integration with the VMware operations suite

Just the support for deployment templates makes this a lot easier. Templates can be found on their website –> http://kemptechnologies.com/loadMaster-documentation/ and they actually already have one for Exchange 2016, which shows that they are early adopters of new products.

image

Using templates makes it easier for IT admins to deploy load balancing services for different products/services without hassling with all the different options.

For instance if I wanted to deploy a virtual server based upon the RDS template I just had to create a new virtual server

image

Setting up services and specifying features was pretty simple using the web management UI. This also included WAF (Web Application Firewall), SSO, GSLB, Content Switching and so on.

I will do some more testing on the KEMP ADC VLM, which is the virtual appliance, and see how it performs compared to other alternatives, how it operates and what I feel it is lacking from an ADC point of view.

But still, they have good support for almost all platforms, and a simple UI combined with ready-made templates, which makes it easy to get started.

Auto-provisioning NetScaler Virtual Appliance on Hyper-V

Did you know that Citrix just included support for auto-provisioning of NetScaler on Hyper-V? Well, me neither, until I scrolled through the entire release notes on the Citrix website when they released the latest build. The auto-provisioning feature allows us to preconfigure the initial IP setup on the NetScaler.

So instead of going into the CLI and configuring the NSIP, subnet and gateway, we can have a preconfigured file which sets this for us when booting, and when that is done we can continue with the rest of the configuration using the NITRO APIs.

NOTE: This is still only supported for Hyper-V as far as I am aware, after speaking with Citrix. So I’m guessing that XenServer and VMware ESX are coming eventually.

The setup to do this is simple:

1: Create a file called userdata.xml (Which contains this information, we can alter the different values here)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Environment xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
oe:id=""
xmlns="http://schemas.dmtf.org/ovf/environment/1">
<PlatformSection>
<Kind>HYPER-V</Kind>
<Version>2013.1</Version>
<Vendor>CISCO</Vendor>
<Locale>en</Locale>
</PlatformSection>
<PropertySection>
<Property oe:key="com.citrix.netscaler.ovf.version" oe:value="1.0"/>
<Property oe:key="com.citrix.netscaler.platform" oe:value="NS1000V"/>
<Property oe:key="com.citrix.netscaler.orch_env" oe:value="cisco-orch-env"/>
<Property oe:key="com.citrix.netscaler.mgmt.ip" oe:value="10.102.100.122"/>
<Property oe:key="com.citrix.netscaler.mgmt.netmask" oe:value="255.255.255.128"/>
<Property oe:key="com.citrix.netscaler.mgmt.gateway" oe:value="10.102.100.67"/>
</PropertySection>
</Environment>

2: Create an ISO containing this file using, for instance, a free tool like ImgBurn, then mount the ISO file under the IDE controller for the NetScaler VPX

image
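Attaching the ISO can also be scripted with the Hyper-V PowerShell module; a small sketch, assuming the ISO has already been built with ImgBurn or a similar tool and that the VM is called NSVPX (placeholder name and path):

# Attach the userdata ISO to the NetScaler VPX virtual machine
Add-VMDvdDrive -VMName "NSVPX" -Path "C:\ISO\userdata.iso"

# Verify that the DVD drive is present before booting the VPX
Get-VMDvdDrive -VMName "NSVPX"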

And we are good to go!

NetScaler Use of Rewrite, Responder and URL transformation

When I started working with NetScaler I was always wondering what the differences were between the Rewrite, Responder and URL Transformation features, which appear as different options under NetScaler AppExpert. After using these features for some time and scrolling through the discussion forums, I notice the same question being asked over and over again.

What are the differences, and where should I use one over the other? The purpose of this post is to try to explain the differences, using different scenarios and some creative Visio drawings.

Responder

The responder feature can be used to redirect URL requests to another page, or to respond back with static text, for instance when doing maintenance. As an example, based upon the expression we configure, users from a particular IP segment will automatically be redirected to a particular URL.
NOTE: Responder only looks at HTTP traffic, so it can only be used for those types of services.

image

The responder feature acts only on incoming requests, so it does not change anything inline towards the backend resource. For instance, if the end-user goes to the virtual server on 192.168.37.101 and it has a responder policy that is set to redirect to another URL, the NetScaler will reply to the HTTP request with an HTTP 302 status code back to the client, which will then establish a new request to the new URL.

image

Another option we have is to display static content back, for instance if we have maintenance and want to display some content to the end-users, which will be served from the NetScaler.

image

We can also use this with, for instance, blocked IP addresses if we have a pattern set, so when blocked IP addresses try to connect to our site they will be shown a static HTML page.

image

Think of the responder feature as a raw feature: it consumes little CPU and only handles incoming requests. It cannot handle response traffic, but it allows for simple redirects to other sites using HTTP 302 responses, and it can be used to display static content.
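To illustrate, here is a rough CLI sketch of the two scenarios above: a 302 redirect for one IP segment and a static maintenance page for a blocked segment. The virtual server name, subnets and URL are made-up examples:

add responder action redir_maintenance redirect "\"https://maintenance.msandbu.org\""
add responder action resp_maintenance respondwith q{"HTTP/1.1 200 OK\r\n\r\n" + "The site is down for maintenance"}
add responder policy pol_redir_subnet "CLIENT.IP.SRC.IN_SUBNET(192.168.10.0/24)" redir_maintenance
add responder policy pol_block_static "CLIENT.IP.SRC.IN_SUBNET(10.10.10.0/24)" resp_maintenance
bind lb vserver vs_web -policyName pol_redir_subnet -priority 100 -gotoPriorityExpression END -type REQUEST
bind lb vserver vs_web -policyName pol_block_static -priority 110 -gotoPriorityExpression END -type REQUEST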

Rewrite

Rewrite is a much more powerful feature which can be used for a lot of things; besides HTTP it can also be used for SIP and DNS, for instance. Rewrite is an inline feature which allows it to change more of the content that is passing through, beyond just looking at the URL a user wants to go to. It is much more complex, and because of that it will require more resources from the NetScaler compared to responder. But, digging in, it can also go deeper into the HTTP stack, so for instance it can alter the HTTP headers coming back from the web server. As an example, we can have a rewrite policy bound to a virtual server which acts on the response and removes some data from the HTTP header.

image

This for instance will remove the server information from the HTTP header coming from the backend IIS web-server.
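For reference, a minimal CLI sketch of such a response rewrite, assuming a load balancing virtual server named vs_web:

add rewrite action rw_act_remove_server delete_http_header Server
add rewrite policy rw_pol_remove_server "HTTP.RES.HEADER(\"Server\").EXISTS" rw_act_remove_server
bind lb vserver vs_web -policyName rw_pol_remove_server -priority 100 -gotoPriorityExpression END -type RESPONSE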

The response headers used to look like this from an end-user

image

Will now look like this

image

It can also be used to change HTTP data coming back from the server; for instance, if we have an HTTP transaction containing the field password in it, we can have it change the data it responds with to something else. So it is much more powerful, since it can handle requests & responses and also HTTP headers.

image

URL Transformation

URL Transformation is a more blunt feature again, where the purpose is simply to do URL transformation on both requests and responses. Think of the following: you have a particular web service which only handles requests to a particular hostname, something like http://webapp2.domain.local. If you want to publish this externally you have some issues, since that is an internal-only DNS hostname. This is where URL Transformation comes in. We can tell the NetScaler to change the incoming URL before it is sent to the backend web server.

image

This can simply be handled for external access using URL transformation. When a user enters the URL webapp.domain.com, which points to a VIP on the NetScaler, the URL transformation policy will alter the URL to the internal URL before it is sent to the backend server, so when the backend server sees the request it contains the original internal hostname. It will return the data, but since the .local URL is not available externally, the NetScaler needs to change the URL back again before the response goes to the end-user.
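A rough CLI sketch of such a transformation, using the hostnames from the example above and a made-up virtual server name (the policy could also be bound globally):

add transform profile prof_webapp
add transform action act_webapp prof_webapp 100
set transform action act_webapp -reqUrlFrom "http://webapp.domain.com/(.*)" -reqUrlInto "http://webapp2.domain.local/$1" -resUrlFrom "http://webapp2.domain.local/(.*)" -resUrlInto "http://webapp.domain.com/$1"
add transform policy pol_webapp true prof_webapp
bind lb vserver vs_webapp -policyName pol_webapp -priority 100 -type REQUEST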

This can also be used in migration scenarios: if we want to redirect users to another site on another server but still use the same external URL, this is a great feature for that type of purpose.

Client Certificate authentication against XenDesktop using Storefront and NetScaler Gateway

So this is a question that I was asked the other day, and to be honest I wasn’t quite sure that this would work. I know that smart cards and so on work against a XenDesktop environment, but just plain client certificates? Not the same..

The purpose was that some admins wanted to have a simple way to start Citrix without the need for authentication. I started by setting up a certificate policy and defining the client certificate authentication feature in the SSL profile. This gave me full authentication against NetScaler and through to StoreFront. The issue was when I tried to start an application; then I would get SSL errors which I have never seen before, and there is not much information about them on Google. Therefore I needed to try another approach. Since the client certificate authentication worked internally, maybe there was an issue with the NetScaler doing the authentication validation, which seems to break the authentication against the VDA agents.

image

NOTE: If someone else has this working I would love to know about it!

But anyway, I decided on another approach, where I published StoreFront through the NetScaler using pure SSL_BRIDGE. Since StoreFront was only going to be used as an authentication point anyway, I decided to give it a try.

From there it was just a matter of setting up certificates on StoreFront and a user certificate on the user device.

First enable Smart Card authentication on the Storefront Store

image

And then specify this on the Receiver Web Site as well.

image

NOTE: This solution only works for Receiver for Web, since Citrix Receiver self-service cannot authenticate using Client Cert.

Specify a NetScaler Gateway which will be used for Remote Access only

image

Then go back to the store settings and specify the gateway appliance under optimal gateway routing feature.

image

So what will happen is that when a user authenticates to StoreFront and clicks an application or desktop, it will trigger an ICA file where NSGW.TEST.LOCAL is defined as the remote proxy for all traffic.

From there we only need to create a NetScaler Gateway virtual server, which only needs:

  • IP
  • STA servers defined
  • Certificate

It needs no authentication policy, since this has already been done via StoreFront. We do not need a session policy either, beyond defining ICA Proxy as enabled.

So now when I go to Storefront web URL I get presented with this screen

image

Then when I click Log On I need to select a user certificate which is placed on my end-user device

image

In this case I need to have a certificate from the same root CA as the one issued to my StoreFront server, and voila! I’m in.

image

Security settings–NetScaler Gateway

NOTE: This is content from my eBook, but to make it easier to search, and based upon the number of queries I get, I decided to publish it on my blog.

Security settings

When setting up a NetScaler Gateway it will in most cases be open externally for remote access to deliver Citrix to remote workers. By exposing a service externally you also open yourself up to attacks. There are many possible attack vectors:

· Bruteforce attacks

· DDoS

· Protocol weakness

· Security exploits

Therefore, it is important to think about this when setting up a NetScaler Gateway virtual server. When setting up a SmartAccess server and allowing full VPN access for your endpoints, you need to take extra care when setting up your policies. Therefore, this section is separated into different groups which list different settings we can configure to get a higher level of security on our virtual server.

General settings:
Under NetScaler Gateway → Global Settings → Change authentication AAA settings → define Max Login Attempts and then define Failed Login Timeout. This helps to avoid dictionary attacks by locking out authentication attempts after a certain number of attempts.

Here we also have the enhanced authentication feedback button, which helps end users by notifying them what is wrong when they try to login, but it can also expose some critical information to malicious attackers.

image

This setting can either be defined globally or per virtual server, but if we are using multiple virtual servers it is best to configure it globally so it affects all virtual servers.

Session Policies:

If we are implementing a full VPN solution, we can also specify multiple settings depending on what we want. The best practice is to not give full access but to use split tunneling and specify intranet applications for those applications that the end-users need access to. This way only traffic destined for those applications will be processed by the NetScaler Gateway plug-in.

In most cases an end-user might not require access for a really long period of time and might forget to disconnect the session. In that case we can set up a timeout which decides when a session should be forcefully disconnected. This is done under session policies → Network Configuration → Advanced Settings.

image

It is also useful to have more specific session policies depending on what type of endpoint is trying to connect. For instance, we can have a session policy using OPSWAT expressions to prevent non-healthy endpoints from connecting to our environment.

For instance, a session policy with OPSWAT rules to determine if the endpoint is running an authentic antivirus solution

image

If the endpoint does not match the requirements, it will not get any access to the Citrix environment. The problem with this is that it happens after authentication has occurred. We can also use preauthentication policies to do health checks before authentication, but then we cannot filter based upon AAA groups and users, for instance.

In addition, we can use these settings in conjunction with SmartAccess to control access to the Citrix environment and which group policies should be processed.

We can also specify idle time-out values in the session profile, together with split tunneling and session time-out.

image

Again, an issue is that if an attacker has access to an end-user’s username and password, and even has access to the end-user’s device, then the attacker will be able to access the environment. When possible, try to add a two-factor authentication feature to minimize these types of attacks.

That way, even if an attacker has access to the end-user’s username and password, they will not be able to log in to the environment.

In addition, if we are not using split tunneling, we should configure authorization rules, which we can bind to the NetScaler Gateway, to define ALLOW/DENY rules for internal resources using client expressions; these are then bound to AAA users or groups.

If this is not possible, define ACL rules based upon the intranet IP range that is defined as part of the NetScaler Gateway.

Now, a lot of people focus on the SSL/TLS configuration of the virtual server. While that in itself is important, it should be seen as part of the bigger picture, since it only addresses the protocol exploits of SSL/TLS, which might allow a malicious attacker to decrypt the secure connection and then do MiTM; while theoretically possible, that is not easily achieved.

Now, by default when configuring SSL/TLS settings on NetScaler we can either use SSL profiles or use SSL parameters for each virtual server. If we use profiles, we cannot configure SSL parameters, and vice versa.

NOTE: We also have the option to enable a global default SSL profile, which will be attached to all SSL protocol based virtual servers. This will use the ns_default_ssl_profile_frontend policy for front-end facing virtual servers. This can be enabled under Traffic Management → SSL → Change advanced SSL settings → enable default SSL profile, and take note: after you enable it you cannot disable it.

The different SSL profiles can be viewed under System → Profiles → SSL Profile. By default there are two profiles: one for front-end connections (for instance virtual servers), while the other is for backend connections (services, service groups).

Now, there are a few main factors that affect security when using the TLS/SSL protocol:

· Certificate (Private Key size, what does the certificate support?)

· Protocol Use (SSL or TLS?)

· Ciphers (define how strong an algorithm should be used for encryption and which algorithm should be used for authenticity and authentication). Ciphers are attached to an SSL profile as well.

NOTE: There is a website called ssllabs.com, which is commonly used in conjunction with testing SSL/TLS security level on web services, where the score goes from F to A+ where A+ is the best possible score. This can only be achieved on the Gateway virtual server if it only uses the more secure protocols and Ciphers which give a high level of encryption and if we have a valid certificate. Again, I have to emphasize that this only addresses protocol weaknesses.

For our virtual server to score A+ on the ssllabs.com test, there are some modifications that need to be done against the SSL profile or using SSL parameters.

· Bind the entire certificate chain to the virtual server, which means the certificate, any intermediate certificates and root certificates

· Deny SSL renegotiation (this is used by a client to renegotiate which protocol to use, which might be used by attackers to downgrade a session from TLS 1.2 to an SSL version with lower security). Setting it to FRONTEND_CLIENTSERVER will disallow renegotiation.

image

· Make sure that SSL3 is disabled (This is disabled by default in the default profiles and should be reflected in the frontend profile)

image

· Specify a supported cipher group, which ensures a high level of encryption; this is added under the SSL profile as well. A cipher group specifies which SSL/TLS protocols should be used and which type of encryption.

Another thing to be aware of is that some options are available only for front-end connections, not for backend connections. Also, not all ciphers are available for VPX editions; if you try to create a cipher group with ciphers which are not supported on the VPX you will get an error message.

· The simplest way is to create a cipher group using CLI:
VPX Example:
add ssl cipher vpx-ciphers
bind ssl cipher vpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-256-CBC-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-128-CBC-SHA

· MPX Example:
add ssl cipher mpx-ciphers
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA

· Implement HSTS and HTTP -> HTTPS redirection

One of the last things we need to configure is HSTS (HTTP Strict Transport Security), which is a security mechanism that protects websites against protocol downgrade attacks and cookie hijacking. It allows the NetScaler to notify web browsers that they should only interact with its services using HTTPS. This is a feature which Google implemented in Chrome, but other browsers such as Firefox and Internet Explorer now support it as well. In order to configure it there are multiple steps.

· Have a valid certificate on the web service (root, any intermediates and the server certificate)

· Redirect all traffic from HTTP to HTTPS

· Serve an HSTS header on the base domain for HTTPS requests with header
Strict-Transport-Security: max-age=10886400; includeSubDomains; preload

· After this is done we can submit this to the Google Chrome preload list here –> https://hstspreload.appspot.com/

Now, to implement HTTP to HTTPS redirection, the simplest way is to set up a simple load balancing virtual server on HTTP port 80 using the same IP as the NetScaler Gateway virtual server, and then set up a redirect.

NOTE: If you use the NetScaler Gateway wizard in NetScaler to configure NetScaler Gateway it uses this setup to configure HTTP to HTTPS redirect.

Go into Traffic Management → Load Balancing → Virtual Servers. Click Add, give it a descriptive name, enter the same IP address as the NetScaler Gateway virtual server, and use HTTP as the protocol and 80 as the port.

image

Click OK; when asked to bind a service to the virtual server, click Continue. Click on the Protection pane on the right side and there, under Redirect URL, enter the FQDN of the NetScaler Gateway virtual server using HTTPS.

image

After that click OK and we are done.
Then we need to implement an HTTP rewrite policy that can insert the HSTS header. Go into AppExpert → Rewrite → go into Actions first and click Add.

Give it a name like INSERT_HSTS_HEADER, under type choose INSERT_HTTP_HEADER, under header name enter Strict-Transport-Security under expression enter “max-age=157680000” and then click Create.

image

Then go back to the rewrite menu. Go into Policies and then click Add. Give it a name IMPLEMENT_HSTS_HEADER for instance and under Action choose the rewrite action we created, under expression use the expression true

image

Then click Add. After we are done with this we need to add the rewrite policy to the NetScaler Gateway virtual server. Go into NetScaler Gateway → Virtual Servers → choose the existing virtual server, click Edit → Policies, choose Rewrite and choose Response.

image

And then bind the existing rewrite rule we created, and click OK, and then we are done with the HSTS configuration.
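For those who prefer the CLI, the redirect virtual server and the HSTS rewrite described above could be sketched roughly like this (the IP address and the gateway virtual server name are placeholders for your own values):

add lb vserver vs_http_redirect HTTP 10.0.0.10 80 -redirectURL "https://nsgw.test.local"
add rewrite action INSERT_HSTS_HEADER insert_http_header Strict-Transport-Security "\"max-age=157680000\""
add rewrite policy IMPLEMENT_HSTS_HEADER true INSERT_HSTS_HEADER
bind vpn vserver nsgw_vs -policy IMPLEMENT_HSTS_HEADER -priority 100 -type RESPONSE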

The simplest way to confirm that the HSTS settings and ciphers are properly set up is either to run a test on SSLlabs.com or to use the developer tools in Internet Explorer. These can be accessed by pressing F12 within Internet Explorer and looking at the HTTP headers when connecting to the NetScaler Gateway virtual server.

image

From SSLlabs.com

image

NOTE: The simplest way to test cipher groups when configuring NetScaler Gateway is using OpenSSL; more info on that in this blog post –> http://bit.ly/1MPaGY6

Splunk and NetScaler together

Someone reached out to me a while back and asked if Splunk and NetScaler work together. To be honest, I hadn’t tried that combination yet, so last night we decided to give it a try, using our regular Splunk setup.

Now, in order to set up Splunk with NetScaler we need an IPFIX collector set up on the Splunk server, and this is possible using the Splunk add-on for IPFIX which can be found here –> https://splunkbase.splunk.com/app/1801/

This allows us to gather data using IPFIX into Splunk. For those that are not aware, Citrix AppFlow is basically just the IPFIX protocol, which is raw binary-encoded data. In order for the IPFIX collector to be able to interpret this data, the IPFIX sender needs to send the templates across to the collector. So when we first set up Splunk and NetScaler we will notice that data is not immediately interpreted, because the collector does not have the templates available yet, and data will be listed as

TimeStamp=”2014-07-16T21:00:04″; Template=”264″; Observer=”1″; Address=”10.2.41.254″; Port=”2203″; ParseError=”Template not known (yet).”;

We can specify in the AppFlow settings of the NetScaler how often it should send across the template settings, and we can also specify which settings we want in the AppFlow data that is exported to the collector.

image

The template refresh is by default set to 10 minutes (I’ve changed it down to 1 minute), but no worries, you do not have to change the default value, since Splunk will buffer the templates; after this we are good to go. If we are used to using Citrix Insight, for instance, it only gets information which is related to ICA sessions or web sessions, while the IPFIX flow from NetScaler actually delivers a lot more useful information.

First add Splunk as an AppFlow Collector

image

Configure an AppFlow Action which is bound to the collector

image

Lastly define a Policy which defines which action to trigger if the NetScaler should generate IPFIX flow for a session.

image

We can use the general expression true, which will in essence generate IPFIX traffic for everything: load balancing, AppFirewall, syslog, ICA sessions and so on. If we want to filter what the NetScaler sends to the collector, we can use general HTTP expressions like URL and User-Agent, like we typically use for session policies, to filter based upon Citrix ICA sessions for instance.

NOTE: After we have created the policy we have to bind it to a gateway virtual server or globally.
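The same collector, action, policy and binding can be sketched from the CLI like this (the collector IP and gateway virtual server name are placeholders; 4739 is the default IPFIX port):

add appflow collector splunk_collector -IPAddress 10.217.215.200 -port 4739
add appflow action splunk_action -collectors splunk_collector
add appflow policy splunk_policy true splunk_action
bind vpn vserver nsgw_vs -policy splunk_policy -priority 100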

After that is done we have to do the Splunk part. Log into the Splunk console, go into the Apps menu and choose Install app from file. From there, point to the IPFIX add-on file which can be found at the link I listed earlier.

When that is configured you should notice that there is an IPFIX data input, by going into Settings –> Data Inputs.

image

If there isn’t an input listed, just click Add New, enter the default settings and give it a name. Now, when you see that AppFlow records are generated on the NetScaler (which can easily be seen by using the command

Stat Appflow

image

They should also be appearing in Splunk.  Go back to the main menu and choose the Search and reporting option

image

In the search view we can just use the search prefix source=”NSIPOFNETSCALER:*” to see the data that has come from the NetScaler.

image

Notice that there is a lot of data here, since I chose the true expression in the AppFlow settings, but I can easily sort between the different fields. So let’s say I want to get all users which have accessed Citrix NetScaler Gateway:

source = “10.217.215.108*” | stats count by netscalerAaaUsername

image

Citrix Receiver versions connecting?

image

There are endless possibilities with this module, being able to instantly search data, and it can also be used for syslog checks, for instance if we made some changes to the NetScaler.

image