Setting up High-availability NetScaler in Microsoft Azure ARM

December 16, 2016

I’m currently involved in a PoC to set up a redundant NetScaler deployment in Microsoft Azure. It’s been some time since my session on NetScaler and Azure at the NetScaler Masterclass, so I decided to do a post on setting up a high-availability NetScaler pair in Azure and what you need to think about.

In a regular virtual environment, setting up NetScaler HA is pretty straightforward. The most common deployment is an active/passive pair, which synchronizes all settings and files as well as the TCP session and persistence tables. When a failure happens, or a node misses 3 consecutive heartbeats, a failover occurs, leveraging either GARP or, if configured, VMAC (for firewalls and other devices that do not support GARP).

But network features like VMAC and GARP are not supported in Azure; because of the way the Azure network functions (using NVGRE encapsulation), L2 features cannot be exposed directly to the virtual network. One important thing to note about NetScaler in Azure is that it runs in single-IP mode, which means the NSIP, SNIP and VIP share one IP address. Another limitation is that a number of ports (21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000 and 7000) are reserved and not available for use as a VIP, so we need to put some NAT rules in place as well, which we will sort out using Azure load balancing.

Active / Passive deployment of NetScaler

1: Create an availability set to hold the NetScalers. This needs to be done before we provision the NetScalers, since in ARM a resource cannot be moved into an availability set after it has been provisioned.

2: When creating the NetScalers, make sure to define which availability set each appliance should be placed in. Both NetScalers need to be placed in the same availability set.

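Steps 1 and 2 can be sketched with the current Azure CLI (`az`); the resource group, names, location and fault/update domain counts below are placeholder assumptions, not values from the original deployment:

```shell
# Create the availability set first (assumed names and resource group)
az vm availability-set create \
  --resource-group rg-netscaler \
  --name as-netscaler \
  --location westeurope \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

# When provisioning each NetScaler VM (e.g. from the marketplace template),
# reference the same availability set, for example with:
#   --availability-set as-netscaler
```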

3: Create a Load Balancer in Azure

4: Define a front-end IP configuration (the external VIP), which you should give a static public IP address when creating the new load balanced setup.
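Steps 3 and 4 might look roughly like this with the Azure CLI; all resource names here are assumptions:

```shell
# Static public IP for the load balancer front end
az network public-ip create \
  --resource-group rg-netscaler \
  --name pip-nslb \
  --allocation-method Static

# Load balancer with that front-end IP and an (initially empty) backend pool
az network lb create \
  --resource-group rg-netscaler \
  --name lb-netscaler \
  --public-ip-address pip-nslb \
  --frontend-ip-name fe-nslb \
  --backend-pool-name bp-netscaler
```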


5: Configure regular HA on the NetScaler. (The UI is only accessible from the internal network, so if you have a Windows computer available on the same internal network, that is the easiest way to set it up. Otherwise you can SSH to the NetScaler, which is enabled by default when you deploy the appliance from the template in Azure.)

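From the NetScaler CLI (over SSH), the HA pair can be set up with a couple of commands; the NSIPs 10.0.0.4 and 10.0.0.5 are assumed for illustration:

```shell
# On the first node, add the peer's NSIP as HA node 1
add ha node 1 10.0.0.5

# On the second node, point back at the first node's NSIP
add ha node 1 10.0.0.4

# Save the running configuration on both nodes
save ns config
```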

6: Create a VIP on the NetScaler, for instance on port 88. (Note that even though the NetScaler VIP listens on port 88, external requests arrive on port 80 and are NATed to port 88 by the Azure load balancer.)

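A minimal load balanced vServer on port 88 could be created like this on the NetScaler CLI; the service name and backend/VIP addresses are placeholders:

```shell
# Backend web server defined as a service
add service svc_web1 10.0.1.10 HTTP 80

# vServer on the shared single IP, port 88 (NATed from port 80 by the Azure LB)
add lb vserver vs_http HTTP 10.0.0.4 88
bind lb vserver vs_http svc_web1
```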

7: Create a backend pool on the Azure Load balancer (which points to the availability set and then to the virtual instances of the NetScalers)

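With the Azure CLI, the NetScaler NICs are added to the backend pool per ipconfig; NIC and ipconfig names below are assumptions:

```shell
# Add each NetScaler NIC's IP configuration to the load balancer backend pool
az network nic ip-config address-pool add \
  --resource-group rg-netscaler \
  --nic-name nic-netscaler1 \
  --ip-config-name ipconfig1 \
  --lb-name lb-netscaler \
  --address-pool bp-netscaler

# Repeat for the second NetScaler's NIC (e.g. nic-netscaler2)
```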

8: Create a health probe against port 9000 using protocol TCP, an interval of 5 seconds and 2 consecutive failures. The port the probe checks only responds on the active node in the HA pair.

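The same probe, sketched with the Azure CLI (resource names assumed):

```shell
# TCP probe on port 9000; only the active HA node answers here
az network lb probe create \
  --resource-group rg-netscaler \
  --lb-name lb-netscaler \
  --name hp-ns-9000 \
  --protocol tcp \
  --port 9000 \
  --interval 5 \
  --threshold 2
```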

9: Define a load balancing rule that maps together the health probe, ports and backend pool.

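The rule tying it all together (external port 80 NATed to the NetScaler VIP on port 88) might look like this with the Azure CLI, again with assumed names:

```shell
# Map front-end port 80 to backend port 88, using the TCP probe on 9000
az network lb rule create \
  --resource-group rg-netscaler \
  --lb-name lb-netscaler \
  --name rule-http \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 88 \
  --frontend-ip-name fe-nslb \
  --backend-pool-name bp-netscaler \
  --probe-name hp-ns-9000
```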

So now the deployment should look something like this: the Azure load balancer is defined with a front-end IP, which is a public IP address; it is configured with a backend pool, which in this case points to the NetScalers; and it has a health probe pointing to the NetScalers on port 9000, which only responds on the active node in the HA pair. Lastly, we have a load balancing rule that maps external requests on port 80 on the Azure load balancer to the backend VIP on port 88, which in turn has a load balanced vServer on the NetScaler pointing to the backend pool.


Now, last but not least, we need to allow access from the load balancer to the NetScalers in the network security group. A network security group is bound to each of the NetScaler NICs, so we need to allow the load balancer to communicate with the NIC of each NetScaler on that particular port number. We should also remove the public IP addresses attached to the NetScalers so they aren’t publicly reachable, since by default they are open on port 22.

So set up an NSG rule allowing port 88 access to the NetScalers. Since they no longer have any public IP address, they will only be accessible on port 88 externally and internally, and the NetScaler itself is a hardened appliance.

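One way to express that NSG rule with the Azure CLI, using the AzureLoadBalancer service tag as the source; the NSG name and priority are assumptions:

```shell
# Allow the Azure load balancer to reach the NetScaler NIC on port 88
az network nsg rule create \
  --resource-group rg-netscaler \
  --nsg-name nsg-netscaler1 \
  --name allow-lb-88 \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 88

# Repeat for the NSG bound to the second NetScaler's NIC
```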

Now, when the load balancer is up and running, you can see that there is an active TCP session to the primary NetScaler node.

And if we do a failover, the health probe on port 9000 will fail on the old active node, and the Azure load balancer will move traffic across to the other node.

