When I was working on a customer project a couple of weeks back, I was setting up a redundant pair of Citrix ADC appliances in Microsoft Azure which were going to be used to publish customer workloads. The reason for choosing ADC was its security capabilities, such as web application firewall, IP reputation, HTTP DoS protection, rate limiting and such.
We configured a Standard Load Balancer in front of the ADC (NetScaler) appliances to ensure high availability of the appliances themselves.
As I mentioned in my previous blog post, https://msandbu.org/deep-dive-azure-load-balancer/, the Standard SKU is the only way to get an SLA with Azure Load Balancer (there is no SLA on the Basic LB). The ADC appliances also needed to be able to connect to the Internet in order to update the IP reputation database, but for some reason the NetScalers were not able to communicate out to the Internet.
Outbound Internet traffic was not working, while incoming traffic defined in the Load Balancer rules was working fine. That was when I remembered that there is a big difference between Standard and Basic Load Balancers.
If you want outbound connectivity when working with Standard SKUs, you must explicitly define it, either with Standard Public IP addresses or with a Standard public Load Balancer. This includes creating outbound connectivity when using an internal Standard Load Balancer.
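For reference, the instance-level option is simply a Standard SKU public IP attached to the appliance's NIC. A minimal Terraform sketch, assuming hypothetical resource names (the resource group and subnet references are placeholders):

resource "azurerm_public_ip" "instance" {
  name                = "adc-instance-pip"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  allocation_method   = "Static" # Standard SKU requires static allocation
  sku                 = "Standard"
}

resource "azurerm_network_interface" "adc" {
  name                = "adc-nic"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "Dynamic"
    # Attaching the Standard public IP gives this instance explicit outbound connectivity
    public_ip_address_id          = "${azurerm_public_ip.instance.id}"
  }
}

In our case, though, we wanted the outbound traffic to go through the load balancer instead of individual instance-level public IPs.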
When we configure a Standard LB and bind the NetScalers to it as part of the backend pool, they will use the LB public IP as the default IP for outbound connections. By default, however, outbound connections are blocked, so we need to define an outbound rule to allow the traffic.
What we ended up doing was to define two front-end IP configurations (meaning two public IP addresses) so that we could differentiate between inbound traffic and outbound traffic, which also makes it easier to monitor metrics in Microsoft Azure.
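As a rough sketch of how that could look in Terraform (the resource and front-end names here are hypothetical; the front-end named PublicIPAddress is the one the outbound rule further down binds to):

resource "azurerm_public_ip" "inbound" {
  name                = "adc-inbound-pip"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_public_ip" "outbound" {
  name                = "adc-outbound-pip"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_lb" "test" {
  name                = "adc-lb"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  sku                 = "Standard"

  # Front-end IP used by the inbound load-balancing rules
  frontend_ip_configuration {
    name                 = "PublicIPAddress-Inbound"
    public_ip_address_id = "${azurerm_public_ip.inbound.id}"
  }

  # Front-end IP dedicated to outbound connections
  frontend_ip_configuration {
    name                 = "PublicIPAddress"
    public_ip_address_id = "${azurerm_public_ip.outbound.id}"
  }
}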
Using Terraform we can also easily define outbound rules for a load balancer.
resource "azurerm_lb_outbound_rule" "test" { resource_group_name = "${azurerm_resource_group.test.name}" loadbalancer_id = "${azurerm_lb.test.id}" name = "OutboundRule" protocol = "Tcp"
(Can be Tcp, Udp or Any)
backend_address_pool_id = "${azurerm_lb_backend_address_pool.test.id}" frontend_ip_configuration { name = "PublicIPAddress" } }
That way we can also more easily define any whitelisting rules that we might have to implement later for outbound connections. Now you might think that having two different front-end IP configurations could create asymmetrical routing for incoming traffic, but the Load Balancer manages that as part of the ARP tables in the LB MUX to ensure that traffic returns on the same path it came in on.
You should also take note that for outbound traffic flows, Azure allocates about 1,024 SNAT ports per IP configuration, so if you get TCP SYN timeouts for outbound connections, make sure that 1) the TCP idle timeout value is defined low enough, and 2) you do not have too many concurrent outbound connections.
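The outbound rule is also where those knobs live. As a sketch, this is the same rule as above extended with allocated_outbound_ports and idle_timeout_in_minutes (the values here are just examples, not recommendations):

resource "azurerm_lb_outbound_rule" "test" {
  resource_group_name      = "${azurerm_resource_group.test.name}"
  loadbalancer_id          = "${azurerm_lb.test.id}"
  name                     = "OutboundRule"
  protocol                 = "All"
  backend_address_pool_id  = "${azurerm_lb_backend_address_pool.test.id}"
  allocated_outbound_ports = 4096 # must be a multiple of 8, default is 1,024
  idle_timeout_in_minutes  = 4    # a lower idle timeout releases SNAT ports sooner

  frontend_ip_configuration {
    name = "PublicIPAddress"
  }
}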