AzureStack – Breakdown of the load balancing component

Being quite the networking geek, I decided to break down the load balancing component that comes as part of AzureStack, which is actually the same load balancing component that is available in Azure as well.

Now the load balancing component is part of the Windows Server 2016 release and is controlled by the Network Controller. From an AzureStack perspective we have a Network Resource Provider which “translates” all operations from ARM to the Network Controller, so when a tenant or user goes into the Portal and configures a load balancer setup, the request goes via the Network Resource Provider to the Northbound API on the Network Controller, which configures the load balancer.
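To make the ARM side of that flow a bit more concrete, here is a minimal sketch of what a load balancer deployment against the AzureStack ARM endpoint could look like. The management endpoint, subscription ID, api-version, token and resource body below are all placeholder assumptions, not values taken from a real stamp:

```python
import requests

# Hypothetical values - replace with the ones from your own AzureStack environment.
ARM_ENDPOINT = "https://management.local.azurestack.external"
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "demo-rg"
API_VERSION = "2015-06-15"  # assumed api-version, check what your stamp supports
TOKEN = "<bearer token from Azure AD / ADFS>"

# A minimal load balancer definition: one frontend VIP and one backend pool.
lb_definition = {
    "location": "local",
    "properties": {
        "frontendIPConfigurations": [
            {"name": "vip", "properties": {"publicIPAddress": {"id": "<public ip resource id>"}}}
        ],
        "backendAddressPools": [{"name": "web-pool"}],
    },
}

url = (
    f"{ARM_ENDPOINT}/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Network/loadBalancers/demo-lb?api-version={API_VERSION}"
)

# ARM accepts the resource, and the Network Resource Provider then translates it
# into calls against the Network Controller's Northbound API.
response = requests.put(url, json=lb_definition, headers={"Authorization": f"Bearer {TOKEN}"})
print(response.status_code, response.json())
```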

The Network Controller consists of multiple services which are responsible for handling the different network functions (NFVs) on AzureStack. For load balancing it has its own Software Load Balancer Manager, which stores the LB configuration in an Open vSwitch database (OVSDB).
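The Northbound API on the Network Controller is REST based, so the Network Resource Provider’s side of the translation could conceptually look something like the sketch below. The /networking/v1/loadBalancers path and the certificate-based authentication are assumptions borrowed from the Windows Server 2016 Network Controller REST API, and the resource body is only a placeholder:

```python
import requests

# Hypothetical Network Controller Northbound endpoint.
NC_ENDPOINT = "https://nc.azurestack.local"

# A stripped-down load balancer resource; frontend VIPs and backend pools would
# be described under "properties", but the exact schema is not the point here.
slb_resource = {
    "resourceId": "demo-lb",
    "properties": {},
}

# The Network Resource Provider would authenticate against the Northbound API
# (typically with a client certificate); placeholder paths are used here.
response = requests.put(
    f"{NC_ENDPOINT}/networking/v1/loadBalancers/demo-lb",
    json=slb_resource,
    cert=("nc-client.pem", "nc-client.key"),
    verify=False,  # lab-only shortcut; validate the NC certificate chain in practice
)
print(response.status_code)
```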

When setting up AzureStack, the SLB Host Agent service is automatically deployed on the Hyper-V host; it is responsible for NATing incoming requests to the correct virtual machine.

So if we look at a regular request for a load balanced VIP (IP 80.80.80.80), it will first hit the Edge Router, which looks at its routing table. The MUX VM (MAS-SLB01) advertises all load balanced VIPs to the closest router using BGP as /32 routes. With multiple hosts, several MUX VMs can be stacked using ECMP, so that you have a highly available path to reach the load balanced VIP. When the request comes to MAS-SLB01 (the MUX VM), it checks the load balancing policies it received from the Network Controller, finds the VM the traffic is destined for (10.0.0.4), and the traffic is then handed off from the MUX VM to the SLB Host Agent service on the host running that VM.
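Because each MUX advertises the same /32 VIP route, the router typically picks which MUX gets a given flow by hashing the flow’s 5-tuple, which keeps all packets of the same connection on the same MUX. The snippet below is just a rough illustration of that idea; the hash function and the MUX addresses are made up, and real routers use their own vendor-specific ECMP hash:

```python
import hashlib

# Hypothetical next hops: two MUX VMs advertising 80.80.80.80/32 over BGP.
MUX_NEXT_HOPS = ["10.0.10.11", "10.0.10.12"]

def pick_mux(src_ip: str, src_port: int, dst_ip: str, dst_port: int, proto: str = "tcp") -> str:
    """Pick a MUX for a flow by hashing its 5-tuple (ECMP-style, illustrative only)."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return MUX_NEXT_HOPS[digest % len(MUX_NEXT_HOPS)]

# All packets of the same connection land on the same MUX:
print(pick_mux("198.51.100.7", 51000, "80.80.80.80", 80))
print(pick_mux("198.51.100.7", 51000, "80.80.80.80", 80))  # same result as above
```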

[Image: overview picture showing the traffic flow for a single server]

The server generates a response and sends it to the client, using its own IP address as the source. The host now intercepts the outgoing packet in the virtual switch, which remembers that the client (now the destination) made the original request to the VIP. The host rewrites the source of the packet to be the VIP, so that the client never sees the internal private IP range, and then forwards the packet directly to the default gateway of the physical network, which uses its standard routing table to forward the packet on to the client, which eventually receives the response. Now, with a single host as in the Azure Stack POC today, this is pretty easy. But what does it look like when we have multiple hosts and therefore multiple MUX VMs?
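As a rough mental model of that return-path rewrite, here is a small sketch. The packet structure and flow table are of course made up; the real rewrite happens inside the Hyper-V virtual switch based on the NAT state the SLB Host Agent has programmed:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str

VIP = "80.80.80.80"   # the load balanced VIP
DIP = "10.0.0.4"      # the VM's internal (direct) IP

# State remembered from the inbound request: this client talked to the VIP.
inbound_flows = {("198.51.100.7", VIP)}

def rewrite_return_traffic(pkt: Packet) -> Packet:
    """Rewrite the source from the VM's DIP back to the VIP before the packet
    leaves the host towards the default gateway (illustrative only)."""
    if pkt.src_ip == DIP and (pkt.dst_ip, VIP) in inbound_flows:
        return replace(pkt, src_ip=VIP)
    return pkt

response = Packet(src_ip=DIP, dst_ip="198.51.100.7")
print(rewrite_return_traffic(response))  # src_ip is now 80.80.80.80
```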

Since the MUX VMs use ECMP, incoming requests can arrive at either one of the MUX VMs that advertise the BGP route. So in case the request arrives at the MUX VM on the other host, the traffic will be inspected against the policies defined on the Network Controller, which knows on which host the virtual machine the traffic is destined for is located. The incoming traffic will then be forwarded from the MUX VM on Host 2 and encapsulated using VXLAN across to the other host. The SLB Host Agent on Host 1 will then inspect the traffic, remove the VXLAN header and forward the traffic to the virtual machine. The route back goes from the VM –> SLB Host Agent service and then directly to the default gateway of the host.
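To illustrate that MUX-to-host leg: conceptually the MUX wraps the original packet in an outer header addressed to the physical IP of the host running the VM, tagged with a VXLAN network identifier (VNI), and the SLB Host Agent side unwraps it. The dataclasses below are only a sketch of that encapsulation step; the field names and addresses are made up:

```python
from dataclasses import dataclass

@dataclass
class InnerPacket:
    src_ip: str   # the original client
    dst_ip: str   # the load balanced VIP
    payload: bytes

@dataclass
class VxlanPacket:
    outer_src: str   # physical IP of the MUX host (Host 2)
    outer_dst: str   # physical IP of the host running the VM (Host 1)
    vni: int         # VXLAN network identifier of the tenant network
    inner: InnerPacket

def mux_encapsulate(pkt: InnerPacket, target_host_ip: str, vni: int) -> VxlanPacket:
    """The MUX looks up which host owns the destination VM (via Network Controller
    policy) and tunnels the original packet to that host."""
    return VxlanPacket(outer_src="192.168.1.12", outer_dst=target_host_ip, vni=vni, inner=pkt)

def host_agent_decapsulate(frame: VxlanPacket) -> InnerPacket:
    """The SLB Host Agent / vSwitch on the target host strips the VXLAN header
    and delivers the original packet to the VM (10.0.0.4)."""
    return frame.inner

request = InnerPacket(src_ip="198.51.100.7", dst_ip="80.80.80.80", payload=b"GET / HTTP/1.1")
tunneled = mux_encapsulate(request, target_host_ip="192.168.1.11", vni=5001)
print(host_agent_decapsulate(tunneled))
```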

[Image: overview of the traffic flow across multiple hosts with multiple MUX VMs]

So it’s important to note that AzureStack load balancing uses DSR (Direct Server Return), meaning that traffic going back from the servers to the clients is not handled by the MUX VM. So if, for instance, a host goes down and a MUX goes down with it, the router will simply forward traffic to another MUX, relying on the VM being restarted on another host in the back.
