Azure Stack networking overview and use of BGP and VXLAN

Now, after dabbling with Azure Stack for some time since the preview, there has been one thing bugging me, which is the networking flow. Hence I decided to create an overview of the network topology, how things are connected, and how the traffic flows.

It is important to remember that Azure Stack uses many of the networking features in Windows Server 2016, including SLB, BGP, VXLAN and so on.
Most of the management machines in the Azure Stack POC are placed on the vEthernet 1001 connection on the Hyper-V host and are connected to the vSwitch CCI_External.
The management machines are located on the 192.168.100.0/24 scope.
Now with this updated chart, we can see that each tenant has its own /32 BGP route which points to the MuxVM, which acts as a load balancer.
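
If you want to poke at this on the POC host yourself, the vSwitch and the host vNICs can be listed with a couple of Hyper-V cmdlets. This is just a quick sketch; the switch name CCI_External is from my deployment and may differ in yours.

```powershell
# Run on the Azure Stack POC Hyper-V host
# The external vSwitch the management VMs are connected to
Get-VMSwitch -Name "CCI_External" | Format-List Name, SwitchType, NetAdapterInterfaceDescription

# Host (management OS) vNICs, such as vEthernet 1001
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, IPAddresses

# VM vNICs connected to the same switch, with their 192.168.100.x addresses
Get-VMNetworkAdapter -VMName * |
    Where-Object SwitchName -eq "CCI_External" |
    Format-Table VMName, SwitchName, IPAddresses
```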

[Image: network topology chart]

When traffic goes from the client IP, it is encapsulated using VXLAN (which runs over UDP) and sent towards the MuxVM. In my case it goes via 192.168.233.21, which is part of the PAHostvNIC, and is routed to the MuxVM with that VXLAN encapsulation, then forwarded to the BGPVM, out through the NATVM, and out to the world.
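
To see that provider address side of things on the host, you can look at the PAHostvNIC adapter and check that VXLAN traffic is flowing over UDP port 4789 (the standard VXLAN port). A rough sketch, and the adapter name pattern below is just taken from my setup:

```powershell
# Run on the Azure Stack POC Hyper-V host
# The PA host vNIC carries the provider address, 192.168.233.x in my case
$pa = Get-NetAdapter | Where-Object Name -like "*PAhostVNic*"
Get-NetIPAddress -InterfaceIndex $pa.InterfaceIndex -AddressFamily IPv4 |
    Format-Table InterfaceAlias, IPAddress, PrefixLength

# VXLAN encapsulation runs over UDP, port 4789 by default
Get-NetUDPEndpoint | Where-Object LocalPort -eq 4789
```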

[Image: traffic flow diagram]

On the other hand we have the NATVM and the ClientVM, which are placed on the 192.168.200.0/24 scope. The 192.168.200.0/24 network communicates via the BGPVM, which has a two-armed configuration and acts as the gateway between the 192.168.100.0/24 network and the 192.168.200.0/24 network. Now the funny thing is that the NATVM just acts as a gateway for traffic coming in from the external network; it has RRAS installed, and since it is directly connected to both networks it allows access from the outside. The BGPVM also has RRAS installed, but we cannot see that using the RRAS console; we need to look at it in PowerShell. And, as stated, the BGPVM has a BGP peering set up towards the MuxVM. The MuxVM acts as a load balancer and uses BGP to advertise each VIP to the router (the BGPVM) as a /32 route.
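
Since the BGP part of the BGPVM is invisible in the RRAS console, here is roughly how I look at it with the RemoteAccess BGP cmdlets in PowerShell. Just a sketch; peer names and ASNs will obviously depend on the deployment.

```powershell
# Run on the BGPVM
# Confirm RRAS is installed and what it is configured for
Get-RemoteAccess

# The local BGP router configuration (BGP identifier, local ASN)
Get-BgpRouter

# The peering towards the MuxVM
Get-BgpPeer

# Routes learned over BGP - the VIPs show up here as /32 routes
Get-BgpRouteInformation
```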

So for instance, if we open a connection from the ClientVM to Portal.Azurestack.local (which has an IP of 192.168.133.74), the traffic flow will go like this:

ClientVM –> NATVM –> BGPVM –> (BGP ROUTE PEER) –> MuxVM –> PortalVM
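
From the ClientVM you can verify the first part of that path with something like this (a sketch; I am assuming the portal answers on port 443):

```powershell
# Run on the ClientVM
# Resolve the portal name - should return 192.168.133.74
Resolve-DnsName Portal.Azurestack.local

# Check that we can reach the portal VIP on HTTPS
Test-NetConnection -ComputerName Portal.Azurestack.local -Port 443

# Trace the hops - the first ones should be the NATVM and the BGPVM
Test-NetConnection -ComputerName Portal.Azurestack.local -TraceRoute
```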

Now remember that the configuration of BGP, the load balancer and the host is done by the Network Controller.
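
If you want to see what the Network Controller has actually pushed out, the NetworkController PowerShell module can query its REST endpoint for the SLB configuration and the MUX instances. A sketch only; the ConnectionUri below is a placeholder, use whatever the NC REST address is in your deployment.

```powershell
# Run from a machine with the NetworkController RSAT module installed
$uri = "https://ncrest.azurestack.local"   # placeholder for the NC REST endpoint

# The global SLB configuration held by the Network Controller
Get-NetworkControllerLoadBalancerConfiguration -ConnectionUri $uri

# The SLB MUX instances (the MuxVM) the Network Controller knows about
Get-NetworkControllerLoadBalancerMux -ConnectionUri $uri
```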

SLB infrastructure
For a virtual switch to be compatible with SLB, you must use Hyper-V Virtual Switch Manager or Windows PowerShell to create the switch, and then the Azure Virtual Filtering Platform (VFP) extension must be enabled on that virtual switch.
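
In practice that looks roughly like this. The switch and NIC names below are just examples, and the exact display name of the VFP extension can vary between builds, which is why I filter on a wildcard.

```powershell
# Create an external vSwitch bound to a physical NIC (names are examples)
New-VMSwitch -Name "SLBSwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Find the Virtual Filtering Platform (VFP) forwarding extension on that switch
$vfp = Get-VMSwitchExtension -VMSwitchName "SLBSwitch" |
    Where-Object Name -like "*VFP*"

# Enable it so the switch can take part in SLB
Enable-VMSwitchExtension -VMSwitchName "SLBSwitch" -Name $vfp.Name
```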

So, for those that are looking into Windows Server 2016: take a look at the networking stack in 2016, it's bloody HUGE!
