Why load balancing with NSX makes a lot of sense

So I am now spending some quality time at 40,000 feet on my way to Amsterdam, and after writing a bit more about load balancing and design scenarios, it got me thinking (yes, I am quite the philosopher). VMware is these days running a tech preview of the load balancing capabilities of NSX. Now NSX is already quite the powerhouse with

  • Distributed switching
  • Distributed Routing
  • Distributed Firewall
  • Edge services (SSL VPN, S2S VPN and so on)

Now with load balancing capabilities coming, why would that make sense in NSX? In some cases I have been asked to set up a one-armed deployment with an ADC, meaning that the load balancer is placed in one network and requires routing to communicate with the backend resources it is supposed to load balance. The reason for this is that in most cases the customer wants the firewall to inspect the traffic before it leaves the DMZ and enters the internal network.

So this is the scenario (which is quite simplified):

We have a simple eCommerce site running on our two web servers, which are placed on separate ESXi hosts for redundancy. They sit in VLAN 110, which is an internal web server VLAN. On the other hand we have ADC 1 and ADC 2, which are located in VLAN 10, the DMZ VLAN, where they hold the VIP and the source IP used for backend communication.


Let us now also say that we have a one-armed deployment for the ADCs. If an end user were to request something from the eCommerce site, the traffic flow would look like this:

  • End user to Router1 (GW for both VLANs)
  • Router1 to VIP1 (running on ADC1)
  • ADC1 to Router1 (connecting to the web server, which is located on another subnet)
  • Router1 to Web-Server1 (forwards ADC1 packets)
  • Web-Server1 to Router1 (responds with the requested packets)
  • Router1 to ADC1 (returns the packets)
  • ADC1 to Router1 (returns the packets for the end user)
  • Router1 to End user
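Counting the hops makes the hairpin obvious. Here is a toy sketch of the flow above (the hop names are illustrative labels, not NSX objects):

```python
# Toy model of the one-armed flow: each entry is one traversal
# of the physical network between two endpoints.
ONE_ARMED_FLOW = [
    ("EndUser", "Router1"),     # client request hits the gateway
    ("Router1", "ADC1"),        # gateway forwards to the VIP on ADC1
    ("ADC1", "Router1"),        # ADC must route to reach VLAN 110
    ("Router1", "WebServer1"),  # gateway forwards to the backend
    ("WebServer1", "Router1"),  # response heads back via the gateway
    ("Router1", "ADC1"),        # returned to the ADC
    ("ADC1", "Router1"),        # ADC replies towards the client
    ("Router1", "EndUser"),     # response delivered
]

# Router1 sits in every single exchange of the conversation.
router_touches = sum(1 for src, dst in ONE_ARMED_FLOW if "Router1" in (src, dst))
print(len(ONE_ARMED_FLOW), router_touches)  # 8 hops, Router1 touched 8 times
```

Eight network hops for one request, and the router is involved in every one of them.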

Phew! That is a lot of traffic going back and forth. Now this would pretty much be the same even if we moved Web-Server1 to the same host as ADC1. But if we for instance implemented the distributed routing feature in NSX, with the ADC and the (load-balanced) web server on the same host, the traffic would not need to leave the host; it would just be processed in-kernel.


The traffic flow would (from a network perspective) look more like this:

  • End user to Router1
  • Router1 to ADC1
  • In-kernel communication (between ADC1 and Web-Server1)
  • ADC1 to Router1
  • Router1 to End user
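Under the same kind of toy model, only the steps that actually cross the wire count as hops; the ADC-to-web-server exchange stays inside the ESXi host. A sketch (again with illustrative names) of the savings versus the eight-hop one-armed flow:

```python
# Toy model of the distributed-routing flow: in-kernel steps never
# leave the ESXi host, so they do not count as network hops.
DISTRIBUTED_FLOW = [
    ("EndUser", "Router1", "wire"),
    ("Router1", "ADC1", "wire"),
    ("ADC1", "WebServer1", "in-kernel"),   # same host, processed in-kernel
    ("WebServer1", "ADC1", "in-kernel"),   # response, still in-kernel
    ("ADC1", "Router1", "wire"),
    ("Router1", "EndUser", "wire"),
]

wire_hops = [hop for hop in DISTRIBUTED_FLOW if hop[2] == "wire"]
print(len(wire_hops))  # 4 wire hops instead of 8
```

Half the traffic on the physical network, and the router only sees the client-facing side of the conversation.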

Much simpler, and no more unnecessary traffic traversing the network. Firewall and routing are done in-kernel, and the traffic does not need to leave the ESXi host.

So distributed routing helps if the resources are on the same host; if they were not, the hosts would communicate using their ESXi VTEPs to reach the host where the resource is available.

Now the advantage with NSX is that it is virtualization aware, so what if it could leverage data locality to minimize the east-west traffic within an NSX environment? Maybe data-locality-based load balancing?
Now if we are using the distributed load balancing feature, we do not need an ADC present, since this is part of the in-kernel modules, so let's change the example to a Web-tier and a DB-tier.
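If the load balancer lives in the kernel of every host, a locality-aware picker becomes almost trivial: prefer a backend on the local host, and fall back to ordinary round-robin otherwise. A minimal sketch of the idea (the feature is only a tech preview, so this is not the NSX implementation; all names and helpers here are made up):

```python
import itertools

def make_locality_balancer(backends):
    """backends: list of (vm_name, host) tuples for the DB-tier."""
    fallback = itertools.cycle(backends)

    def pick(local_host):
        # Prefer a DB instance on the same ESXi host as the web VM,
        # so the traffic never crosses the wire (east-west stays in-kernel).
        local = [b for b in backends if b[1] == local_host]
        if local:
            return local[0]
        # No local backend: plain round-robin across all hosts.
        return next(fallback)

    return pick

pick = make_locality_balancer([("db1", "esxi1"), ("db2", "esxi2")])
print(pick("esxi1"))  # ('db1', 'esxi1') – stays on the local host
print(pick("esxi3"))  # no local DB on this host, round-robin kicks in
```

A real implementation would of course also need health checks and weighting, but the locality preference itself is just a filter on placement information the hypervisor already has.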


Now the NSX distributed load balancing feature is currently in Tech Preview. I haven't actually tried it myself, but the possibilities are there: users who are load balanced to the Web-tier on ESXi server 1 could then leverage the DB on the same ESXi host to minimize east-west traffic. And even if the feature does not ship, it gives a lot more flexibility within NSX to add even more features.
