Software-defined networking: the difference between VXLAN and NVGRE

Being fairly new to software-defined networking and all the different network virtualization technologies out there, I thought I would do a summary of the largest vendors in this market: what differentiates them (from a protocol perspective), and why on earth would we use them?

First off, network virtualization is not new; it has been around for a long time, ever since we started with computer virtualization and needed some sort of networking capabilities. But extending those capabilities required something more. We started out with

* Virtual Network adapters and dummy switches

And then we moved along to cooler stuff like

* Virtual VLANs
* Virtually managed L2 switches
* Firewall and load-balancing capabilities
* Virtual routing capabilities and virtual routing tables

In later years came VXLAN and NVGRE (two different tunneling protocols), which were primarily aimed at the scalability issues of large cloud computing platforms: the limits of VLANs, overlapping IP address segments, and the problems with STP leaving a large number of links disabled. They also reflect the idea that network management should be part of the virtualization layer and not separate from it.

VXLAN

VXLAN (part of NSX) is in essence a tunneling protocol which wraps layer 2 frames inside a layer 3 network. The network is split into different segments, and only VMs within the same VXLAN segment can communicate with each other. Each segment has its own 24-bit segment ID (the VNI). VXLAN uses IP multicast to deliver broadcast, multicast and unknown-destination traffic to all access switches participating in a given VXLAN.

In a traditional VLAN setup, the Ethernet frame simply carries an 802.1Q tag with a 12-bit VLAN ID, which is also where the limit of roughly 4096 segments comes from.

Using VXLAN we instead wrap the whole Ethernet frame within a UDP packet: first we have the inner (original) Ethernet header and payload, in front of that goes a VXLAN header carrying the 24-bit segment ID, and the result is then carried inside an outer UDP, IP and Ethernet header.
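
To make the layering concrete, here is a minimal Python sketch of how a sending VTEP could build that encapsulation, assuming the original L2 frame is already available as raw bytes. The helper names are just for illustration, not any particular product's API.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit segment ID (VNI)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24    # 'I' flag set: a valid VNI is present; remaining bits reserved
    vni_field = vni << 8  # 24-bit VNI in the upper bits, low 8 bits reserved
    return struct.pack("!II", flags, vni_field)


def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """VXLAN payload = VXLAN header + original L2 frame.
    The outer UDP/IP/Ethernet headers are added by the sending host's network stack."""
    return vxlan_header(vni) + inner_ethernet_frame
```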

So using VXLAN adds roughly another 50 bytes of overhead per packet, which in essence means that full-sized frames will exceed the standard 1500-byte MTU. There is a tech paper from VMware which states that the MTU should be adjusted to 1600, but you should rather consider jumbo frames: http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-VXLAN-Perf.pdf
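
For reference, the roughly 50 bytes break down like this (assuming IPv4 and no extra 802.1Q tag on the transport network):

```python
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4     = 20  # outer IP header, without options
OUTER_UDP      = 8   # outer UDP header
VXLAN_HEADER   = 8   # flags + 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(overhead)         # 50
print(1500 + overhead)  # 1550 -> hence the advice to raise the transport MTU to 1600, or use jumbo frames
```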

So it gives more overhead, and all packets need to be unwrapped from the VXLAN encapsulation before being delivered to the destination VM. This is also an issue with chatty protocols such as Telnet/SSH, which transmit a packet for each keystroke and therefore see a large relative overhead per packet, even though that is not a very common workload.

In order to allow communication between a VXLAN-enabled host and a non-VXLAN-enabled host, you need a VXLAN-capable device in between which acts as a gateway.
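
Conceptually, that gateway just does the reverse of the encapsulation sketched above: strip the outer headers, read the VNI, and hand the original frame to the non-VXLAN network. Again this is only an illustrative sketch, not how any real gateway is implemented:

```python
import struct


def decapsulate(vxlan_udp_payload: bytes) -> tuple[int, bytes]:
    """Take the UDP payload of a VXLAN packet and return (vni, original L2 frame)."""
    flags, vni_field = struct.unpack("!II", vxlan_udp_payload[:8])
    if not flags & (0x08 << 24):
        raise ValueError("VNI flag not set - not a valid VXLAN header")
    return vni_field >> 8, vxlan_udp_payload[8:]
```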

Now, a nice thing about VXLAN is that more and more devices are shipping with VXLAN support, so by using VXLAN in our cloud infrastructure we can define access and management from the virtualization layer and move all VXLAN traffic over just one transport VLAN.

NVGRE

NVGRE, on the other hand, is a tunneling protocol primarily pushed by Microsoft, which uses GRE to tunnel L2 packets across an IP fabric. It uses 24 bits of the GRE key field to identify the network ID (the Virtual Subnet ID).
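
For comparison with the VXLAN sketch above, here is an equally minimal (and purely illustrative) view of the NVGRE header: a plain GRE header with the Key Present bit set, protocol type 0x6558 (Transparent Ethernet Bridging), and the 24-bit Virtual Subnet ID plus an 8-bit FlowID carried in the GRE key field.

```python
import struct

GRE_PROTO_TEB = 0x6558  # Transparent Ethernet Bridging: the payload is an L2 frame
GRE_FLAGS_KEY = 0x2000  # only the Key Present bit is set


def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header NVGRE uses: flags/version, protocol type,
    and a 32-bit key field carrying the 24-bit VSID plus an 8-bit FlowID."""
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", GRE_FLAGS_KEY, GRE_PROTO_TEB, key)


def encapsulate_nvgre(inner_ethernet_frame: bytes, vsid: int) -> bytes:
    """NVGRE payload = GRE header + original L2 frame; the outer IP header
    (protocol 47, GRE) is added by the sending host's stack."""
    return nvgre_header(vsid) + inner_ethernet_frame
```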

The positive thing about using GRE is that much existing hardware already has full support for it (hence switching and NIC offloading), but on the other hand, wrapping L2 packets within a GRE layer means regular middleboxes like firewalls or load balancers are not able to "see" into the packets the way they can with UDP. Therefore the load balancer/firewall would need to act as a gateway and remove the GRE wrapper in order to do packet inspection.

For instance, Windows Server 2016 TP4 includes its own load-balancing and firewall capabilities that can do this without unwrapping the packets at a separate appliance. Here are some of the features included in TP4:

Network Function Virtualization (NFV). In today’s software defined datacenters, network functions that are being performed by hardware appliances (such as load balancers, firewalls, routers, switches, and so on) are increasingly being deployed as virtual appliances. This “network function virtualization” is a natural progression of server virtualization and network virtualization. Virtual appliances are quickly emerging and creating a brand new market. They continue to generate interest and gain momentum in both virtualization platforms and cloud services. The following NFV technologies are available in Windows Server 2016 Technical Preview.

  • Software Load Balancer (SLB) and Network Address Translation (NAT). The north-south and east-west layer 4 load balancer and NAT enhances throughput by supporting Direct Server Return, with which the return network traffic can bypass the Load Balancing multiplexer.

  • Datacenter Firewall. This distributed firewall provides granular access control lists (ACLs), enabling you to apply firewall policies at the VM interface level or at the subnet level.

  • RAS Gateway. You can use RAS Gateway for routing traffic between virtual networks and physical networks; specifically, you can deploy site-to-site IPsec or Generic Routing Encapsulation (GRE) VPN gateways and forwarding gateways. In addition, M+N redundancy of gateways is supported, and Border Gateway Protocol (BGP) provides dynamic routing between networks for all gateway scenarios (site-to-site, GRE, and forwarding).

The future

It might be that both of these protocols will be replaced by another tunneling protocol called Geneve, which is a joint effort by Intel, VMware, Microsoft and Red Hat (http://tools.ietf.org/html/draft-gross-geneve-00#ref-I-D.ietf-nvo3-dataplane-requirements) and which in my eyes looks a lot like VXLAN, using UDP as the wrapping protocol.

Either way, whatever tunneling protocol is used needs to be properly adopted by the management layer and integrated with the compute virtualization layer to ensure that traffic policies and security management are in place.
