Azure Kubernetes Service Kubenet vs Azure CNI

When setting up a Kubernetes cluster in Azure you have the option to deploy using one of two different network plugins: kubenet or Azure CNI.


So, what are the main differences between these two? To be honest, there are a lot of other considerations as well, since Kubernetes networking can be complex. Before I go into the details, we need to cover some Kubernetes networking basics first.

NOTE: If you need a deeper dive into Azure networking, I suggest reading my other blog post on the subject here –> Troubleshoot Networking in Microsoft Azure | Marius Sandbu (msandbu.org)

  • Kubernetes Services – Within Kubernetes, the term service is used to logically group a set of pods together and provide network connectivity to them. There are a few different service types available: ClusterIP, NodePort and LoadBalancer. Since an application, such as a web front-end like Apache or NGINX, runs within a pod, we need a way to expose only the ports we want from that pod, such as 80/443, and that is what a Kubernetes service does. On each node there is a kube-proxy component running that is responsible for forwarding the traffic to the specific pod. ClusterIP is only reachable internally within the Kubernetes cluster. NodePort exposes the service on a specific port on each node. LoadBalancer typically uses a cloud-based (layer 4) load balancer to provide access. (A minimal Service manifest is sketched after this list.)

 

  • CNI – There is also the concept of CNI (Container Network Interface), a specification for network plugins that define how Kubernetes should interact with the underlying physical/virtual network. In Azure, the two options are kubenet and Azure CNI.

 

  • HTTP Routing in Azure – An uncomplicated way to provide reverse proxy connections to a service in AKS (it is a lightweight ingress controller). The HTTP routing feature is an add-on to AKS that integrates with an Azure DNS zone and an Azure load balancer, using an app registration that allows the AKS cluster to create DNS records within the zone. This gives you an external DNS name that you can use to access the service. However, this feature is only intended for dev/test and not for production workloads.

 

  • Ingress – Now this is the fun part. The ingress is essentially where the main magic happens: it provides proper load balancing, SSL termination, and HTTP virtual hosting of services. You can think of it as the traditional load balancer that provides remote access to services, like NetScaler, F5 and others. The ingress itself needs to be managed by an ingress controller, and there are a lot of different flavors to choose from. You can also use the cloud-native ingress service, which for Azure is Application Gateway. It is important to remember that ingress only handles traffic flowing into the services, not east-west traffic in the cluster. (A minimal Ingress manifest is sketched after this list.)
  • Kubernetes Network Policy – A not-so-fun fact with Kubernetes is that by default pods are non-isolated, meaning that they accept traffic from any source. Using a network policy allows us to restrict traffic flow to the different pods. These policies are not enforced by Kubernetes itself; enforcement is delegated to the network plugin (which we will get back to in a bit), such as Calico or Cilium. A difference between the two is that Calico uses iptables, while Cilium uses eBPF. (A minimal NetworkPolicy is sketched after this list.)

 

  • Service Mesh – Another concept that often gets added to the mix is a service mesh, which allows us to control how applications communicate with one another, essentially managing the east-west traffic flow. A service mesh acts as another proxy layer using sidecar proxies (the most common sidecar proxy is Envoy). A service mesh is divided into a data plane and a control plane. The data plane is responsible for the communication of services within the mesh and can provide features such as load balancing, encryption, and failure recovery through a separate, dedicated layer of infrastructure. A service mesh can also handle capabilities like A/B testing or traffic splitting. (A traffic-splitting sketch follows after this list.)
  • Gateway API – A new concept and the evolution of the ingress. While the ingress today has a large ecosystem of providers, the intention is that Kubernetes should provide much of these components/features as a standard part of the platform. You can read more about it here –> Evolving Kubernetes networking with the Gateway API | Kubernetes
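To make the service types above more concrete, here is a minimal sketch of a LoadBalancer Service fronting a hypothetical web front-end (the name web-frontend and the app label are made up for illustration). On AKS, type LoadBalancer provisions an Azure layer-4 load balancer; switch the type to ClusterIP or NodePort for the other variants.

```bash
# Minimal sketch: expose a hypothetical "web-frontend" deployment on port 80.
# Type LoadBalancer tells AKS to provision an Azure (layer 4) load balancer;
# change it to ClusterIP or NodePort for the other service types described above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer        # or ClusterIP / NodePort
  selector:
    app: web-frontend       # matches the pod labels of the deployment
  ports:
    - port: 80              # port exposed by the service
      targetPort: 80        # container port inside the pod
EOF
```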
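For the ingress part, here is a minimal sketch of an Ingress object, assuming an ingress controller is already installed in the cluster; the hostname, ingress class and backend service below are placeholders, not taken from the original post.

```bash
# Minimal sketch: route HTTP traffic for a hypothetical hostname to the
# web-frontend service from the previous example. The ingressClassName must
# match whatever ingress controller you have deployed (NGINX, AGIC, etc.).
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-frontend
spec:
  ingressClassName: nginx            # placeholder, depends on your controller
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend
                port:
                  number: 80
EOF
```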
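For network policies, here is a minimal sketch that only allows traffic to the hypothetical web-frontend pods from pods labelled role=api on port 80. Remember that this is only enforced if the cluster runs a network policy engine such as Calico, Azure or Cilium.

```bash
# Minimal sketch: deny all ingress to pods labelled app=web-frontend except
# traffic from pods labelled role=api on TCP/80. Requires a network policy
# engine (Calico, Azure or Cilium) to actually be enforced.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-frontend-allow-api
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api
      ports:
        - protocol: TCP
          port: 80
EOF
```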
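And as an example of the traffic splitting mentioned under service mesh, here is a sketch assuming Istio (with Envoy sidecars) is installed in the cluster. It sends 90% of requests for a hypothetical service to one backing Service and 10% to another, which is the typical building block for A/B testing or canary releases.

```bash
# Minimal sketch (assumes Istio is installed): weighted traffic split between
# two versions of a hypothetical service, e.g. for A/B testing or a canary.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-frontend
spec:
  hosts:
    - web-frontend                  # the service name inside the mesh
  http:
    - route:
        - destination:
            host: web-frontend-v1   # placeholder: Service for version 1
          weight: 90
        - destination:
            host: web-frontend-v2   # placeholder: Service for version 2
          weight: 10
EOF
```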

Putting it all together, we have different components that ensure traffic flow to an AKS cluster in Azure. When setting up an AKS cluster you can choose one of two network plugins: kubenet or Azure CNI. So, what is the difference?

  • Kubenet – With kubenet, only the nodes receive an IP address in the virtual network subnet they are deployed in. Pods instead get an IP address from a separate pod CIDR and reach the rest of the network through NAT on the node. This is a basic configuration, but since it does not have any deep integration with Azure it also works across multiple platforms outside of Azure. Kubenet is the default network plugin used in AKS. There is also a limit of 400 nodes per cluster, because kubenet relies on a UDR route table which supports a maximum of 400 routes: the route table that kubenet creates contains one route per node, pointing that node's pod CIDR to the node's IP address.

 

  • Azure CNI – Using Azure CNI means a direct integration with the underlying Azure virtual network. All pods that are created get an IP address directly from the virtual network subnet they reside in. This of course means that all pods are subject to what is applied on the virtual network, such as NSGs, NSG flow logs, UDRs and so on. For each node, Azure CNI preallocates IP addresses for the maximum number of pods per node (30 by default). So if you plan for 50 nodes, it will preallocate (51) + (51 * 30) = 1,581 IP addresses, where the extra node accounts for upgrade operations. This, for instance, will not work well with Azure NetApp Files, which has a maximum of 1,000 IP addresses in use in a VNet. (See the Azure CLI sketch below for how the network plugin is selected at cluster creation.)
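To illustrate how the two network plugins are selected, here is a hedged Azure CLI sketch; the resource group, cluster names, subnet ID and address ranges are placeholders and not taken from the original post. The --max-pods value is what drives the per-node IP preallocation described above.

```bash
# Sketch only: names, resource group, subnet ID and CIDRs are placeholders.

# kubenet: nodes get VNet IPs, pods get IPs from a separate, NATted pod CIDR.
az aks create \
  --resource-group rg-demo \
  --name aks-kubenet \
  --network-plugin kubenet \
  --pod-cidr 10.244.0.0/16 \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10

# Azure CNI: pods get IPs directly from the VNet subnet, so size the subnet
# for (nodes + 1) + ((nodes + 1) * max pods per node).
az aks create \
  --resource-group rg-demo \
  --name aks-azurecni \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Network/virtualNetworks/vnet-demo/subnets/aks-subnet \
  --max-pods 30 \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --network-policy calico
```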

So how does a packet flow from an end user trying to access a specific service running in Azure? If we deploy a standalone service without any ingress, using a LoadBalancer-based service in an AKS cluster with kubenet, the traffic flow would look like this.

Setting up the same service on an AKS cluster using Azure CNI, there would be some changes in the traffic flow, since a single vNIC on the node holds multiple IP configurations (one per pod). This also provides lower latency, since the packets do not need to be NATted or encapsulated further.

This chart summarizes some of the key differences between kubenet and Azure CNI. Another difference is that with Azure CNI you also need to plan your IP assignment accordingly.

For production purposes, both options are valid and supported by Microsoft. One thing to bear in mind is that kubenet, since it uses NAT and an extra layer of routing within Azure, can add some additional latency. Using Azure CNI, on the other hand, means that you need to plan your IP ranges. If you plan to use Azure CNI in combination with Azure NetApp Files, for instance, you also need to plan the number of IP addresses you require, since Azure NetApp Files only supports 1,000 IP addresses within the same VNet (and peered VNets). Also note that if you want to use other solutions such as Cilium, they only support Azure CNI.

 
