Moving from ADC to Service Mesh

Over the last few years I’ve been working a lot with different ADC (Application Delivery Controller) platforms. ADC is the term Gartner uses to describe a next-generation load balancer, a market with vendors such as F5, Citrix, KEMP, and so on.

Now the ADC market is no longer a single, undivided market serving traditional data-center-deployed application environments; it is moving towards software-defined platforms and the microservice ecosystem. That is essentially what this post is about: how and why the ADC market is changing towards a service-mesh-based architecture, what a service mesh is, and the role it plays in the future application delivery platform.

Now the picture below summarizes a simple overview design of a traditional ADC, where its role is essentially to deliver access to backend applications running on servers, be they virtual or physical. ADCs are used for load balancing and proxying traffic from end users, or from other services, to backend services.

[Figure: traditional ADC load balancing overview]

Many of the ADC vendors in the market also have powerful security mechanisms, such as:

  • DDoS Shield
  • HTTP DoS Shield and Rate Limiting
  • WAF – Web Application Firewall
  • SSL/TLS mechanisms, such as defining TLS/SSL versions, cipher groups, and so on

Most vendors in the market come from a physical appliance background where they’ve now converted their system into a virtual appliance as well, so they can provide much of the same capabilities for virtual infrastructure workloads.

Now, since most of these vendors come from the physical appliance space, they tend to run the different components (management, control, and data plane) on top of the same system. This keeps things simple for the vendors, since a single system can be used for both physical and virtual appliances. However, this design choice makes it more difficult to scale out when needed. What if we need to scale out our data plane to handle more traffic? Then we would need to set up a new virtual appliance from the vendor to take care of that new traffic, which starts to look more and more like a monolithic application.

Most ADC vendors are also based upon load balancing a set of static targets. To give an example, an ADC will monitor the defined backend servers to check their health status, and it will remove any unhealthy targets from the load-balanced list. So what if we need to scale out with more servers running in a cloud environment, or in another datacenter? If so, we might need to add more data plane workers to be able to communicate with and load balance these new services. The same applies to inter-service load balancing: if service A needs to communicate with service B, and service A scales up and down automatically, that can be an issue with a traditional ADC architecture, since most of them are not designed to automatically scale their data plane up and down.

This is, for instance, where Avi Networks made its debut with a more truly software-defined approach: tighter integration with the virtualization layer, and a separation of the control and data planes. This makes it easier to scale out the data plane when needed.

Avi has a component called the Avi Controller, which is the control plane and integrates with the different environments, and a second component called Service Engines, which handle the data flow and can scale up and down based upon demand.

[Figure: Avi Networks controller and service engine architecture]

Now moving forward… enter microservices. Within a microservice architecture, application services that were once monolithic, running within separate virtual machines, are now separated and distributed across multiple containers. As a result, much of the communication now happens directly between services within a cluster.

The result is a requirement for infrastructure software designed to optimize communication between microservices. It needs to provide internal gateways and load balancing for service-to-service communication. With microservices we can have multiple containers running on a combination of physical and virtual worker nodes, each hosting a set of the different services.

The traditional ADC tends to live outside of the container ecosystem. So how can we load balance traffic from Task1 to Task2, when this traffic should stay contained within the worker nodes?
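In Kubernetes, for example, this kind of internal east-west load balancing is typically handled by a ClusterIP Service, which spreads traffic across the healthy pods behind it without the traffic ever leaving the cluster. A minimal sketch (the service and app names here are hypothetical):

```yaml
# Internal-only Service: load balances traffic across all pods
# labelled app: task2, reachable inside the cluster as "task2-svc".
apiVersion: v1
kind: Service
metadata:
  name: task2-svc
spec:
  type: ClusterIP        # no external exposure; east-west traffic only
  selector:
    app: task2
  ports:
    - port: 80           # port the service listens on
      targetPort: 8080   # port the pods actually serve on
```

Pods belonging to Task1 would then simply call `http://task2-svc`, and kube-proxy distributes the connections across the healthy Task2 pods.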

[Figure: container networking and load balancing]

Plus, we might still need some external load balancing mechanism to load balance between multiple sites, multiple datacenters, or even clusters.

Within a container environment we tend to have an overlay network, which the worker nodes use to communicate with each other and which services also use to communicate internally. We also have something called an ingress network. The ingress component is used to expose services externally, and many use software-defined ADC products as the ingress component, both to handle the north-south traffic coming into the container environment and to load balance east-west traffic between the different services inside it.

If we look at Kubernetes, we have something called an ingress controller, which tends to be used as the part of the solution that manages external access to the services in a cluster, typically over HTTP. Within Kubernetes you can have different solutions that act in the ingress role.
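As a sketch of how this looks, a basic Ingress object maps an external hostname and path to an in-cluster service (the hostname and service name below are made up for illustration, and an ingress controller must be installed for the object to take effect):

```yaml
# Routes external HTTP traffic for app.example.com to the
# in-cluster Service "web-svc" on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```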

For the purpose of this blog post I will be focusing on one in particular, called Istio.

You can take a closer look at the different ingress options and capabilities in the following Google Sheet –>

Istio is defined as a service mesh, not just a plain ingress controller (Istio uses Envoy as its data plane proxy). The term service mesh describes the network of microservices that make up such applications and the interactions between them.

[Figure: Istio service mesh architecture]

Istio provides much of the foundation that a traditional ADC provides, but more tightly integrated with a microservices and software-defined architecture:

  • Load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
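To illustrate the fine-grained traffic control, here is a sketch of an Istio VirtualService that splits traffic between two versions of a service and retries failed requests (the host and subset names are hypothetical, and the subsets would need a matching DestinationRule):

```yaml
# Sends 90% of traffic to subset v1 and 10% to v2 (a canary),
# retrying failed requests up to 3 times.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews
  http:
    - retries:
        attempts: 3
        perTryTimeout: 2s
      route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Rules like this live entirely in the mesh configuration, so shifting traffic between versions is a config change rather than a change to the applications or to an external appliance.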

Now, Istio is only supported on Kubernetes because of this design choice, so it is not a valid option if you want to implement similar functionality for existing virtual machines. However, Istio solves something that most ADC vendors do not: it provides a uniform way of managing traffic in a microservices architecture.

And this is the big point, and what most people should start looking into now when evaluating ADC vendors: their capability or strategy, moving forward, for providing a centrally managed solution that can handle cross-cloud setups, containerized environments, and classic virtual machines, while still providing load balancing, insight, and security across all of them. That is the big puzzle that needs to be solved.
