Cloud Ace Blog

Nov 11, 2019 | GCP Container

by Avinash Bodapati

Cloud Solutions Engineer

Container Native Load Balancing on GCP: Why does it matter?

Photo by Austin Neill on Unsplash

Cloud is a fast-changing landscape, with cloud service providers making new announcements almost daily. Every cloud platform has its own strengths, and being informed of these differentiators helps engineers and businesses alike. On Google Cloud Platform, two differentiators that resonate strongly during our customer interactions are global VPCs (which make global load balancers possible) and GKE (the managed Kubernetes offering). Container-native load balancing (Network Endpoint Groups, aka NEGs, to be more specific) blends these two into a nice little feature for containerised applications.

While the concept of container-native load balancing was introduced at Cloud Next '18, the General Availability (GA) of NEGs was announced only earlier this year.

source: cloud.google.com

The picture above pretty much conveys what container-native load balancing is all about: in short, application pods replace VMs as the backends of the load balancer. We do not intend to explain how all of this works here, as we cannot do any better than the official Google Cloud documentation :)

Looking at the image above, the initial impression is one of better load balancing decisions and perhaps better response times, among other benefits. Even before jumping into the performance details, anyone in charge of production applications will appreciate that a network packet takes one less hop to its destination: troubleshooting is no mean feat when proxies and iptables are involved.

Test environment consisting of two GKE clusters, with the VPC-native cluster on the right.

Now, to understand the performance side of it, we spun up two 3-node regional GKE clusters: a routes-based cluster and a VPC-native cluster. A simple HTTP server was deployed as a 'NodePort' Service (6 pod replicas) on both clusters, and an Ingress was deployed to front the Service. A VM in the us-central1 region with Apache Bench installed generates the test traffic. The only difference is that the Service deployed onto the NEG cluster (right-hand side) makes use of the NEG annotation.
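For anyone looking to reproduce the setup, the two clusters can be created roughly along these lines; the cluster names are our own illustrative choices:

    # Routes-based regional cluster (pod traffic handled via custom routes)
    gcloud container clusters create routes-cluster \
        --region us-central1 --num-nodes 1 --no-enable-ip-alias

    # VPC-native regional cluster (alias IP ranges; a prerequisite for NEGs)
    gcloud container clusters create vpc-native-cluster \
        --region us-central1 --num-nodes 1 --enable-ip-alias

    # --num-nodes is per zone, so each regional cluster ends up with 3 nodes in total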

Deploying a Service with the NEG annotation automatically configures a NEG with the corresponding pods as its endpoints
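For reference, a minimal sketch of such a Service manifest looks roughly like this; the name, labels and ports are placeholders for our actual setup, and the only NEG-specific part is the cloud.google.com/neg annotation:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-server                            # placeholder name
      annotations:
        cloud.google.com/neg: '{"ingress": true}'   # opts the Service into container-native LB
    spec:
      type: NodePort
      selector:
        app: hello-server                           # must match the Deployment's pod labels
      ports:
      - port: 80                                    # port exposed by the Service
        targetPort: 8080                            # port the container listens on

The Ingress fronting it needs nothing special; once it has been provisioned, the automatically created NEGs can be seen with gcloud compute network-endpoint-groups list.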

The image below shows how the pods have been distributed across the nodes of the two 3-node regional clusters. Interestingly, the pods were distributed the same way in both clusters (1:2:3).

Screenshot showing the pod distribution across the nodes for both clusters
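For anyone reproducing this, the distribution can be checked with a plain kubectl query; the app label here is whatever your Deployment uses:

    # The NODE column shows which node each pod landed on
    kubectl get pods -l app=hello-server -o wide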

We then started sending steady test traffic to both clusters at near peak capacity and monitored the load balancers for the traffic distribution pattern.
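The traffic itself was generated with Apache Bench from the us-central1 VM, roughly along these lines; the IP is of course the Ingress (load balancer) IP of the respective cluster:

    # 60,000 requests at a concurrency of 100
    ab -n 60000 -c 100 http://LOAD_BALANCER_IP/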

Backend for the non-NEG cluster

Backend for the NEG-enabled cluster

It can be seen that in the first case the traffic is distributed equally across all 3 nodes, irrespective of the number of pods running on each node. This is not what one would logically expect or want. On the flip side, the results from the NEG-enabled cluster were exciting to see: the traffic distribution closely matches the pod ratio (1:2:3). We now understand what Google means when it says optimal load balancing. And here comes the bonus.

Apache Bench stats for the non-NEG cluster (60K+ requests)

Apache Bench stats for the NEG-enabled cluster (60K+ requests)

The difference should be attributed to the latency between the load balancer and the backends, as the frontend traffic pattern remains unchanged. We used a concurrency of 100 requests, and these numbers are significant for us considering how simple the deployed application is. We now plan to take our tests forward by adopting NEGs for one of our internet-facing services.

Do share with us your own findings and observations around NEGs : )

We also publish articles on Medium. Take a look and follow us to see when we publish new articles.
