How to evenly distribute traffic

Load balancing: Round robin may not be the right choice

Ram Lakshmanan

When it comes to load balancing, round robin may not be the best algorithm to choose. If auto-scaling is in place, it is even worse. In this article, see a simple example explaining why this is so, how the round robin algorithm works, and which load balancing algorithm you should consider instead for even traffic distribution.

Based on our experience, we believe round robin may not be an effective load balancing algorithm, because it doesn’t distribute traffic equally among all nodes.

You might wonder how this is possible. Yes, it is! 😊

How does the round robin load balancing algorithm work?

The round robin algorithm distributes requests to nodes in rotation, in the order the requests arrive. Here is a simple example. Let’s say you have 3 nodes: node-A, node-B, and node-C.

  • First request is sent to node-A.
  • Second request is sent to node-B.
  • Third request is sent to node-C.

The fourth request goes back to node-A, and the cycle repeats. It might seem that traffic would get equally distributed among the nodes. But that isn’t true.
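The rotation above can be sketched in a few lines of Python. This is only an illustration of the dispatch order (the node names match the example; a real load balancer works at the connection or request level, not in application code):

```python
from itertools import cycle

# Hypothetical node names from the example above.
nodes = ["node-A", "node-B", "node-C"]
rr = cycle(nodes)  # yields node-A, node-B, node-C, node-A, ...

# Dispatch six requests and record which node served each one.
assignments = [next(rr) for _ in range(6)]
print(assignments)
# ['node-A', 'node-B', 'node-C', 'node-A', 'node-B', 'node-C']
```

Each node receives every third request, which looks perfectly fair — as long as the pool never changes.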

What is the problem with the round robin algorithm?

Fig: Load Balancer with 2 nodes (round-robin)

Let’s take a simple example. Say you launched your web application with a load balancer that has two nodes (node-A, node-B) behind it. The load balancer is configured with the round robin algorithm, and sticky-session load balancing is enabled. Let’s say 200 users are currently using your application.


Since the round robin algorithm is enabled in the load balancer, each node will get 100 users’ requests.

Fig: New node added. Newly added node gets less traffic in round-robin algorithm

A few minutes later, you add node-C. Let’s say 100 additional users now start using the application. Since the round robin algorithm is in use, the load balancer distributes the new users’ requests equally across all 3 nodes (i.e., about 33 users’ requests to each node).

But remember, node-A and node-B are already processing requests from 100 users each, and sticky sessions keep those users pinned where they are. So node-A and node-B end up processing requests from 133 users each (100 original users + 33 new users), whereas node-C processes requests from only 33 (new) users. Now do you see why round robin isn’t distributing the traffic equally?
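The arithmetic above can be checked with a small simulation. This sketch assumes sticky sessions (the 200 existing users stay pinned to their original nodes) and round-robins only the 100 new users across the enlarged pool:

```python
from itertools import cycle

# Sticky sessions: 200 existing users stay pinned to node-A and node-B.
counts = {"node-A": 100, "node-B": 100}

# node-C joins the pool; 100 new users arrive and are round-robined
# across all three nodes, but the pinned users never move.
counts["node-C"] = 0
rr = cycle(["node-A", "node-B", "node-C"])
for _ in range(100):
    counts[next(rr)] += 1

print(counts)
# {'node-A': 134, 'node-B': 133, 'node-C': 33}
```

The older nodes carry roughly four times the load of the new one — exactly the imbalance described above (the 134/133 split is just the remainder of 100 not dividing evenly by 3).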

With the round robin algorithm, older nodes in the pool always end up processing more requests, while newly added nodes process less traffic. The load is never evenly distributed. And because maintenance, patching, and installation require you to continually add and remove nodes from the load balancer pool, this imbalance keeps recurring.

If you have auto-scaling in place, the problem gets even worse: with auto-scaling, nodes are more dynamic and get added and removed even more frequently.

What algorithm to use?

Fig: New node added. Traffic is evenly distributed in Least Connections

There are a variety of load balancing algorithms: weighted round robin, random, source IP, URL, least connections, least traffic, and least latency.


Given this shortcoming of round robin, consider the alternatives.

One choice to consider is the ‘least connections’ algorithm. Under this algorithm, the node with the fewest active connections receives the next request. In our earlier example, when the 100 new users start using the application, all of their requests are sent to node-C until it catches up, so the load ends up equally distributed among all nodes.
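Re-running the earlier scenario with a least-connections policy shows the difference. This is a minimal sketch, not production code: it models each user as one long-lived connection and always picks the node with the lowest count:

```python
# Same starting point as before: 200 pinned users, node-C freshly added.
counts = {"node-A": 100, "node-B": 100, "node-C": 0}

# Least connections: each new request goes to the node with the
# fewest active connections.
for _ in range(100):  # 100 new users arrive after node-C joins
    target = min(counts, key=counts.get)
    counts[target] += 1

print(counts)
# {'node-A': 100, 'node-B': 100, 'node-C': 100}
```

Because node-C starts empty, it is the minimum on every pick until it reaches parity, at which point the pool is perfectly balanced.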

Ram Lakshmanan
Every single day, millions of people in North America bank, travel, and shop using applications that Ram Lakshmanan has architected. Ram is an acclaimed speaker at major conferences on scalability, availability, and performance topics. He recently founded a startup that specializes in troubleshooting performance problems.
