Types of Load Balancers and Algorithms

Introduction

Having already discussed what a Load Balancer is, it is now time to look at how it cleverly decides which server can process a client request most effectively.

The real question is how that decision is made. The answer is that a load balancer uses algorithms, each designed to select the best available server to process the request.

The Logic behind choosing an Algorithm for Load balancing

The logic behind choosing an algorithm depends on how much load sits on the Network or Application Layer, the service being offered, and the type of application.

Algorithms are chosen and implemented accordingly, and this choice has a significant impact on the performance of the load balancer and, ultimately, on the business it supports.

So when should you use Application Layer algorithms, and when Network Layer ones?

The choice between a Network Layer and an Application Layer algorithm depends on a few factors, such as the distribution logic, visibility into server load, session persistence, and how unexpected load spikes are handled.

Network Layer Load Balancers

Network Layer Load Balancers have no visibility into the content of the traffic flow: they have limited knowledge of the nature of the data carried in each packet and of the state of the available servers.


So, when the load sits at the Network Layer, the following types of Network Layer algorithms are designed to maintain a uniform distribution of load across the servers.

1. Round Robin

In this type, requests are handled in a sequential, circular order.

Incoming client requests are assigned to the servers one after another in a circular pattern, on the assumption that each server can process the same number of requests. The load is therefore distributed uniformly across all the servers.
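The circular rotation described above can be sketched in a few lines of Python (the server names are hypothetical, used only for illustration):

```python
from itertools import cycle

# Hypothetical pool of equally capable servers.
servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

def next_server():
    """Return the next server in the circular rotation."""
    return next(rotation)

# Six requests cycle through the pool twice, one server after another.
assignments = [next_server() for _ in range(6)]
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```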

2. Weighted Round Robin

As the name suggests, each server is weighted, i.e., given a numerical value based on how many requests it can handle.

According to the capacity of each server, the one rated highest receives the maximum number of client requests.
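One simple way to realise weighted rotation is to repeat each server in the schedule in proportion to its weight. A minimal sketch, with hypothetical weights:

```python
# Hypothetical weights: server-a can handle four times the load of server-c.
weights = {"server-a": 4, "server-b": 2, "server-c": 1}

def build_schedule(weights):
    """Expand each server into the rotation in proportion to its weight."""
    schedule = []
    for server, weight in weights.items():
        schedule.extend([server] * weight)
    return schedule

schedule = build_schedule(weights)
# Per cycle: server-a appears 4 times, server-b twice, server-c once,
# so the highest-weighted server receives the most requests.
```

Production balancers usually interleave the weighted entries more smoothly, but the proportion of requests per server is the same.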

3. Least-Connection

Here the request goes to the server with the least number of active sessions compared to the other servers in the pool.
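A minimal sketch of this selection, assuming we track a (hypothetical) count of active sessions per server:

```python
# active[server] tracks the number of open sessions (hypothetical values).
active = {"server-a": 12, "server-b": 7, "server-c": 9}

def least_connection(active):
    """Pick the server with the fewest active sessions."""
    return min(active, key=active.get)

target = least_connection(active)
active[target] += 1  # the new request opens one more session there
```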

4. Weighted Least Connection

As in Weighted Round Robin, each server is given a numerical value based on its capacity to handle client requests. The same is followed here, with one addition: whenever two servers have the same number of active sessions, the new request is sent to the server with the higher weighting.
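The tie-breaking rule described above can be sketched as follows (server names, session counts, and weights are hypothetical):

```python
# Hypothetical pool: active sessions and capacity weight per server.
pool = {
    "server-a": {"active": 5, "weight": 3},
    "server-b": {"active": 5, "weight": 1},
    "server-c": {"active": 8, "weight": 2},
}

def weighted_least_connection(pool):
    """Fewest active sessions wins; ties go to the higher-weighted server."""
    # Negating the weight makes min() prefer the larger weight on a tie.
    return min(pool, key=lambda s: (pool[s]["active"], -pool[s]["weight"]))

# server-a and server-b both have 5 sessions; server-a wins on weight.
```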

5. Agent-based Adaptive Load Balancing

Again, as the name suggests, each server here has an agent that reports its current load to the load balancer. This helps the load balancer understand which server has spare capacity to accept and process the new client request.
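The idea can be sketched by modelling each agent as a callable that reports its server's current load; real agents would push these metrics over the network, and the values below are canned for illustration:

```python
# Each "agent" reports its server's current load as a fraction (0.0 to 1.0).
agents = {
    "server-a": lambda: 0.82,
    "server-b": lambda: 0.35,
    "server-c": lambda: 0.60,
}

def pick_least_loaded(agents):
    """Poll every agent and route to the server reporting the lowest load."""
    reports = {server: agent() for server, agent in agents.items()}
    return min(reports, key=reports.get)

# server-b reports the most spare capacity, so it receives the request.
```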

6. Chained Failover Load Balancer

In this approach, the servers are arranged in a predetermined chain, i.e., a fixed failover order. All requests are sent to the first server in the chain until it reaches capacity or fails a health check; traffic then moves to the next server in the chain, and so on down the line.
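The chain walk can be sketched as follows (the chain order and health flags are hypothetical; a real balancer would probe the servers):

```python
# Hypothetical failover chain and current health of each server.
chain = ["primary", "secondary", "tertiary"]
healthy = {"primary": False, "secondary": True, "tertiary": True}

def failover_target(chain, healthy):
    """Walk the chain in order and return the first healthy server."""
    for server in chain:
        if healthy[server]:
            return server
    raise RuntimeError("no healthy server in the chain")

# With the primary down, all traffic moves to the secondary.
```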

7. Weighted Response Time

Here a health check is sent to each server in the pool and its response time is measured. The new request is sent to the server with the lowest response time.
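A minimal sketch of this selection; the health check here is a stand-in returning simulated latencies rather than a real network probe:

```python
def health_check(server):
    """Stand-in for a real health probe; returns a simulated response time (s)."""
    simulated = {"server-a": 0.120, "server-b": 0.045, "server-c": 0.200}
    return simulated[server]

def fastest_server(servers):
    """Probe every server and route to the one with the lowest response time."""
    times = {server: health_check(server) for server in servers}
    return min(times, key=times.get)
```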

8. Source IP hash

Here a unique hash key is generated by combining the IP addresses of the client and the server, and with the help of this key a server is allocated.

The benefit is that if the session breaks, the key can be regenerated, so the client gets back to the same server it was using previously.
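A minimal sketch of the idea, hashing only the client IP for simplicity (implementations may also mix in the destination address, as described above):

```python
import hashlib

servers = ["server-a", "server-b", "server-c"]  # hypothetical pool

def server_for(client_ip, servers):
    """Hash the client's IP address and map it to a fixed server in the pool."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client IP always maps to the same server, giving stickiness.
```

Note that this simple modulo mapping reshuffles clients if the pool size changes; consistent hashing is often used to soften that.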

9. Software Defined Networking Adaptive:

In this approach, knowledge of the layers above and below the networking layer is combined.

The load balancer then knows the state of the server at each layer, the type of application running on it, the health of the network, and the traffic on each server; all of these factors help it maintain a uniform load.

Application Layer Algorithms

In this layer, client requests are distributed across the servers based on the content of the incoming data. The main advantage of this approach is that it helps obtain a response from the server more efficiently.

Least Pending Request (LPR)

This technique is very flexible in handling sudden incoming requests, so the load gets distributed evenly across every server.

It observes the HTTP(S) requests that are pending and forwards each new request to the server that is free to take it on.

The benefit of LPR is that it distributes the load evenly across all the servers based on live observation, rather than on the predefined rules followed at the Network Layer.


As requests come in, LPR spreads them uniformly across all the servers, making it an intelligent, adaptive decision-making technique that does not rely on fixed rules.

Another benefit is that, because it is data-oriented, it can distribute the load based on the type of each request and on how long each request is expected to take to get a response, resulting in efficient load distribution.
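The core bookkeeping behind LPR can be sketched as a pending-request counter per server (values below are hypothetical):

```python
# pending[server] = number of requests still awaiting a response (hypothetical).
pending = {"server-a": 3, "server-b": 0, "server-c": 1}

def route(pending):
    """Send the new request to the server with the fewest pending requests."""
    target = min(pending, key=pending.get)
    pending[target] += 1  # the new request is now pending on that server
    return target

def complete(pending, server):
    """A response came back, so one fewer request is pending there."""
    pending[server] -= 1
```

Because the counters reflect live traffic, a server that answers quickly drains its pending count and naturally attracts more requests.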



Author: Ayesha Rahman
Ayesha Rahman is a performance test engineer at Indium Software with hands-on experience in the fields of IoT and software development. She has a Bachelor's degree in Engineering (Electronics and Communication), likes listening to music and playing shuttle, and is also pursuing a beauty course at an academy.