Load Balancer definition & meaning
A load balancer is a networking solution that distributes incoming traffic across multiple servers in a server farm (also known as a server pool). In other words, it divides the network traffic among the servers in a system. It can be deployed as software or hardware. By distributing client connections across servers, it acts as a front end to those servers.
The main purpose of a load balancer is to reduce the workload on any single server by spreading it across the pool, which makes servers more efficient, increases their performance, and reduces latency. Distributing requests among available servers is essential for most applications to function properly, and it drastically reduces wait times, providing a better user experience.
How does a Load balancer work?
Load balancers use load balancing algorithms to steer traffic to a pool of servers. They check the health of backend resources to detect which ones are available, avoiding overloading any one server, which can make it unreliable. Hardware load balancers require the installation of a device specialized in load balancing, while software load balancers can run on a server, a virtual machine, or in the cloud. Most CDN solutions also come with load balancing features. There are two types of load balancing algorithms:
Static load balancing algorithms: Static load balancers don’t take the current state of the system into account while distributing workloads; they distribute requests according to a predetermined plan. This reduces the risk of overloading servers, but it doesn’t completely eliminate that risk. Static load balancers are easy to set up, but they are not the most efficient solution. Round-robin DNS and client-side random load balancing are the most popular static load balancing solutions.
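The predetermined-plan idea can be sketched in a few lines. This is a minimal illustration of round-robin distribution, assuming a hypothetical pool of three servers; the names are illustrative, not from the original text.

```python
from itertools import cycle

# Hypothetical server pool; names are illustrative only.
servers = ["server-a", "server-b", "server-c"]

def round_robin(pool):
    """Yield servers in a fixed rotation, ignoring their current load."""
    return cycle(pool)

rr = round_robin(servers)
assignments = [next(rr) for _ in range(6)]
print(assignments)
# Each server receives every third request, regardless of how busy it is —
# which is exactly why static algorithms can still overload a slow server.
```

Note that the rotation never consults server state, which is the defining trait (and the main weakness) of a static algorithm.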
Dynamic load balancing algorithms: Dynamic load balancing algorithms check the availability, workload, and health of each server before directing traffic. They can avoid sending requests to overburdened or poorly performing servers, offering the best possible response time. Compared to static algorithms, dynamic load balancing algorithms are harder to set up because they require various factors to be configured. The most popular types of dynamic load balancing algorithms are least connection, weighted least connection, resource-based, and geolocation-based load balancing.
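A least-connection selection, combined with the health checks described above, can be sketched as follows. The connection counts and health flags are hypothetical stand-ins for the live state a real balancer would track.

```python
# Hypothetical live state; a real balancer tracks these continuously.
connections = {"server-a": 12, "server-b": 3, "server-c": 7}
healthy = {"server-a": True, "server-b": True, "server-c": False}

def least_connection(conns, health):
    """Pick the healthy server with the fewest active connections."""
    candidates = {s: n for s, n in conns.items() if health[s]}
    return min(candidates, key=candidates.get)

target = least_connection(connections, healthy)
print(target)  # server-b
```

Unlike the static sketch, this selection changes as server state changes: server-c is skipped because its health check failed, and server-b wins because it has the fewest active connections.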
Why use a load balancer?
By dividing incoming traffic among servers, load balancers reduce wait times drastically. They also prevent servers from overloading, which is important for avoiding performance-related issues. Most importantly, load balancers allow organizations to get the best possible performance from their existing servers.
Is a load balancer necessary?
Load balancers are necessary for applications to deliver the best possible outcome. Their capabilities also enable organizations to complete server maintenance without disruption, automate disaster recovery, add and remove application servers without any impact, and monitor and block malicious content. Load balancers also detect server failures and redirect traffic to an available server, so the user experience isn’t affected while the organization resolves the problem.
What are the types of load balancers?
There are four types of load balancers:
- Application load balancer
- Network load balancer
- Classic load balancer
- Gateway load balancer
Application load balancer
Application load balancers make routing decisions at the application layer. They support path-based routing and are capable of routing requests to one or multiple ports on container instances. Application load balancers also support dynamic host port mapping, which allows organizations to run multiple tasks from a single service on the same container instance.
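Path-based routing means the request URL decides which target group handles it. The sketch below illustrates the idea with hypothetical path prefixes and target-group names; in practice these rules live in the load balancer's configuration, not in application code.

```python
# Illustrative path-to-target-group rules; a real balancer stores these
# in its listener configuration.
routes = {
    "/api/": "api-target-group",
    "/images/": "image-target-group",
}
DEFAULT_GROUP = "web-target-group"

def route(path):
    """Return the target group whose path prefix matches the request."""
    for prefix, group in routes.items():
        if path.startswith(prefix):
            return group
    return DEFAULT_GROUP

print(route("/api/users"))   # api-target-group
print(route("/index.html"))  # web-target-group
```

Requests for `/api/...` and `/images/...` go to dedicated pools, while everything else falls through to the default web pool.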
Network load balancer
Network load balancers make decisions at the transport layer and are capable of handling millions of requests per second. A network load balancer uses a flow hash routing algorithm when selecting a target from the target group for the default rule. It forwards requests without modifying their headers. It also supports dynamic host port mapping.
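Flow hashing deserves a quick illustration: the balancer hashes a connection's identifying fields so every packet of the same flow lands on the same target. This is a simplified sketch with a hypothetical target group; real implementations hash the tuple in hardware or kernel space.

```python
import hashlib

# Hypothetical target group for the default rule.
targets = ["target-1", "target-2", "target-3"]

def flow_hash(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash the connection tuple so every packet of a flow maps to one target."""
    key = f"{proto}:{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return targets[int.from_bytes(digest[:4], "big") % len(targets)]

a = flow_hash("10.0.0.5", 51234, "10.0.1.9", 443)
b = flow_hash("10.0.0.5", 51234, "10.0.1.9", 443)
assert a == b  # the same flow always maps to the same target
```

Because the hash is deterministic, a TCP connection stays pinned to one backend for its lifetime without the balancer storing any per-connection state.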
Classic load balancer
Classic load balancers require a fixed relationship between the load balancer port and the container instance port. They can make routing decisions at either the transport layer or the application layer. Because they use static port mapping, the cluster must have at least as many container instances as the desired task count of any single service that uses a Classic Load Balancer.
Gateway load balancer
Gateway load balancers allow administrators to deploy, scale, and manage virtual appliances, including firewalls, intrusion detection and prevention systems, and deep packet inspection systems. A gateway load balancer uses a transparent network gateway and scales the virtual appliances with demand. It operates at the third layer, the network layer. It listens for all IP packets across all ports and forwards traffic to the target group specified in the listener rule.
Reverse Proxy vs. Load Balancer
Unlike a load balancer, a reverse proxy acts as a gateway. It can be considered a website’s public face through which all web traffic passes. Although reverse proxies are capable of distributing load much like load balancers, a reverse proxy can also be deployed in front of a single application server.
The main purpose of a reverse proxy is to provide security as a barrier between the server and the client. It prevents malicious clients from directly accessing the internal network. It is also possible to reduce response times by using reverse proxy features such as caching, SSL termination, and compression. A reverse proxy also gives organizations flexibility, allowing them to change the backend architecture easily.