Engineering · Mar 2025 · 10 min read

Load Balancing: The Traffic Controller ⚖️

Driptanil Datta, Software Developer

Load balancing is the critical process of distributing incoming network traffic across a group of backend servers, also known as a server pool or server farm. The load balancer acts as the "traffic cop" sitting in front of your servers, routing client requests in a way that maximizes speed and capacity utilization.

References & Disclaimer

This content is adapted from Mastering System Design from Basics to Cracking Interviews (Udemy). It has been curated and organized for educational purposes on this portfolio. No copyright infringement is intended.


Why Load Balancing is Needed

A high-traffic website might receive hundreds of thousands of concurrent requests, far more than a single server can handle alone. Load balancing provides:

  • High Availability: Ensures system uptime even under heavy traffic.
  • Traffic Distribution: Spreads requests evenly across all available healthy servers.
  • Overload Prevention: Avoids overburdening any single server, preventing crashes.
  • Improved Performance: Reduces latency and enhances response times for users.
  • Graceful Failures: Automatically redirects traffic if a server fails via Health Checks.
  • Scalability: Makes it easy to add or remove servers from the pool dynamically.

Types of Load Balancers

1. Based on Layer (OSI Model)

  • Layer 4 (Transport Layer): Operates at the TCP/UDP level. Routing decisions based on IP/Port. Extremely fast but content-blind.
  • Layer 7 (Application Layer): Operates at the HTTP/HTTPS level. Intelligent routing based on content (URLs, Cookies, Headers).
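The difference is easiest to see in code. Below is a minimal sketch of Layer 7 (content-aware) routing: the balancer inspects the HTTP path and picks a backend pool, something a Layer 4 balancer (which only sees IP and port) cannot do. The pool names and IP addresses are illustrative assumptions, not a real deployment.

```python
# Hypothetical L7 routing table: URL prefix -> backend pool.
POOLS = {
    "/api/":    ["10.0.1.10", "10.0.1.11"],
    "/static/": ["10.0.2.10"],
}
DEFAULT_POOL = ["10.0.3.10", "10.0.3.11"]

def route_l7(path: str) -> list[str]:
    """Return the backend pool whose URL prefix matches the request path."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route_l7("/api/users"))    # -> ['10.0.1.10', '10.0.1.11']
print(route_l7("/home"))         # -> ['10.0.3.10', '10.0.3.11']
```

A Layer 4 balancer would instead hash the client's IP and port straight to a backend without ever parsing the HTTP request, which is exactly why it is faster but content-blind.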

2. Based on Deployment

  • Hardware Balancers: Specialized physical devices (e.g., F5). Powerful but expensive and rigid.
  • Software Balancers: Applications like Nginx, HAProxy, or Envoy. Flexible and cost-effective.
  • Cloud-Managed: Services like AWS Elastic Load Balancer (ELB) that scale automatically.

Load Balancing Strategies

Static Strategies

  • Round Robin: Sequential distribution (S1 -> S2 -> S3). Best for identical server specs.
  • IP Hashing: Routes based on client IP. Ensures Session Persistence (sticky sessions).
  • Weighted Round Robin: Assigns more traffic to servers with higher capacity.
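The three static strategies above can be sketched in a few lines each. Server names, weights, and client IPs here are assumptions for illustration:

```python
import hashlib
from itertools import cycle

SERVERS = ["S1", "S2", "S3"]          # assumed identical-spec servers

# Round Robin: strict sequential rotation S1 -> S2 -> S3 -> S1 ...
rr = cycle(SERVERS)
print([next(rr) for _ in range(5)])   # -> ['S1', 'S2', 'S3', 'S1', 'S2']

# Weighted Round Robin: higher-capacity servers appear more often in the cycle.
WEIGHTS = {"S1": 3, "S2": 1, "S3": 1}           # assumed weights
weighted = cycle([s for s, w in WEIGHTS.items() for _ in range(w)])

# IP Hashing: the same client IP always maps to the same server,
# which is what gives you session persistence (sticky sessions).
def pick_by_ip(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]
```

Note that plain modulo hashing reshuffles most clients when the pool size changes; production balancers often use consistent hashing to soften that.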

Dynamic Strategies

  • Least Connections: Sends traffic to the server with the fewest active connections.
  • Least Response Time: Prioritizes servers with the fastest response and least load.
  • Adaptive: Real-time monitoring of health and resources for intelligent routing.
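Dynamic strategies differ from static ones in that they consult live server state. A minimal Least Connections sketch, assuming the balancer tracks active connection counts (the numbers here are made up):

```python
# Assumed live state: active connection count per server.
active = {"S1": 12, "S2": 4, "S3": 9}

def least_connections() -> str:
    """Route the next request to the server with the fewest active connections."""
    return min(active, key=active.get)

server = least_connections()   # picks "S2" (only 4 active connections)
active[server] += 1            # the chosen server now holds one more connection
```

Least Response Time works the same way but minimizes a latency metric instead of (or in addition to) the connection count.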

Load Balancer in Action

Imagine an app where the Load Balancer listens on a Public IP while the servers sit in a Private Subnet. Clients can only reach the balancer; it forwards each request to a private backend, so the servers are never exposed directly to the internet.


Choosing the Right Balancer

  • Layer 4 vs. Layer 7: Use L4 for maximum speed (DB clusters). Use L7 for intelligent microservices routing.
  • Security: Look for SSL Termination (decryption at LB) and DDoS Protection.
  • Managed Services: Cloud-managed LBs like AWS ALB scale automatically and reduce operational overhead.

Interview Questions & Answers 💡

1. What is load balancing, and why is it important?

It's the process of distributing traffic across multiple backend servers to ensure efficient utilization, prevent overload, and improve availability.

  • Availability: Redirects traffic if a server fails.
  • Resources: Spreads requests evenly.
  • Performance: Reduces latency.
  • Scalability: Supports horizontal growth.

2. Compare Layer 4 and Layer 7 load balancing.

  • Layer 4 (Transport): Faster, content-blind, routes by IP/Port.
  • Layer 7 (Application): Intelligent, content-aware, routes by URL/Headers/Cookies.

3. How does a load balancer handle failover?

It uses Health Checks to continuously monitor servers. If one fails, the balancer automatically redirects traffic to healthy instances until the node recovers.
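A sketch of the failover mechanism described above: servers that fail several consecutive probes are pulled out of rotation, and a single successful probe puts them back. The threshold of 3 is an assumption; real balancers make the up/down thresholds and probe interval configurable.

```python
FAIL_THRESHOLD = 3   # assumed: 3 consecutive failures marks a server down

class HealthChecker:
    """Track probe results and expose only the healthy servers for routing."""

    def __init__(self, servers: list[str]):
        self.failures = {s: 0 for s in servers}

    def record(self, server: str, ok: bool) -> None:
        # A success resets the counter; a failure increments it.
        self.failures[server] = 0 if ok else self.failures[server] + 1

    def healthy(self) -> list[str]:
        return [s for s, f in self.failures.items() if f < FAIL_THRESHOLD]

hc = HealthChecker(["S1", "S2"])
for _ in range(3):
    hc.record("S1", ok=False)   # S1 misses three probes in a row
print(hc.healthy())             # -> ['S2']
hc.record("S1", ok=True)        # S1 recovers and rejoins the pool
print(hc.healthy())             # -> ['S1', 'S2']
```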

4. When would you use Least Connections over Round Robin?

Use Least Connections for scenarios where requests vary in complexity (e.g., some database queries take longer). Round Robin is better for uniform, short-lived requests.

5. What are the advantages of SSL Termination at the Load Balancer?

Decryption is resource-intensive. Moving it to the Load Balancer (SSL Termination or Offloading) saves CPU cycles on backend servers and simplifies certificate management.


Final Thoughts 🎯

Load Balancers are the backbone of horizontal scaling. While L4 is faster for simple traffic, L7 Balancing is the industry standard for modern web APIs and microservices.

What's next? API Gateway & Management


Driptanil Datta

Software Developer

Building full-stack systems, one commit at a time. This blog is a centralized learning archive for developers.

Legal Notes
Disclaimer

The content provided on this blog is for educational and informational purposes only. While I strive for accuracy, all information is provided "as is" without any warranties of completeness, reliability, or accuracy. Any action you take upon the information found on this website is strictly at your own risk.

Copyright & IP

Certain technical content, interview questions, and datasets are curated from external educational sources to provide a centralized learning resource. Respect for original authorship is maintained; no copyright infringement is intended. All trademarks, logos, and brand names are the property of their respective owners.


© 2026 Driptanil Datta. All rights reserved.