πŸš€
4. Real-Time Communication

Introduction to Real-Time Communication

Real-time communication refers to the continuous exchange of data with minimal latency. It is critical for modern applications where instant updates are a requirement, such as stock tickers, collaborative tools, and multiplayer gaming.


Why Real-Time Communication?

Traditional HTTP is a request-response protocol. The client must ask for data, and the server provides it. This "pull" model is inefficient for real-time needs.

Challenges of Traditional HTTP:

  • Latency: Each request requires a new connection or at least a full round-trip.
  • Overhead: HTTP headers are sent with every single message.
  • No Push: The server cannot initiate a message to the client.

Alternatives:

  • Polling: Frequently asking the server for updates.
  • Long Polling: Keeping a request open until the server has data.
  • Server-Sent Events (SSE): A one-way persistent stream from server to client.
  • WebSockets: A two-way, persistent, full-duplex connection.

WebSockets: Persistent Full-Duplex Communication

WebSockets provide a persistent, bi-directional connection over a single TCP connection. This allows for high-frequency data exchange with very low overhead.

How it Works:

  1. HTTP Upgrade: The client sends a request to "upgrade" to WebSockets.
  2. Handshake: The server accepts the upgrade, and the protocol switches from HTTP to WS.
  3. Data Exchange: Data is exchanged in "frames" without the need for repetitive HTTP headers.
  4. Permanent Connection: The connection stays open until closed by either party.
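The handshake in step 2 hinges on a key/accept pair: the server concatenates the client's Sec-WebSocket-Key with a fixed GUID, SHA-1 hashes the result, and returns the Base64-encoded digest as Sec-WebSocket-Accept. A minimal sketch in Python (the example key is the sample from RFC 6455):

```python
import base64
import hashlib

# GUID fixed by RFC 6455 for the WebSocket handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A client can verify this value in the server's 101 response to confirm it is talking to a real WebSocket endpoint rather than a misbehaving proxy.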

Long Polling: Simulating Real-Time with HTTP

Long Polling is a technique where the client sends a request and the server "hangs" onto that request until new data is available or a timeout occurs.

Step-by-Step Flow:

Step 1: Client makes an HTTP request

The client asks for updates and waits.

Step 2: Server holds the request

If no data is available, the server does not respond immediately. It keeps the connection open.

Step 3: Server responds with new data

As soon as an event occurs, the server sends the response.

Step 4: Client immediately sends another request

The cycle starts again to simulate a persistent connection.
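The four steps above can be sketched with a simulated server, where a blocking queue stands in for the server holding the request open (the queue, timeouts, and sample event are illustrative assumptions, not a real HTTP stack):

```python
import queue
import threading

# In-process stand-in for the server: a queue that blocks until an
# event arrives or the timeout expires.
events: "queue.Queue[str]" = queue.Queue()

def poll(timeout: float):
    """Steps 1-3: the client asks; the server holds until data or timeout."""
    try:
        return events.get(timeout=timeout)   # server "hangs" onto the request
    except queue.Empty:
        return None                          # timeout: respond with no data

def client_loop(max_polls: int) -> list:
    received = []
    for _ in range(max_polls):               # Step 4: immediately re-poll
        update = poll(timeout=1.0)
        if update is not None:
            received.append(update)
    return received

# Simulate the server publishing an event shortly after the client polls.
threading.Timer(0.1, events.put, args=["price: 101.5"]).start()
print(client_loop(max_polls=2))  # ['price: 101.5']
```

Note that the second poll simply times out and returns nothing; a real client would keep looping indefinitely.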


When to Use?

| Feature | WebSockets | Long Polling |
|---|---|---|
| Communication | Full-Duplex (Two-Way) | Half-Duplex (Simulated) |
| Latency | Extremely Low | Moderate |
| Overhead | Low (Frames) | High (HTTP Headers) |
| Scalability | Harder (Sticky sessions, open handles) | Easier (Standard HTTP) |

Use Cases:

  • WebSockets: Chat apps (WhatsApp/Slack), Online Gaming (Fortnite), Live Trading.
  • Long Polling: Simple notifications (Twitter likes), legacy browsers.

Interview Questions & Answers: Real-Time Communication

Master the concepts of live updates and persistent connections with these interview-focused questions.

1. What is real-time communication, and why is it important?

Real-time communication (RTC) refers to instantaneous data exchange with minimal latency. It ensures live updates without requiring manual refreshes.

Why it's important:

  • Low Latency: Immediate transmission of info.
  • Improved UX: No delays or manual triggers.
  • Critical Apps: Essential for chat, stock tickers, gaming, and IoT.

2. How do WebSockets work, and how do they differ from HTTP?

WebSockets provide a persistent, full-duplex connection over a single TCP connection.

| Feature | WebSockets | Traditional HTTP |
|---|---|---|
| Connection | Persistent (Open) | Closes after each cycle |
| Latency | Extremely Low | Higher (New requests needed) |
| Communication | Bi-directional | Client-initiated only |
| Overhead | Minimal (Single connection) | High (Repeated headers) |

3. Explain the WebSocket handshake process.

It upgrades an HTTP connection to a WebSocket connection:

  1. Client Request: Sends HTTP GET with Upgrade: websocket.
  2. Server Response: Responds with 101 Switching Protocols.
  3. Established: The connection remains open for frames.
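On the wire, the upgrade exchange looks roughly like this (the key and accept values are the sample pair from RFC 6455; the path and host are placeholders):

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```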

4. What is long polling, and how does it work?

Long polling is a technique where the client sends a request and the server holds it open until new data is available. Once data is sent, the client immediately sends another request.

5. What are the advantages of WebSockets over long polling?

  • Efficiency: Avoids unnecessary HTTP headers.
  • True Push: Server can send data at any time without a pending request.
  • Latency: No need to wait for a new 3-way handshake for every update.

6. In what scenarios would you prefer long polling over WebSockets?

  • Compatibility: When WebSockets are blocked by aggressive firewalls or old browsers.
  • Simplicity: No need for a specialized WebSocket server if the update frequency is very low.
  • Stateless Systems: When the backend is strictly designed for standard HTTP.

7. How do WebSockets handle connection failures?

  • Heartbeats: Using Ping/Pong messages to detect dead connections.
  • Reconnection Logic: Clients usually implement exponential backoff to retry connections.
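A minimal sketch of such reconnection backoff (the base delay, cap, and jitter factor are illustrative assumptions):

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0):
    """Delays for successive reconnect attempts: base * 2^n, capped,
    plus a little jitter so many clients don't reconnect in lockstep."""
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        yield delay + random.uniform(0, delay * 0.1)  # add up to 10% jitter

print([round(d, 2) for d in backoff_delays(5)])
```

On a successful reconnect, the client resets the attempt counter so the next failure starts from the base delay again.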

8. Can you use WebSockets with load balancers?

Yes, but WebSockets typically require Sticky Sessions (Session Affinity) or connection-aware load balancing so that the persistent connection remains routed to the same server.

9. What are some challenges of scaling WebSockets?

  • State Management: Handling user presence across multiple nodes (usually solved via Redis Pub/Sub).
  • Concurrency: Managing millions of long-lived open file descriptors.
  • Sticky Routing: Ensuring the load balancer correctly routes subsequent frames.
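The Pub/Sub fan-out pattern behind multi-node presence can be sketched with an in-process stand-in for the broker (the `Hub` class and channel name here are hypothetical illustrations, not a Redis API):

```python
from collections import defaultdict
from typing import Callable

class Hub:
    """In-process stand-in for a broker like Redis Pub/Sub: each server
    node subscribes to a channel and fans messages out to its own
    locally connected WebSockets."""

    def __init__(self) -> None:
        self.subscribers: "dict[str, list[Callable[[str], None]]]" = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable[[str], None]) -> None:
        self.subscribers[channel].append(handler)

    def publish(self, channel: str, message: str) -> None:
        for handler in self.subscribers[channel]:
            handler(message)

hub = Hub()
node_a, node_b = [], []   # messages each "node" would push to its sockets
hub.subscribe("room:1", node_a.append)
hub.subscribe("room:1", node_b.append)
hub.publish("room:1", "hello")
print(node_a, node_b)  # ['hello'] ['hello']
```

Because every node receives every published message for its channels, a user connected to node A can reach a user connected to node B without the load balancer knowing anything about rooms.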

Summary & Final Takeaways

  • WebSockets = Persistent, full-duplex communication.
  • Long Polling = Simulated real-time via HTTP request-holding.
  • Choice depends on your latency requirements and infrastructure complexity.

What’s next? Modern API Protocols - Beyond REST (gRPC, GraphQL).

© 2024 Driptanil Datta. All rights reserved.

Made with Love ❤️

Last updated on Thu Mar 12 2026