Master how computers talk to each other. Understand TCP connections, HTTP requests, and WebSockets—the protocols that power every app you use daily.
When you open Instagram, your phone talks to a server. When you send a message, data travels across the internet. A handful of protocols make all of it work.
EXERCISE 1: One machine asks, another delivers. This simple pattern powers the entire internet.
EXERCISE 2: Before HTTP or WebSockets, there is TCP. Learn how connections work and why they are expensive.
EXERCISE 3: TCP delivers data. HTTP defines what that data means. Learn the protocol powering every website and API.
EXERCISE 4: HTTP is request-response only. WebSockets enable servers to push data to clients instantly. This is how real-time apps work.
Client: The one making requests. Your phone, laptop, browser.
Server: The one doing the work. Handles business logic, stores data, runs computations.
Pattern: Client demands. Server delivers.
Opening Instagram:
Client (your phone): "Show me my feed"
Server (Instagram backend): Fetches posts from the database, returns them
Deleting a tweet:
Client (browser): "Delete tweet ID 12345"
Server (Twitter backend): Removes from database, confirms deletion
Sending email:
Client (Gmail app): "Send this email to user@example.com"
Server (Gmail backend): Queues email, sends it, returns success
Booking Uber:
Client (Uber app): "Request ride from point A to B"
Server (Uber backend): Matches driver, updates database, pushes notification
Every app works this way.
Centralized control: Update server once. All clients benefit immediately.
Security: Sensitive operations happen on server. Client never touches database directly.
Example: Banking app.
Your phone shows balance. But it cannot directly modify the database.
Server validates identity, checks permissions, executes transaction, updates records.
Client only sees results. Database stays protected.
Separation of concerns: Client handles UI. Server handles logic and data.
Scale: One server handles thousands of clients simultaneously.
Two machines must talk. They could be in the same city, across the country, or on another continent.
Data travels as packets over the TCP/IP protocol suite.
Physical distance matters:
Server in same city? 10ms latency.
Server across country? 50ms latency.
Server on another continent? 200ms latency.
This is why CDNs exist. Content delivered from nearest server.
Peer-to-peer: Clients talk directly to each other. No central server.
Example: BitTorrent. You download video chunks from other users directly.
Why not always use this?
Complexity: Clients must discover each other, coordinate, handle security.
Trust: Cannot trust random peers. Who validates data correctness?
Consistency: How does everyone see the same version of data?
Most apps choose client-server for simplicity and control.
How do client and server actually exchange data?
They need a common language. Just like humans need English or Hindi.
Computers need protocols: TCP, HTTP, WebSockets.
Understanding these is understanding the internet.
TCP (Transmission Control Protocol): The way to send data reliably over networks.
Guarantees: Data arrives, arrives in order, and arrives intact. Lost packets are retransmitted, duplicates are discarded, corruption is detected.
Almost every app uses TCP because reliability matters.
Alternative is UDP: Faster but unreliable. Used for video calls, gaming. Packets can arrive out of order or get lost.
Before any data flows, TCP requires setup.
Process:
Step 1: Client → Server: "SYN" (let us connect)
Step 2: Server → Client: "SYN-ACK" (agreed, let us connect)
Step 3: Client → Server: "ACK" (connection established)
Now data can flow.
Visualization:
Client Server
| |
|-------- SYN ----------->|
|<------ SYN-ACK ---------|
|-------- ACK ----------->|
| |
[Connection Ready]
Each step is a network round-trip.
Example: Client in Mumbai. Server in California. 200ms latency per trip.
Three-way handshake = 3 messages, sent one after another = 600ms before the server receives any application data!
For a single HTTP request: 600ms setup + actual request/response.
This is why connection reuse matters.
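You can feel this cost yourself by timing just the connect step. A minimal sketch in Node.js, assuming example.com is reachable on port 443:
const net = require('net');
const start = Date.now();
// the 'connect' callback fires only after the three-way handshake completes
const socket = net.connect(443, 'example.com', () => {
  console.log(`TCP connection ready in ${Date.now() - start} ms, no data sent yet`);
  socket.end(); // start the FIN/ACK teardown
});
socket.on('error', (err) => console.error('connection failed:', err.message));
Run it from different networks and the number tracks the round-trip distance to the server.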
When done, connection closes.
Process:
Step 1: Client → Server: "FIN" (finished, closing)
Step 2: Server → Client: "ACK" (acknowledged, connection closed)
In full, the server also sends its own FIN and the client ACKs it, making a four-way close.
For simplicity, the examples below count the teardown as two messages.
Critical fact: TCP connection does NOT automatically close after one request.
Connection stays open until one side closes it, a timeout expires, or the network drops it.
This matters: You can reuse one TCP connection for multiple requests!
Without reuse (new connection per request):
Request 1: 3-way handshake + request + response + 2-way teardown
Request 2: 3-way handshake + request + response + 2-way teardown
Request 3: 3-way handshake + request + response + 2-way teardown
Overhead: 15 network messages for 3 requests.
With reuse (keep connection open):
First request: 3-way handshake + request + response
Request 2: request + response (same connection)
Request 3: request + response (same connection)
Final: 2-way teardown
Overhead: 5 network messages total. One third of the overhead!
Scenario: Mobile app loading profile. Needs 5 API calls.
Without reuse: 5 × (3-way handshake + 2-way teardown) = 25 overhead messages.
With reuse: one handshake + one teardown = 5 overhead messages.
5× reduction! This is why connection pooling exists.
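In Node.js, reuse is a keep-alive agent away. A minimal sketch, assuming a hypothetical api.example.com with five profile endpoints:
const https = require('https');

// keepAlive tells the agent to hold the TCP connection open and reuse it
const agent = new https.Agent({ keepAlive: true, maxSockets: 1 });

function get(path) {
  return new Promise((resolve, reject) => {
    https.get({ host: 'api.example.com', path, agent }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

(async () => {
  // five requests, one handshake: all of them ride the same socket
  for (const path of ['/profile', '/posts', '/friends', '/stories', '/settings']) {
    await get(path);
  }
  agent.destroy(); // close the pooled connection when finished
})();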
Important: TCP handles reliable delivery. It does NOT care what data you send.
Analogy: TCP is like postal service. Delivers letters reliably. Does not read content.
What you send over TCP: Totally up to you.
Common choices: HTTP, plain text or JSON, binary formats like Protocol Buffers, or a custom protocol of your own.
TCP delivers bytes. You define what those bytes mean.
Redis uses custom protocol over TCP.
Why? Optimized for key-value operations. Faster than HTTP for its use case.
Takeaway: You can define your own protocol. Just ensure both sides understand it.
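A toy example of "your own protocol" over raw TCP, sketched with Node's net module. The rules here are made up for illustration: one command per line, PING gets PONG, everything else gets echoed back.
const net = require('net');

// Server: both sides agree that messages are newline-delimited text commands
const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    const command = chunk.toString().trim();
    socket.write(command === 'PING' ? 'PONG\n' : `ECHO ${command}\n`);
  });
});
server.listen(4000);

// Client: speaks the same made-up protocol
const client = net.connect(4000, 'localhost', () => client.write('PING\n'));
client.on('data', (reply) => {
  console.log(reply.toString().trim()); // PONG
  client.end();
  server.close();
});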
Video calls (Zoom): Use UDP. Speed matters more than perfection. Dropped frame = minor glitch.
Gaming (Fortnite): Use UDP. Old position data is worthless. Need instant updates.
DNS lookups: Often UDP. Single packet request/response. No need for connection.
Everything else: Probably TCP. Reliability worth the overhead.
HTTP (HyperText Transfer Protocol): The common language of the web.
Purpose: Defines how to format requests and responses.
Runs on TCP: HTTP uses TCP for reliable delivery.
Most APIs you build will speak HTTP.
Structure:
GET /api/users/123 HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0
Accept: application/json
(optional body)
Parts:
Request line: Method (GET), path (/api/users/123), version (HTTP/1.1)
Headers: Key-value metadata (Host, User-Agent, Accept)
Body (optional): Data sent to the server (JSON, form data)
GET: Retrieve data. No body.
POST: Create resource. Has body.
PUT: Update entire resource. Has body.
DELETE: Remove resource. Usually no body.
PATCH: Partial update. Has body.
Example GET:
GET /api/products/456 HTTP/1.1
Host: store.com
Server returns product with ID 456.
Example POST:
POST /api/products HTTP/1.1
Host: store.com
Content-Type: application/json
{"name": "Laptop", "price": 999}
Server creates product, returns it with new ID.
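The same two exchanges written with the fetch API (the store.com endpoints are just the hypothetical ones from the example above):
// GET: retrieve product 456
const product = await fetch('https://store.com/api/products/456')
  .then((res) => res.json());

// POST: create a product; the body carries the data, the header declares its format
const created = await fetch('https://store.com/api/products', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Laptop', price: 999 }),
}).then((res) => res.json());

console.log(created.id); // the server-assigned ID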
Structure:
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 56
{"id": 123, "name": "John", "email": "john@example.com"}
Parts:
Status line: Version, status code (200), message (OK)
Headers: Content-Type, Content-Length, etc.
Body: Actual data (JSON, HTML, image bytes)
2xx Success: 200 OK, 201 Created, 204 No Content.
4xx Client Errors: 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found.
5xx Server Errors: 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable.
Plain HTTP as originally designed (HTTP/1.0) default behavior: Connection closes after each response.
Flow: Open TCP connection → send request → receive response → connection closes.
For next request: Repeat entire flow!
Why? HTTP designed for simple document retrieval. One request per page.
Modern reality: Page load needs 50-100 requests (HTML, CSS, JS, images, API calls).
Problem: New TCP connection per request is extremely expensive.
Solution: Tell the server to keep the connection open.
Request:
GET /api/data HTTP/1.1
Host: api.example.com
Connection: keep-alive
Response:
HTTP/1.1 200 OK
Connection: keep-alive
(data)
Connection stays open. Next request reuses it. No new handshake.
Modern browsers: Keep connections open automatically (persistent connections are the default in HTTP/1.1).
Modern servers (such as Nginx and Apache): Support it by default.
Backend services maintain pool of persistent connections.
Example: Node.js app → database.
Without pooling: New TCP connection per query. Slow.
With pooling: Maintain 10 persistent connections. Reuse them. Fast.
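A minimal pooling sketch, assuming PostgreSQL and the pg library (the connection details are placeholders):
const { Pool } = require('pg');

// up to 10 persistent connections, opened lazily and reused across queries
const pool = new Pool({ host: 'localhost', database: 'app', max: 10 });

async function getUser(id) {
  // borrows an already-open connection, runs the query, returns it to the pool
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return result.rows[0];
}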
Universal: Every language has HTTP libraries.
Human readable: Debug with curl, Postman, browser tools.
Stateless: Each request independent. Easy to scale.
Standardized: Everyone understands GET, POST, status codes.
HTTP is the lingua franca of the internet.
HTTP is one-way: Client always initiates. Server only responds.
Server cannot push data without client requesting first.
Problem: How to build real-time features?
Examples: Chat messages, live notifications, stock prices, collaborative editing.
Short polling: Client repeatedly asks "any updates?"
Every 2 seconds:
Client: "Any new messages?"
Server: "No"
Client: "Any new messages?"
Server: "No"
Client: "Any new messages?"
Server: "Yes: {message}"
Problems: Most requests return nothing (wasted bandwidth and server load), and updates still lag by up to the polling interval.
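In code, short polling is just a timer around fetch. A sketch, assuming a hypothetical /api/messages endpoint and a showMessage UI helper of your own:
let lastSeenId = 0;

// ask the server every 2 seconds whether anything new arrived
setInterval(async () => {
  const res = await fetch(`/api/messages?since=${lastSeenId}`);
  const messages = await res.json();
  if (messages.length > 0) {
    lastSeenId = messages[messages.length - 1].id;
    messages.forEach(showMessage); // showMessage renders a message in your UI
  }
  // most iterations come back empty: pure overhead
}, 2000);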
WebSocket: Persistent, bidirectional connection.
Key feature: After setup, either side can send data anytime.
Flow: Start as a normal HTTP request, ask to upgrade, switch protocols, then talk WebSocket over the same TCP connection.
Client request:
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Server response:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Connection upgraded. Now WebSocket protocol. No more HTTP.
Client Server
| |
|--- HTTP Upgrade ------->|
|<-- 101 Switching --------|
| |
|=== WebSocket Open ===|
| |
|--- "Hello" ------------>|
|<-- "Hi there" ----------|
|<-- "New notification" --|
|--- "Got it" ----------->|
After upgrade: Messages flow freely. No request/response pattern.
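In the browser, the upgrade is handled for you by the WebSocket constructor. A minimal sketch against a hypothetical wss://example.com/chat endpoint:
const socket = new WebSocket('wss://example.com/chat');

socket.addEventListener('open', () => {
  socket.send('Hello'); // the client can push whenever it likes
});

socket.addEventListener('message', (event) => {
  // the server can push whenever it likes too, no request required
  console.log('received:', event.data);
});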
Scenario: Chat app. Sending 10 messages.
HTTP (new connection each time):
Message 1: 3-way handshake + request + response + 2-way teardown = 7 messages
Message 2: 3-way handshake + request + response + 2-way teardown = 7 messages
...
Message 10: 3-way handshake + request + response + 2-way teardown = 7 messages
Total: 70 network messages
WebSocket:
Initial: 3-way handshake + upgrade request + 101 response = 5 messages
Message 1: Send (1 packet)
Message 2: Send (1 packet)
...
Message 10: Send (1 packet)
Final: 2-way teardown = 2 messages
Total: 17 network messages
WebSocket is roughly 4× more efficient!
Chat apps (WhatsApp, Slack):
Messages arrive instantly. No polling needed.
Server pushes message to recipient via WebSocket.
Live notifications (Instagram, Twitter):
New like? Server pushes instantly.
No need for app to constantly ask "any updates?"
Stock trading (Robinhood, Zerodha):
Prices update multiple times per second.
Server streams price updates continuously.
Collaborative editing (Google Docs, Figma):
See others typing in real-time.
Every keystroke sent via WebSocket, broadcast to all users.
Live sports (ESPN apps):
Goal scored? See it within 1 second. No refresh needed.
Persistent connections are expensive.
Memory: Each connection holds server resources (TCP socket, buffers).
100,000 users = 100,000 open connections.
Server requirements: Need specialized servers for high connection counts (Node.js, Go, Erlang).
Traditional servers (Apache): Designed for short HTTP requests. Poor WebSocket performance.
Trade-off: Amazing user experience. Higher infrastructure cost.
Simple request/response: Fetching a profile? Use HTTP. No need for a persistent connection.
Infrequent updates: Notification once per hour? HTTP polling fine.
Public APIs: Hard to manage thousands of persistent connections from unknown clients.
Mobile on cellular: WebSocket connections interrupted by network switches. HTTP with retries more reliable.
Use WebSockets only when you truly need real-time, bidirectional communication.
Socket.IO: Popular library for WebSockets.
Features: Automatic reconnection, fallback to HTTP long polling when WebSockets are blocked, rooms, and broadcasting.
Chat example:
Server:
const io = require('socket.io')(3000);
io.on('connection', (socket) => {
socket.on('chat message', (msg) => {
io.emit('chat message', msg); // Broadcast
});
});
Client:
const socket = io('http://localhost:3000'); // connect (requires the socket.io-client library)
socket.emit('chat message', 'Hello!'); // send to the server
socket.on('chat message', (msg) => display(msg)); // display() is your own UI helper
That is it! Basic real-time chat in under 10 lines.