Why does TCP use a 3-way handshake and not 2?
A two-way handshake is not sufficient, for three reasons:

1. Preventing duplicate historical connections. Consider a client that sends a SYN, the network delays it, and the client sends another SYN. If the delayed first SYN arrives at the server later, a two-way handshake would let the server enter an ESTABLISHED state for a connection the client has already abandoned. The third step, the client ACK, lets the client reject the stale server response with a RST, preventing phantom connections from consuming server resources.
2. Synchronizing sequence numbers in both directions. Each side must confirm it received the other's initial sequence number (ISN). The server's SYN-ACK confirms the client's ISN; the client's ACK confirms the server's ISN. With only two messages, the server never learns whether the client received its ISN.
3. Avoiding resource waste. Without the client's confirmation, the server would allocate buffers and maintain state for connections that may never be used.

The third ACK can carry data, so no latency is wasted. Four steps would provide no additional benefit over three, since three is already the theoretical minimum for reliable bidirectional synchronization.
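The handshake itself is performed by the kernel. A minimal sketch (host, port choice, and thread structure are illustrative) shows that `connect()` and `accept()` only return once the SYN / SYN-ACK / ACK exchange has completed:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
srv.listen(1)                    # server enters the LISTEN state
port = srv.getsockname()[1]

def accept_one():
    conn, _ = srv.accept()       # returns only after the full 3-way handshake
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port)) # blocks until SYN, SYN-ACK, ACK finish
peer = cli.getpeername()         # connection is now ESTABLISHED on both ends
cli.close()
t.join()
srv.close()
```

Everything below the socket API (ISN selection, the ACK numbers, RST on stale SYNs) happens inside the TCP stack.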
What happens during TCP 4-way teardown?
TCP connections are closed with four messages because each direction is closed independently.
- FIN from the active closer: The side initiating the close sends FIN, signaling it has no more data to send. It enters FIN_WAIT_1 state.
- ACK from the passive closer: The other side acknowledges the FIN. The active closer enters FIN_WAIT_2. The passive closer can still send data — it may have data buffered that it needs to flush.
- FIN from the passive closer: Once the passive closer has finished sending its data, it sends its own FIN.
- ACK from the active closer: The active closer acknowledges the second FIN and enters TIME_WAIT before the connection is fully closed.
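The active closer's side of the teardown above can be modeled as a tiny state machine (a toy sketch; state and event names follow the standard TCP state diagram):

```python
# Transition table for the side that initiates the close.
TRANSITIONS = {
    ("ESTABLISHED", "send FIN"): "FIN_WAIT_1",
    ("FIN_WAIT_1", "recv ACK"):  "FIN_WAIT_2",
    ("FIN_WAIT_2", "recv FIN"):  "TIME_WAIT",   # the final ACK is sent here
}

state = "ESTABLISHED"
for event in ("send FIN", "recv ACK", "recv FIN"):
    state = TRANSITIONS[(state, event)]

print(state)  # TIME_WAIT
```

Note that the connection is not released immediately after the last ACK; it lingers in TIME_WAIT, which the next question covers.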
What is TIME_WAIT and why does it last 2×MSL?
TIME_WAIT is the state the active closer enters after sending its final ACK. It waits for 2×MSL (twice the Maximum Segment Lifetime) before fully releasing the connection's port and memory.

MSL is the maximum time any TCP segment can survive in the network before being discarded. On Linux, MSL is typically 60 seconds, so TIME_WAIT lasts up to 120 seconds.

The 2×MSL duration serves two purposes:

1. Ensuring the final ACK is delivered. If the passive closer never receives the active closer's last ACK, it will retransmit its FIN. That retransmitted FIN can take up to one MSL to reach the active closer, and the ACK up to one MSL to travel back: 2×MSL total. TIME_WAIT allows this round trip to complete before the port is reused.
2. Preventing old segments from corrupting new connections. If a new TCP connection reuses the same 4-tuple (src IP, src port, dst IP, dst port) too quickly, delayed segments from the old connection could be misinterpreted as belonging to the new one. Waiting 2×MSL ensures all segments from the old connection have expired.

Risks of too many TIME_WAIT connections: each TIME_WAIT connection holds a port and file descriptor. On high-connection servers, exhausting the ephemeral port range (typically 32768–61000) or file descriptor limits causes connection failures. Mitigations include enabling `SO_REUSEADDR`, tuning `net.ipv4.tcp_tw_reuse`, or designing clients to reuse persistent connections.
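As a small sketch of the first mitigation: setting `SO_REUSEADDR` before `bind()` lets a restarted server rebind a port that still has sockets lingering in TIME_WAIT, instead of failing with EADDRINUSE (the address and port here are illustrative):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Must be set BEFORE bind(); allows rebinding a port held in TIME_WAIT.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.bind(("127.0.0.1", 0))
s.listen(1)
s.close()
```

`net.ipv4.tcp_tw_reuse` is the kernel-side counterpart, tuned via sysctl rather than per-socket options.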
What are the differences between HTTP/1.0, 1.1, 2, and 3?
Each HTTP version addresses performance limitations of the previous one.

HTTP/1.0 creates a new TCP connection for every request/response pair. This works but is inefficient: TCP handshake and slow-start overhead add latency for every resource.

HTTP/1.1 introduced persistent connections (keep-alive by default), allowing multiple requests over a single TCP connection. It also added pipelining, sending the next request before receiving the previous response, but pipelining suffers from head-of-line blocking: responses must be delivered in order, so a slow response blocks all subsequent ones.

HTTP/2 addresses head-of-line blocking at the application layer with multiplexing: multiple logical streams share a single TCP connection, each identified by a stream ID. Responses can arrive out of order. HTTP/2 also adds header compression (HPACK) to eliminate repeated header overhead, and server push to proactively send resources the client will need. However, because it runs over TCP, a single lost packet stalls all streams until it is retransmitted (TCP delivers bytes strictly in order): head-of-line blocking moves to the transport layer.

HTTP/3 replaces TCP with QUIC, a protocol built on UDP. QUIC delivers data independently per stream, so a lost packet only blocks the stream whose data it carried, not the entire connection. QUIC also eliminates the separate TLS handshake by integrating TLS 1.3 encryption, enabling 0-RTT connection resumption for returning clients.
| Version | Transport | Multiplexing | Header Compression | Key Limitation |
|---|---|---|---|---|
| 1.0 | TCP | No | No | One connection per request |
| 1.1 | TCP | Pipelining only | No | HOL blocking |
| 2 | TCP | Yes (streams) | HPACK | TCP-level HOL blocking |
| 3 | QUIC/UDP | Yes (streams) | QPACK | Newer, less universal support |
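The HTTP/1.1 keep-alive behavior can be demonstrated with the standard library alone (handler, paths, and response body are illustrative): two requests travel over a single TCP connection.

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables persistent connections
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", srv.server_port)
bodies = []
for path in ("/a", "/b"):           # two requests, one TCP connection
    conn.request("GET", path)
    bodies.append(conn.getresponse().read().decode())
conn.close()
srv.shutdown()
```

Under HTTP/1.0 semantics, the server would close the connection after the first response and the second request would need a fresh handshake.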
How does HTTPS/TLS work?
HTTPS runs HTTP over TLS. TLS provides three security properties: confidentiality (encryption), integrity (tamper detection), and authentication (server identity verification).

TLS handshake (RSA key exchange, simplified):
- ClientHello: The client sends its supported TLS versions, cipher suites, and a random value (`Client Random`).
- ServerHello: The server selects a cipher suite, responds with its own random value (`Server Random`), and sends its CA-signed digital certificate containing the server's public key.
- Certificate verification: The client validates the certificate using the CA's public key embedded in the browser or OS. It checks the certificate chain up to a trusted root CA, verifies the signature, and confirms the domain matches.
- Key exchange: The client generates a third random value (the `pre-master secret`), encrypts it with the server's public key, and sends it. Both sides now have all three random values.
- Session key derivation: Both sides independently compute the same session key from the three random values using the agreed cipher. All subsequent communication uses this symmetric session key, which is far faster than asymmetric encryption.
- Finished: Both sides send a `Finished` message encrypted with the session key to confirm the handshake was not tampered with.
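The key-derivation step can be illustrated with a toy function (this is NOT the real TLS PRF or key schedule, just a demonstration that two parties applying the same deterministic function to the same three random values obtain the same key):

```python
import hashlib
import hmac
import os

def derive_session_key(client_random, server_random, pre_master):
    # Toy derivation: HMAC over both randoms, keyed by the pre-master secret.
    return hmac.new(pre_master, client_random + server_random,
                    hashlib.sha256).digest()

client_random = os.urandom(32)
server_random = os.urandom(32)
pre_master = os.urandom(48)   # in real RSA key exchange, sent encrypted

# Each side computes the key independently; no key bytes cross the wire.
client_key = derive_session_key(client_random, server_random, pre_master)
server_key = derive_session_key(client_random, server_random, pre_master)
print(client_key == server_key)  # True
```

An eavesdropper sees both randoms but not the pre-master secret, so it cannot reproduce the key.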
What is the difference between GET and POST?
The distinction is semantic and behavioral, not purely about where data goes.

GET is intended for retrieving a resource. It should be safe (no side effects) and idempotent (repeating it produces the same result). Parameters go in the URL query string. GET requests can be cached, bookmarked, and logged by proxies, so sensitive data must never go in a GET URL.

POST is intended for submitting data that causes a state change: creating a resource, triggering an action. It is neither safe nor idempotent by default. The request body carries the payload, which can be any size and content type. POST requests are not cached by default.

Practical differences:
Note: POST over HTTP is not inherently more secure than GET — the body is still plaintext. HTTPS encrypts both the URL and the body.
| Property | GET | POST |
|---|---|---|
| Data location | URL query string | Request body |
| Cacheable | Yes | No (by default) |
| Idempotent | Yes | No |
| Data size limit | Browser/server URL limits (~2–8 KB) | Effectively unlimited |
| Logging | URL logged by proxies | Body not logged |
| Safe for sensitive data | No | Yes (over HTTPS) |
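The "data location" row can be made concrete with a small sketch (the URL and parameter names are made up):

```python
from urllib.parse import urlencode

params = {"q": "flights", "page": "2"}

# GET: parameters travel in the URL, visible to proxies, logs, and caches.
get_url = "https://api.example.com/search?" + urlencode(params)

# POST: the same parameters go into the request body instead.
post_body = urlencode(params).encode()

print(get_url)    # https://api.example.com/search?q=flights&page=2
print(post_body)  # b'q=flights&page=2'
```

Note that the encoding is identical; only the placement (URL vs body) changes, which is what drives the caching and logging differences in the table.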
WebSocket vs SSE: when would you use each?
Both WebSocket and SSE solve the problem of pushing data from server to client without repeated polling, but they have different characteristics.

WebSocket establishes a full-duplex, persistent TCP connection after an HTTP upgrade handshake. Either side can send messages at any time. It supports binary and text data with minimal per-message overhead.

Use WebSocket when:
- You need bidirectional real-time communication (chat, multiplayer games, collaborative editing).
- You are transmitting binary data (audio streams, file transfers).
- Low latency is critical and you need fine-grained control over the connection.
SSE (Server-Sent Events) streams events over a single long-lived HTTP response with `Content-Type: text/event-stream`. The server pushes text events continuously; the client receives them via the EventSource API. The browser automatically reconnects if the connection drops.

Use SSE when:

- You need server-to-client only data flow (live feeds, progress bars, AI token streaming).
- You want simpler infrastructure — SSE works over standard HTTP/2, passes through proxies naturally, and requires no special server support.
- Automatic reconnection without client logic is desirable.
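The `text/event-stream` wire format is simple enough to sketch by hand. A minimal event formatter (field names follow the SSE specification; the payload is illustrative):

```python
def sse_event(data, event=None, event_id=None):
    """Format one Server-Sent Event per the text/event-stream format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")      # lets the client resume after reconnect
    if event is not None:
        lines.append(f"event: {event}")      # custom event type, default is "message"
    for chunk in data.splitlines():
        lines.append(f"data: {chunk}")       # multi-line payloads repeat "data:"
    return "\n".join(lines) + "\n\n"         # a blank line terminates the event

wire = sse_event("hello", event="message", event_id="1")
print(wire, end="")
```

The `id:` field is what powers automatic reconnection: the browser resends the last seen ID in a `Last-Event-ID` header, so the server can replay missed events.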
| Property | WebSocket | SSE |
|---|---|---|
| Direction | Bidirectional | Server to client only |
| Protocol | ws:// / wss:// | HTTP |
| Data formats | Text + Binary | Text only |
| Auto reconnect | Manual | Built-in |
| Proxy/firewall | Can be blocked | Generally passes through |
| HTTP/2 multiplexing | No | Yes |
What is CSRF and how do you prevent it?
CSRF (Cross-Site Request Forgery) tricks an authenticated user's browser into sending an unintended request to a target site. Because the browser automatically attaches cookies, the server sees the request as legitimate even though the user did not initiate it.

Example attack: A user is logged into their bank. They visit a malicious page that contains `<img src="https://bank.com/transfer?to=attacker&amount=1000">`. The browser fetches that URL with the bank's session cookie, executing the transfer without the user's knowledge.

Prevention methods:

1. CSRF tokens. The server embeds a unique, unpredictable token in each form or response. The client must include this token in state-changing requests. Because the attacker's page cannot read the victim's token (the same-origin policy blocks cross-origin reads), it cannot forge a valid request. The token is typically stored in a hidden form field or a custom request header.
2. SameSite cookie attribute. Setting `SameSite=Strict` on session cookies tells the browser not to send the cookie on any cross-site request. `SameSite=Lax` is a middle ground: it blocks most cross-site POST requests but allows cookies on top-level navigation GET requests (e.g., clicking a link). Most modern frameworks set `SameSite=Lax` by default.
3. Checking the Referer / Origin header. Reject requests whose Origin or Referer does not match your domain. This is a defense-in-depth measure; it is not foolproof because some browsers strip these headers, but it adds a useful layer.

The strongest posture combines SameSite cookies with CSRF tokens for sensitive operations.
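A minimal sketch of the CSRF-token pattern (function names, the session ID, and the choice of HMAC-per-session are illustrative; real frameworks handle storage and rotation for you):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side secret, never sent to clients

def issue_csrf_token(session_id):
    # Token bound to the session; embedded in a hidden form field.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, token):
    expected = issue_csrf_token(session_id)
    # Constant-time compare prevents timing side channels.
    return hmac.compare_digest(expected, token)

token = issue_csrf_token("session-abc")
ok = verify_csrf_token("session-abc", token)       # True: legitimate form post
forged = verify_csrf_token("session-abc", "x" * 64) # False: attacker's guess
```

The attacker's page can make the browser *send* a request, but it cannot *read* the token out of the victim's page, so `verify_csrf_token` fails for forged requests.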
What is JWT and how does it work?
JWT (JSON Web Token) is a compact, self-contained token format used for authentication and information exchange. It allows a server to be stateless: the server does not need to store session data, because the token itself carries the user's identity and claims.

Structure: A JWT is three Base64URL-encoded segments joined by dots: `header.payload.signature`

- Header: Specifies the token type (`JWT`) and the signing algorithm (e.g., `HS256`).
- Payload: Contains claims: the user ID, roles, expiry time, and any other data you want to embed. The payload is not encrypted by default; anyone can decode it. Never put secrets or sensitive PII in an unencrypted JWT.
- Signature: `HMAC_SHA256(base64url(header) + "." + base64url(payload), secret)`. The server uses its secret key to sign the token. Verification means recomputing the signature and checking that it matches, proving the token was not tampered with.

Authentication flow:

- User logs in with credentials.
- Server validates credentials, generates a JWT signed with its secret, and returns it to the client.
- Client stores the JWT (typically in `localStorage` or an `HttpOnly` cookie) and sends it in the `Authorization: Bearer <token>` header on subsequent requests.
- Server verifies the signature on each request; no database lookup required.

JWT works well for distributed systems and microservices. Session cookies are better when you need to immediately revoke access (e.g., force logout).
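The HS256 sign/verify cycle above fits in a few lines of standard-library Python (a teaching sketch, not a replacement for a vetted JWT library, which would also check `exp` and the `alg` header):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # Base64URL without padding, as JWT requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload, secret):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(sig)

def verify_jwt(token, secret):
    header, body, sig = token.split(b".")
    expected = hmac.new(secret, header + b"." + body, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user-42", "role": "admin"}, b"server-secret")
print(verify_jwt(token, b"server-secret"))  # True
print(verify_jwt(token, b"wrong-secret"))   # False
```

Note that `verify_jwt` never consults a database: the signature alone proves the claims are untampered, which is exactly the stateless property in the table below.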
| Property | JWT | Session Cookie |
|---|---|---|
| Server state | Stateless | Stateful (session store) |
| Instant revocation | Difficult (need blocklist) | Easy (delete session) |
| Horizontal scaling | Easy | Requires shared session store |
| Token size | Larger (~200–500 bytes) | Small (session ID only) |
How does DNS resolution work?
DNS translates human-readable domain names (e.g., `api.flightaware.com`) into IP addresses. The resolution process involves a hierarchy of servers.

Resolution steps:

- Browser cache: The browser checks its local DNS cache. If the record is present and not expired, resolution stops here.
- OS resolver cache: If the browser cache misses, the OS checks its own cache and the `/etc/hosts` file.
- Recursive resolver: The OS sends a query to the configured DNS resolver (typically your ISP's or a public resolver like `8.8.8.8`). The recursive resolver does the heavy lifting.
- Root nameserver: If the resolver has no cached answer, it queries one of the 13 root nameserver clusters. The root responds with the address of the TLD nameserver (e.g., the `.com` nameserver).
- TLD nameserver: The resolver queries the TLD nameserver, which responds with the address of the domain's authoritative nameserver.
- Authoritative nameserver: The resolver queries the authoritative nameserver for the domain. This server holds the actual DNS records and returns the IP address (A record for IPv4, AAAA for IPv6).
- Response and caching: The recursive resolver caches the result for the TTL (Time to Live) specified in the record and returns the IP to the client.
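Application code usually triggers this whole chain through a single call: the standard library's `getaddrinfo` consults the OS resolver path (cache, `/etc/hosts`, then the configured recursive resolver). A quick sketch resolving `localhost`, which is normally answered from `/etc/hosts` without any network query:

```python
import socket

# Each result tuple is (family, type, proto, canonname, sockaddr);
# sockaddr[0] is the IP address string.
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in infos})
print(ips)  # typically ['127.0.0.1'] and/or ['::1']
```

The same call against a public hostname would walk the full recursive path described above, with the resolver caching the answer for its TTL.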