Networking questions appear in almost every backend engineering interview because the protocols that move data between services are fundamental to everything you build. Interviewers use these questions to test whether you understand not just what happens, but why — why TCP uses three steps and not two, why HTTP/2 added binary framing, why TIME_WAIT exists. The answers below go deeper than surface definitions so you can explain the reasoning confidently.
A two-way handshake is not sufficient, for three reasons:
  1. Preventing duplicate historical connections. Consider a client that sends a SYN, the network delays it, and the client sends another SYN. If the stale first SYN arrives at the server later, a two-way handshake would let the server enter an ESTABLISHED state on a connection the client has already abandoned. The third step (the client's ACK) lets the client reject the stale server response with a RST, preventing phantom connections from consuming server resources.
  2. Synchronizing sequence numbers in both directions. Each side must confirm it received the other's initial sequence number (ISN). The server's SYN-ACK confirms the client's ISN; the client's ACK confirms the server's ISN. With only two messages, the server never learns whether the client received its ISN.
  3. Avoiding resource waste. Without the client's confirmation, the server would allocate buffers and maintain state for connections that may never be used.
The third ACK can carry data, so the extra step does not add latency. Four steps would provide no additional benefit, since three is already the theoretical minimum for reliable bidirectional synchronization.
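The ISN exchange in the three steps can be sketched as a toy state machine (illustrative only, not a real TCP implementation):

```javascript
// Toy simulation of the TCP three-way handshake (illustrative only).
// Each side tracks its state and the peer's initial sequence number (ISN).
function handshake(clientISN, serverISN) {
  const client = { state: 'CLOSED', peerISN: null };
  const server = { state: 'LISTEN', peerISN: null };

  // Step 1: client sends SYN carrying its ISN.
  client.state = 'SYN_SENT';
  // Step 2: server receives the SYN, learns the client's ISN, replies SYN-ACK.
  server.peerISN = clientISN;
  server.state = 'SYN_RCVD';
  // Step 3: client receives SYN-ACK, learns the server's ISN, sends ACK.
  client.peerISN = serverISN;
  client.state = 'ESTABLISHED';
  // Server receives the final ACK: only now does it know the client got its ISN.
  server.state = 'ESTABLISHED';

  return { client, server };
}

const { client, server } = handshake(1000, 5000);
console.log(client.state, server.state); // ESTABLISHED ESTABLISHED
```

Note that without step 3, the server would reach ESTABLISHED without any evidence the client ever received its ISN.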
TCP connections are closed with four messages because each direction is closed independently.
  1. FIN from the active closer: The side initiating the close sends FIN, signaling it has no more data to send. It enters FIN_WAIT_1 state.
  2. ACK from the passive closer: The other side acknowledges the FIN. The active closer enters FIN_WAIT_2. The passive closer can still send data — it may have data buffered that it needs to flush.
  3. FIN from the passive closer: Once the passive closer has finished sending its data, it sends its own FIN.
  4. ACK from the active closer: The active closer acknowledges the second FIN and enters TIME_WAIT before the connection is fully closed.
The reason there are four messages rather than two is that the server’s ACK (step 2) and the server’s FIN (step 3) cannot be combined into one message as in the handshake, because the server may have additional data to send after receiving the client’s FIN.
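The four messages and the states each side passes through can be walked through as a toy sequence (illustrative only):

```javascript
// Toy walk-through of the four-message TCP close (illustrative only).
const events = [];
const active = { state: 'ESTABLISHED' };   // the side that calls close() first
const passive = { state: 'ESTABLISHED' };

// 1. Active closer sends FIN: no more data in this direction.
active.state = 'FIN_WAIT_1'; events.push('FIN ->');
// 2. Passive closer ACKs the FIN; it may still have buffered data to send.
passive.state = 'CLOSE_WAIT'; events.push('<- ACK');
active.state = 'FIN_WAIT_2';
// (the passive side flushes any remaining data here)
// 3. Passive closer sends its own FIN once it has finished sending.
passive.state = 'LAST_ACK'; events.push('<- FIN');
// 4. Active closer ACKs the second FIN and waits 2*MSL before releasing the socket.
active.state = 'TIME_WAIT'; events.push('ACK ->');
passive.state = 'CLOSED';

console.log(events.join(' '), '| active ends in', active.state);
```

The gap between steps 2 and 3 is exactly why ACK and FIN cannot be merged the way SYN and ACK are in the handshake.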
TIME_WAIT is the state the active closer enters after sending its final ACK. It waits for 2×MSL (twice the Maximum Segment Lifetime) before fully releasing the connection's port and memory. MSL is the maximum time any TCP segment can survive in the network before being discarded. On Linux, MSL is typically 60 seconds, so TIME_WAIT lasts up to 120 seconds.
The 2×MSL duration serves two purposes:
  1. Ensuring the final ACK is delivered. If the passive closer never receives the active closer's last ACK, it retransmits its FIN. That retransmitted FIN can take up to one MSL to reach the active closer, and the ACK can take up to one MSL back, 2×MSL in total. TIME_WAIT allows this round trip to complete before the port is reused.
  2. Preventing old segments from corrupting new connections. If a new TCP connection reuses the same 4-tuple (src IP, src port, dst IP, dst port) too quickly, delayed segments from the old connection could be misinterpreted as belonging to the new one. Waiting 2×MSL ensures all segments from the old connection have expired.
Risks of too many TIME_WAIT connections: each TIME_WAIT connection holds a port and a file descriptor. On high-connection servers, exhausting the ephemeral port range (typically 32768–61000) or file descriptor limits causes connection failures. Mitigations include enabling SO_REUSEADDR, tuning net.ipv4.tcp_tw_reuse, or designing clients to reuse persistent connections.
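On Linux, the relevant knobs live in sysctl. A sketch of a tuning fragment follows; the values are illustrative, and each setting should be measured against your workload before changing:

```
# /etc/sysctl.d/99-tcp-timewait.conf — illustrative values, not a recommendation
net.ipv4.tcp_tw_reuse = 1                  # allow reusing TIME_WAIT sockets for new outbound connections
net.ipv4.ip_local_port_range = 1024 65535  # widen the ephemeral port range
net.ipv4.tcp_fin_timeout = 30              # shortens FIN_WAIT_2 (note: this does NOT shorten TIME_WAIT)
```

Note that tcp_tw_reuse only helps the client (connecting) side; for servers, persistent connections and SO_REUSEADDR are the usual answers.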
Each HTTP version addresses performance limitations of the previous one.
HTTP/1.0 creates a new TCP connection for every request/response pair. This works but is inefficient: TCP handshake and slow-start overhead add latency for every resource.
HTTP/1.1 introduced persistent connections (keep-alive by default), allowing multiple requests over a single TCP connection. It also added pipelining (sending the next request before receiving the previous response), but pipelining suffers from head-of-line blocking: responses must be delivered in order, so a slow response blocks all subsequent ones.
HTTP/2 addresses head-of-line blocking at the application layer with multiplexing: multiple logical streams share a single TCP connection, each identified by a stream ID. Responses can arrive out of order. HTTP/2 also adds header compression (HPACK) to eliminate repeated header overhead, and server push to proactively send resources the client will need. However, because it runs over TCP, a single lost packet stalls all streams until it is retransmitted, since TCP delivers bytes strictly in order: head-of-line blocking at the transport layer.
HTTP/3 replaces TCP with QUIC, a protocol built on UDP. QUIC implements its own reliability and congestion control per stream, so a lost packet only blocks the stream it belongs to, not the entire connection. QUIC also eliminates the separate TLS handshake by integrating encryption, enabling 0-RTT connection resumption for returning clients.
| Version | Transport | Multiplexing    | Header Compression | Key Limitation                 |
|---------|-----------|-----------------|--------------------|--------------------------------|
| 1.0     | TCP       | No              | No                 | One connection per request     |
| 1.1     | TCP       | Pipelining only | No                 | HOL blocking                   |
| 2       | TCP       | Yes (streams)   | HPACK              | TCP-level HOL blocking         |
| 3       | QUIC/UDP  | Yes (streams)   | QPACK              | Newer, less universal support  |
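The head-of-line blocking difference between HTTP/1.1 pipelining and HTTP/2 multiplexing can be modeled with a toy delivery-time calculation (times are made up for illustration):

```javascript
// Toy model of head-of-line blocking (illustrative, not a protocol implementation).
// Three responses become ready at different times (ms); b is the slow one.
const ready = { a: 30, b: 100, c: 10 };

// HTTP/1.1 pipelining: responses must arrive in request order (a, b, c),
// so each is delivered no earlier than every response before it.
let latest = 0;
const http11 = {};
for (const name of ['a', 'b', 'c']) {
  latest = Math.max(latest, ready[name]);
  http11[name] = latest; // c is ready at 10 ms but waits for b until 100 ms
}

// HTTP/2 multiplexing: each stream is delivered as soon as it is ready.
const http2 = { ...ready }; // c is delivered at 10 ms

console.log('HTTP/1.1:', http11); // { a: 30, b: 100, c: 100 }
console.log('HTTP/2:  ', http2);  // { a: 30, b: 100, c: 10 }
```

The same reasoning applies one layer down: under HTTP/2, a lost TCP packet plays the role of the slow response b, which is the problem QUIC's per-stream delivery removes.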
HTTPS runs HTTP over TLS. TLS provides three security properties: confidentiality (encryption), integrity (tamper detection), and authentication (server identity verification).
TLS handshake (RSA key exchange, simplified):
  1. ClientHello: The client sends its supported TLS versions, cipher suites, and a random value (Client Random).
  2. ServerHello: The server selects a cipher suite, responds with its own random value (Server Random), and sends its CA-signed digital certificate containing the server’s public key.
  3. Certificate verification: The client validates the certificate using the CA’s public key embedded in the browser or OS. It checks the certificate chain up to a trusted root CA, verifies the signature, and confirms the domain matches.
  4. Key exchange: The client generates a third random value (the pre-master secret), encrypts it with the server’s public key, and sends it. Both sides now have all three random values.
  5. Session key derivation: Both sides independently compute the same session key from the three random values using the agreed cipher. All subsequent communication uses this symmetric session key, which is far faster than asymmetric encryption.
  6. Finished: Both sides send a Finished message encrypted with the session key to confirm the handshake was not tampered with.
Why three random values? Using a single value would be vulnerable if an attacker could predict or replay it. Three independent random values from two parties make the session key unpredictable even if one value is compromised.
Modern TLS uses ECDHE key exchange instead of RSA, which provides forward secrecy: even if the server’s private key is later compromised, past session keys cannot be derived.
The distinction is semantic and behavioral, not purely about where data goes.
GET is intended for retrieving a resource. It should be safe (no side effects) and idempotent (repeating it produces the same result). Parameters go in the URL query string. GET requests can be cached, bookmarked, and logged by proxies, so sensitive data must never go in a GET URL.
POST is intended for submitting data that causes a state change: creating a resource, triggering an action. It is neither safe nor idempotent by default. The request body carries the payload, which can be any size and content type. POST requests are not cached by default.
Practical differences:
| Property                | GET                                  | Post                  |
|-------------------------|--------------------------------------|-----------------------|
| Data location           | URL query string                     | Request body          |
| Cacheable               | Yes                                  | No (by default)       |
| Idempotent              | Yes                                  | No                    |
| Data size limit         | Browser/server URL limits (~2–8 KB)  | Effectively unlimited |
| Logging                 | URL logged by proxies                | Body not logged       |
| Safe for sensitive data | No                                   | Yes (over HTTPS)      |
Note: POST over HTTP is not inherently more secure than GET — the body is still plaintext. HTTPS encrypts both the URL and the body.
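The "where the data travels" difference can be shown concretely (the endpoint is hypothetical):

```javascript
// GET vs POST: query string vs request body (endpoint is hypothetical).
const params = { user: 'ada', page: '2' };

// GET: parameters are encoded into the URL itself, visible in logs and history.
const qs = new URLSearchParams(params).toString();
const getUrl = `https://api.example.com/items?${qs}`;

// POST: the URL stays clean; the payload travels in the request body.
const postBody = JSON.stringify(params);

console.log(getUrl);   // https://api.example.com/items?user=ada&page=2
console.log(postBody); // {"user":"ada","page":"2"}
```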
Both WebSocket and SSE solve the problem of pushing data from server to client without repeated polling, but they have different characteristics.
WebSocket establishes a full-duplex, persistent TCP connection after an HTTP upgrade handshake. Either side can send messages at any time. It supports binary and text data with minimal per-message overhead.
Use WebSocket when:
  • You need bidirectional real-time communication (chat, multiplayer games, collaborative editing).
  • You are transmitting binary data (audio streams, file transfers).
  • Low latency is critical and you need fine-grained control over the connection.
SSE (Server-Sent Events) uses a standard HTTP connection with Content-Type: text/event-stream. The server pushes text events continuously; the client receives them via the EventSource API. The browser automatically reconnects if the connection drops.
Use SSE when:
  • You need server-to-client only data flow (live feeds, progress bars, AI token streaming).
  • You want simpler infrastructure: SSE is plain HTTP, works over standard HTTP/2, and passes through proxies and load balancers naturally.
  • Automatic reconnection without client logic is desirable.
| Property             | WebSocket       | SSE                       |
|----------------------|-----------------|---------------------------|
| Direction            | Bidirectional   | Server to client only     |
| Protocol             | ws:// / wss://  | HTTP                      |
| Data formats         | Text + binary   | Text only                 |
| Auto reconnect       | Manual          | Built-in                  |
| Proxy/firewall       | Can be blocked  | Generally passes through  |
| HTTP/2 multiplexing  | No              | Yes                       |
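The SSE wire format itself is plain text: each event is one or more field lines ("data: ...", optionally "id: ...") terminated by a blank line. A sketch of formatting and parsing a minimal subset of that format:

```javascript
// Minimal subset of the SSE event-stream format (sketch; the full format
// also supports "event:", "retry:", comments, and multi-line data fields).

function formatEvent(data, id) {
  let out = '';
  if (id !== undefined) out += `id: ${id}\n`;
  out += `data: ${data}\n\n`; // the blank line terminates the event
  return out;
}

function parseEvents(streamText) {
  return streamText
    .split('\n\n')
    .filter(Boolean)
    .map((block) => {
      const event = {};
      for (const line of block.split('\n')) {
        const [field, ...rest] = line.split(': ');
        event[field] = rest.join(': ');
      }
      return event;
    });
}

const stream = formatEvent('hello', 1) + formatEvent('world', 2);
console.log(parseEvents(stream));
// [ { id: '1', data: 'hello' }, { id: '2', data: 'world' } ]
```

The `id` field is what powers automatic reconnection: the browser resends the last seen id in the Last-Event-ID header so the server can resume the stream.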
CSRF (Cross-Site Request Forgery) tricks an authenticated user’s browser into sending an unintended request to a target site. Because the browser automatically attaches cookies, the server sees the request as legitimate even though the user did not initiate it.
Example attack: a user is logged into their bank. They visit a malicious page that contains <img src="https://bank.com/transfer?to=attacker&amount=1000">. The browser fetches that URL with the bank’s session cookie, executing the transfer without the user’s knowledge.
Prevention methods:
  1. CSRF tokens. The server embeds a unique, unpredictable token in each form or response. The client must include this token in state-changing requests. Because the attacker’s page cannot read the victim’s token (the same-origin policy blocks cross-origin reads), it cannot forge a valid request. The token is typically stored in a hidden form field or a custom request header.
  2. SameSite cookie attribute. Setting SameSite=Strict on session cookies tells the browser not to send the cookie on any cross-site request. SameSite=Lax is a middle ground: it blocks most cross-site POST requests but allows cookies on top-level navigation GET requests (e.g., clicking a link). Most modern frameworks set SameSite=Lax by default.
res.cookie('token', accessToken, {
  httpOnly: true,     // Prevents JS access (XSS protection)
  secure: true,       // HTTPS only
  sameSite: 'Strict', // CSRF protection
  maxAge: 1000 * 60 * 60 * 24 * 7
})
  3. Checking the Referer / Origin header. Reject requests whose Origin or Referer does not match your domain. This is a defense-in-depth measure: it is not foolproof because some browsers strip these headers, but it adds a useful layer.
The strongest posture combines SameSite cookies with CSRF tokens for sensitive operations.
JWT (JSON Web Token) is a compact, self-contained token format used for authentication and information exchange. It allows a server to be stateless: the server does not need to store session data, because the token itself carries the user’s identity and claims.
Structure: a JWT is three Base64URL-encoded segments joined by dots: header.payload.signature
  • Header: Specifies the token type (JWT) and the signing algorithm (e.g., HS256).
  • Payload: Contains claims — the user ID, roles, expiry time, and any other data you want to embed. The payload is not encrypted by default; anyone can decode it. Never put secrets or sensitive PII in an unencrypted JWT.
  • Signature: HMAC_SHA256(base64url(header) + "." + base64url(payload), secret). The server signs the token with a secret key known only to it. Verification means recomputing the signature and checking that it matches, proving the token was not tampered with.
Authentication flow:
  1. User logs in with credentials.
  2. Server validates credentials, generates a JWT signed with its secret, and returns it to the client.
  3. Client stores the JWT (typically in localStorage or an HttpOnly cookie) and sends it in the Authorization: Bearer <token> header on subsequent requests.
  4. Server verifies the signature on each request — no database lookup required.
Tradeoffs vs. session cookies:
| Property            | JWT                       | Session Cookie                 |
|---------------------|---------------------------|--------------------------------|
| Server state        | Stateless                 | Stateful (session store)       |
| Instant revocation  | Difficult (need blocklist)| Easy (delete session)          |
| Horizontal scaling  | Easy                      | Requires shared session store  |
| Token size          | Larger (~200–500 bytes)   | Small (session ID only)        |
JWT works well for distributed systems and microservices. Session cookies are better when you need to immediately revoke access (e.g., force logout).
DNS translates human-readable domain names (e.g., api.flightaware.com) into IP addresses. The resolution process involves a hierarchy of servers.
Resolution steps:
  1. Browser cache: The browser checks its local DNS cache. If the record is present and not expired, resolution stops here.
  2. OS resolver cache: If the browser cache misses, the OS checks its own cache and the /etc/hosts file.
  3. Recursive resolver: The OS sends a query to the configured DNS resolver (typically your ISP’s or a public resolver like 8.8.8.8). The recursive resolver does the heavy lifting.
  4. Root nameserver: If the resolver has no cached answer, it queries one of the 13 root nameserver clusters. The root responds with the address of the TLD nameserver (e.g., the .com nameserver).
  5. TLD nameserver: The resolver queries the TLD nameserver, which responds with the address of the domain’s authoritative nameserver.
  6. Authoritative nameserver: The resolver queries the authoritative nameserver for the domain. This server holds the actual DNS records and returns the IP address (A record for IPv4, AAAA for IPv6).
  7. Response and caching: The recursive resolver caches the result for the TTL (Time to Live) specified in the record and returns the IP to the client.
The entire process typically takes 20–120 ms for a cold lookup. Subsequent lookups for the same domain are served from cache in microseconds.
Common record types: A (IPv4), AAAA (IPv6), CNAME (alias to another name), MX (mail server), TXT (arbitrary text, used for domain verification and SPF), NS (nameserver for a domain).
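The TTL-based caching in steps 1–3 and 7 can be sketched as a small cache with an injected clock (the domain and IP below are illustrative):

```javascript
// Sketch of resolver-style TTL caching (clock passed explicitly for clarity).
class DnsCache {
  constructor() { this.records = new Map(); }

  set(name, address, ttlSeconds, now) {
    this.records.set(name, { address, expiresAt: now + ttlSeconds * 1000 });
  }

  get(name, now) {
    const rec = this.records.get(name);
    if (!rec) return null;        // cache miss: fall through to the resolver chain
    if (now >= rec.expiresAt) {   // TTL expired: must re-resolve from authoritative data
      this.records.delete(name);
      return null;
    }
    return rec.address;           // cache hit: answered without any network round trip
  }
}

const cache = new DnsCache();
cache.set('api.flightaware.com', '198.51.100.7', 300, 0); // TTL 300 s (illustrative IP)

console.log(cache.get('api.flightaware.com', 60_000));  // hit: within TTL
console.log(cache.get('api.flightaware.com', 301_000)); // null: TTL expired
```

This is also why lowering a record's TTL before a migration matters: resolvers keep serving the old address until their cached copy expires.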