Redis (Remote Dictionary Server) is an in-memory data store that you can use as a database, cache, and message broker. It stores all active data in RAM, applies changes atomically, and persists to disk on configurable schedules. This page covers the core data structures and their use cases, the reasons Redis achieves sub-millisecond latency, your persistence and clustering options, patterns for keeping your cache consistent with MySQL, and the three ways to use Redis as a message queue.

Data types

Redis exposes five primary data types. Each is implemented with a purpose-built internal encoding that makes common operations O(1) or O(log n).

String

The simplest type. A string key holds a single value up to 512 MB — arbitrary bytes, a serialized object, or a counter.
SET username "alice"
GET username
-- "alice"

SET page_views 0
INCR page_views
-- (integer) 1

SETEX session:abc123 3600 "user_data_json"  -- expires in 3600 seconds
Use cases: session tokens, counters, rate-limit buckets, distributed locks via SET key value NX EX seconds.
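The distributed-lock pattern above can be sketched in application code. The FakeRedis class below is an in-memory stand-in invented for this sketch; a real deployment would use an actual Redis client and make the check-and-delete in release atomic with a Lua script:

```python
import time
import uuid

class FakeRedis:
    """Tiny in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self.data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, nx=False, ex=None):
        entry = self.data.get(key)
        if entry and entry[1] is not None and entry[1] <= time.time():
            del self.data[key]  # lazily expire, as Redis may
            entry = None
        if nx and entry is not None:
            return False        # key already exists: NX fails
        self.data[key] = (value, time.time() + ex if ex else None)
        return True

    def get(self, key):
        entry = self.data.get(key)
        return entry[0] if entry else None

    def delete(self, key):
        self.data.pop(key, None)

def acquire_lock(r, name, ttl=10):
    token = str(uuid.uuid4())   # unique token identifies the lock owner
    if r.set(f"lock:{name}", token, nx=True, ex=ttl):
        return token
    return None

def release_lock(r, name, token):
    # Only the owner (matching token) may release. In production this
    # compare-and-delete must be atomic (a Lua script); here it is not.
    key = f"lock:{name}"
    if r.get(key) == token:
        r.delete(key)
        return True
    return False
```

The token matters: without it, a client whose lock expired could delete a lock now held by someone else.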

List

An ordered collection of strings, implemented internally as a quicklist (a doubly-linked list of compact nodes, since Redis 3.2). You push and pop from either end in O(1).
RPUSH tasks "send_email"
RPUSH tasks "resize_image"
LRANGE tasks 0 -1
-- 1) "send_email"
-- 2) "resize_image"

BLPOP tasks 0   -- blocking pop; waits until an item is available
Use cases: task queues (RPUSH + BLPOP), activity feeds, recent-items lists.

Hash

A map of string fields to string values stored under a single key. Ideal for representing objects without serializing to JSON.
HSET user:1 name "Alice" age "25" email "alice@example.com"
HGET user:1 name
-- "Alice"
HGETALL user:1
-- 1) "name"  2) "Alice"  3) "age"  4) "25"  5) "email"  6) "alice@example.com"
Use cases: user profiles, product catalogs, configuration objects.

Set

An unordered collection of unique strings. Add, remove, and check membership are all O(1). Set operations (union, intersection, difference) run in O(N).
SADD tags "go" "redis" "backend"
SMEMBERS tags
-- 1) "backend"  2) "go"  3) "redis"  (order not guaranteed)

SADD user:1:follows 42 55 78
SADD user:2:follows 42 99
SINTER user:1:follows user:2:follows
-- 1) "42"  (mutual follows)
Use cases: unique visitor tracking, tag systems, friend graphs, deduplication.

Sorted set

Like a set, but every member carries a floating-point score. Members are ordered by score. Range lookups by score or rank run in O(log N).
ZADD leaderboard 8200 "player_alice"
ZADD leaderboard 9150 "player_bob"
ZADD leaderboard 7300 "player_carol"

ZREVRANGE leaderboard 0 2 WITHSCORES
-- 1) "player_bob"   2) "9150"
-- 3) "player_alice" 4) "8200"
-- 5) "player_carol" 6) "7300"

ZRANGEBYSCORE leaderboard 8000 10000
-- 1) "player_alice"  2) "player_bob"
Use cases: leaderboards, priority queues, time-series windows (score = Unix timestamp), rate-limit sliding windows.
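The sliding-window rate limit in the last use case can be sketched without a server: the list of timestamps below stands in for a sorted set, trimming old entries (as ZREMRANGEBYSCORE would) and counting what remains:

```python
import time

class SlidingWindowLimiter:
    """Sliding-window rate limiter mirroring the sorted-set pattern:
    each request is stored with score = timestamp, entries older than
    the window are trimmed, and the remaining count is checked."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = []  # stand-in for a sorted set of timestamps

    def allow(self, now=None):
        now = time.time() if now is None else now
        cutoff = now - self.window
        # Trim old entries (ZREMRANGEBYSCORE key -inf cutoff)
        self.hits = [t for t in self.hits if t > cutoff]
        if len(self.hits) >= self.limit:
            return False
        self.hits.append(now)  # ZADD key now request_id
        return True
```

Against a real server, each step maps to one command (ZREMRANGEBYSCORE, ZCARD, ZADD), usually wrapped in a pipeline or Lua script for atomicity.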

Why Redis is fast

Redis consistently delivers sub-millisecond response times for five compounding reasons:
  1. In-memory storage. Every read and write targets RAM, which is roughly three orders of magnitude faster than SSD and five orders of magnitude faster than spinning disk. The CPU, not storage, is the limiting factor.
  2. Efficient internal data structures. Each Redis type maps to a space- and time-optimized encoding (e.g., listpack, formerly ziplist, for small hashes; skiplist + hashtable for sorted sets). The engine automatically upgrades encodings as a collection grows.
  3. Single-threaded command execution. Redis processes one command at a time in a single event loop thread. There are no context switches, no mutex contention, and no deadlocks on the data-path. Background threads handle AOF flushing, RDB snapshotting, and freeing memory, so they never stall the main thread.
  4. Non-blocking I/O multiplexing. The event loop uses epoll (Linux) or kqueue (macOS) to monitor thousands of sockets with a single system call. A single thread handles many concurrent clients without spawning per-connection threads.
  5. Efficient memory management. Redis proactively reclaims memory through a configurable eviction policy (LRU, LFU, TTL-based) and lazy deletion of expired keys, keeping allocator fragmentation low.
Redis 6.0 introduced multi-threaded network I/O for parsing requests and writing responses. Command execution remains single-threaded. You may see “Redis is multi-threaded” in recent docs — this refers only to the I/O layer.
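The multiplexing model in point 4 can be demonstrated with Python's selectors module, which wraps epoll on Linux and kqueue on macOS. This toy loop serves several sockets from one thread; it illustrates the idea, not Redis's actual event loop:

```python
import selectors
import socket

def run_event_loop(pairs, messages):
    """Serve several client sockets from a single loop, echoing each
    message back uppercased. pairs: list of (server_side, client_side)."""
    sel = selectors.DefaultSelector()
    for server_side, _ in pairs:
        server_side.setblocking(False)
        sel.register(server_side, selectors.EVENT_READ)
    # Clients send first; the loop then drains every ready socket.
    for (_, client_side), msg in zip(pairs, messages):
        client_side.sendall(msg)
    handled = 0
    while handled < len(pairs):
        # One wait call reports every readable socket, like epoll_wait.
        for key, _ in sel.select(timeout=1):
            data = key.fileobj.recv(1024)
            key.fileobj.sendall(data.upper())
            handled += 1
    sel.close()

pairs = [socket.socketpair() for _ in range(3)]
run_event_loop(pairs, [b"ping", b"hello", b"redis"])
replies = [c.recv(1024) for _, c in pairs]
for s, c in pairs:
    s.close()
    c.close()
```

One thread, one wait call, three clients: no per-connection threads and no locks, which is the shape of Redis's main loop.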

Persistence

Redis offers three persistence strategies with different durability-vs-performance trade-offs.

RDB (Redis Database snapshot)

RDB writes a point-in-time binary snapshot of all data to disk. The BGSAVE command (used by the automatic save triggers) forks a child process that serializes the snapshot while the main thread continues serving requests. Copy-on-write ensures the child sees a stable memory image even as the parent accepts new writes.
  • Pros: compact file format; fast startup (load binary directly into memory); minimal main-thread impact.
  • Cons: data written after the last snapshot is lost on crash; snapshot generation is CPU- and memory-intensive for large datasets.

AOF (Append-Only File)

AOF records every write command in text format after it executes. On restart, Redis replays the file to restore state. Three appendfsync strategies control when the OS flushes the AOF buffer to disk:
Strategy             When fsync runs        Data loss risk
always               After every command    None (slowest)
everysec (default)   Once per second        Up to 1 second
no                   OS decides             Up to OS flush interval
AOF rewrite (triggered automatically when the file grows too large) forks a child process that writes a compact representation of current state to a new file. While the child runs, the main thread keeps appending to the old file and also buffers new write commands in a rewrite buffer; when the child finishes, the buffered commands are appended to the new file and the files are swapped atomically.
  • Pros: minimal data loss; human-readable log; can recover from partial writes.
  • Cons: larger file than RDB; slower restart for very large datasets.
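The AOF mechanics described above (log after execute, replay on restart, rewrite to a compact form) can be sketched with an in-memory log. MiniAOF is a toy model invented for this sketch, not Redis's on-disk format:

```python
class MiniAOF:
    """Toy append-only file: every write command is logged after it
    executes, and replaying the log rebuilds the dataset from scratch."""
    def __init__(self):
        self.log = []  # stands in for the on-disk AOF

    def execute(self, store, command, *args):
        if command == "SET":
            key, value = args
            store[key] = value
        elif command == "DEL":
            store.pop(args[0], None)
        else:
            raise ValueError(f"unsupported command: {command}")
        self.log.append((command, args))  # append after execution

    def replay(self):
        # Restart path: replay every logged command against an empty store.
        store = {}
        for command, args in self.log:
            if command == "SET":
                store[args[0]] = args[1]
            elif command == "DEL":
                store.pop(args[0], None)
        return store

    def rewrite(self):
        # AOF rewrite: replace the log with the minimal command sequence
        # that reproduces the current state.
        state = self.replay()
        self.log = [("SET", (k, v)) for k, v in state.items()]
```

Note how rewrite shrinks a SET/SET/DEL history to a single SET, which is exactly why the AOF stops growing after a rewrite.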

Mixed persistence (Redis 4.0+)

The AOF file begins with an RDB binary block (fast bulk load) followed by AOF command records for changes since the snapshot. This combines RDB’s fast startup with AOF’s low data-loss guarantee.

Comparison

                 RDB                   AOF                           Mixed
Data loss        Since last snapshot   Up to 1 second (everysec)     Up to 1 second
Restart speed    Fast (binary load)    Slow (replay all commands)    Fast (RDB load + short AOF replay)
File size        Small                 Large (grows until rewrite)   Moderate
Available since  Always                Always                        Redis 4.0

Clustering

Master-slave replication

One primary node accepts reads and writes. One or more replica nodes receive a copy of every write command asynchronously. Replicas are read-only and serve read traffic to reduce primary load. Replication is asynchronous: the primary does not wait for replicas to acknowledge before returning to the client. This means replicas may lag behind the primary by a small window, and strong consistency is not guaranteed.

Sentinel

Redis Sentinel adds automatic failover on top of master-slave replication. You run at least three Sentinel processes that continuously monitor the primary and replica nodes. When a Sentinel detects the primary is unavailable (subjective down), it asks the other Sentinels to vote. Once a quorum of Sentinels agrees (objective down), the Sentinel leader:
  1. Selects the best replica based on replication lag, priority, and node ID.
  2. Promotes it to primary by sending SLAVEOF NO ONE (REPLICAOF NO ONE in Redis 5+).
  3. Reconfigures remaining replicas to follow the new primary.
  4. Notifies clients of the new primary address via the pub/sub mechanism.
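Step 1, selecting the best replica, can be sketched as a sort over (priority, replication offset, node ID). Field names here are invented for illustration; Sentinel prefers the lowest replica-priority setting, then the most up-to-date replica, then the smallest run ID:

```python
def select_new_primary(replicas):
    """Pick the replica to promote: lowest priority number first, then
    the highest replication offset (least lag), then the smallest node
    ID as a deterministic tiebreaker. Unreachable replicas are skipped."""
    candidates = [r for r in replicas if not r.get("down", False)]
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda r: (r["priority"], -r["offset"], r["node_id"]),
    )

replicas = [
    {"node_id": "c3", "priority": 100, "offset": 5000},
    {"node_id": "a1", "priority": 100, "offset": 8000},
    {"node_id": "b2", "priority": 100, "offset": 8000, "down": True},
]
best = select_new_primary(replicas)  # a1: reachable, least replication lag
```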
Without Sentinel, plain replication gives you a primary plus one or more replicas and manual failover only. It is suitable when you need read scaling and can tolerate manual recovery from a primary failure.
Primary ──write──> Replica 1
   └────write──> Replica 2

Cache consistency

When you use Redis as a cache in front of MySQL, every write to MySQL must be reflected in Redis quickly enough that readers do not see stale data for too long. The challenge is that writing to two systems is never atomic.

Cache-aside (lazy loading)

The application manages the cache directly. On a cache miss, the application fetches from MySQL and populates Redis. On a write, the application updates MySQL and then invalidates (deletes) the cache entry rather than updating it.
Read path:
  1. Check Redis → miss
  2. Read from MySQL
  3. Write result to Redis with TTL
  4. Return to caller

Write path:
  1. Update MySQL
  2. DELETE from Redis
Why delete instead of update? Deleting is cheaper and less error-prone than computing and writing the new cached value, especially when the cached value aggregates data from multiple tables. The next read will repopulate the cache lazily.
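The read and write paths above can be sketched end to end. Two plain dicts stand in for Redis and MySQL, so this illustrates the control flow rather than production code:

```python
import time

class CacheAside:
    """Cache-aside over two plain dicts standing in for Redis and MySQL."""
    def __init__(self, ttl=60):
        self.cache = {}  # key -> (value, expires_at); stand-in for Redis
        self.db = {}     # stand-in for MySQL
        self.ttl = ttl

    def read(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                          # 1. cache hit
        value = self.db.get(key)                     # 2. miss: read MySQL
        if value is not None:
            self.cache[key] = (value, time.time() + self.ttl)  # 3. populate with TTL
        return value                                 # 4. return to caller

    def write(self, key, value):
        self.db[key] = value                         # 1. update MySQL first
        self.cache.pop(key, None)                    # 2. then delete the cache entry
```

The write path deliberately deletes rather than updates the cache, matching the invalidation argument above.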

Write order matters

Strategy                               Problem
Delete cache first, then update MySQL  A concurrent read between the two operations populates the cache with the old MySQL value
Update MySQL first, then delete cache  Much lower risk: cache writes are faster than MySQL writes, so the race window is tiny
“Update MySQL first, then delete cache” is the recommended pattern. A TTL on cached keys acts as a safety net for the rare cases where the delete fails.

Handling delete failures

If the cache delete fails after a successful MySQL write, the cache holds stale data until the TTL expires. Two async remediation patterns:
  • Retry via message queue: enqueue the cache key for deletion; a consumer retries until the delete succeeds.
  • Subscribe to MySQL binlog (CDC): tools like Canal read the MySQL binlog and emit change events. Your application deletes the cache key in response to the binlog event, decoupling cache invalidation from the write path entirely.
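The retry-via-queue pattern can be sketched with an in-process queue. FlakyCache is a stand-in invented for this sketch whose first delete raises, simulating a dropped Redis connection:

```python
import queue

class FlakyCache:
    """Dict-backed cache whose first delete attempts fail (illustration)."""
    def __init__(self, fail_times=1):
        self.data = {}
        self.fail_times = fail_times

    def delete(self, key):
        if self.fail_times > 0:
            self.fail_times -= 1
            raise ConnectionError("cache unavailable")
        self.data.pop(key, None)

def delete_with_retry(cache, key, retry_queue):
    """Write path: try the delete; on failure, enqueue the key so an
    async consumer retries until it succeeds."""
    try:
        cache.delete(key)
    except ConnectionError:
        retry_queue.put(key)      # hand off to the retry consumer

def retry_consumer(cache, retry_queue):
    """Drain the retry queue, re-deleting each key; requeue on failure."""
    while not retry_queue.empty():
        key = retry_queue.get()
        try:
            cache.delete(key)
        except ConnectionError:
            retry_queue.put(key)  # still failing: requeue
            break                 # back off instead of spinning (sketch)
```

In production the queue would be a real broker (or a Redis list), so the retry survives an application restart.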

Delayed double-delete

For “delete first, then update” workloads, delayed double-delete reduces the stale window:
  1. Delete the cache key.
  2. Update MySQL.
  3. Sleep briefly (long enough for any in-flight cache-aside read to complete).
  4. Delete the cache key again.
This pattern only reduces — it does not eliminate — the inconsistency window. Prefer “update MySQL first, then delete cache” when possible.
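The four steps can be sketched as one function, again with plain dicts standing in for Redis and MySQL; the delay value is an illustrative placeholder that should exceed a typical read-plus-repopulate round trip:

```python
import time

def delayed_double_delete(cache, db, key, value, delay=0.05):
    """Delete cache, update DB, wait out in-flight reads, delete again.
    cache and db are plain dicts standing in for Redis and MySQL."""
    cache.pop(key, None)  # 1. first delete
    db[key] = value       # 2. update the database
    time.sleep(delay)     # 3. let in-flight cache-aside reads finish
    cache.pop(key, None)  # 4. second delete removes any stale repopulation
```

In practice the sleep-then-delete is usually offloaded to a delayed task so the write path does not block.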
Strong consistency between MySQL and Redis requires distributed locking, which defeats the purpose of caching. Design for eventual consistency with a short TTL as a backstop.

Redis as a message queue

Redis supports three queue patterns, each with different reliability and feature trade-offs.

List queue

Use RPUSH to enqueue and BLPOP to dequeue (blocking pop waits until a message is available). Data is globally ordered within the list.
-- Producer
RPUSH jobs:email "{\"to\":\"alice@example.com\",\"subject\":\"Welcome\"}"

-- Consumer (blocks for up to 5 seconds)
BLPOP jobs:email 5
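The consumer side usually runs a loop around the blocking pop. The sketch below uses queue.Queue as an in-process stand-in for the Redis list; the timeout-and-retry shape mirrors BLPOP with a timeout:

```python
import json
import queue

def worker(jobs, handled, max_idle_polls=2):
    """Drain jobs until the queue stays empty for max_idle_polls polls.
    Mirrors a BLPOP loop: block briefly, time out, try again."""
    idle = 0
    while idle < max_idle_polls:
        try:
            raw = jobs.get(timeout=0.05)  # BLPOP jobs:email 5 analogue
        except queue.Empty:
            idle += 1
            continue
        idle = 0
        job = json.loads(raw)
        handled.append(job["to"])         # "process" the message

jobs = queue.Queue()
jobs.put('{"to": "alice@example.com", "subject": "Welcome"}')
jobs.put('{"to": "bob@example.com", "subject": "Welcome"}')
handled = []
worker(jobs, handled)
```

Note the reliability gap this sketch shares with the real List queue: a message popped but not yet processed is lost if the worker crashes, which is what Streams fix.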

Pub/Sub

Producers publish messages to a channel; all current subscribers receive a copy. Messages are not persisted — a subscriber that is offline at publish time misses the message.
-- Subscriber
SUBSCRIBE notifications:orders

-- Publisher
PUBLISH notifications:orders "order_id:9912"

Stream

Redis Streams (added in Redis 5.0) provide a durable, consumer-group-aware append log with message acknowledgement semantics.
-- Producer: append to stream; * auto-generates ID
XADD orders * order_id 9912 status "placed"

-- Consumer group
XGROUP CREATE orders order-processors $ MKSTREAM
XREADGROUP GROUP order-processors worker1 COUNT 10 BLOCK 2000 STREAMS orders >

-- Acknowledge after processing
XACK orders order-processors <message-id>
Pending messages (delivered but not acknowledged) are tracked in a per-consumer pending list. If a consumer crashes, another consumer in the group can claim its pending messages via XCLAIM.
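The pending-list bookkeeping can be sketched as a small model. MiniConsumerGroup is a toy invented for this sketch; comments name the Stream command each method mirrors:

```python
class MiniConsumerGroup:
    """Toy model of a Stream consumer group's pending-entries list:
    delivered-but-unacked messages are tracked per consumer, and a
    crashed consumer's entries can be claimed by another."""
    def __init__(self):
        self.next_id = 1
        self.entries = {}        # id -> payload (the stream itself)
        self.pending = {}        # id -> consumer name (the PEL)
        self.last_delivered = 0  # group's read cursor

    def add(self, payload):      # XADD stream * ...
        mid = self.next_id
        self.next_id += 1
        self.entries[mid] = payload
        return mid

    def read(self, consumer, count=10):  # XREADGROUP ... STREAMS s >
        delivered = []
        for mid in sorted(self.entries):
            if mid > self.last_delivered and len(delivered) < count:
                self.pending[mid] = consumer  # enters the pending list
                self.last_delivered = mid
                delivered.append((mid, self.entries[mid]))
        return delivered

    def ack(self, mid):          # XACK: processing done, drop from PEL
        self.pending.pop(mid, None)

    def claim(self, mid, consumer):  # XCLAIM: take over a stuck entry
        if mid in self.pending:
            self.pending[mid] = consumer
```

A message leaves the pending list only on XACK, so a crash between delivery and acknowledgement leaves it claimable, which is the durability guarantee Lists and Pub/Sub lack.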

Comparison

                         List                        Pub/Sub                        Stream
Persistence              Yes (in-memory + AOF/RDB)   No                             Yes
Message acknowledgement  No                          No                             Yes (XACK)
Consumer groups          No                          No (all subscribers receive)   Yes
Replay old messages      No                          No                             Yes (seek by ID)
Ordering                 Global FIFO                 Per-channel FIFO               Per-stream, ordered by ID
Complexity               Low                         Low                            Medium
Use List for simple background jobs where occasional message loss is acceptable. Use Stream when you need durable delivery, acknowledgement, or multiple independent consumer groups. For high-throughput production workloads, prefer a dedicated message broker like Kafka.