Caching Patterns: Write-Through, Cache-Aside, Write-Behind, and Read-Through
"There are only two hard things in computer science: cache invalidation and naming things." — Phil Karlton
Caching is often the single biggest performance improvement you can make. But the wrong pattern leads to stale data, inconsistencies, and bugs that are nearly impossible to debug.
Why Cache?#
Without cache: App → Database (50ms) → Response
With cache: App → Redis (1ms) → Response (cache hit)
App → Redis (miss) → Database (50ms) → Write to Redis → Response
50x faster on a cache hit. Well-tuned read-heavy apps commonly reach 90%+ hit rates.
The Four Patterns#
1. Cache-Aside (Lazy Loading)#
The application manages the cache explicitly.
Read:
1. Check cache → hit? Return cached data
2. Cache miss → read from DB → write to cache → return
Write:
1. Write to DB
2. Invalidate cache (delete key)
async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached); // hit: skip the DB entirely
  const user = await db.users.findUnique({ where: { id } });
  if (user) {
    // Only cache real records — caching "null" here would pin a missing
    // user as absent for the full TTL, even after they're created
    await redis.set(`user:${id}`, JSON.stringify(user), "EX", 3600);
  }
  return user;
}

async function updateUser(id: string, data: Partial<User>) {
  await db.users.update({ where: { id }, data });
  await redis.del(`user:${id}`); // Invalidate; next read repopulates
}
Pros: Simple, only caches what's actually read, app has full control
Cons: Cache miss = slow first request, possible stale data between write and invalidation
Best for: Most applications. The default choice.
2. Read-Through#
Cache sits between app and DB. Cache handles fetching on miss.
Read:
1. App asks cache for data
2. Cache miss → cache fetches from DB → stores → returns to app
(App never talks to DB directly for reads)
Pros: App code is simpler (no cache logic), consistent read path
Cons: Cache needs DB access (coupling), cold start fills slowly
Best for: CDN caching, ORM-level caching
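The read path above can be sketched as a small wrapper that owns its own loader. This is a minimal illustration, not a specific library's API: the in-memory `Map` stands in for the cache store, and the `loader` callback stands in for the database.

```typescript
// Read-through sketch: the cache owns the loader, so callers never
// talk to the DB directly. Map and loader are hypothetical stand-ins.
type Loader<T> = (key: string) => Promise<T>;

class ReadThroughCache<T> {
  private store = new Map<string, T>();
  constructor(private loader: Loader<T>) {}

  async get(key: string): Promise<T> {
    const hit = this.store.get(key);
    if (hit !== undefined) return hit;      // cache hit
    const value = await this.loader(key);   // miss: cache fetches from DB
    this.store.set(key, value);             // ...stores the result...
    return value;                           // ...and returns it to the app
  }
}

// Usage: the app only ever calls cache.get()
const userCache = new ReadThroughCache(async (id) => `user-record-${id}`);
```

The app-facing surface shrinks to a single `get`, which is exactly the "consistent read path" trade-off: simpler call sites, but the cache layer is now coupled to the data source.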
3. Write-Through#
Every write goes through the cache to the DB.
Write:
1. App writes to cache
2. Cache synchronously writes to DB
3. Write is acknowledged only after both succeed (roll back the cache if the DB write fails)
Pros: Cache is always consistent with DB, no stale data
Cons: Write latency increases (cache + DB), unused data gets cached
Best for: Data that's read immediately after writing (e.g., user profiles)
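The write flow above might look like this sketch, where `Map`s stand in for Redis and the database and `dbWrite` is a hypothetical stub for the real store call:

```typescript
// Write-through sketch: update the cache, then synchronously the backing
// store; roll the cache back if the store write fails so the two never
// diverge. Maps and dbWrite are hypothetical stand-ins.
const cacheStore = new Map<string, string>();
const dbStore = new Map<string, string>();

async function dbWrite(key: string, value: string): Promise<void> {
  dbStore.set(key, value); // stand-in for the real database write
}

async function writeThrough(key: string, value: string): Promise<void> {
  const previous = cacheStore.get(key);
  cacheStore.set(key, value);        // 1. write to cache
  try {
    await dbWrite(key, value);       // 2. synchronous write to DB
  } catch (err) {
    // Roll back so readers never see a value the DB rejected
    if (previous === undefined) cacheStore.delete(key);
    else cacheStore.set(key, previous);
    throw err;
  }
}
```

The caller pays both latencies on every write, which is the cost of never serving a stale read.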
4. Write-Behind (Write-Back)#
Write to cache immediately, async flush to DB.
Write:
1. App writes to cache → returns immediately (fast!)
2. Cache asynchronously batches writes to DB (every N seconds)
Pros: Fastest writes (only cache latency), batches reduce DB load
Cons: Data loss risk if cache crashes before flush, eventual consistency
Best for: High-write throughput (analytics counters, view counts, gaming leaderboards)
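A minimal sketch of the buffer-and-flush cycle, with in-memory `Map`s standing in for the cache and the DB and a manually invoked `flush()` in place of the production timer (all names here are illustrative assumptions):

```typescript
// Write-behind sketch: writes land in the cache plus a dirty-key set;
// a periodic flush batches them to the DB. Maps are hypothetical
// stand-ins for Redis and the database.
const wbCache = new Map<string, number>();
const wbDb = new Map<string, number>();
const dirtyKeys = new Set<string>();

function writeBehind(key: string, value: number): void {
  wbCache.set(key, value); // returns immediately: only cache latency
  dirtyKeys.add(key);      // remember what still needs flushing
}

async function flush(): Promise<void> {
  const batch = [...dirtyKeys];
  dirtyKeys.clear();
  // In practice this would be one batched DB round-trip
  for (const key of batch) wbDb.set(key, wbCache.get(key)!);
}

// In production: setInterval(flush, 5000). Anything still in dirtyKeys
// when the cache process crashes is lost — the pattern's core risk.
```

Note that repeated writes to the same key between flushes collapse into one DB write, which is where the batching win comes from.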
Comparison#
| Pattern | Read Speed | Write Speed | Consistency | Complexity |
|---|---|---|---|---|
| Cache-Aside | Fast (hit) / Slow (miss) | Medium | Eventual | Low |
| Read-Through | Fast (hit) / Slow (miss) | Medium | Eventual | Medium |
| Write-Through | Fast | Slow (both paths) | Strong | Medium |
| Write-Behind | Fast | Fastest | Eventual (risky) | High |
Cache Invalidation Strategies#
TTL (Time-To-Live)#
redis.set("user:123", data, "EX", 3600); // Expires in 1 hour
Simple but imprecise. Data can be stale for up to TTL duration.
Event-Based Invalidation#
// On user update:
await db.users.update({ where: { id }, data });
await redis.del(`user:${id}`);
// Or publish event:
await kafka.publish("user.updated", { id });
// Cache service listens and invalidates
Precise but complex. Must handle every write path.
Version-Based#
// Include version in cache key
const version = await redis.incr("user:123:version");
const key = `user:123:v${version}`;
Old versions expire naturally. No explicit invalidation needed.
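Filling in the full read/write flow for the snippet above, here is a sketch with in-memory `Map`s standing in for Redis (the key layout follows the snippet; the helper names are hypothetical):

```typescript
// Versioned-key sketch: writes bump a version counter, reads build the
// key from the current version, old entries simply age out via TTL.
// Maps are hypothetical stand-ins for Redis.
const vStore = new Map<string, string>();
const versions = new Map<string, number>();

function currentKey(id: string): string {
  const v = versions.get(id) ?? 1;
  return `user:${id}:v${v}`;
}

function readCached(id: string): string | undefined {
  return vStore.get(currentKey(id)); // old versions are never read again
}

function writeUser(id: string, data: string): void {
  const v = (versions.get(id) ?? 1) + 1; // bump version instead of deleting
  versions.set(id, v);
  vStore.set(`user:${id}:v${v}`, data);  // with a TTL set, v-1 expires naturally
}
```

The trade-off: no delete races, but every read costs an extra version lookup, and stale versions occupy memory until their TTL fires.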
Hybrid (TTL + Event)#
// Set TTL as safety net
redis.set("user:123", data, "EX", 3600);
// Also invalidate on writes
redis.del("user:123");
Best of both. Events handle the common case, TTL catches edge cases.
Cache Stampede Prevention#
When a popular key expires, thousands of requests hit the DB simultaneously:
Cache key expires → 1000 concurrent requests → all miss → all query DB → DB overwhelmed
Solutions:
Mutex Lock#
async function getWithLock(key: string) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  // Only one caller wins the lock; it auto-expires after 5s as a safety net
  const lock = await redis.set(`lock:${key}`, "1", "EX", 5, "NX");
  if (lock) {
    try {
      const data = await db.query(...);
      await redis.set(key, JSON.stringify(data), "EX", 3600);
      return data;
    } finally {
      await redis.del(`lock:${key}`); // release even if the query throws
    }
  }
  // Lock held elsewhere: wait for the holder to fill the cache, then retry
  await sleep(50);
  return getWithLock(key);
}
Stale-While-Revalidate#
Serve stale data while refreshing in the background:
TTL: 1 hour (hard expire)
Soft TTL: 50 minutes (trigger background refresh)
Request at 55 minutes → return stale data → background: refresh cache
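One way to sketch the soft-TTL logic (the in-memory store, the timings, and the refresh-on-read policy here are illustrative assumptions, not a specific library's behavior):

```typescript
// Stale-while-revalidate sketch: each entry carries a soft expiry; past
// it, return the stale value immediately and refresh in the background.
// The hard TTL would be enforced by the store itself (e.g. Redis EXPIRE).
interface Entry<T> { value: T; softExpireAt: number }

const swrStore = new Map<string, Entry<string>>();
const SOFT_TTL_MS = 50 * 60 * 1000; // refresh after 50 min

async function getSWR(key: string, load: () => Promise<string>): Promise<string> {
  const entry = swrStore.get(key);
  if (!entry) {
    const value = await load(); // cold miss: caller pays the DB cost once
    swrStore.set(key, { value, softExpireAt: Date.now() + SOFT_TTL_MS });
    return value;
  }
  if (Date.now() > entry.softExpireAt) {
    // Serve stale now; refresh fire-and-forget. In production you would
    // also dedupe concurrent refreshes of the same key.
    load()
      .then((value) => swrStore.set(key, { value, softExpireAt: Date.now() + SOFT_TTL_MS }))
      .catch(() => { /* keep serving the stale value if refresh fails */ });
  }
  return entry.value;
}
```

Requests never block on a refresh, so the DB sees at most a trickle of background loads instead of a stampede.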
Redis vs Memcached#
| Feature | Redis | Memcached |
|---|---|---|
| Data structures | Strings, hashes, lists, sets, sorted sets | Strings only |
| Persistence | RDB + AOF | None |
| Replication | Built-in | None |
| Pub/Sub | Yes | No |
| Lua scripting | Yes | No |
| Clustering | Redis Cluster | Client-side sharding |
| Max value size | 512MB | 1MB |
| Memory efficiency | Lower (overhead per key) | Higher |
Redis wins for almost everything. Memcached only when you need pure key-value with maximum memory efficiency.
Architecture Example#
E-Commerce Caching#
Client → CDN (static assets, 1 day TTL)
→ API → Redis Cache
→ Product catalog (cache-aside, 1h TTL)
→ User sessions (write-through)
→ Cart (write-behind, flush every 5s)
→ Search results (cache-aside, 5min TTL)
→ PostgreSQL (source of truth)
→ Elasticsearch (search, cached at Redis layer)
Best Practices#
- Cache-aside is the default — start here unless you have a specific reason not to
- Always set TTL — even with event-based invalidation (safety net)
- Cache serialized data — JSON or protobuf, not database objects
- Prefix cache keys — `user:123`, `product:456:reviews`, `tenant:acme:config`
- Monitor hit rate — below 80% means your caching strategy needs work
- Handle cache failures gracefully — if Redis is down, fall back to DB (slower, not broken)
- Don't cache everything — cache hot data, let cold data hit the DB
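The "handle cache failures gracefully" practice can be captured in a small wrapper; `readCache` and `loadFromDb` below are hypothetical callbacks, not a specific client's API:

```typescript
// Graceful-degradation sketch: if the cache read throws (Redis down,
// timeout, bad payload), fall through to the loader instead of failing
// the request — slower, not broken.
async function getWithFallback<T>(
  readCache: () => Promise<T | null>,
  loadFromDb: () => Promise<T>,
): Promise<T> {
  try {
    const hit = await readCache();
    if (hit !== null) return hit;
  } catch {
    // Cache unavailable: log/metric here, then continue to the DB
  }
  return loadFromDb();
}
```

Pair this with a short timeout on the cache call so a hung Redis connection degrades to DB latency rather than hanging the request.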
Summary#
| Need | Pattern |
|---|---|
| General purpose | Cache-aside + TTL |
| Always-fresh reads | Write-through |
| Fastest writes | Write-behind |
| CDN/proxy caching | Read-through |
| Invalidation | Hybrid (events + TTL) |
| Stampede prevention | Mutex lock or stale-while-revalidate |