API Backward Compatibility: Ship Changes Without Breaking Consumers
Every API change is a potential outage for your consumers. Backward compatibility means existing clients continue working after you deploy — no coordinated upgrades, no downtime, no broken integrations.
The Cost of Breaking Changes#
You: rename "username" to "user_name" in response
Result:
→ Mobile app v2.1 crashes (expects "username")
→ Partner integration fails (parses "username")
→ CLI tool breaks (maps "username" to local field)
→ 3 teams file urgent bugs at 2 AM
One renamed field. Four broken consumers. This is why backward compatibility matters.
Safe Changes (Non-Breaking)#
Additive Changes#
Adding new fields, endpoints, or enum values is safe as long as the rules below are followed:
Before: { "id": 1, "name": "Alice" }
After: { "id": 1, "name": "Alice", "email": "alice@example.com" }
Old clients → ignore "email" (unknown fields are skipped)
New clients → use "email"
Rules for additive changes:
- New fields must be optional (never required)
- New endpoints do not affect existing routes
- New enum values are safe if consumers handle unknown values gracefully
- New query parameters with default values preserve old behavior
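The additive case above can be sketched as a hypothetical old-client parser: because it reads only the fields it knows about, a new field added by the server is simply ignored.

```python
import json

# Hypothetical old-client parser: it extracts only the fields it knows
# about, so any new fields added by the server are silently ignored.
def parse_user(payload: str) -> dict:
    data = json.loads(payload)
    return {"id": data["id"], "name": data["name"]}

old_response = '{"id": 1, "name": "Alice"}'
new_response = '{"id": 1, "name": "Alice", "email": "alice@example.com"}'

# Both responses parse identically for the old client.
assert parse_user(old_response) == parse_user(new_response)
```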
Default Values#
When introducing a field the server now relies on, assign a default so old requests still work:
POST /orders (old client — no "priority" field)
Server: priority missing → default to "normal"
POST /orders (new client)
{ "priority": "urgent" }
Server: priority = "urgent"
Old clients never send the field. New clients opt in. No one breaks.
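A server-side handler for this pattern is a one-liner; the sketch below assumes a hypothetical `create_order` endpoint that defaults a missing "priority" to "normal".

```python
# Hypothetical order handler: a missing "priority" defaults to "normal",
# so requests from old clients that never send the field keep working.
def create_order(request_body: dict) -> dict:
    priority = request_body.get("priority", "normal")
    return {"status": "created", "priority": priority}

assert create_order({"item": "book"})["priority"] == "normal"    # old client
assert create_order({"priority": "urgent"})["priority"] == "urgent"  # new client
```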
Dangerous Changes (Breaking)#
| Change | Why It Breaks |
|---|---|
| Remove a field | Clients parsing that field crash |
| Rename a field | Same as removal for old clients |
| Change field type | String to int breaks deserialization |
| Change URL path | Bookmarks and hardcoded URLs fail |
| Make optional field required | Old requests rejected |
| Change error format | Client error handling breaks |
| Remove enum value | Clients sending that value get 400s |
Postel's Law (Robustness Principle)#
"Be conservative in what you send, be liberal in what you accept."
For API producers:
→ Send only documented fields
→ Never change the type of existing fields
→ Always include fields you promised
For API consumers:
→ Ignore unknown fields (do not fail on extras)
→ Handle missing optional fields with defaults
→ Accept both old and new enum values gracefully
When both sides follow Postel's law, accidental breakage drops dramatically.
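A tolerant reader puts the consumer-side rules into practice. This sketch (field names and statuses are illustrative) ignores extras, defaults missing optionals, and maps unknown enum values to a safe fallback instead of crashing.

```python
KNOWN_STATUSES = {"active", "suspended"}

# Tolerant reader: ignore unknown fields, default missing optional fields,
# and fall back gracefully on enum values this client does not recognize.
def read_user(data: dict) -> dict:
    return {
        "id": data["id"],                  # required field
        "name": data.get("name", ""),      # optional field with default
        "status": data["status"] if data.get("status") in KNOWN_STATUSES
                  else "unknown",          # unknown enum value → fallback
    }

# Old payloads, new payloads with extra fields, and new enum values all parse.
assert read_user({"id": 1, "status": "active"})["status"] == "active"
assert read_user({"id": 1, "status": "archived", "beta": True})["status"] == "unknown"
```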
Field Deprecation Strategy#
Never remove fields abruptly. Use a phased approach:
Phase 1 — Deprecate (Month 1)
→ Add "deprecated" marker in docs and response headers
→ Log which consumers still use the field
→ Add replacement field alongside old one
Phase 2 — Warn (Month 2-3)
→ Return Sunset header: Sunset: Mon, 01 Jun 2026 00:00:00 GMT
→ Email consumers who still call deprecated field
→ Dashboard shows deprecation usage metrics
Phase 3 — Remove (Month 4+)
→ Remove field only when usage hits zero
→ Or provide migration path and hard deadline
Response headers for deprecation:
Deprecation: true
Sunset: Mon, 01 Jun 2026 00:00:00 GMT
Link: <https://api.example.com/docs/migration>; rel="deprecation"
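These headers can be generated rather than hardcoded; the helper below is a framework-agnostic sketch (the function name and docs URL are illustrative) that formats the Sunset value as a proper HTTP-date.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# Sketch: build the deprecation headers for a response. format_datetime with
# usegmt=True emits the RFC-compliant HTTP-date format the Sunset header needs.
def deprecation_headers(sunset: datetime, migration_url: str) -> dict:
    return {
        "Deprecation": "true",
        "Sunset": format_datetime(sunset, usegmt=True),
        "Link": f'<{migration_url}>; rel="deprecation"',
    }

headers = deprecation_headers(
    datetime(2026, 6, 1, tzinfo=timezone.utc),
    "https://api.example.com/docs/migration",
)
assert headers["Sunset"] == "Mon, 01 Jun 2026 00:00:00 GMT"
```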
Schema Evolution#
JSON Schema Versioning#
Version your schemas to track compatibility:
v1: { "id": int, "name": string }
v2: { "id": int, "name": string, "email": string } ← additive, compatible
v3: { "id": int, "full_name": string, "email": string } ← BREAKING (renamed)
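The v1→v2→v3 progression above can be checked mechanically. This sketch models each schema as a field-to-type mapping and flags any removed, renamed, or retyped field as breaking.

```python
# Backward compatibility check: a new schema is compatible only if every
# field of the old schema survives with the same type. A renamed field
# looks like a removal to old readers, so it fails the check.
def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    return all(
        field in new_schema and new_schema[field] == ftype
        for field, ftype in old_schema.items()
    )

v1 = {"id": "int", "name": "string"}
v2 = {"id": "int", "name": "string", "email": "string"}
v3 = {"id": "int", "full_name": "string", "email": "string"}

assert is_backward_compatible(v1, v2)      # additive → compatible
assert not is_backward_compatible(v2, v3)  # "name" renamed → breaking
```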
Protobuf / gRPC Evolution#
Protocol Buffers have built-in evolution rules:
// Safe: add new field with new number
message User {
  int32 id = 1;
  string name = 2;
  string email = 3;  // added in v2
}
// UNSAFE: reuse field number 2 for different type
// UNSAFE: change field type from string to int
// UNSAFE: rename field (wire format uses numbers, but generated code breaks)
Protobuf golden rule: Never reuse or change field numbers.
Avro Schema Evolution#
Avro supports forward and backward compatibility natively:
Backward compatible: reader uses NEW schema, writer used OLD
→ New fields must have defaults
Forward compatible: reader uses OLD schema, writer used NEW
→ Removed fields must have had defaults
Full compatible: both directions work
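The Avro backward-compatibility rule reduces to a simple structural check: a reader on the new schema can decode old records only if every field added since the old schema carries a default. The sketch below models fields as plain dicts rather than real Avro schemas.

```python
# Sketch of Avro's backward-compatibility rule: fields added in the new
# schema must have defaults, or old records become unreadable to new readers.
def backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    added = set(new_fields) - set(old_fields)
    return all("default" in new_fields[name] for name in added)

old = {"id": {"type": "int"}, "name": {"type": "string"}}
new_ok = {**old, "email": {"type": "string", "default": ""}}
new_bad = {**old, "email": {"type": "string"}}  # no default → incompatible

assert backward_compatible(old, new_ok)
assert not backward_compatible(old, new_bad)
```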
Consumer-Driven Contracts#
Instead of guessing what consumers need, let them define their expectations:
Consumer A contract:
GET /users/123
Expects: { "id": number, "name": string }
Consumer B contract:
GET /users/123
Expects: { "id": number, "name": string, "email": string }
Provider runs ALL consumer contracts in CI:
→ Consumer A contract passes ✓
→ Consumer B contract passes ✓
→ Safe to deploy
Tools: Pact, Spring Cloud Contract, Specmatic
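The provider-side verification step can be illustrated without a real Pact setup. In this minimal sketch, each consumer declares the fields (and Python types) it depends on, and the provider asserts every contract against a sample response in CI.

```python
# Minimal consumer-driven contract check (not a real Pact setup): each
# consumer lists the fields and types it relies on; the provider verifies
# all contracts against a representative response before deploying.
CONTRACTS = {
    "consumer_a": {"id": int, "name": str},
    "consumer_b": {"id": int, "name": str, "email": str},
}

def verify(response: dict, expected: dict) -> bool:
    return all(isinstance(response.get(f), t) for f, t in expected.items())

response = {"id": 123, "name": "Alice", "email": "alice@example.com"}
assert all(verify(response, c) for c in CONTRACTS.values())  # safe to deploy
```

Dropping "email" from the response would fail only consumer B's contract, pinpointing exactly which consumer a change breaks.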
Pact Workflow#
Consumer → writes Pact test → generates contract JSON
↓
Pact Broker (stores contracts)
↓
Provider CI → fetches contracts → verifies against real API
↓
All pass → safe to deploy
Any fail → block deployment
Breaking Change Detection in CI#
Automate compatibility checks so breaking changes never reach production:
OpenAPI Diff#
CI Pipeline:
1. Checkout current OpenAPI spec (main branch)
2. Checkout proposed OpenAPI spec (PR branch)
3. Run: openapi-diff old-spec.yaml new-spec.yaml
4. If breaking changes detected → fail PR
5. If only additive changes → pass
Tools: openapi-diff, optic, Akita
Protobuf Compatibility Checks#
CI Pipeline:
1. buf breaking --against .git#branch=main
2. Checks: field removal, type changes, number reuse
3. Fail on any wire-incompatible change
Schema Registry Checks#
For event-driven APIs (Kafka, etc.):
Schema Registry enforces compatibility mode:
BACKWARD → new schema can read old data
FORWARD → old schema can read new data
FULL → both directions
NONE → no checks (dangerous)
Producer publishes new schema → Registry validates → reject if incompatible
Summary#
- Additive only — add fields, never remove or rename
- Defaults everywhere — new fields must have sensible defaults
- Postel's law — be strict in output, lenient in input
- Deprecate gradually — Sunset headers, usage tracking, phased removal
- Consumer-driven contracts — let consumers define their expectations
- CI enforcement — openapi-diff, buf breaking, schema registry
- Version schemas — track compatibility across releases