The Twelve-Factor App: A Modern Guide to Cloud-Native Application Design
Heroku's twelve-factor methodology was published in 2011. Over a decade later, it remains the foundation of cloud-native application design — container platforms, serverless runtimes, and PaaS offerings all embody these principles.
Why Twelve-Factor Still Matters#
The twelve factors solve recurring deployment problems:
- Works on my machine but not in production
- Configuration secrets leaked into version control
- Scaling requires architectural changes
- Deployments cause downtime
- Environments drift apart over time
The 12 Factors#
I. Codebase — One Codebase, Many Deploys#
One repository per application. Multiple environments (staging, production) are deploys of the same codebase, not separate repos.
┌─────────────┐
│ Git Repo │
│ (single) │
└──────┬──────┘
│
┌────┼────┐
▼ ▼ ▼
dev stg prod ← same code, different deploys
Modern practice: Monorepos are fine — each app within the monorepo is its own twelve-factor app with its own deploy pipeline.
II. Dependencies — Explicitly Declare and Isolate#
Never rely on system-wide packages. Declare every dependency explicitly.
// package.json — explicit declaration
{
  "dependencies": {
    "express": "^4.18.0",
    "pg": "^8.11.0"
  }
}
# Dockerfile — isolated runtime
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
Modern practice: Lock files (package-lock.json, poetry.lock, go.sum) pin exact versions. Container images provide full isolation.
III. Config — Store Config in the Environment#
Configuration that varies between deploys (database URLs, API keys, feature flags) belongs in environment variables, not in code.
// WRONG — config in code
const dbUrl = "postgres://prod-server:5432/mydb";
// RIGHT — config from environment
const dbUrl = process.env.DATABASE_URL;
# Kubernetes ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db:5432/mydb"
  CACHE_TTL: "300"
Litmus test: Could you open-source the codebase right now without exposing credentials? If yes, config is properly externalized.
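The litmus test can be enforced at startup with a fail-fast config loader. A minimal sketch, assuming nothing beyond Node itself (`requireEnv` and the fallback values are illustrative, not library features):

```javascript
// Fail fast: crash at boot if required config is missing, rather than
// failing later on the first database call.
function requireEnv(name, fallback) {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Fallbacks keep local dev frictionless; production sets the real values.
const config = {
  databaseUrl: requireEnv("DATABASE_URL", "postgres://localhost:5432/mydb"),
  cacheTtl: Number(requireEnv("CACHE_TTL", "300")),
};
```

Loading all config in one place also gives you a single file to audit before open-sourcing.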
IV. Backing Services — Treat Backing Services as Attached Resources#
Databases, message queues, caches, and email services are attached resources accessed via URLs. Swapping a local PostgreSQL for an RDS instance should require only a config change.
Local dev: DATABASE_URL=postgres://localhost:5432/mydb
Production: DATABASE_URL=postgres://rds-xyz.aws.com:5432/mydb
┌─────────┐ ┌──────────┐ ┌─────────┐
│ App ├────▶│ Postgres │ │ Redis │
│ ├────▶│ (any) │ │ (any) │
│ ├────────────────────▶│ │
└─────────┘ └──────────┘ └─────────┘
Swap any backing service without code changes.
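Because the resource locator is a single URL in the environment, the app can derive everything it needs at runtime. A sketch using Node's built-in URL parser (the commented-out `Pool` line assumes the `pg` package is installed):

```javascript
// One env var carries host, port, credentials, and database name.
const connectionString =
  process.env.DATABASE_URL ?? "postgres://localhost:5432/mydb";

// The app never hard-codes a host; parse the URL if parts are needed.
const { hostname, port, pathname } = new URL(connectionString);

// With the real driver (assumes the "pg" package):
// const pool = new Pool({ connectionString });
```

Pointing the same code at RDS is then nothing more than setting a different `DATABASE_URL`.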
V. Build, Release, Run — Strictly Separate Build and Run Stages#
Build: code + dependencies → immutable artifact (Docker image)
Release: artifact + config → versioned release
Run: launch release in execution environment
# Build — produces an immutable image
docker build -t myapp:v1.2.3 .
# Release — publish the artifact; a release pairs it with a config version
docker tag myapp:v1.2.3 registry.io/myapp:v1.2.3
docker push registry.io/myapp:v1.2.3
# Run — combine image with runtime config
kubectl apply -f deployment.yaml  # references image + ConfigMap
Key rule: You cannot change code at runtime. Every change requires a new build.
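The separation can be expressed as data: a release is an immutable pairing of a build artifact and a config version. A sketch with hypothetical names (`cutRelease`, and an in-memory array standing in for a release store):

```javascript
// A release = artifact + config, frozen and numbered. Rolling back means
// pointing the run stage at an earlier release, never editing a live one.
function cutRelease(releases, image, configVersion) {
  const release = Object.freeze({
    id: `r${releases.length + 1}`,
    image,          // immutable build output, e.g. "myapp:v1.2.3"
    configVersion,  // e.g. a ConfigMap revision
  });
  releases.push(release);
  return release;
}

const releases = [];
cutRelease(releases, "myapp:v1.2.3", "config-7");
```

Freezing the record mirrors the key rule above: once cut, a release is never mutated.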
VI. Processes — Execute the App as Stateless Processes#
Application processes are stateless and share-nothing. Any persistent data lives in a backing service (database, object store).
# WRONG — sticky sessions, local file storage
app.use(session({ store: new FileStore() }));
# RIGHT — external session store
app.use(session({ store: new RedisStore({ client: redisClient }) }));
Stateless processes enable horizontal scaling — add more instances without coordination.
VII. Port Binding — Export Services via Port Binding#
The app is completely self-contained and binds to a port to serve requests. No external web server required.
const app = express();
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on ${port}`));
# Kubernetes Deployment — the container publishes its bound port
spec:
  containers:
    - name: app
      ports:
        - containerPort: 3000
Modern practice: One app can become another app's backing service — service A calls service B via its bound port.
VIII. Concurrency — Scale Out via the Process Model#
Scale by running more processes, not by making one process bigger.
┌──────────────────────────────────────┐
│ web.1 web.2 web.3 │ ← HTTP requests
│ worker.1 worker.2 │ ← Background jobs
│ scheduler.1 │ ← Cron tasks
└──────────────────────────────────────┘
# Kubernetes HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
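The scaling decision itself is a simple ratio. Kubernetes documents the HPA algorithm as desiredReplicas = ceil(currentReplicas × currentMetric ÷ targetMetric), clamped to the min/max bounds. A simplified sketch (no stabilization window or tolerance band; all names are illustrative):

```javascript
// How the HPA computes its target replica count (simplified).
function desiredReplicas({ current, observedUtil, targetUtil, min, max }) {
  const raw = Math.ceil(current * (observedUtil / targetUtil));
  return Math.min(max, Math.max(min, raw));
}

// CPU at 140% of the 70% target: double the pods.
desiredReplicas({ current: 4, observedUtil: 140, targetUtil: 70, min: 2, max: 20 }); // → 8
```

Because factor VI made processes stateless, adding those replicas needs no coordination.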
IX. Disposability — Maximize Robustness with Fast Startup and Graceful Shutdown#
Processes start fast and shut down gracefully. They can be killed and replaced at any moment.
// Graceful shutdown
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, draining connections...");
  // Wait for in-flight requests to finish before closing the database.
  await new Promise((resolve) => server.close(resolve));
  await database.disconnect();
  process.exit(0);
});
Requirements:
- Startup in seconds, not minutes
- Handle SIGTERM gracefully — finish in-flight requests, close connections
- Jobs are idempotent and reentrant — safe to restart mid-execution
- Use preStop hooks in Kubernetes for connection draining
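Idempotency, the third requirement, can be sketched as check-then-mark against a backing store. Here `store` is a stand-in interface for Redis or Postgres, and all names are illustrative:

```javascript
// Safe to restart mid-execution: a re-delivered job is detected and skipped.
async function processOnce(store, job, doWork) {
  if (await store.has(job.id)) return "skipped"; // already completed
  await doWork(job);        // the actual side effect
  await store.add(job.id);  // mark complete only after the work succeeds
  return "processed";
}
```

Marking completion after the work means a crash between the two steps causes a retry rather than a lost job, which is why the work itself must also tolerate re-execution.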
X. Dev/Prod Parity — Keep Development, Staging, and Production as Similar as Possible#
Minimize gaps between environments:
| Gap | Bad | Good |
|---|---|---|
| Time | Weeks between deploys | Hours or minutes |
| Personnel | Devs write, ops deploy | Same team does both |
| Tools | SQLite dev, Postgres prod | Postgres everywhere |
# docker-compose.yml — local dev matches production
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://db:5432/mydb
  db:
    image: postgres:16
  redis:
    image: redis:7
  kafka:
    image: confluentinc/cp-kafka:7.5.0
XI. Logs — Treat Logs as Event Streams#
The app writes to stdout. The execution environment routes logs to the appropriate destination.
// App writes to stdout — that's it
console.log(JSON.stringify({
  level: "info",
  message: "Order created",
  orderId: "abc-123",
  timestamp: new Date().toISOString()
}));
App (stdout) → Container runtime → Fluentd → Elasticsearch → Kibana
→ CloudWatch
→ Datadog
Never write to log files, manage log rotation, or configure log destinations inside the app.
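A stdout-only logger therefore needs no configuration at all. A minimal sketch (`formatLog` and `log` are illustrative names, not a library API):

```javascript
// One JSON object per line on stdout; the platform routes the stream.
function formatLog(level, message, fields = {}) {
  return JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...fields,
  });
}

function log(level, message, fields) {
  process.stdout.write(formatLog(level, message, fields) + "\n");
}

log("info", "Order created", { orderId: "abc-123" });
```

Keeping each event on a single line is what lets Fluentd or CloudWatch parse the stream without app-specific configuration.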
XII. Admin Processes — Run Admin/Management Tasks as One-Off Processes#
Database migrations, console sessions, and one-off scripts run as isolated processes in the same environment as the app.
# Run migration as a one-off process
kubectl exec -it deploy/myapp -- npx prisma migrate deploy
# One-off data fix
kubectl run admin-task --image=myapp:v1.2.3 --rm -it \
--env="DATABASE_URL=$DATABASE_URL" \
-- node scripts/fix-orphaned-records.js
Modern practice: Kubernetes Jobs and init containers handle admin tasks. Never SSH into a production server to run scripts manually.
Beyond Twelve Factors#
The original twelve factors have been extended by practitioners:
- API First — design contracts before implementation
- Telemetry — metrics, traces, and health checks as first-class concerns
- Authentication — security baked in, not bolted on
Quick Reference#
┌─────┬──────────────────────┬─────────────────────────────┐
│ # │ Factor │ One-Liner │
├─────┼──────────────────────┼─────────────────────────────┤
│ I │ Codebase │ One repo, many deploys │
│ II │ Dependencies │ Declare and isolate │
│ III │ Config │ Environment variables │
│ IV │ Backing Services │ Attached resources via URL │
│ V │ Build/Release/Run │ Immutable artifacts │
│ VI │ Processes │ Stateless, share-nothing │
│ VII │ Port Binding │ Self-contained, export port │
│VIII │ Concurrency │ Scale out with processes │
│ IX │ Disposability │ Fast start, graceful stop │
│ X │ Dev/Prod Parity │ Keep environments identical │
│ XI │ Logs │ Stdout event streams │
│ XII │ Admin Processes │ One-off tasks, same env │
└─────┴──────────────────────┴─────────────────────────────┘
The twelve-factor methodology is not a checklist to complete once — it is a design philosophy that shapes every architectural decision. Modern platforms like Kubernetes, Docker, and serverless runtimes assume you follow these factors.