# Nginx Architecture Explained — How It Handles Millions of Connections

## Why Nginx powers 34% of the web
Nginx serves more websites than any other web server. It handles millions of concurrent connections with minimal memory because of one key design choice: event-driven, non-blocking I/O.
Understanding Nginx teaches you why event loops beat thread-per-connection for I/O-bound workloads.
## The process model

### Master process
Runs as root. Responsibilities:
- Read and validate configuration
- Bind to ports (80, 443)
- Fork worker processes
- Handle signals (reload, upgrade, shutdown)
### Worker processes
Run as unprivileged user. Each worker:
- Handles thousands of connections simultaneously
- Uses a single-threaded event loop (epoll/kqueue)
- Shares almost no state with other workers (shared-memory zones exist only for specific features like rate limiting and caches), so ordinary request handling needs no locks
- Typically one worker per CPU core
```
Master (PID 1)
├── Worker (PID 2) — handles ~10,000 connections
├── Worker (PID 3) — handles ~10,000 connections
├── Worker (PID 4) — handles ~10,000 connections
└── Worker (PID 5) — handles ~10,000 connections
```
## Why event-driven beats threads

Apache (thread-per-connection, with the traditional prefork/worker MPMs):
- Each connection = one thread
- 10,000 connections = 10,000 threads
- Each thread uses ~1MB stack = 10GB RAM just for stacks
- Context switching overhead kills performance
Nginx (event loop):
- Each worker handles thousands of connections in one thread
- epoll/kqueue notifies when a connection has data ready
- Worker processes ready connections, immediately moves to the next
- 10,000 idle keep-alive connections per worker take roughly 2.5MB of memory
This is why Nginx can serve 100,000+ concurrent connections on modest hardware.
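The control flow of such a worker can be sketched in a few lines of Python using the stdlib `selectors` module, which wraps the same epoll/kqueue syscalls nginx uses. This is a minimal single-threaded echo server, not nginx's actual event loop: one selector multiplexes every connection, and the loop only touches sockets the kernel reports as ready.

```python
import selectors
import socket

# One selector (epoll on Linux, kqueue on BSD/macOS) watches all sockets.
sel = selectors.DefaultSelector()

def accept(server):
    # The listening socket is readable: a new connection is waiting.
    conn, _ = server.accept()
    conn.setblocking(False)                       # never block the loop
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)                        # echo the bytes back
    else:                                         # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))                     # ephemeral port for the demo
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
PORT = server.getsockname()[1]

def run_once(timeout=0.2):
    # One loop iteration: dispatch every ready socket, then return.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)                     # calls accept() or echo()
```

Real workers add timers, write buffering, and protocol state machines on top, but the shape is identical: wait for readiness, dispatch handlers, repeat. No thread ever sits blocked on a single slow client.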
## Request processing phases
Nginx processes each request through a pipeline of phases:
1. Post-read — Read client request headers
2. Server rewrite — Apply rewrite rules from the server block
3. Find location — Match the request URI to location blocks
4. Rewrite — Apply location-level rewrite rules
5. Post-rewrite — Internal redirect if a rewrite changed the URI
6. Pre-access — Rate limiting, connection limiting
7. Access — Authentication (basic auth, JWT, IP allow/deny)
8. Post-access — Satisfy directive (any/all)
9. Try files — Check for static files
10. Content — Generate the response (proxy_pass, fastcgi, static file)
11. Log — Write the access log entry
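The pipeline above can be modeled as a chain of handlers where any phase may short-circuit with a response. This is a deliberately tiny sketch, not nginx's real module API; the handler names, the dict-based request, and the deny rule are all illustrative.

```python
# Each phase handler returns None to pass the request onward, or a
# response dict to stop the pipeline early (as limit_req or a deny
# rule would in the pre-access/access phases).
def pre_access(req):
    if req.get("rate_limited"):
        return {"status": 429}              # rejected before access checks

def access(req):
    if req.get("ip") == "10.0.0.66":        # hypothetical deny rule
        return {"status": 403}

def content(req):
    return {"status": 200, "body": "hello"} # the content phase always answers

PHASES = [pre_access, access, content]

def handle(req):
    for phase in PHASES:
        resp = phase(req)
        if resp is not None:                # a phase produced the response
            return resp
```

The early-exit structure is why an nginx rate limit or deny rule is so cheap: a rejected request never reaches the (expensive) content phase at all.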
## Key use cases

### Reverse proxy
```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://backend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Nginx sits in front of your app server, handling SSL termination, compression, and connection management.
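What the proxy_set_header lines do can be expressed as a small pure function. This is a sketch of the header rewriting, not an nginx API; the function name and dict representation are made up, and the X-Forwarded-For handling mirrors the commonly used $proxy_add_x_forwarded_for variable, which the config above does not set.

```python
def proxied_headers(client_headers: dict, client_ip: str, host: str) -> dict:
    """Build the headers the upstream sees for one proxied request."""
    headers = dict(client_headers)
    headers["Host"] = host                # proxy_set_header Host $host
    headers["X-Real-IP"] = client_ip      # proxy_set_header X-Real-IP $remote_addr
    # Append the client to any existing proxy chain, as
    # $proxy_add_x_forwarded_for would.
    prior = client_headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return headers
```

Without these headers, the backend would see every request as coming from the proxy's own IP, which breaks logging, rate limiting, and geo lookups downstream.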
### Load balancer
```nginx
upstream backend {
    least_conn;
    server app1:3000 weight=3;
    server app2:3000 weight=2;
    server app3:3000 backup;
}
```
Algorithms: round-robin (default), least_conn, ip_hash (sticky sessions), random.
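The default round-robin is "smooth" weighted round-robin, which interleaves peers instead of sending bursts to the heaviest one. Below is a sketch of that algorithm under the weights from the config above (the backup server is omitted, since it only receives traffic when the others fail); the class name is made up.

```python
class SmoothWRR:
    """Smooth weighted round-robin: each pick boosts every peer by its
    weight, selects the highest running total, then penalizes the winner
    by the total weight. Over time each peer gets weight/total of traffic,
    evenly interleaved."""

    def __init__(self, weights: dict):
        self.weights = dict(weights)              # peer -> configured weight
        self.current = {p: 0 for p in weights}    # running "current weight"

    def pick(self) -> str:
        total = sum(self.weights.values())
        for peer, w in self.weights.items():
            self.current[peer] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= total
        return best

lb = SmoothWRR({"app1:3000": 3, "app2:3000": 2})
```

With weights 3 and 2, the pick sequence is app1, app2, app1, app2, app1 — a 3:2 split with no back-to-back bursts, which is exactly why nginx uses the "smooth" variant.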
### Static file server
```nginx
location /static/ {
    root /var/www;
    expires 30d;
    add_header Cache-Control "public, immutable";
}
```
Nginx serves static files 10-100x faster than your application server.
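For comparison, here is roughly the same caching behavior in Python's stdlib file server. The handler class name is made up, and 2592000 seconds is the 30-day equivalent of `expires 30d`; this is a sketch for illustration, not a production server.

```python
import functools
import http.server

class CachingHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files from a directory, adding long-lived cache headers
    for anything under /static/ (mirroring the location block above)."""

    def end_headers(self):
        if self.path.startswith("/static/"):
            self.send_header("Cache-Control", "public, max-age=2592000, immutable")
        super().end_headers()

def make_server(directory: str):
    # Bind an ephemeral port; directory= scopes serving to that tree.
    handler = functools.partial(CachingHandler, directory=directory)
    return http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
```

The point of the comparison is what this Python server lacks: sendfile zero-copy, epoll-driven concurrency, and kernel-tuned socket handling. That gap is where the 10-100x difference comes from.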
### SSL termination
```nginx
server {
    listen 443 ssl http2;
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
}
```
Handle TLS at the edge. Backend servers communicate over plain HTTP internally.
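The ssl_protocols line has a direct stdlib equivalent if you ever terminate TLS in application code: a server-side context that refuses anything older than TLS 1.2. The cert paths are the ones from the config above and are only shown commented out, since they do not exist on a typical dev machine.

```python
import ssl

# Server-side TLS context accepting only TLS 1.2 and newer,
# matching "ssl_protocols TLSv1.2 TLSv1.3;".
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# In a real deployment you would also load the certificate pair:
# ctx.load_cert_chain("/etc/ssl/cert.pem", "/etc/ssl/key.pem")
```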
### Rate limiting

```nginx
# in the http {} context:
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

# in a server {} block:
location /api/ {
    limit_req zone=api burst=20 nodelay;
    proxy_pass http://backend;
}
```
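Under the hood, limit_req uses leaky-bucket accounting per key (here, per client IP via $binary_remote_addr). The sketch below approximates that accounting with the rate and burst from the config above; with nodelay, up to burst requests beyond the steady rate are served immediately and the rest are rejected (503 by default, 429 if configured via limit_req_status).

```python
class LeakyBucket:
    """Approximation of limit_req's per-key accounting. The bucket
    drains at `rate` requests/second; `excess` tracks how far the
    client is running ahead of that rate."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate        # sustained requests per second (10r/s)
        self.burst = burst      # extra requests tolerated (burst=20)
        self.excess = 0.0       # requests accumulated above the steady rate
        self.last = None        # timestamp of the previous request

    def allow(self, now: float) -> bool:
        if self.last is not None:
            # The bucket leaks at the configured rate between requests.
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1 > self.burst + 1:    # bucket full: reject
            return False
        self.excess += 1
        return True
```

A burst of 30 simultaneous requests thus gets 21 through (the one "on time" request plus the burst of 20), and the allowance refills at 10 per second afterwards.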
## Performance tuning
| Setting | Recommendation |
|---|---|
| worker_processes | Set to number of CPU cores (auto) |
| worker_connections | 1024-4096 per worker |
| keepalive_timeout | 65s for clients, 60s for upstream keepalives |
| gzip on | Compress text responses (saves 60-80% bandwidth) |
| sendfile on | Zero-copy file serving from disk |
| tcp_nopush on | Send headers and beginning of file in one packet |
## Nginx vs alternatives
| Feature | Nginx | Apache | Caddy |
|---|---|---|---|
| Architecture | Event-driven | Thread/process | Event-driven (Go) |
| Performance | Excellent | Good | Good |
| Config syntax | Custom | XML-like | JSON/Caddyfile |
| Auto HTTPS | No (use certbot) | No | Yes (built-in) |
| Modules | C modules | Dynamic (.so) | Go plugins |
## Visualize your web infrastructure
See how Nginx connects to your app servers, CDN, and databases — try Codelit to generate an interactive architecture diagram.
## Key takeaways
- Event-driven = thousands of connections per worker thread
- Master-worker model — one master, N workers (one per CPU core)
- Reverse proxy first — SSL termination, compression, load balancing at the edge
- Static files — always serve from Nginx, not your app server
- Rate limiting built-in — protect your upstream services
- Nginx is not an application server — it proxies to your app (Node, Python, Go)