Database Connection Management: Pooling, Limits, and Serverless Strategies
A single PostgreSQL connection consumes roughly 10 MB of memory. At 500 connections that is 5 GB of RAM doing nothing but holding state. Database connection management is the discipline of keeping that number low while serving thousands of concurrent requests.
## Connection Lifecycle
Every database connection follows the same steps:
```
Client                             Database
   │                                  │
   │── TCP handshake ────────────────▶│
   │── TLS handshake (optional) ─────▶│
   │── Authentication ───────────────▶│
   │◀── Auth OK ──────────────────────│
   │                                  │
   │── Query ────────────────────────▶│
   │◀── Result ───────────────────────│
   │── ... more queries ... ─────────▶│
   │                                  │
   │── Terminate ────────────────────▶│
   │◀── Connection closed ────────────│
```
The TCP handshake, TLS negotiation, and authentication add 2-5 ms of latency per connection. For a web request that runs a single query, this overhead can exceed the query itself.
## Why Connection Pooling
Connection pooling reuses established connections across requests, eliminating per-request setup cost:
```
App Instance A ──┐
App Instance B ──┼──▶ Connection Pool ──▶ Database
App Instance C ──┘    (20 connections)    (max_connections = 100)
```

Without pooling: 50 app instances × 10 connections each = 500 database connections.

With pooling: 50 app instances share a pool of 20 connections. Peak concurrent queries rarely exceed 20.
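The mechanics can be sketched as a minimal pool: connections are created once up front, checked out per request, and returned instead of closed. This is a toy illustration with a stand-in connection class, not a production pool:

```python
import queue

class FakeConnection:
    """Stand-in for a real database connection (expensive to create)."""
    created = 0

    def __init__(self):
        FakeConnection.created += 1  # count how many times setup cost is paid

    def query(self, sql):
        return f"result of {sql!r}"

class ConnectionPool:
    def __init__(self, size: int):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(FakeConnection())  # pay setup cost once, up front

    def acquire(self) -> FakeConnection:
        return self._idle.get()   # blocks if all connections are busy

    def release(self, conn: FakeConnection) -> None:
        self._idle.put(conn)      # return to the pool, don't close

pool = ConnectionPool(size=20)

# 500 "requests" share 20 connections; no new connections are created
for _ in range(500):
    conn = pool.acquire()
    conn.query("SELECT 1")
    pool.release(conn)

print(FakeConnection.created)  # 20
```

Real pools (HikariCP, pgxpool, SQLAlchemy's pool) add health checks, timeouts, and lazy growth on top of this same acquire/release cycle.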
## Pooling Modes
| Mode | Behavior | Use Case |
|---|---|---|
| Session | Connection assigned for the full client session | SET commands, temp tables |
| Transaction | Connection assigned per transaction | Most web applications |
| Statement | Connection assigned per statement | Simple read-heavy workloads |
Transaction mode is the sweet spot for most applications. It returns the connection to the pool after each COMMIT or ROLLBACK, maximizing reuse.
## PgBouncer (PostgreSQL)
PgBouncer is a lightweight connection pooler that sits between your application and PostgreSQL:
```ini
# pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 300
server_lifetime = 3600
```
Key settings:
- `default_pool_size` — Connections per user/database pair. Start with `(cores * 2) + effective_spindle_count` on the database server.
- `max_client_conn` — How many application connections PgBouncer accepts. This can be high because PgBouncer clients are cheap.
- `reserve_pool_size` — Extra connections activated under load spikes.
- `server_idle_timeout` — Close server connections idle longer than this. Prevents stale connections.
Limitation in transaction mode: Features that depend on session state — SET, LISTEN/NOTIFY, prepared statements, advisory locks — break because the next query may land on a different backend connection.
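One common workaround for session settings is `SET LOCAL`, which is scoped to the current transaction and therefore safe under transaction pooling. A sketch (`slow_report` is a stand-in table name):

```sql
BEGIN;
-- SET LOCAL expires at COMMIT/ROLLBACK, so it cannot leak onto
-- whichever backend connection the pooler assigns next
SET LOCAL statement_timeout = '5s';
SELECT * FROM slow_report;
COMMIT;

-- By contrast, a plain SET is session-scoped and applies to an
-- arbitrary backend under transaction pooling
SET statement_timeout = '5s';
```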
## ProxySQL (MySQL)
ProxySQL is the MySQL equivalent. It adds query routing, caching, and read/write splitting on top of connection pooling:
```
App ──▶ ProxySQL ──┬──▶ Primary   (writes)
                   ├──▶ Replica 1 (reads)
                   └──▶ Replica 2 (reads)
```
```sql
-- Route SELECT queries to read replicas (hostgroup 1).
-- Rules default to inactive, so active = 1 is required.
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup)
VALUES (1, 1, '^SELECT', 1);

-- Route everything else to the primary (hostgroup 0)
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup)
VALUES (2, 1, '.*', 0);

-- Apply and persist the rules
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```
ProxySQL also multiplexes connections: hundreds of application connections map to a smaller pool of backend connections, just like PgBouncer.
## Prepared Statements
Prepared statements separate query parsing from execution:
```sql
-- Prepare (once per connection)
PREPARE get_user AS SELECT * FROM users WHERE id = $1;

-- Execute (many times)
EXECUTE get_user(42);
```
Benefits:
- Eliminates repeated parsing and planning overhead
- Prevents SQL injection by separating code from data
- Reduces network traffic for repeated queries
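The code/data separation is what every parameterized-query API exposes. A sketch with Python's built-in sqlite3, which prepares statements under the hood:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (42, "alice"))

# The parameter travels as data, never spliced into the SQL text,
# so a malicious value cannot change the query's structure.
evil = "42; DROP TABLE users"  # dangerous if string-concatenated into SQL
row = conn.execute("SELECT name FROM users WHERE id = ?", (evil,)).fetchone()
print(row)  # None — the whole string is compared as a value; nothing matches

row = conn.execute("SELECT name FROM users WHERE id = ?", (42,)).fetchone()
print(row[0])  # alice
```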
Pooling interaction: In PgBouncer transaction mode, prepared statements created in one transaction are not visible in the next (different backend connection). Solutions include upgrading to PgBouncer 1.21+, which can track protocol-level prepared statements across backends (the `max_prepared_statements` setting), or preparing statements at the application ORM level.
## Connection Limits

### The max_connections Formula
PostgreSQL's default max_connections is 100. Increasing it has diminishing returns because each connection consumes memory and CPU scheduling overhead.
A practical formula:

```
max_connections = (available_RAM - shared_buffers - OS_reserved) / per_connection_memory
```

For a 16 GB server with 4 GB shared_buffers and 2 GB OS reserve:

```
(16 GB - 4 GB - 2 GB) / 10 MB ≈ 1000 connections (theoretical max)
```

But performance degrades well before the theoretical limit. A better guideline:

```
effective_connections = cores * 2 + effective_disk_concurrency
```

For a 4-core server with SSD: 4 * 2 + 200 = 208. In practice, keep max_connections under 300 and use a pooler.
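The two formulas can be expressed as small helpers (using 1 GB = 1024 MB, which gives 1024 rather than the rounded ~1000 above):

```python
def max_connections_budget(ram_gb: float, shared_buffers_gb: float,
                           os_reserved_gb: float,
                           per_connection_mb: float = 10.0) -> int:
    """Theoretical connection ceiling implied by the memory budget."""
    free_mb = (ram_gb - shared_buffers_gb - os_reserved_gb) * 1024
    return int(free_mb / per_connection_mb)

def effective_connections(cores: int, disk_concurrency: int) -> int:
    """Practical guideline: cores * 2 + effective disk concurrency."""
    return cores * 2 + disk_concurrency

print(max_connections_budget(16, 4, 2))  # 1024 — the text's ~1000 theoretical max
print(effective_connections(4, 200))     # 208
```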
### Monitoring Connection Usage
```sql
-- Current connections by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- Connections approaching limit
SELECT max_conn, used, max_conn - used AS available
FROM (SELECT count(*) AS used FROM pg_stat_activity) t,
     (SELECT setting::int AS max_conn FROM pg_settings
      WHERE name = 'max_connections') s;
```
Alert when usage exceeds 80% of max_connections. Investigate idle connections that are not returning to the pool.
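The alert condition on the second query's output is a one-liner; a sketch of the check (the 80% threshold is the guideline above, not a PostgreSQL setting):

```python
def connection_headroom(used: int, max_conn: int, threshold: float = 0.8):
    """Return (available, usage_ratio, should_alert) from pg_stat_activity counts."""
    ratio = used / max_conn
    return max_conn - used, ratio, ratio >= threshold

available, ratio, alert = connection_headroom(used=85, max_conn=100)
print(available, alert)  # 15 True — 85% usage crosses the 80% threshold
```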
## Idle Timeout and Connection Hygiene
Idle connections waste resources. Configure timeouts at every layer:
| Layer | Setting | Recommended |
|---|---|---|
| Application | idle_timeout | 30-60 seconds |
| PgBouncer | server_idle_timeout | 300 seconds |
| PostgreSQL | idle_in_transaction_session_timeout | 30 seconds |
| PostgreSQL | idle_session_timeout (v14+) | 300 seconds |
`idle_in_transaction_session_timeout` is critical. A connection stuck in an open transaction holds locks and prevents vacuum from reclaiming dead tuples. Set this aggressively.
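The PostgreSQL-side timeouts from the table can be applied cluster-wide; a sketch using `ALTER SYSTEM` with the recommended values:

```sql
-- Kill sessions stuck inside an open transaction after 30 seconds
ALTER SYSTEM SET idle_in_transaction_session_timeout = '30s';

-- Kill fully idle sessions after 5 minutes (PostgreSQL 14+)
ALTER SYSTEM SET idle_session_timeout = '300s';

-- Both settings take effect on configuration reload
SELECT pg_reload_conf();
```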
## Serverless Database Connections
Serverless compute (Lambda, Cloud Functions, edge workers) creates a unique problem: hundreds of cold starts may each open a new database connection simultaneously.
### Neon
Neon is serverless PostgreSQL with a built-in connection pooler:
- Uses a WebSocket-based proxy that handles connection multiplexing
- Scales connections from zero — the database itself suspends when idle
- Supports both pooled connections (port 5432 via their proxy) and direct connections
- The `@neondatabase/serverless` driver uses WebSockets, enabling connections from edge runtimes
```
# Pooled connection string (for serverless)
postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/mydb?sslmode=require

# Direct connection (for migrations)
postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/mydb?sslmode=require&options=endpoint%3Dep-cool-name-123456
```
### PlanetScale
PlanetScale is serverless MySQL built on Vitess:
- HTTP-based query protocol eliminates persistent connection overhead
- The `@planetscale/database` driver uses `fetch()` — no TCP connection needed
- Connection pooling is handled at the infrastructure layer
- Branching model provides isolated database environments for development
```typescript
import { connect } from "@planetscale/database";

const conn = connect({
  host: "aws.connect.psdb.cloud",
  username: "...",
  password: "...",
});

// Each query is an independent HTTP request
const results = await conn.execute("SELECT * FROM users WHERE id = ?", [42]);
```
### General Serverless Strategies
- Use HTTP-based drivers when available (PlanetScale, Neon serverless driver, Cloudflare D1).
- Deploy a pooler (PgBouncer, RDS Proxy, Cloud SQL Auth Proxy) between serverless functions and traditional databases.
- Set aggressive idle timeouts — serverless connections should close within seconds, not minutes.
- Limit concurrency — use reserved concurrency or queue-based architectures to bound database load.
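The concurrency-limiting strategy can be sketched with a semaphore that bounds in-flight database work no matter how many invocations arrive at once. This is a minimal in-process illustration; a real deployment would use the platform's reserved-concurrency setting or a queue:

```python
import asyncio

DB_CONCURRENCY = 5   # bound on simultaneous database work
peak = 0             # highest observed number of in-flight "queries"
in_flight = 0

async def handle_request(db_slots: asyncio.Semaphore) -> None:
    """One 'serverless invocation'; the semaphore caps concurrent DB access."""
    global peak, in_flight
    async with db_slots:
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)  # stand-in for the actual query
        in_flight -= 1

async def main() -> None:
    db_slots = asyncio.Semaphore(DB_CONCURRENCY)
    # 50 invocations arrive at once, but at most 5 touch the database together
    await asyncio.gather(*(handle_request(db_slots) for _ in range(50)))

asyncio.run(main())
print(peak)  # never exceeds DB_CONCURRENCY
```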
## Key Takeaways
- Connection setup costs 2-5 ms. Pooling amortizes this across thousands of requests.
- Transaction-mode pooling is the best default — it maximizes connection reuse.
- PgBouncer for PostgreSQL and ProxySQL for MySQL are production-proven poolers.
- Keep `max_connections` modest and let the pooler absorb client concurrency.
- Set `idle_in_transaction_session_timeout` to prevent stuck transactions from degrading the database.
- Serverless workloads need HTTP-based drivers or managed proxies to avoid connection storms.
- Monitor connection usage and alert before you hit limits.
Build and explore system design concepts hands-on at codelit.io.