Database Connection String Management — Secrets, Rotation, and Failover
The connection string is a liability#
Every database connection string contains credentials. Hardcode it once, commit it to git, and you have a security incident waiting to happen. Connection string management is not glamorous, but getting it wrong is catastrophic.
Anatomy of a connection string#
Most connection strings follow this pattern:
protocol://username:password@host:port/database?options
Breaking it down for PostgreSQL:
postgresql://app_user:s3cret@db-primary.internal:5432/myapp?sslmode=require&connect_timeout=10
| Component | Value | Purpose |
|---|---|---|
| Protocol | postgresql | Driver selection |
| Username | app_user | Authentication identity |
| Password | s3cret | Authentication secret |
| Host | db-primary.internal | Server address |
| Port | 5432 | Network port |
| Database | myapp | Target database |
| Options | sslmode=require | Connection behavior |
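These components can be recovered programmatically. As a quick illustration, Python's standard `urllib.parse` splits the example URL above into the same pieces as the table:

```python
from urllib.parse import urlsplit, parse_qs

url = "postgresql://app_user:s3cret@db-primary.internal:5432/myapp?sslmode=require&connect_timeout=10"
parts = urlsplit(url)

parts.scheme               # 'postgresql'  — driver selection
parts.username             # 'app_user'
parts.password             # 's3cret'
parts.hostname             # 'db-primary.internal'
parts.port                 # 5432 (as an int)
parts.path.lstrip("/")     # 'myapp'
parse_qs(parts.query)      # {'sslmode': ['require'], 'connect_timeout': ['10']}
```

Note that `urlsplit` does not percent-decode the userinfo section, so encoded passwords come back still encoded.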
Different databases have different formats:
# MongoDB
mongodb+srv://user:pass@cluster.mongodb.net/mydb?retryWrites=true
# MySQL
mysql://user:pass@host:3306/mydb?charset=utf8mb4
# Redis
redis://:password@host:6379/0
# SQL Server
Server=host,1433;Database=mydb;User Id=user;Password=pass;Encrypt=yes
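Note that SQL Server's ADO.NET style is key-value pairs rather than a URL, so URL builders don't apply. A minimal sketch of a builder for that style (the function name is mine, not a library API):

```python
def build_keyvalue_connstring(params: dict) -> str:
    # ADO.NET style: semicolon-delimited Key=Value pairs; dict insertion order is preserved
    return ";".join(f"{key}={value}" for key, value in params.items())

conn_str = build_keyvalue_connstring({
    "Server": "host,1433",
    "Database": "mydb",
    "User Id": "user",
    "Password": "pass",
    "Encrypt": "yes",
})
# 'Server=host,1433;Database=mydb;User Id=user;Password=pass;Encrypt=yes'
```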
Never hardcode connection strings#
This is the most common mistake and it leads to:
- Credentials in git history — even after removal, they are in the log forever
- Same credentials in all environments — dev, staging, production sharing secrets
- No rotation possible — changing a password means redeploying every service
- Leaked secrets in logs — connection errors often print the full string
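The last point is worth defending against explicitly: mask the password before a connection string ever reaches a log line. A minimal sketch (the helper name is mine):

```python
from urllib.parse import urlsplit

def redact_url(url: str) -> str:
    """Replace the password component with *** so connection errors can be logged safely."""
    parts = urlsplit(url)
    if not parts.password:
        return url
    # Rewrite only the userinfo section of the netloc, then reassemble the URL
    netloc = parts.netloc.replace(f":{parts.password}@", ":***@", 1)
    return parts._replace(netloc=netloc).geturl()

redact_url("postgresql://app:s3cret@db:5432/myapp")
# 'postgresql://app:***@db:5432/myapp'
```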
Environment-based configuration#
The minimum viable approach: environment variables per environment.
# development
DATABASE_URL=postgresql://dev:dev@localhost:5432/myapp_dev
# staging
DATABASE_URL=postgresql://stg_user:rotated_pwd@staging-db:5432/myapp_stg?sslmode=require
# production
DATABASE_URL=postgresql://prod_user:strong_pwd@prod-db.internal:5432/myapp?sslmode=verify-full
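With env-var configuration, validate at startup rather than at first query. A small fail-fast helper, as one possible pattern (the name is mine):

```python
import os

def require_env(name: str) -> str:
    """Fail fast at process startup when a required variable is missing,
    instead of failing on the first database query minutes later."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# DATABASE_URL = require_env("DATABASE_URL")
```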
Layer your configuration:
import os

DATABASE_CONFIG = {
    "development": {
        "url": os.environ.get("DATABASE_URL", "postgresql://dev:dev@localhost:5432/myapp_dev"),
        "pool_size": 5,
        "ssl": False,
    },
    "staging": {
        "url": os.environ["DATABASE_URL"],
        "pool_size": 10,
        "ssl": True,
    },
    "production": {
        "url": os.environ["DATABASE_URL"],
        "pool_size": 25,
        "ssl": True,
        "ssl_ca": "/etc/ssl/rds-ca.pem",
    },
}

env = os.environ.get("APP_ENV", "development")
config = DATABASE_CONFIG[env]
Secrets managers#
Environment variables are better than hardcoding, but they still leak (in process listings, crash dumps, and CI logs) and cannot be rotated without a redeploy. Secrets managers solve the remaining problems:
AWS Secrets Manager#
import boto3
import json

def get_db_credentials():
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="prod/myapp/database")
    secret = json.loads(response["SecretString"])
    return (
        f"postgresql://{secret['username']}:{secret['password']}"
        f"@{secret['host']}:{secret['port']}/{secret['dbname']}"
    )
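Calling the Secrets Manager API on every connection adds latency and per-request cost, so fetched secrets are usually cached with a TTL (AWS also ships a caching client library for this). A minimal TTL cache sketch, independent of any particular secrets backend:

```python
import time

class CachedSecret:
    """Cache a fetched secret for ttl seconds so the secrets API
    is not called on every new database connection."""

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch        # callable returning the current secret value
        self.ttl = ttl
        self._value = None
        self._expires = 0.0       # forces a fetch on first use

    def get(self):
        if time.time() >= self._expires:
            self._value = self.fetch()
            self._expires = time.time() + self.ttl
        return self._value

# Usage: secret = CachedSecret(get_db_credentials); url = secret.get()
```

Keep the TTL shorter than your rotation interval so a rotated password is picked up promptly.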
HashiCorp Vault#
import hvac
client = hvac.Client(url="https://vault.internal:8200")
secret = client.secrets.database.generate_credentials(name="myapp-role")
# Returns temporary credentials that auto-expire
username = secret["data"]["username"]
password = secret["data"]["password"]
Key benefits of secrets managers#
- Automatic rotation — credentials change on a schedule without redeployment
- Audit logging — know who accessed what credential and when
- Access control — only specific services can read specific secrets
- Dynamic credentials — Vault can generate per-request database users that expire
Secrets rotation#
Static credentials are a ticking clock. Rotation strategies:
Dual-credential rotation#
1. Create new credentials (user B)
2. Update the secret store with user B
3. Wait for all connections to drain (or restart services)
4. Delete old credentials (user A)
# AWS Secrets Manager automatic rotation lambda
def rotate_secret(event, context):
    # Each step is a separate invocation, so later steps must re-read the
    # AWSPENDING secret rather than rely on a local variable
    step = event["Step"]
    if step == "createSecret":
        # Generate a new password, store it under the AWSPENDING stage
        store_pending_secret(generate_strong_password())
    elif step == "setSecret":
        # Update the database user with the AWSPENDING password
        alter_database_user_password(get_pending_secret())
    elif step == "testSecret":
        # Verify the AWSPENDING credentials actually work
        test_database_connection(get_pending_secret())
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT
        promote_pending_secret()
Zero-downtime rotation#
Use connection pools that refresh credentials without dropping active connections:
import time

class RotatingConnectionPool:
    def __init__(self, secret_id, refresh_interval=3600):
        self.secret_id = secret_id
        self.refresh_interval = refresh_interval
        self.pool = self._create_pool()
        self.last_refresh = time.time()

    def _create_pool(self):
        # Fetch the current credentials and build a pool from them
        return create_pool(fetch_secret(self.secret_id))

    def get_connection(self):
        if time.time() - self.last_refresh > self.refresh_interval:
            self._refresh_pool()
        return self.pool.getconn()

    def _refresh_pool(self):
        # Swap in a pool with fresh credentials, then drain the old one
        old_pool = self.pool
        self.pool = self._create_pool()
        self.last_refresh = time.time()
        old_pool.closeall()
Connection string builders#
Never concatenate strings to build connection URLs. Use builders:
from urllib.parse import urlencode, quote_plus

def build_connection_string(host, port, database, username, password, **options):
    encoded_password = quote_plus(password)  # Handle special characters
    params = urlencode(options)
    return f"postgresql://{username}:{encoded_password}@{host}:{port}/{database}?{params}"

url = build_connection_string(
    host="db-primary.internal",
    port=5432,
    database="myapp",
    username="app_user",
    password="p@ss!word#123",  # Special chars handled correctly
    sslmode="verify-full",
    connect_timeout=10,
    application_name="myapp-api",
)
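The encoding can be sanity-checked by parsing the built URL back: the raw password contains `@` and `#`, which would otherwise be read as the host separator and a fragment marker, but the quoted form round-trips cleanly:

```python
from urllib.parse import quote_plus, unquote_plus, urlsplit

password = "p@ss!word#123"
url = f"postgresql://app_user:{quote_plus(password)}@db:5432/myapp"

# urlsplit sees exactly one '@' and no '#', so the URL parses unambiguously,
# and decoding the password component recovers the original
parts = urlsplit(url)
assert unquote_plus(parts.password) == password
assert parts.hostname == "db"
```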
Most ORMs provide builder APIs:
# SQLAlchemy
from sqlalchemy import URL

url = URL.create(
    drivername="postgresql+psycopg2",
    username="app_user",
    password="p@ss!word#123",
    host="db-primary.internal",
    port=5432,
    database="myapp",
    query={"sslmode": "verify-full"},
)
Failover URLs#
A single host in your connection string is a single point of failure. Strategies for resilience:
Multi-host connection strings#
PostgreSQL and MongoDB support multiple hosts natively:
# PostgreSQL — tries hosts in order
postgresql://user:pass@primary:5432,standby1:5432,standby2:5432/mydb?target_session_attrs=read-write
# MongoDB — replica set with automatic failover
mongodb://user:pass@node1:27017,node2:27017,node3:27017/mydb?replicaSet=rs0
DNS-based failover#
Use a DNS CNAME that points to the active primary. On failover, update the DNS record:
db-primary.internal → 10.0.1.100 (primary)
# After failover:
db-primary.internal → 10.0.1.101 (promoted standby)
AWS RDS and Aurora do this automatically with their endpoint URLs.
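The catch with DNS failover is caching: it only works if clients re-resolve the name, so keep the record's TTL low and beware driver- or runtime-level DNS caches (the JVM, notably, caches lookups by default). A connection opened before failover keeps talking to the old IP until it reconnects. A tiny sketch of resolution at connect time:

```python
import socket

def resolve_active_primary(hostname: str) -> str:
    """Re-resolve the CNAME for each new connection; pooled connections
    opened before a failover still point at the old address."""
    return socket.gethostbyname(hostname)

# resolve_active_primary("db-primary.internal")  # hypothetical internal CNAME
```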
Application-level failover#
FAILOVER_HOSTS = ["primary.db.internal", "standby1.db.internal", "standby2.db.internal"]

def get_connection():
    for host in FAILOVER_HOSTS:
        try:
            return connect(host=host, connect_timeout=5)
        except ConnectionError:
            continue
    raise RuntimeError("All database hosts unreachable")
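A transient outage (for example, during a failover promotion) can leave every host briefly unreachable, so it is common to retry the whole sweep with exponential backoff before surfacing the error. A hedged sketch of such a wrapper (the function name is mine):

```python
import time

def connect_with_retry(get_connection, attempts=3, base_delay=0.5):
    """Retry the full failover sweep with exponential backoff
    (0.5s, 1s, 2s, ...) before giving up."""
    for attempt in range(attempts):
        try:
            return get_connection()
        except Exception:
            if attempt == attempts - 1:
                raise          # out of attempts: let the caller see the error
            time.sleep(base_delay * (2 ** attempt))
```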
Read replica routing#
Separate read and write traffic to scale reads horizontally:
class DatabaseRouter:
    def __init__(self, write_url, read_urls):
        self.write_pool = create_pool(write_url)
        self.read_pools = [create_pool(url) for url in read_urls]
        self._read_index = 0

    def get_write_connection(self):
        return self.write_pool.getconn()

    def get_read_connection(self):
        # Round-robin across read replicas
        pool = self.read_pools[self._read_index % len(self.read_pools)]
        self._read_index += 1
        return pool.getconn()

router = DatabaseRouter(
    write_url="postgresql://user:pass@primary:5432/mydb",
    read_urls=[
        "postgresql://user:pass@replica1:5432/mydb",
        "postgresql://user:pass@replica2:5432/mydb",
    ],
)
Be aware of replication lag — reads immediately after writes may return stale data. For read-after-write consistency, route those reads to the primary.
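One simple read-your-writes heuristic is to pin reads to the primary for a short window after each write. A sketch wrapping a router like the one above (the class is mine, and it assumes replication lag stays under the window; per-session tracking would be needed in a multi-tenant app):

```python
import time

class ConsistencyAwareRouter:
    """Route reads to the primary for lag_window seconds after any write,
    so a read that immediately follows a write sees its own data."""

    def __init__(self, router, lag_window=2.0):
        self.router = router          # an object exposing get_write/read_connection
        self.lag_window = lag_window
        self._last_write = 0.0

    def get_write_connection(self):
        self._last_write = time.time()
        return self.router.get_write_connection()

    def get_read_connection(self):
        if time.time() - self._last_write < self.lag_window:
            # Recent write: the primary serves the read to guarantee freshness
            return self.router.get_write_connection()
        return self.router.get_read_connection()
```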
Key takeaways#
- Never hardcode connection strings — use environment variables at minimum, secrets managers ideally
- Rotate credentials automatically — static passwords are a liability that grows over time
- Use connection string builders — avoid manual concatenation and special character bugs
- Configure failover URLs — a single host is a single point of failure
- Route reads to replicas — scale read traffic without overloading the primary
- Audit access to secrets — know who accessed credentials and when
Article #358 of the Codelit system design series. Explore all articles at codelit.io.