# CI/CD Pipeline Architecture: From Code Commit to Production
Every modern engineering team ships code through a CI/CD pipeline. Whether you realize it or not, the path from git push to production is an architecture problem — and getting it right determines how fast and safely you can deliver software.
## What Is CI/CD?
Continuous Integration (CI) means every developer's code change is automatically built and tested against the shared mainline — multiple times a day.
Continuous Deployment (CD) extends this by automatically releasing every change that passes the pipeline to production, with no manual gate. (The closely related Continuous Delivery keeps every change deployable but retains a manual approval step before release.)
Together, a CI/CD pipeline is the automated assembly line that takes source code and turns it into a running service.
## Pipeline Stages
A typical pipeline has four stages:
- Source — A commit triggers the pipeline (webhook, polling).
- Build — Compile code, resolve dependencies, produce artifacts (Docker images, binaries).
- Test — Unit tests, integration tests, linting, security scans.
- Deploy — Push artifacts to staging, then production.
```yaml
# GitHub Actions — minimal CI/CD pipeline
name: CI/CD
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm test -- --coverage
  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - name: Deploy to production
        run: ./scripts/deploy.sh
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```
## Branching Strategies
Your branching model dictates how code enters the pipeline.
### Trunk-Based Development
Everyone commits to main. Short-lived feature branches (< 1 day) merge frequently. Feature flags gate incomplete work. Best for high-velocity teams with strong test coverage.
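Feature flags can be as simple as a guarded code path. A minimal sketch in Python — the flag name, storage dict, and both checkout functions are hypothetical; real teams usually back this with a flag service or config store:

```python
# Minimal feature-flag sketch: incomplete work merges to main but stays
# dark in production until the flag flips. All names here are hypothetical.
FLAGS = {"new-checkout": False}  # in practice, fetched from a flag service


def is_enabled(flag: str) -> bool:
    """Return whether a flag is on; unknown flags default to off."""
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list[float]) -> float:
    return sum(cart)


def new_checkout(cart: list[float]) -> float:
    # In-progress implementation: safe to merge because it is flag-gated.
    return sum(cart)


def checkout(cart: list[float]) -> float:
    if is_enabled("new-checkout"):
        return new_checkout(cart)  # merged, but dark in production
    return legacy_checkout(cart)
```

Because the flag defaults to off, the half-finished path can live on main without ever running in production.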
### GitFlow
Long-lived develop and release branches. Better for teams shipping versioned software (mobile apps, SDKs) where multiple releases coexist.
Rule of thumb: if you deploy continuously, use trunk-based. If you ship discrete versions, consider GitFlow.
## CI/CD Tools Compared
| Tool | Hosting | Config | Strengths |
|---|---|---|---|
| GitHub Actions | Cloud / self-hosted runners | YAML in .github/workflows/ | Native GitHub integration, marketplace |
| GitLab CI | Cloud / self-managed | .gitlab-ci.yml | Built-in registry, environments, DORA metrics |
| Jenkins | Self-hosted | Jenkinsfile (Groovy) | Maximum flexibility, huge plugin ecosystem |
| CircleCI | Cloud / self-hosted | .circleci/config.yml | Fast parallelism, Docker-layer caching |
| ArgoCD | Kubernetes | Declarative GitOps | Kubernetes-native, drift detection |
For Kubernetes-heavy teams, ArgoCD deserves special attention — it watches a Git repo and reconciles cluster state automatically (GitOps).
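A minimal sketch of what that looks like as an ArgoCD `Application` manifest — the repo URL, path, and app name are hypothetical placeholders:

```yaml
# Hypothetical ArgoCD Application: ArgoCD watches the Git path below and
# continuously reconciles the cluster to match what is committed there.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-server
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git
    targetRevision: main
    path: k8s/api-server
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `automated` sync enabled, a merged commit to the infra repo is itself the deployment action.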
## Deployment Strategies
Once artifacts are built, how do you roll them out?
### Rolling Deployment
Instances are updated one at a time. Simple and requires no extra capacity, but a bad deploy reaches a growing share of users before you can react, and rolling back means repeating the same slow process in reverse.
### Blue-Green Deployment
Run two identical environments. Route traffic to "blue" while "green" gets the update. Swap the router when green is healthy. Instant rollback — just swap back.
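On Kubernetes, one common way to implement the swap is a label flip on a Service selector — a sketch with hypothetical names:

```yaml
# Hypothetical blue-green Service: both deployments run side by side;
# flipping the "slot" label below moves all traffic at once.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    slot: blue   # change to "green" to cut over; back to "blue" to roll back
  ports:
    - port: 80
      targetPort: 8080
```

The rollback path is the same one-line change as the rollout, which is what makes blue-green attractive despite the cost of running two environments.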
### Canary Deployment
Route a small percentage of traffic (e.g., 5%) to the new version. Monitor error rates and latency. If metrics hold, increase to 100%. This is the gold standard for large-scale services.
```yaml
# Argo Rollouts — canary deployment strategy
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api-server
spec:
  strategy:
    canary:
      steps:
        - setWeight: 5
        - pause: { duration: 5m }
        - setWeight: 25
        - pause: { duration: 5m }
        - setWeight: 75
        - pause: { duration: 5m }
      canaryService: api-canary
      stableService: api-stable
```
## Infrastructure as Code Integration
Your pipeline should provision infrastructure, not just deploy apps. Embed Terraform or Pulumi steps:
```yaml
infrastructure:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3
    - run: terraform init
    - run: terraform plan -out=tfplan
    - run: terraform apply -auto-approve tfplan
      if: github.ref == 'refs/heads/main'
```
This ensures infrastructure changes go through the same review and testing process as application code.
## Secrets Management
Never hardcode secrets. Use your CI provider's encrypted secrets store and inject them at runtime:
- GitHub Actions: `${{ secrets.MY_SECRET }}` — encrypted at rest, masked in logs.
- GitLab CI: CI/CD variables with "masked" and "protected" flags.
- External vaults: HashiCorp Vault, AWS Secrets Manager, or Google Secret Manager for production secrets.
Principle of least privilege: each pipeline job should only access the secrets it needs. Use environment-scoped secrets so staging jobs cannot read production credentials.
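In GitHub Actions, environment-scoped secrets look like this — a sketch with a hypothetical job and deploy script:

```yaml
# Hypothetical production job: binding it to the "production" environment
# means it can only read secrets defined for that environment, so a
# staging job in the same workflow cannot access DEPLOY_TOKEN.
deploy-prod:
  runs-on: ubuntu-latest
  environment: production
  steps:
    - uses: actions/checkout@v4
    - name: Deploy
      run: ./scripts/deploy.sh
      env:
        DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```

Environments can also require reviewers or restrict which branches may deploy, layering access control on top of secret scoping.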
## Monitoring Pipeline Health
A pipeline is a system — monitor it like one.
Key metrics to track:
- Lead time: commit to production (DORA metric).
- Deployment frequency: how often you ship.
- Change failure rate: percentage of deploys that cause incidents.
- Mean time to recovery (MTTR): how fast you fix a bad deploy.
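As a rough sketch, the first three metrics above can be computed from a deploy log. The timestamps and record layout here are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical deploy log: (commit_time, deploy_time, caused_incident)
deploys = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 11), False),
    (datetime(2024, 5, 1, 14), datetime(2024, 5, 1, 15), True),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 2, 12), False),
]

# Lead time: mean commit-to-production delay.
lead_times = [deployed - committed for committed, deployed, _ in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deploys per day over the observed window.
days = (deploys[-1][1] - deploys[0][1]).days or 1
frequency = len(deploys) / days

# Change failure rate: share of deploys that caused an incident.
failure_rate = sum(incident for _, _, incident in deploys) / len(deploys)
```

In practice these numbers come from your CI provider's analytics or an incident tracker, but the definitions reduce to arithmetic this simple.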
Set alerts on pipeline duration spikes (flaky tests, slow builds) and failure rate increases. GitHub Actions and GitLab both expose pipeline analytics dashboards.
## Putting It All Together
A mature CI/CD architecture looks like this:
```
Developer → PR → CI (lint + test + build) → Merge to main
          → CD (build image → push registry → canary deploy)
          → Monitor → Auto-promote or rollback
```
Start simple: one pipeline file, one environment. Add stages as your team and reliability requirements grow. The best pipeline is the one your team actually trusts enough to let it deploy to production without manual approval.
Design your next pipeline at codelit.io.