# Container Networking Guide: Docker, Kubernetes, CNI & Service Mesh
Containers change everything about how applications are packaged, but the network is what connects them. Understanding container networking is essential for debugging connectivity issues, designing secure architectures, and operating at scale.
## Docker Networking Modes
Docker provides several network drivers out of the box:
Bridge (default):
- Creates a virtual bridge (`docker0`) on the host.
- Each container gets a virtual ethernet interface (a veth pair) connected to the bridge.
- Containers on the same bridge can communicate via IP. Port mapping (`-p 8080:80`) exposes services to the host.
- Suitable for single-host development.
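The bridge workflow can be sketched in a few commands (a sketch; the `webapp` image name and container names are placeholders):

```shell
# Create a user-defined bridge. Unlike the default docker0 bridge,
# user-defined bridges provide DNS-based name resolution between containers.
docker network create my-bridge

# Run two containers on it; they can reach each other by container name.
docker run -d --name api --network my-bridge webapp
docker run -d --name db  --network my-bridge postgres:16

# Map container port 80 to host port 8080 for external access.
docker run -d --name web --network my-bridge -p 8080:80 nginx

# Inspect the bridge to see the subnet and connected containers.
docker network inspect my-bridge
```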
Host:
- The container shares the host's network namespace directly.
- No network isolation — the container binds to host ports.
- Eliminates NAT overhead. Useful for performance-sensitive workloads.
None:
- No networking. The container has only a loopback interface.
- Used for batch jobs or security-sensitive workloads that must not communicate over the network.
Overlay:
- Spans multiple Docker hosts using VXLAN encapsulation.
- Required for Docker Swarm multi-host networking.
- Each overlay network gets its own subnet; containers resolve each other by service name.
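A minimal overlay setup might look like this (assumes Swarm mode; the service name and image are illustrative):

```shell
# Overlay networks require Swarm mode to be initialized.
docker swarm init

# Create an attachable overlay network that spans all Swarm nodes.
docker network create -d overlay --attachable my-overlay

# Services on the same overlay resolve each other by service name.
docker service create --name api --network my-overlay webapp
```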
Macvlan:
- Assigns a real MAC address to each container, making it appear as a physical device on the network.
- Useful for legacy applications that expect to be directly on the LAN.
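Macvlan needs the host's parent interface and LAN subnet; the values below (`eth0`, `192.168.1.0/24`, the image name) are assumptions you would replace with your own:

```shell
# Create a macvlan network bound to the host's physical NIC.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

# The container gets its own MAC address and a LAN-routable IP.
docker run -d --name legacy-app --network my-macvlan \
  --ip=192.168.1.50 legacy-image
```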
## Kubernetes Networking Model
Kubernetes imposes three fundamental rules:
- Every pod gets its own IP address.
- Pods can communicate with any other pod without NAT (across nodes).
- Agents on a node can communicate with all pods on that node.
This flat network model simplifies application design — services address pods by IP without worrying about port conflicts or NAT translation.
### Pod-to-Pod Communication
On the same node, pods communicate through a virtual bridge (similar to Docker bridge mode). The container runtime creates a veth pair for each pod, connecting it to a bridge (cbr0 or cni0).
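You can see this plumbing directly on a node (a sketch; the bridge name varies by CNI plugin — `cni0` is typical for Flannel):

```shell
# List veth interfaces on the node; each pod contributes one end
# of a veth pair (names like veth1a2b3c4d).
ip -brief link show type veth

# Show which veth interfaces are attached to the CNI bridge.
ip link show master cni0

# Pod routes: traffic to local pods goes via the bridge, traffic to
# remote pod CIDRs via the CNI's overlay or host routes.
ip route
```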
Across nodes, the CNI plugin handles routing. Common approaches:
- Overlay networking — Encapsulate pod traffic in VXLAN or Geneve tunnels. Works on any infrastructure but adds encapsulation overhead.
- Direct routing — Configure host routes or BGP so pod CIDRs are routable on the underlying network. No encapsulation overhead but requires network infrastructure support.
- Cloud-native routing — On AWS, GCP, or Azure, the CNI plugin programs cloud route tables or attaches secondary IPs directly to node NICs.
### Services
Pods are ephemeral — their IPs change on restart. Kubernetes Services provide stable endpoints:
- ClusterIP — A virtual IP reachable only within the cluster. `kube-proxy` programs iptables or IPVS rules to load-balance traffic across pod endpoints.
- NodePort — Exposes the service on a static port on every node. External traffic hits `NodeIP:NodePort` and is forwarded to the ClusterIP.
- LoadBalancer — Provisions a cloud load balancer that routes external traffic to NodePorts. The standard way to expose services in cloud environments.
- Headless Service (`clusterIP: None`) — Returns pod IPs directly via DNS. Used for stateful workloads where clients need to address specific pods.
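As a sketch, a ClusterIP Service and its headless counterpart for hypothetical `api` pods might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api             # resolves as api.<namespace>.svc.cluster.local
spec:
  selector:
    app: api            # load-balances across pods with this label
  ports:
    - port: 80          # ClusterIP port
      targetPort: 8080  # container port
---
apiVersion: v1
kind: Service
metadata:
  name: api-headless
spec:
  clusterIP: None       # headless: DNS returns the pod IPs directly
  selector:
    app: api
  ports:
    - port: 8080
```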
### DNS
CoreDNS runs as a deployment in the cluster and serves DNS for services and pods:
- `my-service.my-namespace.svc.cluster.local` resolves to the ClusterIP.
- Headless services return A records for each pod.
- Pods get DNS search domains configured automatically, so `my-service` resolves within the same namespace.
### Ingress
Ingress resources define HTTP/HTTPS routing rules. An Ingress controller (NGINX, Traefik, HAProxy, or cloud-native like AWS ALB Ingress) watches Ingress resources and configures the reverse proxy:
```
                ┌──────────────┐
Internet ──────▶│   Ingress    │
                │  Controller  │
                └──────┬───────┘
                       │
          ┌────────────┼────────────┐
          ▼            ▼            ▼
       /api/*       /auth/*     /static/*
      svc-api      svc-auth    svc-static
```
- TLS termination at the ingress layer.
- Path-based and host-based routing.
- Rate limiting, authentication, and header manipulation via annotations or middleware.
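The path routing in the diagram above could be expressed roughly as follows (hostnames, secret, and service names are illustrative; the annotation shown is NGINX-controller-specific):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts: [example.com]
      secretName: example-tls    # TLS terminated at the ingress layer
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: svc-api
                port:
                  number: 80
          - path: /auth
            pathType: Prefix
            backend:
              service:
                name: svc-auth
                port:
                  number: 80
```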
The newer Gateway API is replacing Ingress with a more expressive, role-oriented model that separates infrastructure concerns from application routing.
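For comparison, a single path rule under the Gateway API might be sketched as (assumes a `Gateway` named `edge-gateway` already exists, typically owned by the platform team):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: edge-gateway   # infrastructure concern, managed separately
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: svc-api    # application routing concern
          port: 80
```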
## CNI Plugins
The Container Network Interface (CNI) is a specification that defines how network plugins configure pod networking. The kubelet calls the CNI plugin when a pod is created or destroyed.
### Calico
Calico is one of the most widely deployed CNI plugins:
- Routing: Uses BGP to distribute pod routes, enabling direct routing without overlay overhead. Falls back to VXLAN or IP-in-IP encapsulation when BGP is not available.
- Network Policy: Implements Kubernetes NetworkPolicy using iptables or eBPF. Supports Calico-specific policies for more advanced rules (global policies, DNS-based rules, application-layer policies).
- Performance: Direct routing mode avoids encapsulation overhead, making it one of the fastest options.
### Cilium
Cilium uses eBPF to implement networking, security, and observability entirely in the Linux kernel:
- eBPF dataplane: Replaces iptables with eBPF programs attached to network hooks. More efficient at scale (no linear rule scanning).
- Identity-based security: Assigns cryptographic identities to pods and enforces policies based on identity rather than IP addresses.
- L7 visibility: Inspects HTTP, gRPC, Kafka, and DNS traffic without sidecar proxies.
- Hubble: Built-in observability platform that provides flow logs, service maps, and metrics.
- Service mesh: Cilium can replace traditional sidecar-based service meshes with its eBPF-powered dataplane, reducing resource overhead.
### Choosing Between Them
| Concern | Calico | Cilium |
|---|---|---|
| Maturity | Established, battle-tested | Rapidly maturing, CNCF graduated |
| Dataplane | iptables or eBPF | eBPF-native |
| Network Policy | Kubernetes + extended | Kubernetes + L7 + identity-based |
| Observability | Basic flow logs | Hubble (deep L7 visibility) |
| Service mesh | Requires sidecar (Envoy/Istio) | Built-in (sidecar-free) |
| Best for | Simple, stable environments | Advanced security + observability |
## Network Policies
By default, Kubernetes allows all pod-to-pod traffic. NetworkPolicy resources restrict traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```
This policy allows only pods labeled app: frontend to reach app: api on port 8080. All other ingress to the API pods is denied.
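Policies are additive, so a common pattern is to pair targeted allow rules like the one above with a namespace-wide default deny:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all ingress is denied
```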
## Debugging Container Networking
When things go wrong:
- Check pod status and events: `kubectl describe pod` reveals network plugin errors.
- Verify DNS: `kubectl exec -- nslookup my-service` confirms CoreDNS is resolving.
- Test connectivity: `kubectl exec -- curl my-service:8080` from another pod.
- Inspect network policies: A missing or overly restrictive policy is a common cause of dropped traffic.
- Check CNI logs: CNI plugin logs on the node reveal IP allocation failures or routing issues.
- Packet capture: `tcpdump` on the node or `kubectl debug` with a network tools container.
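The steps above might play out like this on a live cluster (pod and service names are placeholders; `nicolaka/netshoot` is one commonly used network-tools image):

```shell
# 1. Look for CNI or scheduling errors in pod events.
kubectl describe pod my-pod

# 2. Confirm DNS resolution from inside a pod.
kubectl exec my-pod -- nslookup my-service

# 3. Test connectivity to the service from another pod.
kubectl exec my-pod -- curl -sS --max-time 5 my-service:8080

# 4. List policies that might be dropping traffic.
kubectl get networkpolicy -A

# 5. Attach an ephemeral debug container with networking tools.
kubectl debug my-pod -it --image=nicolaka/netshoot -- tcpdump -i any port 8080
```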
## Key Takeaways
- Docker provides bridge, host, none, overlay, and macvlan networking modes for different use cases.
- Kubernetes enforces a flat network where every pod gets a routable IP — no NAT between pods.
- Services (ClusterIP, NodePort, LoadBalancer) provide stable endpoints for ephemeral pods.
- Ingress controllers handle HTTP routing, TLS termination, and traffic management at the edge.
- Calico uses BGP and iptables/eBPF for high-performance routing and network policy enforcement.
- Cilium leverages eBPF for identity-based security, L7 visibility, and sidecar-free service mesh capabilities.
- Network policies are deny-by-default once a policy selects a pod — traffic not explicitly allowed in that direction is dropped, so start with allow rules for known traffic flows.
Container networking is the foundation that makes microservices possible. Understanding the layers — from veth pairs to CNI plugins to Ingress controllers — is what separates debugging in minutes from debugging in hours.