# Kubernetes Networking Deep Dive: From Pods to Service Mesh
Kubernetes networking can feel like a black box until you understand the layers. Every pod gets an IP, every service gets a stable endpoint, and traffic flows through a stack of networking primitives — each solving a specific problem. This guide walks through the full stack from pod-level networking to service mesh.
## The Kubernetes Networking Model
Kubernetes imposes three fundamental rules:
- Every pod gets its own IP address. No NAT between pods.
- All pods can communicate with all other pods without NAT (unless restricted by NetworkPolicy).
- Agents on a node (such as the kubelet and system daemons) can communicate with all pods on that node.
These rules create a flat network where any pod can reach any other pod by IP. The implementation is delegated to a CNI plugin.
## Pod Networking
Each pod runs in its own network namespace. Containers within the same pod share that namespace — they communicate over localhost. Pods on the same node connect through a virtual bridge (typically cbr0 or cni0).
```
Node
┌─────────────────────────────────────────────┐
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │  Pod A   │  │  Pod B   │  │  Pod C   │   │
│  │ 10.1.1.2 │  │ 10.1.1.3 │  │ 10.1.1.4 │   │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘   │
│       │             │             │         │
│  ─────┴─────────────┴─────────────┴─────    │
│           Virtual Bridge (cni0)             │
│                    │                        │
│              eth0 (node IP)                 │
└────────────────────┬────────────────────────┘
                     │
                  Network
```
Cross-node communication requires the CNI plugin to set up routes or overlays so that 10.1.1.2 on Node 1 can reach 10.1.2.3 on Node 2.
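The shared-namespace behavior inside a single pod is easy to see with a two-container manifest. This is an illustrative sketch (the pod name, images, and ports are examples, not from the article): because both containers share one network namespace, the second container can reach the first over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo        # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.27         # listens on port 80 inside the pod
  - name: probe
    image: curlimages/curl:8.8.0
    # Same network namespace: localhost:80 reaches the nginx container.
    command: ["sh", "-c", "sleep 5 && curl -s http://localhost:80 && sleep 3600"]
```

Both containers share the pod's single IP; a port bound by one container is visible to the other on localhost, which is why two containers in the same pod cannot bind the same port.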
## CNI Plugins
The Container Network Interface (CNI) is a specification for configuring network interfaces in Linux containers. Kubernetes delegates all pod networking to the installed CNI plugin.
### Calico
Calico uses BGP to distribute routes across nodes — no overlay, no encapsulation overhead. Each node announces its pod CIDR to peers.
- Performance: Near native — no VXLAN encapsulation in routed mode.
- NetworkPolicy: Full support with rich policy language.
- Scale: Proven in clusters with thousands of nodes.
- eBPF dataplane: Optional eBPF mode replaces iptables for faster packet processing.
### Cilium
Cilium is built on eBPF — it programs the Linux kernel directly instead of using iptables chains.
- Performance: Lower latency and higher throughput than iptables-based CNIs.
- Observability: Hubble provides deep network visibility — flow logs, DNS queries, HTTP metrics.
- Identity-based policy: Policies reference Kubernetes labels, not IP addresses.
- Service mesh: Cilium can replace sidecar-based service meshes with eBPF-powered L7 networking.
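As a sketch of what identity-based, L7-aware policy looks like, a CiliumNetworkPolicy can restrict traffic by label and HTTP method (the labels, port, and path here are illustrative):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy          # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:             # identity comes from labels, not pod IPs
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:                  # L7 rule: only GETs under /v1/ are allowed
        - method: "GET"
          path: "/v1/.*"
```

Because the policy references labels rather than IPs, it keeps working as pods are rescheduled and their addresses change.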
### Other CNI Plugins
| Plugin | Approach | Best For |
|---|---|---|
| Flannel | VXLAN overlay | Simple setups, learning environments |
| Weave Net | Encrypted mesh overlay | Multi-cloud with encryption needs |
| AWS VPC CNI | Native VPC IPs for pods | EKS — pods get real VPC addresses |
| Azure CNI | Native Azure VNet IPs | AKS — pods get real VNet addresses |
## Kubernetes Services
Pods are ephemeral — they get new IPs when rescheduled. Services provide stable endpoints.
### ClusterIP
The default Service type. Creates a virtual IP reachable only within the cluster. kube-proxy (or the CNI plugin in eBPF mode) programs rules to load-balance traffic across backend pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```
### NodePort
Exposes the service on a static port on every node's IP. External traffic hits NodeIP:NodePort and gets routed to a backend pod. Port range: 30000-32767.
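A minimal NodePort sketch (the explicit nodePort field is optional; when omitted, Kubernetes allocates a free port from the range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # ClusterIP port (still created)
    targetPort: 8080  # container port
    nodePort: 30080   # must fall within 30000-32767
```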
### LoadBalancer
Provisions a cloud load balancer (AWS ELB, GCP LB, Azure LB) that forwards traffic to NodePorts. The standard way to expose services to the internet on managed Kubernetes.
### Headless Services
Set clusterIP: None to skip the virtual IP. DNS returns the pod IPs directly — useful for StatefulSets where clients need to reach specific pods (e.g., database replicas).
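A headless Service for a hypothetical database StatefulSet might look like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless   # illustrative name
spec:
  clusterIP: None           # no virtual IP; DNS returns the pod IPs directly
  selector:
    app: postgres
  ports:
  - port: 5432
```

When a StatefulSet's serviceName references this Service, each pod also gets a stable per-pod DNS name (e.g., postgres-0.postgres-headless.default.svc.cluster.local), so clients can target a specific replica.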
## Ingress Controllers
While LoadBalancer Services create one load balancer per service (expensive), Ingress consolidates routing rules behind a single load balancer.
```
Internet ──▶ Load Balancer ──▶ Ingress Controller
                                       │
                       ┌───────────────┼───────────────┐
                       ▼               ▼               ▼
                     /api/*          /app/*            /
                  api-service     app-service     web-service
```
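The fan-out above corresponds to an Ingress resource like the following (the service names match the diagram; the ingress class is an assumption about which controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout
spec:
  ingressClassName: nginx      # assumes the NGINX Ingress Controller
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix       # matches /api and everything under it
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
      - path: /                # catch-all for remaining traffic
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```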
Popular Ingress controllers:
- NGINX Ingress Controller — Battle-tested, widely deployed.
- Traefik — Auto-discovery, native Let's Encrypt integration.
- HAProxy Ingress — High performance, advanced load-balancing algorithms.
- Contour — Envoy-based, supports HTTPProxy CRD for advanced routing.
- Gateway API — The successor to Ingress. Supports traffic splitting, header-based routing, and cross-namespace references.
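As a sketch of Gateway API's richer routing, an HTTPRoute can split traffic by weight for a canary rollout (the route, gateway, and service names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
  - name: my-gateway           # an existing Gateway resource
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-v1
      port: 80
      weight: 90               # 90% of traffic stays on v1
    - name: api-v2
      port: 80
      weight: 10               # 10% canary to v2
```

This kind of weighted split requires controller-specific annotations under classic Ingress; in Gateway API it is part of the core spec.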
## DNS in Kubernetes
Every Service gets a DNS record: my-service.my-namespace.svc.cluster.local. CoreDNS is the default cluster DNS provider.
DNS resolution flow:
1. The pod makes a DNS query for `my-service`.
2. The pod's `/etc/resolv.conf` points to the CoreDNS ClusterIP.
3. CoreDNS resolves `my-service.default.svc.cluster.local` to the Service ClusterIP.
4. `kube-proxy` or eBPF rules route the ClusterIP to a healthy backend pod.
Pod DNS policies:
- `ClusterFirst` (default) — Queries go to CoreDNS first.
- `Default` — Inherit the node's DNS configuration.
- `None` — Fully custom DNS configuration via `dnsConfig`.
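With dnsPolicy set to None, the pod's resolver configuration comes entirely from the spec. A sketch (the nameserver IP, search domain, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns
spec:
  dnsPolicy: "None"            # ignore both CoreDNS and the node's resolv.conf
  dnsConfig:
    nameservers:
    - 10.0.0.53                # illustrative resolver IP
    searches:
    - my-namespace.svc.cluster.local
    options:
    - name: ndots
      value: "2"
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
```

Kubernetes renders these fields into the pod's /etc/resolv.conf in place of the cluster defaults.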
Headless Service DNS returns A records for each pod IP, allowing clients to implement their own load balancing.
## NetworkPolicy
By default, all pod-to-pod traffic is allowed. NetworkPolicy restricts traffic based on labels, namespaces, and CIDR blocks.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432
```
This policy allows the api pods to receive traffic only from frontend pods on port 8080, and send traffic only to database pods on port 5432. All other traffic is denied.
Important: NetworkPolicy requires a CNI plugin that supports it. Flannel does not enforce NetworkPolicy. Calico and Cilium do.
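A common baseline is a default-deny policy that selects every pod in a namespace but allows no traffic; specific allow policies are then layered on top:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress or egress rules are listed, so once a pod is selected
  # by this policy, all traffic to and from it is denied by default.
```

NetworkPolicies are additive: traffic is allowed if any policy permits it, so this deny-all baseline coexists cleanly with targeted allow policies like the one above.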
## Service Mesh Networking
A service mesh adds a sidecar proxy (typically Envoy) to every pod, forming a dedicated networking layer for service-to-service communication.
```
┌──────────────────┐           ┌──────────────────┐
│      Pod A       │           │      Pod B       │
│ ┌──────────────┐ │           │ ┌──────────────┐ │
│ │     App      │ │           │ │     App      │ │
│ └──────┬───────┘ │           │ └──────┬───────┘ │
│ ┌──────▼───────┐ │   mTLS    │ ┌──────▼───────┐ │
│ │ Envoy Sidecar│ │◀─────────▶│ │ Envoy Sidecar│ │
│ └──────────────┘ │           │ └──────────────┘ │
└──────────────────┘           └──────────────────┘
```
What a service mesh provides:
- mTLS everywhere — Automatic certificate management and rotation.
- Traffic management — Canary deployments, traffic splitting, retries, timeouts.
- Observability — Distributed tracing, metrics, and access logs without code changes.
- Authorization policies — L7-aware policies (e.g., allow GET but deny POST).
Istio, Linkerd, and Cilium (sidecar-free) are the leading options. Cilium's eBPF approach avoids the sidecar overhead entirely.
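In Istio, for example, the L7 authorization described above is expressed declaratively. A sketch (the labels, namespace, and service account are illustrative):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: api-get-only           # illustrative name
  namespace: production
spec:
  selector:
    matchLabels:
      app: api
  action: ALLOW                # once an ALLOW policy matches a workload,
                               # requests not matching any rule are denied
  rules:
  - from:
    - source:
        # mTLS identity of the calling workload (SPIFFE-style principal)
        principals: ["cluster.local/ns/production/sa/frontend"]
    to:
    - operation:
        methods: ["GET"]       # allow GET; POST, PUT, etc. are rejected
```

The sidecar enforces this at L7, so the decision is based on the verified mTLS identity and the HTTP method, not on network addresses.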
## Key Takeaways
- Kubernetes networking is a layered stack: CNI for pod connectivity, Services for stable endpoints, Ingress for external routing, NetworkPolicy for segmentation.
- Choose your CNI plugin based on performance needs, policy requirements, and cloud platform. Cilium and Calico are the production leaders.
- Use NetworkPolicy as your first line of defense — deny all, then allow explicitly.
- DNS is the service discovery mechanism in Kubernetes. Understand `svc.cluster.local` naming and headless services.
- Service meshes add mTLS, observability, and traffic management but introduce operational complexity. Evaluate whether your scale justifies the cost.
- Gateway API is replacing Ingress — start new projects with Gateway API for richer routing capabilities.
Kubernetes networking is not magic. It is a well-defined stack of Linux networking primitives — namespaces, bridges, routes, iptables, and eBPF — orchestrated by the control plane.