Kubernetes Network Policies — Default Deny, Namespace Isolation, and Zero-Trust Networking
The Default Problem#
By default, every pod in a Kubernetes cluster can talk to every other pod. No restrictions. The frontend pod can reach the database directly. A compromised pod in the staging namespace can access production secrets. Network policies fix this by defining explicit rules for what traffic is allowed.
How Network Policies Work#
A NetworkPolicy is a Kubernetes resource that selects pods and defines allowed ingress (incoming) and egress (outgoing) traffic. Once a pod is selected by at least one policy of a given type (Ingress or Egress), any traffic of that type not explicitly allowed by some policy is denied. Policies are additive: the allowed traffic is the union of every policy that selects the pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
This policy says: the api pod can receive traffic only from frontend on port 8080, and can send traffic only to database on port 5432. Everything else is blocked.
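A quick smoke test of the policy from inside the cluster (deployment and service names here are illustrative; adjust them to your actual workloads):

```shell
# From a frontend pod: covered by the allow rule, should succeed
kubectl exec -n production deploy/frontend -- \
  curl -s --max-time 3 http://api-service:8080/health

# From an unrelated pod: not covered by any allow rule, should time out
kubectl run probe --rm -it --restart=Never -n production \
  --image=busybox -- wget -qO- -T 3 http://api-service:8080/health
```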
Default Deny — The Foundation#
The single most important policy. Without it, network policies only add allowances on top of the default "allow all" behavior.
Deny All Ingress in a Namespace#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
An empty podSelector selects all pods. No ingress rules means no traffic is allowed in. Every pod in the production namespace is now isolated by default. You must create explicit allow policies for legitimate traffic.
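To roll the same default out across namespaces, the manifest can be applied in a loop. A sketch, assuming the manifest file omits the metadata.namespace field (so that -n takes effect) and that you want to skip kube-system:

```shell
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  [ "$ns" = "kube-system" ] && continue
  kubectl apply -n "$ns" -f default-deny-ingress.yaml
done
```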
Deny All Egress in a Namespace#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
This blocks all outgoing traffic. Pods cannot reach the internet, DNS, or other services. You will need to allow DNS egress (port 53) or nothing will resolve.
Allow DNS (Required with Egress Deny)#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Namespace Isolation#
Prevent cross-namespace traffic entirely, then allow specific exceptions.
Block All Cross-Namespace Ingress#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
The podSelector: {} under from with no namespaceSelector means "only pods in this namespace." All cross-namespace traffic is blocked.
Allow Specific Namespace Access#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: monitoring
      ports:
        - protocol: TCP
          port: 9090
Only the monitoring namespace can reach production pods on the metrics port.
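Note that namespaceSelector matches namespace labels, not names: this policy only works if the monitoring namespace actually carries the team=monitoring label. Labeling it is one command:

```shell
kubectl label namespace monitoring team=monitoring
```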
Pod-to-Pod Rules#
Frontend to API to Database Chain#
# Frontend can only talk to API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 8080
---
# Database only accepts connections from API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 5432
The frontend cannot reach the database directly. The database only accepts connections from the API. A compromised frontend pod cannot exfiltrate data from the database.
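One way to confirm the chain holds (deployment and service names are illustrative):

```shell
# api -> database: allowed, should connect
kubectl exec -n production deploy/api -- \
  nc -zv -w 3 database-service 5432

# frontend -> database: blocked by frontend's egress policy, should time out
kubectl exec -n production deploy/frontend -- \
  nc -zv -w 3 database-service 5432
```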
Egress Policies#
Allow External API Access#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-external-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
This allows the API to make HTTPS calls to external services but blocks access to internal RFC 1918 addresses. Combined with the default deny, this prevents lateral movement within the cluster.
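A rough check of the split (the internal address is illustrative; substitute an IP from your cluster's range, and note that ipBlock behavior for in-cluster traffic can vary by CNI because of NAT):

```shell
# External HTTPS: allowed, should print an HTTP status code
kubectl exec -n production deploy/api -- \
  curl -s --max-time 5 -o /dev/null -w '%{http_code}\n' https://example.com

# RFC 1918 address: falls in the except list, should time out
kubectl exec -n production deploy/api -- \
  curl -s --max-time 5 -o /dev/null https://10.0.0.10
```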
Block Metadata API (Cloud Security)#
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 169.254.169.254/32
The cloud metadata endpoint (169.254.169.254) is a common attack vector. Blocking it prevents pods from accessing instance credentials.
Calico vs Cilium Policies#
Standard Kubernetes NetworkPolicy has limitations. Calico and Cilium extend it.
| Feature | K8s Native | Calico | Cilium |
|---|---|---|---|
| L3/L4 rules | Yes | Yes | Yes |
| Namespace isolation | Yes | Yes | Yes |
| DNS-based rules | No | Enterprise only | Yes (FQDN) |
| L7 (HTTP path/method) | No | No (use Istio) | Yes (CiliumNetworkPolicy) |
| Global policies | No | GlobalNetworkPolicy | CiliumClusterwideNetworkPolicy |
| Host-level policies | No | Yes | Yes |
| Logging denied traffic | No | Yes | Yes (Hubble) |
Cilium L7 Policy Example#
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/v1/products"
              - method: POST
                path: "/api/v1/orders"
This allows the frontend to GET /api/v1/products and POST /api/v1/orders but blocks DELETE or any other path. Layer 7 policies give you application-aware firewalling inside the cluster.
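Probing the L7 rules from a frontend pod (names are illustrative; denied requests are typically answered by Cilium's proxy with an HTTP 403 rather than a dropped connection):

```shell
# Allowed by the policy
kubectl exec -n production deploy/frontend -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://api-service:8080/api/v1/products

# Method not in the policy: expect a denial from the proxy
kubectl exec -n production deploy/frontend -- \
  curl -s -o /dev/null -w '%{http_code}\n' -X DELETE http://api-service:8080/api/v1/orders
```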
Calico Global Policy Example#
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-metadata
spec:
  selector: all()
  egress:
    - action: Deny
      destination:
        nets:
          - 169.254.169.254/32
  order: 100
Global policies apply across all namespaces. This blocks metadata API access cluster-wide.
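Note that GlobalNetworkPolicy is a projectcalico.org/v3 resource: unless the Calico API server is installed, kubectl cannot apply it and you need calicoctl:

```shell
calicoctl apply -f deny-metadata.yaml
calicoctl get globalnetworkpolicy
```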
Debugging Connectivity#
When pods cannot communicate after applying policies, debug systematically.
Step 1: Check Which Policies Apply#
kubectl get networkpolicy -n production
kubectl describe networkpolicy api-policy -n production
Step 2: Verify Pod Labels#
kubectl get pods -n production --show-labels
Network policies select pods by label. A typo in labels means the policy does not apply.
Step 3: Test Connectivity#
kubectl exec -n production frontend-pod -- curl -v api-service:8080/health
kubectl exec -n production frontend-pod -- nc -zv database-service 5432
Step 4: Check CNI Logs#
Calico:
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i deny
Cilium (Hubble):
hubble observe --namespace production --verdict DROPPED
Hubble shows real-time traffic flows with verdicts. Dropped traffic means a policy is blocking it.
Common Mistakes#
- Forgot DNS egress: pods cannot resolve service names. Add port 53 egress to kube-dns.
- Label mismatch: the policy selects app: api but the pod has app: api-server. The policy silently does not apply.
- AND vs OR in selectors: multiple selectors inside a single from entry are ANDed; separate from entries are ORed.
- No CNI support: the default kubenet CNI does not enforce network policies. You need Calico, Cilium, Weave Net, or another CNI that implements them.
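The AND vs OR pitfall deserves a concrete illustration. Two fragments that differ only in list structure (labels are illustrative):

```yaml
# ONE from entry, two selectors: traffic must come from a pod labeled
# app=frontend AND located in a namespace labeled env=prod
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            env: prod
        podSelector:
          matchLabels:
            app: frontend
---
# TWO from entries: traffic from any pod in a namespace labeled env=prod,
# OR from any pod labeled app=frontend in this namespace
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            env: prod
      - podSelector:
          matchLabels:
            app: frontend
```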
Zero-Trust Networking Pattern#
Zero trust means no pod trusts any other pod by default. Every connection must be explicitly authorized.
Implementation Checklist#
1. Default deny ingress + egress in every namespace
2. Allow DNS egress (kube-dns only)
3. Allow only required pod-to-pod paths
4. Block cloud metadata endpoints
5. Use namespace labels for cross-namespace rules
6. Enable network policy logging (Calico/Cilium)
7. Audit policies regularly (drift detection)
8. Combine with service mesh mTLS for identity verification
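The first two checklist items can be collapsed into a single manifest per namespace. A sketch, with metadata.namespace omitted so the target comes from kubectl apply -n:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```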
Network Policies + Service Mesh#
Network policies handle L3/L4 (IP, port). Service mesh handles L7 (HTTP, gRPC) and identity (mTLS certificates). Together they provide defense in depth:
Network Policy: Pod A (port 8080) → Pod B (port 8080) [allowed]
Service Mesh: Pod A (identity: frontend.production) → Pod B [mTLS verified]
Even if a network policy allows traffic, the service mesh rejects it unless the caller has a valid identity certificate.
Summary#
- Default deny is the foundation. Apply it to every namespace.
- DNS egress must be explicitly re-allowed after default deny; also verify liveness and readiness probes still pass (most CNIs exempt kubelet probe traffic, but behavior varies).
- Namespace isolation prevents cross-namespace lateral movement.
- Pod-to-pod rules enforce least-privilege communication paths.
- Cilium adds L7 (HTTP path/method) policies. Calico adds global policies and logging.
- Debug with labels, connectivity tests, and CNI logs (Hubble for Cilium).
Article #439 in the Codelit engineering series. Explore our full library of system design, infrastructure, and architecture guides at codelit.io.