# Kubernetes Cluster Networking: The City of Pods
Imagine your Kubernetes cluster is a magical city where tiny workers (Pods) live in different neighborhoods. They need roads, addresses, and traffic controllers to talk to each other. Let's explore how this city works!
## The Big Picture: Cluster Network Basics
Think of a Kubernetes cluster like a city with many houses. Each house is a Pod, and every house needs:
- An address (so mail carriers can find it)
- Roads (so people can visit each other)
- Traffic rules (so nobody crashes)
### What Makes Cluster Networking Special?
In Kubernetes city, there's a magical rule:

**Every Pod can talk to every other Pod directly - no special permission needed!**

It's like having a city where every house has a direct road to every other house. No locked gates, no toll booths.
```mermaid
graph TD
    A["Pod A<br/>10.244.1.5"] -->|Direct Talk| B["Pod B<br/>10.244.2.8"]
    B -->|Direct Talk| C["Pod C<br/>10.244.3.2"]
    A -->|Direct Talk| C
    style A fill:#e3f2fd
    style B fill:#fff3e0
    style C fill:#e8f5e9
```
### The Three Golden Rules
| Rule | What It Means | Real Example |
|---|---|---|
| Pods talk freely | Any Pod reaches any Pod | Web app calls database |
| No NAT between Pods | Same IP everywhere | Pod sees its own real address |
| Nodes see all Pods | Agents on a node reach any Pod | kubelet health-checks a Pod directly |
## CNI Plugins: The Road Builders
### What is CNI?
CNI stands for Container Network Interface. Think of it as the road construction company for our city.
When a new Pod (house) is built, someone needs to:
- Build a road to it
- Give it an address
- Connect it to the main highway

That's exactly what CNI plugins do!
### How It Works (Simple Story)
1. Kubernetes says: "Build a new Pod!"
2. The CNI plugin gets a call: "Hey, network setup needed!"
3. CNI builds the network "road" to the Pod
4. CNI gives the Pod its IP address
5. The Pod is now connected to the city!
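The address-assignment step can be sketched in code. Below is a toy IPAM (IP Address Management) routine that hands out the next free address from a node's Pod range. The function name `allocate_ip` is made up for illustration; real CNI plugins do this inside the plugin binary, not in Python.

```python
import ipaddress

def allocate_ip(node_cidr: str, allocated: set) -> str:
    """Hand out the next free Pod IP from this node's range (toy IPAM)."""
    subnet = ipaddress.ip_network(node_cidr)
    for host in subnet.hosts():  # hosts() skips network/broadcast addresses
        ip = str(host)
        if ip not in allocated:
            allocated.add(ip)
            return ip
    raise RuntimeError(f"no free IPs left in {node_cidr}")

in_use = set()
print(allocate_ip("10.244.1.0/24", in_use))  # 10.244.1.1
print(allocate_ip("10.244.1.0/24", in_use))  # 10.244.1.2
```

Each new Pod simply gets the next unused address in its node's slice of the Pod CIDR.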
### Popular CNI Plugins
| Plugin | Superpower | Best For |
|---|---|---|
| Calico | Network security rules | Enterprise clusters |
| Flannel | Simple & lightweight | Small/medium clusters |
| Cilium | Super fast (uses eBPF) | High-performance needs |
| Weave | Easy encrypted traffic | Legacy clusters (no longer maintained) |
### Example: What Happens When a Pod Starts
```mermaid
graph TD
    K["Kubelet"] -->|1. Create Pod| CNI["CNI Plugin"]
    CNI -->|2. Setup Network| N["Network Interface"]
    N -->|3. Assign IP| P["Pod Ready!<br/>IP: 10.244.1.5"]
    style K fill:#bbdefb
    style CNI fill:#c8e6c9
    style P fill:#fff9c4
```
## Pod and Service CIDR: The Address System
### What is CIDR?
CIDR (say it: "cider") stands for Classless Inter-Domain Routing. It's like a zip code system for our city: it tells us which addresses belong to which neighborhood.
### Two Different Zip Code Zones
Kubernetes has two separate address neighborhoods:
| Zone | Purpose | Example Range | Who Lives Here |
|---|---|---|---|
| Pod CIDR | Addresses for Pods | 10.244.0.0/16 | All your Pods |
| Service CIDR | Addresses for Services | 10.96.0.0/12 | All your Services |
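You can sanity-check these two example ranges with Python's standard `ipaddress` module: the zones must never overlap (or routing would be ambiguous), and each has a fixed number of addresses.

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")     # Pod addresses
service_cidr = ipaddress.ip_network("10.96.0.0/12")  # Service addresses

# The two zones must not overlap, or a packet's destination is ambiguous.
print(pod_cidr.overlaps(service_cidr))   # False
print(pod_cidr.num_addresses)            # 65536 Pod addresses
print(service_cidr.num_addresses)        # 1048576 Service addresses
```

A /16 gives 2^16 = 65,536 addresses and a /12 gives 2^20 = 1,048,576, so there is plenty of room in both zones.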
### Why Two Different Zones?
Think of it this way:
- Pods = Individual houses (they come and go)
- Services = Post offices (stable addresses that never change)
When you send a letter (request) to a Post Office (Service), the Post Office knows which houses (Pods) to deliver it to!
### Real Numbers Example

```text
Your Cluster Setup:
├── Pod CIDR: 10.244.0.0/16
│   ├── Node 1 Pods: 10.244.1.0/24 (256 addresses)
│   ├── Node 2 Pods: 10.244.2.0/24 (256 addresses)
│   └── Node 3 Pods: 10.244.3.0/24 (256 addresses)
│
└── Service CIDR: 10.96.0.0/12
    ├── kubernetes: 10.96.0.1
    ├── my-web-app: 10.96.45.123
    └── my-database: 10.96.89.67
```
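The per-node /24 split in this example can be reproduced with `ipaddress` as a quick check (the node numbering here just follows the example's convention):

```python
import ipaddress

cluster = ipaddress.ip_network("10.244.0.0/16")

# Carve the cluster Pod CIDR into per-node /24 slices.
node_subnets = list(cluster.subnets(new_prefix=24))
print(len(node_subnets))               # 256 possible node slices
print(node_subnets[1])                 # 10.244.1.0/24 (Node 1 in the example)
print(node_subnets[1].num_addresses)   # 256 addresses per node
```

So a /16 Pod CIDR split into /24 slices supports up to 256 nodes, each with 256 Pod addresses.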
### The Magic: Why This Matters
```mermaid
graph TD
    subgraph PodNet["Pod Network"]
        P1["Pod 1<br/>10.244.1.5"]
        P2["Pod 2<br/>10.244.2.8"]
    end
    subgraph SvcNet["Service Network"]
        S["Service<br/>10.96.0.100"]
    end
    U["User Request"] --> S
    S --> P1
    S --> P2
    style S fill:#e1bee7
    style P1 fill:#c8e6c9
    style P2 fill:#c8e6c9
```
## Pod-to-Pod Communication: How Pods Talk
### The Simplest Thing in Kubernetes
Remember our golden rule? Every Pod can talk to every other Pod directly!
### Three Scenarios
#### 1. Same Node (Same Apartment Building)
Pods on the same node talk through a virtual bridge - like neighbors using the same elevator.
```mermaid
graph TD
    subgraph N1["Node 1"]
        B["Bridge<br/>cbr0"]
        P1["Pod A<br/>10.244.1.5"] --- B
        P2["Pod B<br/>10.244.1.6"] --- B
    end
    P1 -->|Direct via bridge| P2
    style B fill:#ffcdd2
    style P1 fill:#c8e6c9
    style P2 fill:#bbdefb
```
Example: Pod A (10.244.1.5) calls Pod B (10.244.1.6)
- Traffic goes: Pod A → Bridge → Pod B
- Super fast! Never leaves the node.
#### 2. Different Nodes (Different Buildings)
Pods on different nodes need the overlay network - like cars driving between buildings.
```mermaid
graph LR
    subgraph N1["Node 1"]
        P1["Pod A<br/>10.244.1.5"]
    end
    subgraph N2["Node 2"]
        P2["Pod C<br/>10.244.2.8"]
    end
    P1 -->|Overlay Network| P2
    style P1 fill:#c8e6c9
    style P2 fill:#bbdefb
```
Example: Pod A (Node 1) calls Pod C (Node 2)
- Traffic is encapsulated (put in an envelope)
- Sent across the physical network
- Unwrapped at destination node
- Delivered to Pod C
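The decision between these two paths can be sketched as a simple check: if the destination Pod's IP falls inside the local node's Pod CIDR, traffic stays on the bridge; otherwise it goes over the overlay. The function `route_choice` below is a made-up simplification for illustration, not real kernel routing.

```python
import ipaddress

def route_choice(local_node_cidr: str, dst_pod_ip: str) -> str:
    """Decide the path a packet takes: local bridge or overlay to another node."""
    local = ipaddress.ip_network(local_node_cidr)
    if ipaddress.ip_address(dst_pod_ip) in local:
        return "local bridge"     # destination Pod lives on this node
    return "overlay network"      # encapsulate and send to the other node

# Pod A sits on Node 1, whose Pod slice is 10.244.1.0/24:
print(route_choice("10.244.1.0/24", "10.244.1.6"))  # local bridge
print(route_choice("10.244.1.0/24", "10.244.2.8"))  # overlay network
```

This is why the per-node Pod CIDR slices matter: the IP alone tells the network which node a Pod lives on.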
#### 3. Through a Service (Using the Post Office)
Most real apps use Services for stable addressing.
```text
Web Pod → Service IP (10.96.0.50) → Database Pod
```
### Quick Comparison
| Scenario | Speed | How It Works |
|---|---|---|
| Same Node | Fastest | Virtual bridge |
| Different Nodes | Fast | Overlay network |
| Via Service | Fast | kube-proxy routing |
## kube-proxy Modes: The Traffic Controller
### What is kube-proxy?
kube-proxy is like a traffic controller that runs on every node. Its job:
When traffic arrives for a Service, direct it to the right Pod!
### Three Modes: How Traffic Gets Directed
#### Mode 1: iptables (The Rule Book)
Default mode in most clusters
Think of it as a giant rulebook that says:
- "If traffic goes to 10.96.0.50, send it to the Pod at 10.244.1.5"
```mermaid
graph LR
    T["Traffic"] --> IP["iptables Rules"]
    IP --> P1["Pod 1"]
    IP --> P2["Pod 2"]
    IP --> P3["Pod 3"]
    style IP fill:#ffecb3
```
| Pros | Cons |
|---|---|
| Simple & reliable | Slow with 1,000+ Services |
| Works everywhere | Only random backend selection |
| Kernel-level speed | Rules pile up |
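Why do rules "pile up"? iptables matching is essentially a top-to-bottom scan, so lookup cost grows with the number of Services. Here is a toy Python model of that rule book; real iptables rules live in the kernel, but the scan-until-match idea (and the random backend pick iptables mode uses) is the same.

```python
import random

# A toy "rule book": one rule per Service, mapping its ClusterIP to backend Pods.
rules = [
    ("10.96.0.50", ["10.244.1.5", "10.244.1.6"]),  # web service
    ("10.96.0.100", ["10.244.2.8"]),               # db service
]

def route(dst_ip: str) -> str:
    """Walk the rules top to bottom until one matches - O(number of Services)."""
    for service_ip, backends in rules:
        if dst_ip == service_ip:
            return random.choice(backends)  # iptables picks a backend randomly
    raise LookupError(f"no rule for {dst_ip}")

print(route("10.96.0.100"))  # 10.244.2.8 (its only backend)
```

With 10,000 Services there are 10,000 entries to walk through on a bad day, which is exactly the scaling problem IPVS solves.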
#### Mode 2: IPVS (The Smart Router)
Better for large clusters
IPVS is like having a professional traffic management system instead of just rules.
```mermaid
graph LR
    T["Traffic"] --> IPVS["IPVS<br/>Load Balancer"]
    IPVS --> P1["Pod 1"]
    IPVS --> P2["Pod 2"]
    IPVS --> P3["Pod 3"]
    style IPVS fill:#c8e6c9
```
| Pros | Cons |
|---|---|
| Handles 10,000+ Services | Needs IPVS kernel modules |
| Smart load-balancing options | Slightly more complex |
| Better performance at scale | Must be enabled manually |
Load Balancing Options in IPVS:
- Round-robin (take turns)
- Least connections (send to least busy)
- Source hashing (same user → same pod)
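These three strategies are easy to model. The sketch below simulates each in plain Python as an illustration of the scheduling ideas; it is not IPVS itself, and the IPs are the example addresses used throughout this article.

```python
import hashlib
from itertools import cycle

backends = ["10.244.1.5", "10.244.2.8", "10.244.3.2"]

# Round-robin: backends take turns, one request each.
rr = cycle(backends)
print(next(rr), next(rr), next(rr))   # each backend once, in order

# Least connections: pick the backend with the fewest active connections.
active = {"10.244.1.5": 12, "10.244.2.8": 3, "10.244.3.2": 7}
print(min(active, key=active.get))    # 10.244.2.8

# Source hashing: hash the client IP so the same user lands on the same Pod.
def pick_by_source(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

print(pick_by_source("203.0.113.9") == pick_by_source("203.0.113.9"))  # True
```

Source hashing is handy for session stickiness: as long as the backend set doesn't change, a client keeps hitting the same Pod.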
#### Mode 3: nftables (The New Kid)

*Newer, modern approach*

nftables is the Linux kernel's modern replacement for iptables, and recent kube-proxy versions can use it for faster rule processing. (Don't confuse it with the separate `kernelspace` mode, which is Windows-only.)
| Pros | Cons |
|---|---|
| Modern architecture | Not widely tested yet |
| Faster rule processing than iptables | Requires a recent Kubernetes version |
### Quick Mode Comparison
| Mode | Best For | Services Limit | Setup |
|---|---|---|---|
| iptables | Small-medium clusters | ~5,000 | Default |
| IPVS | Large clusters | 10,000+ | Enable manually |
| nftables | Future-proof setups | 10,000+ | Enable manually (recent versions) |
### How to Check Your Mode
```shell
# See what mode kube-proxy uses
kubectl get configmap \
  kube-proxy -n kube-system \
  -o yaml | grep mode
```
## Putting It All Together
Let's trace a real request through the entire system!
### Story: Web App Calls Database
```mermaid
graph TD
    U["User"] -->|1. Request| S["Web Service<br/>10.96.0.50"]
    S -->|2. kube-proxy routes| WP["Web Pod<br/>10.244.1.5"]
    WP -->|3. Calls db-service| DS["DB Service<br/>10.96.0.100"]
    DS -->|4. kube-proxy routes| DP["DB Pod<br/>10.244.2.8"]
    DP -->|5. Response| WP
    WP -->|6. Response| U
    style S fill:#e1bee7
    style DS fill:#e1bee7
    style WP fill:#c8e6c9
    style DP fill:#bbdefb
```
**What Happened:**

1. User sends a request to the Web Service (10.96.0.50)
2. kube-proxy on the node routes it to the Web Pod (10.244.1.5)
3. The Web Pod needs data, so it calls the DB Service (10.96.0.100)
4. kube-proxy routes to the DB Pod (10.244.2.8) - maybe on a different node!
5. The DB Pod sends the data back
6. The response returns to the user
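The whole trace can be condensed into a toy model: two Services, each with one backend Pod, and a stand-in for kube-proxy's routing step. (Real kube-proxy rewrites packet destinations in the kernel; this sketch just looks up the same mapping, with IPs from the story above.)

```python
# A toy cluster: Services (stable IPs) map to the Pods behind them.
services = {
    "10.96.0.50": ["10.244.1.5"],    # web service -> web pod
    "10.96.0.100": ["10.244.2.8"],   # db service  -> db pod
}

def kube_proxy_route(service_ip: str) -> str:
    """Pick a backend Pod for a Service IP (one backend each here, so trivial)."""
    return services[service_ip][0]

# Steps 1-2: user hits the web Service; kube-proxy routes to the web Pod.
web_pod = kube_proxy_route("10.96.0.50")
# Steps 3-4: the web Pod calls the db Service; kube-proxy routes to the db Pod.
db_pod = kube_proxy_route("10.96.0.100")
print(f"user -> {web_pod} -> {db_pod} -> back to user")
```

Notice that the Pods only ever dial the stable Service IPs; which Pod actually answers is decided fresh at routing time.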
### The Key Players Summary
| Component | Job | Real World Analogy |
|---|---|---|
| CNI Plugin | Builds network, assigns IPs | Road construction company |
| Pod CIDR | Address space for Pods | Residential zip codes |
| Service CIDR | Address space for Services | Commercial zip codes |
| kube-proxy | Routes traffic to right Pod | Traffic controller |
## You Made It!
You now understand how the Kubernetes networking city works:

- CNI plugins build the roads and assign addresses
- Pod & Service CIDR keep addresses organized in zones
- Pods talk directly to each other (same or different nodes)
- kube-proxy directs traffic from Services to the right Pods
Remember: Every Pod is a citizen with equal rights to talk to any other Pod. Services are like post offices with stable addresses. And kube-proxy is the traffic controller making sure everyone finds their way!
Happy networking in your Kubernetes city!
