Workload Controllers: StatefulSets and DaemonSets
The Story of Special Workers in a Kubernetes Factory
Imagine a huge toy factory. Inside, there are many workers (pods). Some workers are like clones – they can do any job and replace each other easily. But some workers are special. They have their own name badges, their own lockers, and their own tools. If one goes home, another can’t just take their place!
Today, we’ll meet two special types of workers: StatefulSets (workers who need their own identity) and DaemonSets (workers who must be on every factory floor).
🎭 StatefulSets: Workers with Permanent Name Badges
What is a StatefulSet?
A StatefulSet is like hiring workers who need to remember things.
Think of a library with three librarians:
- Alice manages shelf A and remembers where every book goes
- Bob manages shelf B
- Charlie manages shelf C
If Alice gets sick, you can’t just send any random person. The new person needs to become “Alice” – same name badge, same locker with Alice’s notes, same shelf!
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: librarian
spec:
  serviceName: "library"
  replicas: 3
  selector:
    matchLabels:
      app: librarian
  template:
    metadata:
      labels:
        app: librarian
    spec:
      containers:
        - name: keeper
          image: library:v1
```
Key Point: StatefulSets are for apps that need to remember who they are and keep their data.
🏷️ StatefulSet Identity: Every Worker Gets a Name
The Magic of Predictable Names
In a regular job (Deployment), workers get random names like:
```
worker-xK7hQ
worker-9bZmL
```
But StatefulSet workers get numbered names:
- `librarian-0` (first to start, last to leave)
- `librarian-1` (second)
- `librarian-2` (third)
The Three Parts of Identity
```mermaid
graph TD
    A[StatefulSet Pod Identity] --> B[🏷️ Stable Name]
    A --> C[🌐 DNS Address]
    A --> D[💾 Storage Volume]
    B --> E["librarian-0, librarian-1..."]
    C --> F["librarian-0.library.default"]
    D --> G["Each pod keeps its disk"]
```
1. Stable Name
Like a permanent employee ID. Even if librarian-0 restarts, it comes back as librarian-0.
2. Stable Network Identity
Each worker gets their own phone number (DNS):
```
librarian-0.library.default.svc.cluster.local
librarian-1.library.default.svc.cluster.local
```
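Those per-pod phone numbers only work if the StatefulSet's serviceName points at a headless Service. Here's a minimal sketch of one, matching the library/librarian names from the manifest above (the port is illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: library          # must match the StatefulSet's serviceName
spec:
  clusterIP: None        # "headless": no shared VIP, just one DNS record per pod
  selector:
    app: librarian
  ports:
    - name: web
      port: 80           # illustrative; expose whatever your app listens on
```
With this in place, `librarian-0.library.default.svc.cluster.local` resolves directly to pod 0's IP.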
3. Stable Storage
Each worker keeps their own locker (PersistentVolume). When librarian-0 comes back from vacation, their locker with all their notes is still there!
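The manifest earlier was kept minimal; in a real StatefulSet, the locker comes from volumeClaimTemplates. A sketch of what you'd add under the StatefulSet spec (the claim name and 1Gi size are just examples):
```yaml
# Goes under the StatefulSet spec, alongside template.
# Each pod gets its own PersistentVolumeClaim, named notes-librarian-0,
# notes-librarian-1, ... and keeps it across restarts and rescheduling.
volumeClaimTemplates:
  - metadata:
      name: notes
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi   # illustrative size
```
The container then mounts the claim with a matching volumeMounts entry (name: notes).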
Example: Database Cluster
```
mysql-0 → Primary (takes writes)
mysql-1 → Replica (copies from mysql-0)
mysql-2 → Replica (copies from mysql-0)
```
Each knows their role because of their stable identity!
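How does each pod learn its role? A common trick is to read the ordinal out of the pod's own hostname, which inside a StatefulSet is always name-ordinal. A sketch, not a full MySQL setup (the echo lines stand in for real database configuration):
```yaml
# Container command sketch: the hostname is mysql-0, mysql-1, ... in a StatefulSet pod.
command:
  - sh
  - -c
  - |
    host=$(hostname)        # e.g. "mysql-2"
    ordinal=${host##*-}     # strip everything up to the last "-" -> "2"
    if [ "$ordinal" = "0" ]; then
      echo "I am mysql-0: acting as primary"
    else
      echo "I am mysql-$ordinal: replicating from mysql-0"
    fi
    # ...then start the real database process here
```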
🔄 StatefulSet Update Strategies
How Do We Train Workers on New Methods?
When the library gets new rules, how do we teach all librarians?
Strategy 1: RollingUpdate (Default)
Like a relay race – train one person at a time, starting with the newest hire.
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 0
```
The update goes backwards:
- First, train `librarian-2` (newest)
- Then `librarian-1`
- Finally `librarian-0` (most senior)
Why backwards? Because librarian-0 is often the leader. We update followers first!
Partition: Partial Updates
Want to test new training on just the newest workers?
```yaml
rollingUpdate:
  partition: 2
```
This means: “Only update pods with an ordinal ≥ 2.”
So `librarian-2` gets the update, while `librarian-0` and `librarian-1` stay on the old version.
Perfect for: Testing changes safely before full rollout!
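A sketch of how a staged rollout might play out in practice: push the new image with a high partition, verify, then lower the partition step by step (the library:v2 tag is hypothetical):
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 2   # step 1: only librarian-2 moves to library:v2
    # step 2: happy with librarian-2? lower partition to 1 (librarian-1 updates)
    # step 3: lower partition to 0 to finish (librarian-0 updates last)
```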
Strategy 2: OnDelete
Manual control – workers only learn new rules when they take a break.
```yaml
updateStrategy:
  type: OnDelete
```
You must manually delete pods. When they come back, they learn the new rules.
Good for: When you want full control over every single update.
```mermaid
graph TD
    A[Update Strategies] --> B[RollingUpdate]
    A --> C[OnDelete]
    B --> D["Auto update, newest first"]
    B --> E["Use partition for staged rollout"]
    C --> F["Manual delete triggers update"]
```
👮 DaemonSets: One Worker on Every Floor
What is a DaemonSet?
Imagine your factory has 10 floors. You need exactly one security guard on each floor. Not two, not zero – exactly one!
A DaemonSet ensures one copy of a pod runs on every node (computer) in your cluster.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: security-guard
spec:
  selector:
    matchLabels:
      app: guard
  template:
    metadata:
      labels:
        app: guard
    spec:
      containers:
        - name: monitor
          image: security:v1
```
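One practical detail the manifest above skips: DaemonSet pods only land on nodes whose taints they tolerate. If the guards should also patrol control-plane nodes (usually tainted in kubeadm-style clusters), you'd add a toleration like this sketch under the pod template's spec:
```yaml
# Under template.spec: lets the guard run on tainted control-plane nodes too.
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```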
Real-World DaemonSet Uses
| Use Case | What It Does |
|---|---|
| Log Collector | Grabs logs from every machine |
| Monitoring Agent | Watches health of each node |
| Network Plugin | Sets up networking on each node |
| Storage Helper | Manages disks on each machine |
```mermaid
graph TD
    A[Cluster with 4 Nodes] --> B[Node 1]
    A --> C[Node 2]
    A --> D[Node 3]
    A --> E[Node 4]
    B --> F[🛡️ Guard Pod]
    C --> G[🛡️ Guard Pod]
    D --> H[🛡️ Guard Pod]
    E --> I[🛡️ Guard Pod]
```
Magic Feature: When a new node joins the cluster, the DaemonSet automatically puts a guard there!
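The reverse is also possible: if only some floors need a guard, a nodeSelector in the pod template restricts the DaemonSet to matching nodes. A sketch (the floor=secure label is made up for this example):
```yaml
# Under template.spec: only nodes labeled floor=secure get a guard pod.
# Label a node with: kubectl label node <node-name> floor=secure
nodeSelector:
  floor: secure
```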
🔄 DaemonSet Update Strategies
How Do We Upgrade All Security Guards?
Strategy 1: RollingUpdate (Default)
One guard at a time gets new training. Others keep working.
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
```
maxUnavailable: How many guards can be in training at once?
- `1` = one at a time (safest)
- `3` = three at once (faster)
- `25%` = a quarter of all guards
Strategy 2: OnDelete
Guards only get new training when they quit and come back.
```yaml
updateStrategy:
  type: OnDelete
```
You control: Delete old guard → New guard appears with updates.
RollingUpdate Settings
```yaml
rollingUpdate:
  maxUnavailable: 2
  maxSurge: 0
```
- maxUnavailable: Guards that can be offline at once
- maxSurge: Extra guards during update (usually 0 for DaemonSets since we want exactly 1 per node)
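If an unguarded floor is unacceptable even briefly, newer Kubernetes versions let a DaemonSet surge instead: the new guard starts before the old one leaves. Note that maxSurge and maxUnavailable can't both be non-zero. A sketch:
```yaml
rollingUpdate:
  maxUnavailable: 0   # never leave a floor unguarded
  maxSurge: 1         # briefly run old and new guard side by side on each node
```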
```mermaid
graph TD
    A[DaemonSet Update] --> B[RollingUpdate]
    A --> C[OnDelete]
    B --> D["Automatic, controlled pace"]
    B --> E["maxUnavailable controls speed"]
    C --> F["Manual pod deletion required"]
```
🔍 Workload Type Comparison
The Big Picture: Deployment vs StatefulSet vs DaemonSet
Think of hiring different types of workers:
| Feature | Deployment | StatefulSet | DaemonSet |
|---|---|---|---|
| Worker Type | Interchangeable clones | Named specialists | One per location |
| Names | Random (pod-xK7hQ) | Ordered (app-0, app-1) | Random, tied to a node |
| Storage | Shared or none | Each keeps their own | Usually local |
| Scaling | Any number | Ordered up/down | Matches nodes |
| Updates | Any order | Reverse order | Node by node |
| Use Case | Stateless web apps | Databases, caches | Logging, monitoring |
Quick Decision Guide
```mermaid
graph TD
    A[What do you need?] --> B{Need to remember data?}
    B -->|No| C{One copy per machine?}
    B -->|Yes| D[Use StatefulSet]
    C -->|Yes| E[Use DaemonSet]
    C -->|No| F[Use Deployment]
    D --> G["Databases, Kafka, Redis Cluster"]
    E --> H["Logging, Monitoring, Network"]
    F --> I["Web servers, APIs, Workers"]
```
Real Examples
Deployment: Your website servers
- Any pod can handle any request
- Scale up/down freely
- No special identity needed
StatefulSet: Your MongoDB cluster
- `mongo-0` is primary, knows it’s #0
- `mongo-1` and `mongo-2` are replicas
- Each keeps its data volume
DaemonSet: Your log shipper (Fluentd)
- Every node needs exactly one
- Collects logs from that specific node
- Auto-added to new nodes
🎯 Summary: What We Learned
- StatefulSets = Workers with permanent ID badges and lockers
  - Stable names: `app-0`, `app-1`, `app-2`
  - Stable storage: Each keeps their disk
  - Stable network: Each has their own DNS
- StatefulSet Updates = Train workers in reverse order
  - RollingUpdate: Automatic, newest first
  - Partition: Test on some workers first
  - OnDelete: Manual control
- DaemonSets = One security guard per floor
  - Exactly one pod per node
  - Auto-scales with cluster size
  - Perfect for node-level services
- DaemonSet Updates = Upgrade guards systematically
  - RollingUpdate: One at a time (controlled)
  - OnDelete: Manual deletion triggers update
- Choosing the Right Type
  - Stateless apps → Deployment
  - Apps needing identity/storage → StatefulSet
  - One per node → DaemonSet
💡 Remember This!
StatefulSets are like hospital nurses – each has their own patients (data) and their own badge number. You can’t just swap them!
DaemonSets are like fire extinguishers – you need exactly one on every floor, and when you add a new floor, one automatically appears!
You’ve got this! These controllers are just different ways to organize workers in your Kubernetes factory. Match the controller to your app’s needs, and you’re golden! 🌟