
Advanced Kubernetes Resource Management 🎯

The Story of the Careful Librarian

Imagine you're a librarian at the busiest library in the world. Every day, thousands of books come and go. But here's the tricky part: you can't just throw away a book the moment someone returns it. Sometimes, other people are still reading pages from that book. Sometimes, you need to make sure all the bookmarks are removed first.

This is exactly what Kubernetes does with Advanced Resource Management! Let's explore three powerful tools that help Kubernetes be the world's best librarian.


🗑️ Finalizers and Garbage Collection (GC)

What Are Finalizers?

Think of finalizers as a "do this first before deleting me" checklist.

When you tell Kubernetes to delete something, it doesn't just vanish immediately. First, it checks if there are any special cleanup tasks to complete.

graph TD
  A["Delete Request"] --> B{Has Finalizers?}
  B -->|Yes| C["Run Cleanup Tasks"]
  C --> D["Remove Finalizer"]
  D --> B
  B -->|No| E["Actually Delete"]

Real-Life Example

Imagine you have a Pod that saves data to an external database. Before deleting the Pod, you want to:

  1. Save any unsaved data
  2. Close the database connection
  3. Tell the database "I'm leaving!"

Finalizers make sure these steps happen before the Pod disappears.

Simple Finalizer Example

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  finalizers:
    - my-company.com/cleanup
spec:
  containers:
    - name: app
      image: my-app:v1

What happens when you delete this Pod?

  1. Kubernetes marks it for deletion
  2. Waits for my-company.com/cleanup to finish
  3. Only then does the Pod actually disappear
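
In practice, whatever controller handles my-company.com/cleanup is expected to remove the finalizer once its work is done. If a Pod ever gets stuck in Terminating because nothing removes it, you can inspect the state and clear the finalizer by hand; this is a minimal sketch using the Pod above, and clearing finalizers skips whatever cleanup they were guarding, so treat it as a last resort:

# Confirm the Pod is marked for deletion but still has finalizers
kubectl get pod my-app \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'

# Remove all finalizers so the deletion can complete (use with care!)
kubectl patch pod my-app --type=merge -p '{"metadata":{"finalizers":null}}'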

Garbage Collection (GC)

Garbage Collection is like having a smart cleaning robot in your library.

When a parent book (let's say, a big encyclopedia) is deleted, what happens to all its chapters (child volumes)?

Kubernetes uses owner references to track this:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-deployment
      uid: abc-123-xyz
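
You normally never write ownerReferences yourself; the controller that creates a child object fills them in. To see who owns a resource, a query like this works (assuming the ReplicaSet above exists in your cluster):

# Print the kind and name of each owner of the ReplicaSet
kubectl get replicaset my-replicaset \
  -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}{"/"}{.name}{"\n"}{end}'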

Two Ways to Clean Up

Mode | What Happens | Real-Life Analogy
Foreground | Wait for children to delete first | "Kids leave the playground before I lock the gate"
Background | Delete parent immediately, children later | "I'll lock the gate, someone else will get the kids"
# Delete with foreground cascading
kubectl delete deployment my-app \
  --cascade=foreground
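
For comparison, the background mode from the table can be requested explicitly; it is also what you get when you pass no flag at all:

# Delete with background cascading (the default): parent goes now, children are cleaned up afterwards
kubectl delete deployment my-app \
  --cascade=background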

📝 Server-Side Apply

The Problem: "Who Changed This?"

Imagine two chefs working on the same recipe at once. Chef A adds salt. Chef B adds pepper. But wait: Chef B accidentally removes Chef A's salt!

This is the conflict problem in Kubernetes.

The Old Way (Client-Side Apply)

graph TD
  A["You Send Full Object"] --> B["Server Replaces Everything"]
  B --> C["Other Changes Lost! 😱"]

The New Way (Server-Side Apply)

graph TD
  A["You Send Only Changes"] --> B["Server Tracks Who Owns What"]
  B --> C["Everyone's Changes Safe! 🎉"]

How It Works

Server-Side Apply keeps track of who owns which field. Each field has a "manager" who last touched it.

# Apply with server-side apply
kubectl apply --server-side \
  -f my-resource.yaml
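
The habit to build with Server-Side Apply is to send only the fields your tool actually cares about. Here is a minimal sketch; the Deployment name, manager name, and replica count are just illustrations, and it assumes a Deployment called web-app already exists:

# Apply a partial manifest that only claims ownership of spec.replicas
cat <<'EOF' | kubectl apply --server-side --field-manager=replica-tuner -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 5
EOF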

Seeing Field Managers

# managedFields are hidden by default in YAML output, so request them explicitly
kubectl get pod my-pod -o yaml --show-managed-fields

Look for the managedFields section:

managedFields:
  - manager: kubectl-client-side-apply
    operation: Update
    fieldsV1:
      f:spec:
        f:containers: {}
  - manager: my-controller
    operation: Apply
    fieldsV1:
      f:metadata:
        f:labels: {}
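
If you only care about one manager's slice of the object, a jsonpath filter narrows it down; the manager name here is the one from the example above:

# Print just the fields owned by my-controller
kubectl get pod my-pod \
  -o jsonpath='{.metadata.managedFields[?(@.manager=="my-controller")].fieldsV1}'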

Handling Conflicts

What if two managers try to change the same field?

# Force your changes (take ownership)
kubectl apply --server-side \
  --force-conflicts \
  -f my-resource.yaml

Be careful! This is like saying "I don't care who was editing this field, it's mine now!"
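
Before forcing, it is worth checking who currently owns the contested fields. One quick way to list every manager and the operation it last used (the resource name is illustrative):

# List each field manager together with its last operation
kubectl get deployment web-app \
  -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.operation}{"\n"}{end}'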


🎛️ Field Management

The Big Picture

Field Management is the traffic system that makes Server-Side Apply work. It answers one crucial question:

"Who is responsible for each piece of this object?"

Three Key Concepts

graph TD
  A["Field Management"] --> B["Managers"]
  A --> C["Operations"]
  A --> D["Ownership Rules"]
  B --> E["Who changed it?"]
  C --> F["How did they change it?"]
  D --> G["Who wins conflicts?"]

1. Managers

Every change has a manager name. It's like signing your work.

managedFields:
  - manager: "helm"
    operation: Apply
  - manager: "kubectl"
    operation: Update
  - manager: "my-controller"
    operation: Apply
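
When you apply from a script or CI pipeline, sign your work explicitly by picking the manager name yourself; the name below is just an example, following the naming tip later in this page:

# Record a descriptive manager name instead of the kubectl default
kubectl apply --server-side \
  --field-manager=my-company.com/deploy-script \
  -f my-resource.yaml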

2. Operations

Operation | What It Means
Apply | "I declare how this field should be"
Update | "I'm making a one-time change"
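
Which operation gets recorded depends on how the change was made: declarative server-side applies show up as Apply, while imperative commands and ordinary controller writes show up as Update. A rough sketch:

# Recorded as operation: Apply (declarative, ownership-aware)
kubectl apply --server-side -f my-resource.yaml

# Recorded as operation: Update (imperative, one-off change)
kubectl scale deployment web-app --replicas=5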

3. Ownership and Sharing

Exclusive Fields: Only one manager can own it.

Shared Fields: Multiple managers can add to it (like a list).

# Labels can have multiple managers!
metadata:
  labels:
    app: my-app        # Owned by: helm
    version: v1        # Owned by: my-controller
    env: production    # Owned by: kubectl
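
A hedged sketch of how those three labels could end up with three different owners; the resource and file names here are placeholders:

# helm's rendered manifest sets app
kubectl apply --server-side --field-manager=helm -f helm-output.yaml

# a controller's partial apply sets version
kubectl apply --server-side --field-manager=my-controller -f version-patch.yaml

# an imperative kubectl command sets env
kubectl label deployment web-app env=production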

Practical Example

Imagine a Deployment managed by three different tools:

# Initial creation by Helm
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.19

Now a controller wants to add a sidecar:

# Controller's patch (Server-Side Apply)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  template:
    spec:
      containers:
        - name: logging-sidecar
          image: fluent-bit:1.8

Result: Both containers exist! The controller owns the sidecar, Helm owns the main container.
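
If you wanted to reproduce that merge by hand, you could send the controller's partial manifest yourself under its manager name; the file name here is a placeholder for the patch shown above:

# Apply the sidecar-only manifest as the controller would
kubectl apply --server-side \
  --field-manager=my-controller \
  -f sidecar-patch.yaml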

Viewing All Field Managers

kubectl get deployment web-app \
  -o jsonpath='{.metadata.managedFields[*].manager}'

Output:

helm my-controller kubectl

🎓 Putting It All Together

Here's how these three concepts work as a team:

graph TD
  A["You Create Resource"] --> B["Server-Side Apply"]
  B --> C["Field Management Tracks Ownership"]
  C --> D["Resource Exists"]
  D --> E["You Delete Resource"]
  E --> F{Has Finalizers?}
  F -->|Yes| G["Run Cleanup"]
  G --> H["Remove Finalizer"]
  H --> F
  F -->|No| I["Garbage Collection"]
  I --> J["Delete Children"]
  J --> K["Resource Gone"]

Key Takeaways

Concept | Purpose | Remember It As
Finalizers | Pre-deletion checklist | "Wait! I have things to do first!"
GC | Automatic cleanup of related resources | "The cleaning robot"
Server-Side Apply | Safe multi-user editing | "Everyone signs their work"
Field Management | Tracks who owns what | "The librarian's ledger"

💡 Pro Tips

  1. Always name your managers - Use descriptive names like company-name/controller-name

  2. Use foreground deletion for critical resources - Ensures children are cleaned up first

  3. Check managed fields before force-applying - Know what you're overwriting

  4. Keep finalizers simple - Long-running finalizers can block deletion forever

  5. Test with --dry-run=server - Preview changes before applying:

    kubectl apply --server-side \
      --dry-run=server \
      -f my-resource.yaml
    

🎉 You Did It!

You now understand how Kubernetes:

  • Uses Finalizers to run cleanup before deletion
  • Uses Garbage Collection to automatically remove orphaned resources
  • Uses Server-Side Apply to safely merge changes from multiple sources
  • Uses Field Management to track ownership of every field

You're no longer just a Kubernetes user; you're a Kubernetes librarian who knows exactly how every book is managed, cleaned, and organized!
