🚀 Go Memory & Performance: The City Inside Your Computer
The Big Picture: Your Program is a Bustling City
Imagine your Go program is a busy city. Memory is the land where buildings stand. Performance is how fast the city runs. Today, you’ll become the mayor who understands every corner of this city!
🗑️ Garbage Collection: The Automatic Cleanup Crew
What is Garbage Collection?
Think of your city having a magic cleanup crew. They work while everyone is busy. They find old toys (data) nobody uses anymore and recycle them.
Simple Example:
func makeGreeting() {
    name := "Alice" // Create data
    fmt.Println(name)
} // Function ends, "Alice" becomes garbage
// The cleanup crew recycles it!
How Does Go’s GC Work?
Go uses a tri-color marking system:
- 🤍 White: “Maybe garbage” - might get cleaned
- ⚫ Black: “Definitely needed” - survives this GC cycle
- 🔘 Gray: “Still checking” - being examined
graph TD
    A["Start GC"] --> B["Mark all objects WHITE"]
    B --> C["Mark ROOT objects GRAY"]
    C --> D["Process GRAY objects"]
    D --> E["If object has references"]
    E --> F["Mark children GRAY"]
    F --> G["Mark parent BLACK"]
    G --> D
    E --> H["No more GRAY?"]
    H --> I["Sweep WHITE objects"]
    I --> J["Done!"]
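Want to watch the cleanup crew in action? Here's a minimal sketch that creates a pile of short-lived garbage and checks the collector's counter with runtime.ReadMemStats before and after a forced runtime.GC() (the 1 MB allocation size is an arbitrary choice):
package main

import (
    "fmt"
    "runtime"
)

func main() {
    var m runtime.MemStats

    runtime.ReadMemStats(&m)
    fmt.Println("GC cycles so far:", m.NumGC)

    // Create garbage: short-lived slices nobody keeps
    for i := 0; i < 100; i++ {
        _ = make([]byte, 1<<20) // 1 MB, immediately unreachable
    }

    runtime.GC() // force a collection so the numbers update

    runtime.ReadMemStats(&m)
    fmt.Println("GC cycles now:", m.NumGC)
    fmt.Println("Heap in use (bytes):", m.HeapAlloc)
}
You can also run any program with GODEBUG=gctrace=1 to get one summary line printed per collection.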
Real Life Impact
// BAD: Creates lots of garbage
func badLoop() {
    for i := 0; i < 1000000; i++ {
        s := fmt.Sprintf("item %d", i)
        _ = s
    }
}
// GOOD: Reuse memory
func goodLoop() {
    var builder strings.Builder
    for i := 0; i < 1000000; i++ {
        builder.Reset()
        builder.WriteString("item ")
        builder.WriteString(strconv.Itoa(i))
        _ = builder.String()
    }
}
📚 Stack vs Heap: Two Types of Storage
The Stack: Your Desk
The stack is like your desk. Fast to use. Small space. When you leave, everything gets cleared automatically.
The Heap: The Warehouse
The heap is like a big warehouse. More space. Slower to access. Needs the cleanup crew (GC) to organize.
graph TD
    subgraph STACK["📚 STACK - Fast & Automatic"]
        S1["Local variables"]
        S2["Function parameters"]
        S3["Return addresses"]
    end
    subgraph HEAP["🏭 HEAP - Big & Managed"]
        H1["Shared data"]
        H2["Large objects"]
        H3["Long-lived data"]
    end
    S1 --> |"Small, local"| STACK
    H1 --> |"Big, shared"| HEAP
Code Example
func stackExample() int {
    x := 42 // Lives on STACK
    return x // Fast! Auto-cleaned
}
func heapExample() *int {
    x := 42 // Escapes to the HEAP (the compiler sees the returned pointer)
    return &x // The pointer outlives the function
}
Key Insight:
- Stack = Fast, automatic, limited
- Heap = Flexible, slower, needs GC
🔍 Escape Analysis: Where Does Data Live?
The Detective Work
Go’s compiler is like a detective. It looks at your code and asks: “Where should this data live?”
If data escapes the function, it goes to the heap. If it stays local, it stays on the stack.
See It In Action
Run this command to see escape analysis:
go build -gcflags="-m" main.go
Examples That Escape
// ESCAPES: Returns pointer
func escape1() *int {
    x := 10
    return &x // x escapes to heap
}
// ESCAPES: Stored in interface
func escape2() interface{} {
    x := 10
    return x // x escapes to heap
}
// STAYS: Local use only
func noEscape() int {
    x := 10
    return x // x stays on stack
}
Why Does This Matter?
| Scenario | Where | Speed |
|---|---|---|
| Local variable, no pointer | Stack | ⚡ Super fast |
| Return pointer | Heap | 🐢 Slower |
| Interface conversion | Heap | 🐢 Slower |
| Closure captures | Heap | 🐢 Slower |
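The last row, closure captures, deserves its own example. Here's a small sketch (makeCounter is an invented name) you can feed to go build -gcflags="-m" to see the escape for yourself:
// ESCAPES: Captured by a closure
func makeCounter() func() int {
    count := 0 // the closure keeps count alive, so it moves to the heap
    return func() int {
        count++
        return count
    }
}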
🍕 Slice Internals: Pizza Boxes Explained
What’s Inside a Slice?
A slice is like a pizza box label that points to actual pizzas (data).
Every slice has THREE parts:
- Pointer - Where’s the pizza?
- Length - How many slices can I eat?
- Capacity - How many slices fit in the box?
graph LR
    subgraph SLICE["Slice Header - 24 bytes"]
        P["Pointer"]
        L["Length: 3"]
        C["Capacity: 5"]
    end
    subgraph ARRAY["Underlying Array"]
        A1["10"]
        A2["20"]
        A3["30"]
        A4["..."]
        A5["..."]
    end
    P --> A1
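Curious where the "24 bytes" in the diagram comes from? Here's a quick check (import fmt and unsafe), assuming a 64-bit machine where a pointer and an int are 8 bytes each:
var s []int
// pointer (8) + length (8) + capacity (8) = 24 bytes
fmt.Println(unsafe.Sizeof(s)) // 24 on 64-bit platforms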
Code Example
func sliceDemo() {
    // Create slice with len=3, cap=5
    s := make([]int, 3, 5)
    fmt.Println(len(s)) // 3
    fmt.Println(cap(s)) // 5

    // Append within capacity (fast!)
    s = append(s, 4, 5)

    // Append beyond capacity (slow!)
    // Creates new bigger array
    s = append(s, 6)
}
The Gotcha: Shared Arrays!
original := []int{1, 2, 3, 4, 5}
slice1 := original[1:3] // [2, 3]
slice1[0] = 999
// SURPRISE! original is now:
// [1, 999, 3, 4, 5]
Tip: Use copy() to avoid surprises:
safe := make([]int, len(slice1))
copy(safe, slice1)
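Another option is the full slice expression (three indexes instead of two), which caps the new slice's capacity so a later append must copy instead of writing into the shared array:
original := []int{1, 2, 3, 4, 5}
limited := original[1:3:3] // [2, 3] with len 2, cap 2 (low:high:max)
limited = append(limited, 999) // capacity exceeded, so append copies
// original is still [1, 2, 3, 4, 5]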
♻️ sync.Pool: The Recycling Center
What is sync.Pool?
Remember how recycling saves resources? sync.Pool is Go’s object recycling center.
Instead of creating new objects (expensive), you borrow from the pool and return when done.
Simple Example
var bufferPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func processData(data []byte) {
    // BORROW from pool
    buf := bufferPool.Get().(*bytes.Buffer)

    // USE the buffer
    buf.Write(data)
    result := buf.String()

    // RETURN to pool (reset first!)
    buf.Reset()
    bufferPool.Put(buf)

    _ = result
}
When to Use sync.Pool?
| Use Case | Good Fit? |
|---|---|
| Temporary buffers | ✅ Yes! |
| HTTP request objects | ✅ Yes! |
| Long-lived objects | ❌ No |
| Objects with state | ❌ No |
Warning: Pool objects can be garbage collected anytime. Don’t store important data!
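One handy pattern (a sketch built on the bufferPool above, with an invented function name): pair every Get with a deferred Put, so the buffer returns to the pool even if the function exits early:
func processWithDefer(data []byte) string {
    buf := bufferPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset() // wipe leftover state before recycling
        bufferPool.Put(buf)
    }()

    buf.Write(data)
    return buf.String() // String() copies the bytes, so returning it is safe
}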
⚛️ Atomic Package: The Traffic Light
What are Atomics?
Imagine a busy intersection. Without traffic lights, cars crash. Atomics are traffic lights for your data.
They let multiple goroutines safely read/write the same variable.
Basic Operations
var counter int64

func safeIncrement() {
    // ATOMIC: Safe for many goroutines
    atomic.AddInt64(&counter, 1)
}

func safeRead() int64 {
    // ATOMIC: Gets latest value
    return atomic.LoadInt64(&counter)
}

func safeSet(val int64) {
    // ATOMIC: Sets safely
    atomic.StoreInt64(&counter, val)
}
Compare-And-Swap: The Magic Trick
var value int64 = 100

func tryUpdate(old, new int64) bool {
    // Only update if current == old
    return atomic.CompareAndSwapInt64(&value, old, new)
}

// Usage:
success := tryUpdate(100, 200)
// Only succeeds if value is still 100
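In real code, CompareAndSwap usually lives inside a retry loop: load the current value, compute the new one, and try again if another goroutine got there first. A minimal sketch (doubleValue is an invented name):
// Atomically double `value`, retrying if another goroutine
// changes it between our Load and our CompareAndSwap
func doubleValue() {
    for {
        old := atomic.LoadInt64(&value)
        if atomic.CompareAndSwapInt64(&value, old, old*2) {
            return // swap succeeded
        }
        // Lost the race; loop and retry with the fresh value
    }
}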
Atomics vs Mutex
| Feature | Atomic | Mutex |
|---|---|---|
| Speed | ⚡ Faster | 🐢 Slower |
| Complexity | Simple ops only | Any operation |
| Use case | Counters, flags | Complex updates |
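Since Go 1.19, the atomic package also ships typed wrappers such as atomic.Int64, so you don't have to pass pointers around. A small sketch with invented names:
var requests atomic.Int64 // the zero value is ready to use

func handleRequest() {
    requests.Add(1)
}

func currentCount() int64 {
    return requests.Load()
}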
📊 Profiling: The Health Check
What is Profiling?
Like a doctor checks your health, profiling checks your program’s health. It finds:
- Where is time spent? (CPU)
- Where is memory used? (Memory)
- Where are locks waiting? (Blocking)
CPU Profiling
import "runtime/pprof"
func main() {
f, _ := os.Create("cpu.prof")
pprof.StartCPUProfile(f)
defer pprof.StopCPUProfile()
// Your program runs here
doWork()
}
Analyze with:
go tool pprof cpu.prof
Memory Profiling
func main() {
    // Run your program
    doWork()

    // Take memory snapshot
    runtime.GC() // run a collection first so the heap stats are up to date
    f, _ := os.Create("mem.prof")
    pprof.WriteHeapProfile(f)
    f.Close()
}
Easy HTTP Profiling
import (
    "net/http"
    _ "net/http/pprof"
)

func main() {
    go func() {
        http.ListenAndServe(":6060", nil)
    }()

    // Your app runs here
}
Visit http://localhost:6060/debug/pprof/
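From there, go tool pprof can fetch profiles straight from the running server. For example, a 30-second CPU profile:
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
Or the current heap profile:
go tool pprof http://localhost:6060/debug/pprof/heap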
Reading pprof Output
(pprof) top 10
Showing top 10 nodes
flat flat% cum cum%
500ms 50.00% 500ms 50.00% main.slowFunc
300ms 30.00% 800ms 80.00% main.processData
- flat: Time in this function only
- cum: Time including called functions
🏎️ Performance Optimization: Speed Secrets
Rule #1: Measure First!
Never guess. Always profile. The bottleneck is often surprising!
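The standard measuring tool next to pprof is a benchmark in a _test.go file, run with go test -bench=. -benchmem. A small sketch (BenchmarkBuildString is an invented example):
// In a file ending in _test.go, with "strings" and "testing" imported
func BenchmarkBuildString(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var sb strings.Builder
        for j := 0; j < 100; j++ {
            sb.WriteString("x")
        }
        _ = sb.String()
    }
}
The -benchmem flag adds allocations per operation to the output, which is exactly the number you want to drive down.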
Quick Wins
1. Pre-allocate Slices
// SLOW: Many allocations
var result []int
for i := 0; i < 1000; i++ {
    result = append(result, i)
}

// FAST: One allocation
result := make([]int, 0, 1000)
for i := 0; i < 1000; i++ {
    result = append(result, i)
}
2. Use strings.Builder
// SLOW: Many string copies
s := ""
for i := 0; i < 1000; i++ {
s += "x"
}
// FAST: Single buffer
var b strings.Builder
for i := 0; i < 1000; i++ {
b.WriteString("x")
}
s := b.String()
3. Avoid Interface Allocations
// SLOW: Allocates for interface
func slow(x interface{}) { }
// FAST: Use concrete types
func fast(x int) { }
4. Reduce Allocations in Loops
// SLOW: New slice each iteration
for i := 0; i < 1000; i++ {
    buf := make([]byte, 1024)
    process(buf)
}

// FAST: Reuse slice
buf := make([]byte, 1024)
for i := 0; i < 1000; i++ {
    process(buf)
}
The Optimization Checklist
| Check | Action |
|---|---|
| 🔍 Profile first | Never guess! |
| 📦 Pre-allocate | Use make([]T, 0, cap) |
| ♻️ Reuse objects | Use sync.Pool |
| 📚 Stack over heap | Avoid escapes |
| ⚛️ Use atomics | For simple counters |
| 🔗 Reduce GC | Fewer allocations |
🎯 Summary: Your New Superpowers
You now understand:
- Garbage Collection - The automatic cleanup crew
- Stack vs Heap - Fast desk vs big warehouse
- Escape Analysis - The compiler detective
- Slice Internals - Pizza boxes with pointers
- sync.Pool - The recycling center
- Atomics - Traffic lights for data
- Profiling - The health check
- Optimization - Speed secrets
Remember: Always measure before optimizing. The real bottleneck is often surprising!
graph TD
    A["Write Code"] --> B["Profile It"]
    B --> C{Found Bottleneck?}
    C -->|Yes| D["Optimize"]
    D --> B
    C -->|No| E["Ship It! 🚀"]
You’re now ready to build fast, efficient Go programs! 🎉
