Go Memory Model and GC
1. Memory Allocation Basics
1.1 Stack and Heap
┌──────────────────────────────┐
│             Heap             │ ← Dynamic allocation, managed by the GC
│  ┌─────┐ ┌─────┐ ┌─────┐     │
│  │ obj │ │ obj │ │ obj │     │
│  └─────┘ └─────┘ └─────┘     │
├──────────────────────────────┤
│      goroutine Stack 1       │ ← Starts at 2KB, grows automatically
│  ┌────────────────────────┐  │
│  │ local vars, parameters │  │
│  └────────────────────────┘  │
├──────────────────────────────┤
│      goroutine Stack 2       │
│  ┌────────────────────────┐  │
│  │ local vars, parameters │  │
│  └────────────────────────┘  │
└──────────────────────────────┘
- Stack allocation: Extremely fast (only needs to move the stack pointer), automatically reclaimed when function returns
- Heap allocation: Requires GC reclamation, has overhead
- The Go compiler decides whether variables are allocated on the stack or heap through escape analysis
1.2 Escape Analysis
# View escape analysis results
go build -gcflags="-m" main.go
go build -gcflags="-m -m" main.go # More detailed
// No escape: local variable, allocated on the stack
func noEscape() int {
    x := 42
    return x // returned by value; x does not escape
}
// Escape: returns a pointer, allocated on the heap
func escape() *int {
    x := 42
    return &x // x escapes to the heap because it is still used after the function returns
}
// go build -gcflags="-m" output: moved to heap: x
// Escape: interface type parameter
func printVal(v interface{}) {
    fmt.Println(v) // v escapes (due to interface{})
}
// Escape: closure reference
func closure() func() int {
    x := 0
    return func() int {
        x++ // x is captured by the closure and escapes to the heap
        return x
    }
}
// Escape: pointer sent to a channel
func chanEscape(ch chan *int) {
    x := 42
    ch <- &x // x escapes
}
// No escape: slice of known size
func noEscapeSlice() {
    s := make([]int, 10) // no escape (size known at compile time and small)
    _ = s
}
// Escape: slice of runtime-determined size
func escapeSlice(n int) {
    s := make([]int, n) // escapes (size unknown at compile time)
    _ = s
}
1.3 When Variables Are Allocated on the Heap
| Scenario | Escapes? |
|---|---|
| Returning a pointer to a local variable | Yes |
| Assigned to interface{} | Yes |
| Closure captures local variable | Yes |
| Sending a pointer to a channel | Yes |
| Slice/map runtime size | Possibly |
| Object exceeding a certain size | Yes |
| Pure value type local variable | No |
| Slice of compile-time known size | No |
2. Go Memory Allocator
Go's memory allocator is modeled on tcmalloc (Thread-Caching Malloc).
2.1 Three-Tier Structure
goroutine ──► mcache (one per P, lock-free)
│
▼
mcentral (global, locked, by size class)
│
▼
mheap (global, requests from OS)
│
▼
Operating System (mmap/sbrk)
- mcache: Each P has one, no locking required when allocating small objects
- mcentral: When mcache is exhausted, objects are fetched from mcentral, which requires locking
- mheap: When mcentral is also insufficient, memory is requested from mheap, and mheap requests memory from the OS
2.2 mspan and size class
Go groups objects by size into 67 size classes (8B, 16B, 24B, 32B, ..., up to 32KB).
- Tiny objects (<16B, containing no pointers): handled by the tiny allocator, which can pack several small objects into one memory block
- Small objects (16B-32KB): allocated from mcache according to size class
- Large objects (>32KB): allocated directly from mheap
// Observe allocation behavior
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("Heap allocations: %d\n", m.Mallocs)
fmt.Printf("Heap frees: %d\n", m.Frees)
fmt.Printf("Heap in-use bytes: %d\n", m.HeapAlloc)
fmt.Printf("System memory: %d\n", m.Sys)
3. Garbage Collection (GC)
3.1 Tri-color Marking Algorithm
Go uses a concurrent tri-color mark-and-sweep algorithm.
White: Not marked, reclaimed after GC
Gray: Marked but child objects not scanned
Black: Marked and child objects scanned, kept
Marking process:
1. All objects are initially white
2. Starting from root objects (stack, global variables), mark them gray
3. Take a gray object and scan every object it references:
   - White → mark it gray
   - Already gray or black → skip
   After scanning, mark the current object black
4. Repeat step 3 until there are no gray objects
5. Remaining white objects are garbage, reclaim them
Initial: [White A] → [White B] → [White C]
                                     ↑
         [White D] → [White E] ──────┘

Step 1:  [Gray A] → [White B] → [White C]    (A is a root object)
                                     ↑
         [Gray D] → [White E] ───────┘       (D is a root object)

Step 2:  [Black A] → [Gray B] → [White C]    (scan A: B turns gray)
                                     ↑
         [Black D] → [Gray E] ───────┘       (scan D: E turns gray)

Step 3:  [Black A] → [Black B] → [Gray C]    (scan B: C turns gray)
                                     ↑
         [Black D] → [Black E] ──────┘       (scan E: C is already gray)

Step 4:  [Black A] → [Black B] → [Black C]   (scan C: complete)
         [Black D] → [Black E]
→ No white objects remain; everything is kept
3.2 Write Barrier
When GC runs concurrently with user code, the user might modify reference relationships. Write barriers ensure that live objects are not missed.
Go 1.8+ uses a Hybrid Write Barrier (combining a Yuasa-style deletion barrier with a Dijkstra-style insertion barrier):
- Each goroutine stack is scanned once at the start of marking and treated as black afterwards (no stack re-scan needed)
- Objects newly allocated on the heap during marking are created black
- When a pointer is overwritten, the old target is shaded gray (and the new target as well, while the writing goroutine's stack is still unscanned)
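The barrier from the Go 1.8 design proposal can be summarized in pseudocode, where shade(p) marks p gray if it is currently white. This is a sketch of the proposal, not actual runtime source:

```
writePointer(slot, ptr):
    shade(*slot)        // old target stays reachable (deletion barrier)
    if the current goroutine's stack is still unscanned (gray):
        shade(ptr)      // new target shaded too (insertion barrier)
    *slot = ptr
```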
3.3 GC Trigger Conditions
// 1. Heap growth: triggered when the heap grows by GOGC% over the live heap left after the last GC (default 100%, i.e., the heap doubles)
// Environment variable control
GOGC=100 // Default, triggered when heap grows by 100%
GOGC=200 // Triggered when heap grows by 200% (GC frequency decreases, memory usage increases)
GOGC=50 // Triggered when heap grows by 50% (GC is more frequent, memory usage decreases)
GOGC=off // Disable GC
// 2. Timed trigger: More than 2 minutes since the last GC
// 3. Manual trigger
runtime.GC()
3.4 GOMEMLIMIT (Go 1.19+)
A soft memory limit: as total memory use approaches the limit, the GC collects more aggressively to stay under it.
// Environment variables
GOMEMLIMIT=1GiB // Soft limit 1GB
GOMEMLIMIT=512MiB
// Set in code
import "runtime/debug"
debug.SetMemoryLimit(1 << 30) // 1GB
// Typical usage: container environment
// Container limit 2GB, set GOMEMLIMIT=1.5GiB
// Leave 0.5GB for non-heap memory (stack, goroutines, etc.)
3.5 GC Phases
User Code │ STW: Mark Setup │ Concurrent Mark │ STW: Mark Term │ Concurrent Sweep │ User Code
          │      <1ms       │                 │      <1ms      │                  │
- Mark Setup (STW): Enable write barrier, usually <1ms
- Concurrent Mark: Runs in parallel with user code
- Mark Termination (STW): Disable write barrier, usually <1ms
- Concurrent Sweep: Reclaims white objects
4. Common Memory Leak Scenarios
4.1 goroutine Leaks
// Leak: goroutine blocked forever on a channel
func leak() {
    ch := make(chan int)
    go func() {
        val := <-ch // never receives data; the goroutine leaks
        fmt.Println(val)
    }()
    // leak returns, nothing ever sends on ch, and the goroutine never exits
}
// Fix: use a context for cancellation
func noLeak(ctx context.Context) {
    ch := make(chan int)
    go func() {
        select {
        case val := <-ch:
            fmt.Println(val)
        case <-ctx.Done():
            return // a way out
        }
    }()
}
4.2 time.After Used in Loops
// Leak (before Go 1.23): each iteration creates a new Timer, which is only
// freed once it fires; since Go 1.23 unreferenced timers can be collected
// immediately, but the allocation churn remains
func leak() {
    ch := make(chan int)
    for {
        select {
        case v := <-ch:
            fmt.Println(v)
        case <-time.After(5 * time.Second): // a new Timer every iteration
            fmt.Println("timeout")
        }
    }
}
// Fix: use time.NewTimer and Reset it manually
func noLeak() {
    ch := make(chan int)
    timer := time.NewTimer(5 * time.Second)
    defer timer.Stop()
    for {
        timer.Reset(5 * time.Second) // note: before Go 1.23, Reset was only safe on a stopped or fired-and-drained timer
        select {
        case v := <-ch:
            fmt.Println(v)
        case <-timer.C:
            fmt.Println("timeout")
        }
    }
}
4.3 Slice Underlying Array Not Released
// Leak: slice references a small part of a large array
func leak() []byte {
    data := make([]byte, 1<<20) // 1MB
    // ... populate data
    return data[:10] // returns 10 bytes, but pins the entire 1MB backing array
}
// Fix: copy out the required data (on Go 1.20+, bytes.Clone does the same)
func noLeak() []byte {
    data := make([]byte, 1<<20)
    result := make([]byte, 10)
    copy(result, data[:10])
    return result
}
4.4 Global Variable Holding References
// Leak: Global map continuously grows
var cache = make(map[string][]byte)
func processRequest(key string, data []byte) {
    cache[key] = data // only ever adds, never deletes; memory grows without bound
}
// Fix: Set an expiration policy or size limit
// Use LRU cache or periodic cleanup
5. Common Functions in the runtime Package
package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    // CPU and goroutine information
    fmt.Println("Number of CPU cores:", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    fmt.Println("Number of goroutines:", runtime.NumGoroutine())
    fmt.Println("Go version:", runtime.Version())

    // Memory statistics
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("Heap allocated: %d MB\n", m.HeapAlloc/1024/1024)
    fmt.Printf("System memory: %d MB\n", m.Sys/1024/1024)
    fmt.Printf("GC count: %d\n", m.NumGC)
    fmt.Printf("Last GC pause: %d μs\n", m.PauseNs[(m.NumGC+255)%256]/1000)

    // Manually trigger GC
    runtime.GC()

    // Set the GC percentage (equivalent to the GOGC environment variable)
    debug.SetGCPercent(200) // trigger GC when the heap grows by 200%

    // Set a soft memory limit (Go 1.19+)
    debug.SetMemoryLimit(512 << 20) // 512MB

    // Return unused memory to the OS
    debug.FreeOSMemory()

    // SetFinalizer: run cleanup before an object is GC'd
    // Note: not guaranteed to run; do not rely on it for critical cleanup
    type Resource struct{ Name string }
    r := &Resource{"test"}
    runtime.SetFinalizer(r, func(r *Resource) {
        fmt.Println("Cleaning up resource:", r.Name)
    })
}