# Goroutines — The Heart of Go Concurrency
The feature that most distinguishes Go from other languages is its built-in concurrency support. Creating a thread in Java or Python is comparatively expensive and complex, but in Go a single `go` keyword starts a lightweight concurrent execution unit. This unit is called a goroutine.
## What Is a Goroutine?
A goroutine is a lightweight thread managed by the Go runtime. Unlike OS threads, goroutines start with far less memory (initially 2–8KB stack) and the stack grows automatically as needed. You can run thousands or even tens of thousands of goroutines simultaneously without issue.
- OS thread: ~1MB stack memory, managed by the OS scheduler
- Goroutine: ~2KB stack memory, managed by the Go runtime scheduler
## Starting a Goroutine with the `go` Keyword

Add the `go` keyword before a function call to run it in a new goroutine.
```go
package main

import (
	"fmt"
	"time"
)

func sayHello(name string) {
	fmt.Printf("Hello, %s!\n", name)
}

func main() {
	// Run in goroutines — these execute concurrently with main
	go sayHello("Alice")
	go sayHello("Bob")
	go sayHello("Charlie")

	// If main exits first, goroutines are forcibly terminated.
	// time.Sleep is a crude wait — use sync.WaitGroup in real code.
	time.Sleep(100 * time.Millisecond)
	fmt.Println("main function exiting")
}
```
**Important:** when the `main` function exits, all running goroutines are forcibly terminated. Without the `time.Sleep`, the goroutines may never get a chance to run.
## M:N Scheduler — GOMAXPROCS
The Go runtime uses an M:N scheduler that maps M goroutines onto N OS threads (typically equal to the number of CPU cores).
```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Number of logical CPUs visible to the process
	fmt.Println("CPU cores:", runtime.NumCPU())

	// GOMAXPROCS(0) queries the current value without changing it
	fmt.Println("Current GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Number of currently running goroutines
	fmt.Println("Goroutine count:", runtime.NumGoroutine())

	// Set GOMAXPROCS explicitly (default is NumCPU)
	// runtime.GOMAXPROCS(4)
}
```
Since Go 1.5, GOMAXPROCS has defaulted to the number of CPU cores, so multi-core processors are utilized automatically.
## sync.WaitGroup — Waiting for Goroutines to Complete

Using `time.Sleep` to wait is not ideal. `sync.WaitGroup` lets you wait precisely until all goroutines have finished.
```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done() // Decrement counter when function exits
	fmt.Printf("Worker %d starting\n", id)
	// Perform actual work...
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1) // Increment counter
		go worker(i, &wg)
	}
	wg.Wait() // Wait until counter reaches zero
	fmt.Println("All workers complete")
}
```
### WaitGroup Rules

- Call `wg.Add(n)` *before* starting the goroutine
- Call `wg.Done()` with `defer` inside the goroutine (runs even on panic)
- Pass the WaitGroup as a pointer (`*sync.WaitGroup`)
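The first rule matters because `Add` inside the goroutine races with `Wait`: if the scheduler runs `Wait` before any worker has incremented the counter, `Wait` sees zero and returns immediately. A sketch of the safe shape, wrapped in a hypothetical `runAll` helper:

```go
package main

import (
	"fmt"
	"sync"
)

// runAll runs each task in its own goroutine and blocks until all finish.
// wg.Add(1) happens in the parent goroutine, before the go statement, so
// wg.Wait is guaranteed to observe a nonzero counter.
func runAll(tasks []func()) {
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1) // must happen before the goroutine starts
		go func(t func()) {
			defer wg.Done()
			t()
		}(task)
	}
	wg.Wait()
}

func main() {
	results := make([]int, 3)
	tasks := make([]func(), 3)
	for i := range tasks {
		i := i
		tasks[i] = func() { results[i] = i * 10 }
	}
	runAll(tasks)
	fmt.Println(results) // [0 10 20]
}
```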
## Automatic Stack Growth
Since Go 1.4, the initial goroutine stack size is only 2KB. When the stack runs low, the Go runtime automatically copies it to a larger one. This allows thousands of goroutines to run without memory concerns.
```go
package main

import (
	"fmt"
	"sync"
)

// Running 10,000 goroutines concurrently
func main() {
	var wg sync.WaitGroup
	results := make([]int, 10000)
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func(idx int) {
			defer wg.Done()
			results[idx] = idx * idx
		}(i)
	}
	wg.Wait()
	fmt.Printf("First 5 results: %v\n", results[:5])
	fmt.Printf("Last 5 results: %v\n", results[9995:])
}
```
## Closures and Goroutines — A Common Pitfall
When starting goroutines in a loop, closures capture the loop variable itself rather than its value at that iteration, which can produce unexpected results (in Go versions before 1.22).
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// ❌ Wrong pattern (before Go 1.22) — all goroutines share the same i
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(i) // before Go 1.22: usually prints 5 (i's value after the loop)
		}()
	}
	wg.Wait()
	fmt.Println("---")

	// ✅ Correct pattern 1 — pass the value as an argument
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			fmt.Println(n) // 0, 1, 2, 3, 4 (in nondeterministic order)
		}(i)
	}
	wg.Wait()
	fmt.Println("---")

	// ✅ Correct pattern 2 — shadow the loop variable (unnecessary in Go 1.22+)
	for i := 0; i < 5; i++ {
		i := i // new variable per iteration via shadowing
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(i)
		}()
	}
	wg.Wait()
}
```
Go 1.22 and later create a new loop variable for each iteration, so the first pattern is no longer a bug — but the explicit patterns remain good practice for code that must build with older toolchains.
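One way to see that the pass-by-argument pattern is correct on *any* Go version, regardless of scheduling order, is to sum the values the goroutines actually observed. A sketch — `sumCaptured` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// sumCaptured starts n goroutines, each receiving its loop index as an
// argument, and returns the sum of the values they saw. Since every
// goroutine gets its own copy, the result is deterministically
// 0 + 1 + ... + (n-1) on every Go version.
func sumCaptured(n int) int64 {
	var wg sync.WaitGroup
	var sum int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			atomic.AddInt64(&sum, int64(v)) // safe concurrent update
		}(i)
	}
	wg.Wait()
	return sum
}

func main() {
	fmt.Println("sum:", sumCaptured(5)) // 0+1+2+3+4 = 10
}
```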
## Starting Goroutines with Anonymous Functions
You can run goroutines inline without declaring a separate function.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	data := []string{"apple", "banana", "cherry", "date", "elderberry"}
	for _, fruit := range data {
		wg.Add(1)
		fruit := fruit // prevent loop variable capture (unnecessary in Go 1.22+)
		go func() {
			defer wg.Done()
			// Process each fruit concurrently
			processed := fmt.Sprintf("[processed] %s", fruit)
			fmt.Println(processed)
		}()
	}
	wg.Wait()
	fmt.Println("All processing complete")
}
```
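A common refinement of this pattern is to cap how many goroutines actually work at the same time, using a buffered channel as a semaphore. A sketch — `processAll` and the limit of 2 are illustrative choices, not a library API:

```go
package main

import (
	"fmt"
	"sync"
)

// processAll handles every item concurrently, but the buffered channel
// sem limits how many goroutines do work simultaneously.
func processAll(items []string, limit int) []string {
	var wg sync.WaitGroup
	sem := make(chan struct{}, limit)
	out := make([]string, len(items))
	for i, item := range items {
		wg.Add(1)
		go func(i int, item string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release the slot
			out[i] = fmt.Sprintf("[processed] %s", item)
		}(i, item)
	}
	wg.Wait()
	return out
}

func main() {
	fruits := []string{"apple", "banana", "cherry", "date", "elderberry"}
	fmt.Println(processAll(fruits, 2))
}
```

Writing each result to its own slice index keeps the output order deterministic even though the goroutines finish in arbitrary order.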
## Real-World Example — Parallel HTTP Requests
One of the most practical uses of goroutines is sending multiple HTTP requests concurrently.
```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type Result struct {
	URL    string
	Status int
	Error  error
}

func checkURL(url string, wg *sync.WaitGroup, results chan<- Result) {
	defer wg.Done()
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		results <- Result{URL: url, Error: err}
		return
	}
	defer resp.Body.Close()
	results <- Result{URL: url, Status: resp.StatusCode}
}

func main() {
	urls := []string{
		"https://golang.org",
		"https://github.com",
		"https://google.com",
	}
	var wg sync.WaitGroup
	results := make(chan Result, len(urls))
	start := time.Now()

	for _, url := range urls {
		wg.Add(1)
		go checkURL(url, &wg, results)
	}

	// Close the channel after all goroutines finish
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for r := range results {
		if r.Error != nil {
			fmt.Printf("❌ %s: %v\n", r.URL, r.Error)
		} else {
			fmt.Printf("✅ %s: %d\n", r.URL, r.Status)
		}
	}
	fmt.Printf("\nElapsed: %v (faster than sequential)\n", time.Since(start))
}
```
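Because the number of results is known up front, the WaitGroup-plus-close step can also be replaced by reading exactly `len(urls)` values from the channel. A sketch — `fetch` is a stand-in that reports a fake status instead of making a real HTTP request:

```go
package main

import "fmt"

// fetch is a stand-in for checkURL: it just reports a fake status.
func fetch(url string, results chan<- string) {
	results <- fmt.Sprintf("%s -> 200", url)
}

func main() {
	urls := []string{"https://golang.org", "https://github.com"}
	results := make(chan string, len(urls))
	for _, u := range urls {
		go fetch(u, results)
	}
	// The result count is known, so reading exactly len(urls) values
	// replaces the WaitGroup and the close step entirely.
	for range urls {
		fmt.Println(<-results)
	}
}
```

Which shape is better is a design choice: the `range`-over-channel version scales to an unknown number of results, while the counted version is simpler when the count is fixed.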
## Goroutine Leaks
A goroutine that never terminates is called a *goroutine leak* — like leaked memory, its stack and everything it references are never reclaimed.
```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// ❌ Leak — this goroutine blocks forever if no one reads from ch
func leakyFunc(ch chan<- int) {
	ch <- 42
}

// ✅ Use a context or done channel to deliver a termination signal
func safeFunc(ch chan<- int, done <-chan struct{}) {
	select {
	case ch <- 42:
	case <-done:
		fmt.Println("goroutine exited cleanly")
	}
}

func main() {
	fmt.Println("Initial goroutine count:", runtime.NumGoroutine())

	// Leak example
	for i := 0; i < 10; i++ {
		ch := make(chan int) // nobody ever reads from this channel
		go leakyFunc(ch)
	}
	time.Sleep(10 * time.Millisecond)
	fmt.Println("After leak:", runtime.NumGoroutine()) // 10 or more

	// Clean exit example
	done := make(chan struct{})
	ch := make(chan int, 1)
	go safeFunc(ch, done)
	close(done) // send the termination signal
	time.Sleep(10 * time.Millisecond)
	fmt.Println("After clean exit:", runtime.NumGoroutine())
}
```
## Goroutine vs Thread Comparison
| Aspect | OS Thread | Goroutine |
|---|---|---|
| Stack size | ~1MB (fixed) | ~2KB initially (grows dynamically) |
| Creation cost | High (microseconds to milliseconds) | Low (nanoseconds) |
| Scheduler | OS kernel | Go runtime |
| Context switch | Slow | Fast |
| Communication | Shared memory | Channels recommended |
| Practical count | Hundreds to thousands | Hundreds of thousands |
## Key Takeaways

- `go func()` — runs a function asynchronously in a new goroutine
- `sync.WaitGroup` — waits for multiple goroutines to complete
- Watch loop variable capture — pass as an argument or shadow with a new variable (automatic in Go 1.22+)
- main exits = all goroutines stop — synchronize with WaitGroup or channels
- Prevent goroutine leaks — propagate cancellation signals with the context package