Concurrent Map Access — Go Benchmark

A benchmark to compare the performance of different concurrent map access implementations in Go.

Tags: map · concurrency · sync

Classic map access uses the map[key] syntax. This works fine in most cases, but it is not thread-safe. Two common solutions are the sync.Map type and guarding a plain map with a mutex. This benchmark shows which implementation is fastest at different levels of parallelism.

Environment: linux/amd64 · AMD Ryzen 9 9950X3D 16-Core Processor · benchmarks/concurrent-map-access

1 CPU: Read fastest Mutex, slowest Sync; Write fastest Mutex, slowest Sync.
32 CPUs: Read fastest Sync, slowest Mutex; Write fastest Sync, slowest Mutex.

[Chart: Performance Comparison (lower is better)]

Mutex

Fastest (Read, 1 CPU) · Slowest (Read, 32 CPUs) · Fastest (Write, 1 CPU) · Slowest (Write, 32 CPUs)

Uses a sync.RWMutex to protect a plain map[int]int. Reads take a shared lock (RLock), writes take an exclusive lock (Lock). This is the standard approach when you control all access sites and need fine-grained locking with concurrent readers.
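The pattern described above is usually wrapped in a small type so the lock and the map cannot be used inconsistently. A minimal sketch, with invented names (the benchmark itself uses the mutex and map directly):

```go
package main

import (
	"fmt"
	"sync"
)

// SafeMap pairs a plain map[int]int with a sync.RWMutex, the same
// combination the Mutex benchmark measures.
type SafeMap struct {
	mu sync.RWMutex
	m  map[int]int
}

func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[int]int)}
}

// Set takes the exclusive lock, as the write benchmark does.
func (s *SafeMap) Set(k, v int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[k] = v
}

// Get takes the shared lock, so many readers can proceed in parallel.
func (s *SafeMap) Get(k int) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[k]
	return v, ok
}

func main() {
	sm := NewSafeMap()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sm.Set(i, i*i) // safe: each write holds the exclusive lock
		}(i)
	}
	wg.Wait()
	v, _ := sm.Get(3)
	fmt.Println(v) // 9
}
```

The shared RLock explains the results below: with one CPU the cheap uncontended lock wins, but with 32 CPUs the writers' exclusive lock serialises everything and contention on the mutex dominates.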

[Chart: Performance — Mutex (lower is better)]
// mapSize is the key space used by all benchmarks in this package.
const mapSize = 1000

func BenchmarkMutex_write(b *testing.B) {
	var mu sync.RWMutex
	m := make(map[int]int)

	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			mu.Lock()
			m[i%mapSize] = i
			mu.Unlock()
			i++
		}
	})
}

func BenchmarkMutex_read(b *testing.B) {
	var mu sync.RWMutex
	m := make(map[int]int, mapSize)
	for i := range mapSize {
		m[i] = i
	}

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			mu.RLock()
			_ = m[i%mapSize]
			mu.RUnlock()
			i++
		}
	})
}
1 CPU: Read 1.1× faster (12%) than Sync; Write 1.6× faster (59%) than Sync.
32 CPUs: Read 1.5× slower (49%) than Sync; Write 1.7× slower (74%) than Sync.

Sync

Fastest (Read, 32 CPUs) · Slowest (Read, 1 CPU) · Fastest (Write, 32 CPUs) · Slowest (Write, 1 CPU)

Uses the sync.Map type from the standard library. It is inherently thread-safe and needs no external locking. It is optimised for keys that are stable over time and performs best when entries are written once and read many times.

[Chart: Performance — Sync (lower is better)]
// mapSize is the key space used by all benchmarks in this package.
const mapSize = 1000

func BenchmarkSync_write(b *testing.B) {
	var m sync.Map

	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			m.Store(i%mapSize, i)
			i++
		}
	})
}

func BenchmarkSync_read(b *testing.B) {
	var m sync.Map
	for i := range mapSize {
		m.Store(i, i)
	}

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			m.Load(i % mapSize)
			i++
		}
	})
}
1 CPU: Read 1.1× slower (12%) than Mutex; Write 1.6× slower (59%) than Mutex.
32 CPUs: Read 1.5× faster (49%) than Mutex; Write 1.7× faster (74%) than Mutex.
