
Array vs Slice — Go Benchmark

Compare fixed-size arrays, pre-allocated slices, and dynamically grown slices.

Tags: array, slice, allocation

Compares three ways to store a sequence of integers in Go: a fixed-size array, a slice pre-allocated with make, and a slice grown dynamically via append. The array and pre-allocated slice are written to by index, while the dynamic slice starts empty each iteration and grows through repeated appends. This highlights the cost of slice growth and bounds-checking relative to compile-time-fixed arrays.

linux/amd64 · AMD Ryzen 9 9950X3D 16-Core Processor · benchmarks/array-vs-slice
Fastest at 1 CPU: Array
Fastest at 32 CPUs: Array
Performance Comparison (lower is better)
1. Array — Fastest
A fixed-size [1000]int array allocated on the stack. Each iteration writes to every index. Because the size is known at compile time, the compiler can elide bounds checks and the data stays contiguous in a single stack frame with zero heap allocations.

[Chart: CPU Scaling — Array (lower is better)]
// sink is a package-level variable; assigning the result to it
// keeps the compiler from optimizing the benchmark work away.
var sink int

const size = 1000

func BenchmarkArray_run(b *testing.B) {
	var arr [size]int

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < size; j++ {
			arr[j] = j
		}
	}

	sink = arr[size-1]
}
1 CPU
8.9× faster (787%) than Dynamic Slice
1× faster (1%) than Preallocated Slice
32 CPUs
17.7× faster (1670%) than Dynamic Slice
Same speed as Preallocated Slice
2. Preallocated Slice

Uses make([]int, 1000) to allocate the full backing array once before the benchmark loop. Each iteration writes to every index, similar to the array benchmark. The slice header adds a small overhead compared to a raw array, but avoids all growth-related allocations.

[Chart: CPU Scaling — Preallocated Slice (lower is better)]
const size = 1000

func BenchmarkPreallocatedSlice_run(b *testing.B) {
	slice := make([]int, size)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for j := 0; j < size; j++ {
			slice[j] = j
		}
	}

	sink = slice[size-1]
}
1 CPU
1× slower (1%) than Array
8.8× faster (779%) than Dynamic Slice
32 CPUs
Same speed as Array
17.7× faster (1669%) than Dynamic Slice
3. Dynamic Slice — Slowest

Starts with a nil slice and grows it via append on every element. Each time the underlying array runs out of capacity, the runtime allocates a larger backing array and copies existing elements over. This represents the worst case when the final size is unknown upfront.

[Chart: CPU Scaling — Dynamic Slice (lower is better)]
const size = 1000

func BenchmarkDynamicSlice_run(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var slice []int
		for j := 0; j < size; j++ {
			slice = append(slice, j)
		}
		sink = slice[size-1]
	}
}
1 CPU
8.9× slower (787%) than Array
8.8× slower (779%) than Preallocated Slice
32 CPUs
17.7× slower (1670%) than Array
17.7× slower (1669%) than Preallocated Slice
