What is the purpose of criterion::BenchmarkId for parameterizing benchmarks across multiple input sizes?
BenchmarkId provides a structured way to identify and differentiate benchmark iterations when running the same benchmark with varying parameters, enabling you to measure how performance scales across different input sizes or configurations while maintaining clear, comparable results in reports. Without BenchmarkId, you would need separate benchmark functions for each parameter value, leading to code duplication and difficult comparisons.
The Problem: Multiple Input Sizes
use criterion::{black_box, criterion_group, criterion_main, Criterion};
// Without BenchmarkId, you'd need separate functions for each size
fn bench_sort_100(c: &mut Criterion) {
let mut data: Vec<i32> = (0..100).collect();
c.bench_function("sort_100", |b| {
b.iter(|| {
data.sort();
black_box(&data);
});
});
}
fn bench_sort_1000(c: &mut Criterion) {
let mut data: Vec<i32> = (0..1000).collect();
c.bench_function("sort_1000", |b| {
b.iter(|| {
data.sort();
black_box(&data);
});
});
}
fn bench_sort_10000(c: &mut Criterion) {
let mut data: Vec<i32> = (0..10000).collect();
c.bench_function("sort_10000", |b| {
b.iter(|| {
data.sort();
black_box(&data);
});
});
}
// This is repetitive and makes comparisons harder
criterion_group!(benches, bench_sort_100, bench_sort_1000, bench_sort_10000);
criterion_main!(benches);
Separate functions for each size create duplication and scattered results.
BenchmarkId to the Rescue
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn bench_sort(c: &mut Criterion) {
let sizes: Vec<usize> = vec![100, 1000, 10000, 100000];
let mut group = c.benchmark_group("sort");
for size in sizes {
// BenchmarkId combines group name with parameter value
group.bench_with_input(BenchmarkId::new("size", size), &size, |b, &size| {
// Note: (0..n) is already sorted, so after the first iteration this
// measures sorting sorted input; iter_batched (shown later) can supply
// fresh data per iteration
let mut data: Vec<i32> = (0..size as i32).collect();
b.iter(|| {
data.sort();
black_box(&data);
});
});
}
group.finish();
}
criterion_group!(benches, bench_sort);
criterion_main!(benches);
BenchmarkId::new creates a unique identifier combining a parameter name and value.
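To see what the loop above automates, here is the same shape done by hand with `std::time::Instant` (a simplified sketch, not Criterion's machinery): one labeled measurement per input size. Criterion adds warmup, statistics, and reporting on top of this basic structure.

```rust
use std::time::{Duration, Instant};

// Sort a reverse-sorted vector of the given size once and report how
// long it took, plus whether the result is actually sorted.
fn timed_sort(size: usize) -> (Duration, bool) {
    let mut data: Vec<i32> = (0..size as i32).rev().collect();
    let start = Instant::now();
    data.sort();
    let elapsed = start.elapsed();
    let sorted = data.windows(2).all(|w| w[0] <= w[1]);
    (elapsed, sorted)
}

fn main() {
    for size in [100usize, 1000, 10000] {
        let (elapsed, sorted) = timed_sort(size);
        assert!(sorted);
        // Labels mirror the report path shape: sort/size/<n>
        println!("sort/size/{}: {:?}", size, elapsed);
    }
}
```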
BenchmarkId Structure
use criterion::BenchmarkId;
fn benchmark_id_structure() {
// BenchmarkId combines:
// 1. A function/parameter name (the "what")
// 2. A parameter value (the "how much")
let id1 = BenchmarkId::new("input_size", 100);
let id2 = BenchmarkId::new("input_size", 1000);
let id3 = BenchmarkId::new("algorithm", "quicksort");
let id4 = BenchmarkId::new("algorithm", "mergesort");
// The ID creates distinct benchmark names in reports:
// - sort/input_size/100
// - sort/input_size/1000
// - sort/algorithm/quicksort
// - sort/algorithm/mergesort
// The combined "name/value" string is what appears in Criterion's
// console output and HTML reports
}
// BenchmarkId::new accepts any parameter type that implements Display
fn benchmark_id_types() {
let _id1 = BenchmarkId::new("size", 100);
let _id2 = BenchmarkId::new("algorithm", "quick");
let _id3 = BenchmarkId::new("name", "test".to_string());
}
BenchmarkId parameterizes benchmarks with any Display type.
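For reference, the hierarchy these IDs produce can be sketched as plain string formatting. This is an illustration of the report-path shape, not Criterion's actual internals:

```rust
use std::fmt::Display;

// Hypothetical helper mirroring how a report path is composed from a
// group name plus a BenchmarkId's function name and parameter value.
fn report_path(group: &str, function: &str, value: impl Display) -> String {
    format!("{}/{}/{}", group, function, value)
}

fn main() {
    // Mirrors the IDs listed above
    println!("{}", report_path("sort", "input_size", 100));
    println!("{}", report_path("sort", "algorithm", "quicksort"));
}
```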
Comparing Multiple Algorithms
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn bubble_sort(data: &mut [i32]) {
for i in 0..data.len() {
for j in 0..data.len().saturating_sub(1 + i) { // saturating_sub avoids underflow on empty input
if data[j] > data[j + 1] {
data.swap(j, j + 1);
}
}
}
}
fn quick_sort(data: &mut [i32]) {
data.sort(); // Use built-in quicksort
}
fn bench_algorithms(c: &mut Criterion) {
let mut group = c.benchmark_group("sorting_algorithms");
let sizes: Vec<usize> = vec![10, 100, 1000];
let algorithms: &[(&str, fn(&mut [i32]))] = &[
("bubble_sort", bubble_sort),
("quick_sort", quick_sort),
];
for (algo_name, algo_fn) in algorithms {
for size in &sizes {
let id = BenchmarkId::new(*algo_name, size);
group.bench_with_input(id, size, |b, &size| {
let mut data: Vec<i32> = (0..size as i32).rev().collect(); // Reverse sorted
b.iter(|| {
let mut data_clone = data.clone();
algo_fn(&mut data_clone);
black_box(data_clone);
});
});
}
}
group.finish();
}
criterion_group!(benches, bench_algorithms);
criterion_main!(benches);
BenchmarkId enables comparing multiple algorithms across multiple input sizes.
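Before timing two implementations, it is worth confirming they agree on their output; a minimal standalone check (no Criterion required) for the bubble sort above:

```rust
// Same bubble sort as in the benchmark, with the underflow guard.
fn bubble_sort(data: &mut [i32]) {
    for i in 0..data.len() {
        for j in 0..data.len().saturating_sub(1 + i) {
            if data[j] > data[j + 1] {
                data.swap(j, j + 1);
            }
        }
    }
}

fn main() {
    let mut a: Vec<i32> = (0..100).rev().collect(); // reverse-sorted input
    let mut b = a.clone();
    bubble_sort(&mut a);
    b.sort();
    // Identical results before comparing their speed
    assert_eq!(a, b);
    println!("both sorts agree");
}
```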
Visualizing Scaling Behavior
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
fn bench_scaling(c: &mut Criterion) {
let mut group = c.benchmark_group("hash_insert");
let sizes: Vec<usize> = vec![100, 1000, 10000, 100000, 1000000];
for size in sizes {
group.throughput(Throughput::Elements(size as u64));
group.bench_with_input(BenchmarkId::new("insert_n", size), &size, |b, &size| {
let mut map = std::collections::HashMap::new();
b.iter(|| {
map.clear();
for i in 0..size {
map.insert(i, i * 2);
}
black_box(&map);
});
});
}
group.finish();
}
criterion_group!(benches, bench_scaling);
criterion_main!(benches);
// This generates output showing how performance scales:
// hash_insert/insert_n/100 time: [1.23 µs 1.25 µs 1.27 µs]
// hash_insert/insert_n/1000 time: [12.3 µs 12.5 µs 12.7 µs]
// hash_insert/insert_n/10000 time: [123 µs 125 µs 127 µs]
// ...
Throughput combined with BenchmarkId helps measure scaling behavior.
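The elements-per-second figure Criterion derives from `Throughput::Elements(n)` is just the element count divided by the measured time; a sketch of that arithmetic (an illustrative helper, not a Criterion API):

```rust
// elements per second = n / seconds, with the measurement in nanoseconds
fn elements_per_sec(elements: u64, nanos: u64) -> f64 {
    elements as f64 / (nanos as f64 / 1e9)
}

fn main() {
    // e.g. 1000 inserts measured at 12_500 ns = 80 million elem/s
    let rate = elements_per_sec(1000, 12_500);
    println!("{:.0} elem/s", rate);
}
```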
Multiple Parameters
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn bench_multi_param(c: &mut Criterion) {
let mut group = c.benchmark_group("matrix_multiply");
// Two dimensions: matrix size and thread count
let sizes: Vec<usize> = vec![10, 50, 100];
let threads: Vec<usize> = vec![1, 2, 4, 8];
for size in &sizes {
for thread_count in &threads {
// Combine parameters in the ID
let id = BenchmarkId::new(
format!("size_{}", size),
*thread_count,
);
group.bench_with_input(id, &(*size, *thread_count), |b, &(size, threads)| {
let a = vec![1.0f64; size * size];
let b_matrix = vec![1.0f64; size * size];
b.iter(|| {
// Simulate matrix multiply with thread count
let result = simulate_multiply(&a, &b_matrix, size, threads);
black_box(result);
});
});
}
}
group.finish();
}
fn simulate_multiply(a: &[f64], b: &[f64], size: usize, _threads: usize) -> Vec<f64> {
// Simplified - would actually use threads
let mut result = vec![0.0; size * size];
for i in 0..size {
for j in 0..size {
for k in 0..size {
result[i * size + j] += a[i * size + k] * b[k * size + j];
}
}
}
result
}
criterion_group!(benches, bench_multi_param);
criterion_main!(benches);
For multiple parameters, encode one in the name and use another as the value.
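The "one parameter in the name, one as the value" convention above can be captured in a small hypothetical helper that flattens the two dimensions into a (name, value) pair:

```rust
// Flatten (size, thread_count) into the (name, value) pair fed to
// BenchmarkId::new in the example above.
fn two_param_id(size: usize, threads: usize) -> (String, usize) {
    (format!("size_{}", size), threads)
}

fn main() {
    for &size in &[10usize, 50] {
        for &threads in &[1usize, 4] {
            let (name, value) = two_param_id(size, threads);
            // Report paths look like: matrix_multiply/size_10/4
            println!("matrix_multiply/{}/{}", name, value);
        }
    }
}
```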
Parameterized Input Data
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn bench_string_operations(c: &mut Criterion) {
let mut group = c.benchmark_group("string_search");
// Different input characteristics
let inputs: Vec<(&str, &str)> = vec![
("short_match", "hello"),
("long_match", "hello world this is a longer string"),
("no_match", "xyzabc123"),
("edge_case", ""), // Empty string
];
for (name, pattern) in &inputs {
group.bench_with_input(
BenchmarkId::new("contains", name),
pattern, // a &&str: the closure's &pattern below peels one reference
|b, &pattern| {
let haystack = "hello world this is a longer string to search through";
b.iter(|| {
black_box(haystack.contains(pattern));
});
},
);
}
group.finish();
}
criterion_group!(benches, bench_string_operations);
criterion_main!(benches);
Use descriptive names for qualitative parameter differences.
Bench Input Access Pattern
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn bench_input_pattern(c: &mut Criterion) {
let mut group = c.benchmark_group("vec_push");
let sizes: Vec<usize> = vec![100, 1000, 10000];
for size in &sizes {
// bench_with_input signature:
// fn bench_with_input<I, F>(&mut self, id: BenchmarkId, input: &I, f: F)
// - id: identifies this iteration
// - input: passed to closure
// - f: closure receives &mut Bencher and &I
group.bench_with_input(
BenchmarkId::new("capacity", size),
size, // This becomes the &size parameter in closure
|b, &size| { // size is borrowed from sizes vec
b.iter(|| {
let mut v = Vec::with_capacity(size);
for i in 0..size {
v.push(i);
}
black_box(v);
});
},
);
}
group.finish();
}
criterion_group!(benches, bench_input_pattern);
criterion_main!(benches);
The input parameter is passed through to the benchmark closure.
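The shape of `bench_with_input` can be mimicked with a plain higher-order function, which makes the `|b, &size|` destructuring easier to see. This is a simplified stand-in, not Criterion's real signature:

```rust
// A minimal stand-in: take a borrowed input and a closure that receives
// that borrow, just as bench_with_input hands &I to its closure.
fn with_input<I, F: FnMut(&I)>(input: &I, mut f: F) {
    f(input);
}

fn main() {
    let size: usize = 1000;
    let mut built_len = 0;
    // The `&size` pattern copies the usize out of the reference,
    // exactly like the `|b, &size|` closures above.
    with_input(&size, |&size| {
        let mut v = Vec::with_capacity(size);
        for i in 0..size {
            v.push(i);
        }
        built_len = v.len();
    });
    assert_eq!(built_len, 1000);
    println!("built {} elements", built_len);
}
```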
Comparing Results with Charts
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, PlotConfiguration, AxisScale};
fn bench_with_charts(c: &mut Criterion) {
let mut group = c.benchmark_group("lookup_scaling");
// Configure for log scale if measuring exponential scaling
let plot_config = PlotConfiguration::default()
.summary_scale(AxisScale::Logarithmic);
group.plot_config(plot_config);
let sizes: Vec<usize> = vec![
10, 100, 1000, 10000, 100000, 1000000
];
// Compare HashMap vs BTreeMap lookup scaling
for size in &sizes {
// HashMap - O(1) expected
group.bench_with_input(
BenchmarkId::new("HashMap", size),
size,
|b, &size| {
let map: std::collections::HashMap<i32, i32> =
(0..size as i32).map(|i| (i, i)).collect();
b.iter(|| {
for i in 0..size as i32 {
black_box(map.get(&i));
}
});
},
);
// BTreeMap - O(log n)
group.bench_with_input(
BenchmarkId::new("BTreeMap", size),
size,
|b, &size| {
let map: std::collections::BTreeMap<i32, i32> =
(0..size as i32).map(|i| (i, i)).collect();
b.iter(|| {
for i in 0..size as i32 {
black_box(map.get(&i));
}
});
},
);
}
group.finish();
}
criterion_group!(benches, bench_with_charts);
criterion_main!(benches);
// The HTML report will show:
// - Line chart comparing HashMap vs BTreeMap scaling
// - Clear visualization of O(1) vs O(log n) behavior
// - Data tables with exact measurements
BenchmarkId creates structured output that Criterion uses for comparison charts.
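Before comparing lookup speed, it helps to confirm the two maps answer identically for the same keys; a standalone check, no Criterion needed:

```rust
use std::collections::{BTreeMap, HashMap};

// Build both maps from the same (i, i) pairs and compare every lookup.
fn maps_agree(size: i32) -> bool {
    let hash: HashMap<i32, i32> = (0..size).map(|i| (i, i)).collect();
    let btree: BTreeMap<i32, i32> = (0..size).map(|i| (i, i)).collect();
    (0..size).all(|i| hash.get(&i) == btree.get(&i))
}

fn main() {
    assert!(maps_agree(1000));
    println!("HashMap and BTreeMap agree");
}
```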
Using Parameter Values
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
fn bench_with_parameter_values(c: &mut Criterion) {
let mut group = c.benchmark_group("file_read");
// Parameter represents actual bytes to read
let file_sizes: Vec<usize> = vec![1024, 10240, 102400]; // 1KB, 10KB, 100KB
for size in &file_sizes {
// Throughput lets Criterion know how much work is done
group.throughput(Throughput::Bytes(*size as u64));
group.bench_with_input(BenchmarkId::new("read", size), size, |b, &size| {
let data = vec![0u8; size];
b.iter(|| {
// Simulate reading size bytes
black_box(&data);
});
});
}
group.finish();
}
criterion_group!(benches, bench_with_parameter_values);
criterion_main!(benches);
// When you use throughput(), Criterion can show:
// - Throughput in bytes/sec
// - Throughput in elements/sec
// - This helps compare benchmarks with different work amounts
Throughput combined with BenchmarkId enables meaningful throughput measurements.
Real-World Example: Algorithm Selection
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
fn find_kth(data: &mut [i32], k: usize) -> i32 {
// Quickselect algorithm - O(n) average
data.select_nth_unstable(k);
data[k]
}
fn find_kth_sort(data: &mut [i32], k: usize) -> i32 {
// Sort and index - O(n log n)
data.sort();
data[k]
}
fn bench_kth_selection(c: &mut Criterion) {
let mut group = c.benchmark_group("kth_element");
let sizes: Vec<usize> = vec![100, 1000, 10000, 100000];
let k_percentages: Vec<f64> = vec![0.01, 0.5, 0.99]; // 1st, 50th, 99th percentiles
for size in &sizes {
for &k_pct in &k_percentages {
let k = (size as f64 * k_pct) as usize;
// Quickselect
group.bench_with_input(
BenchmarkId::new(format!("quickselect/{}", k_pct), size),
&(*size, k),
|b, &(size, k)| {
b.iter_batched(
|| (0..size as i32).rev().collect::<Vec<_>>(),
|mut data| {
find_kth(&mut data, k);
black_box(data);
},
criterion::BatchSize::SmallInput,
);
},
);
// Sort-based
group.bench_with_input(
BenchmarkId::new(format!("sort_based/{}", k_pct), size),
&(*size, k),
|b, &(size, k)| {
b.iter_batched(
|| (0..size as i32).rev().collect::<Vec<_>>(),
|mut data| {
find_kth_sort(&mut data, k);
black_box(data);
},
criterion::BatchSize::SmallInput,
);
},
);
}
}
group.finish();
}
criterion_group!(benches, bench_kth_selection);
criterion_main!(benches);
Complex algorithm comparisons are clear with structured BenchmarkId naming.
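The two strategies can be sanity-checked outside Criterion: quickselect and sort-then-index must return the same kth element for every k.

```rust
// Quickselect: O(n) average via the standard library's partition-based
// select_nth_unstable.
fn find_kth(data: &mut [i32], k: usize) -> i32 {
    data.select_nth_unstable(k);
    data[k]
}

// Sort and index: O(n log n).
fn find_kth_sort(data: &mut [i32], k: usize) -> i32 {
    data.sort();
    data[k]
}

fn main() {
    let original: Vec<i32> = (0..100).rev().collect();
    for k in [0, 49, 99] {
        let a = find_kth(&mut original.clone(), k);
        let b = find_kth_sort(&mut original.clone(), k);
        assert_eq!(a, b); // both strategies agree on the kth element
    }
    println!("quickselect and sort-based agree");
}
```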
Group Organization
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
// Groups organize related benchmarks together
fn bench_collections(c: &mut Criterion) {
// Group 1: Insert operations
{
let mut group = c.benchmark_group("insert");
for size in &[100, 1000, 10000] {
group.bench_with_input(
BenchmarkId::new("Vec", size),
size,
|b, &size| {
b.iter(|| {
let mut v = Vec::new();
for i in 0..size { v.push(i); }
v
});
},
);
group.bench_with_input(
BenchmarkId::new("HashMap", size),
size,
|b, &size| {
b.iter(|| {
let mut m = HashMap::new();
for i in 0..size { m.insert(i, i); }
m
});
},
);
}
group.finish();
}
// Group 2: Lookup operations
{
let mut group = c.benchmark_group("lookup");
for _size in &[100, 1000, 10000] {
// A similar bench_with_input pattern would go here for lookups
}
group.finish();
}
// Groups appear as separate sections in HTML reports
// Each group has its own charts and comparisons
}
use std::collections::HashMap;
criterion_group!(benches, bench_collections);
criterion_main!(benches);
Groups separate different benchmark categories in reports.
Summary Table
| Feature               | Without BenchmarkId | With BenchmarkId          |
|-----------------------|---------------------|---------------------------|
| Multiple input sizes  | Separate functions  | Single function with loop |
| Code duplication      | High                | Low                       |
| Comparison reports    | Manual              | Automatic                 |
| Scaling visualization | Difficult           | Built-in charts           |
| Parameter tracking    | Name-based only     | Name + value              |
| Result organization   | Flat                | Hierarchical groups       |
BenchmarkId best practices:
1. Use descriptive parameter names
2. Include units in values when helpful
3. Use throughput() for performance scaling
4. Group related benchmarks together
5. Use log scale for exponential data
Synthesis
Quick reference:
use criterion::{BenchmarkId, Criterion};
fn bench_example(c: &mut Criterion) {
let mut group = c.benchmark_group("my_bench");
for size in [100, 1000, 10000] {
group.bench_with_input(
BenchmarkId::new("size", size),
&size,
|b, &size| {
b.iter(|| {
// Benchmark code using size
size
});
},
);
}
group.finish();
}
Key insight: BenchmarkId solves the fundamental problem of measuring how code performance varies with input size without code duplication. Without it, you would need to write separate benchmark functions for each input size, making the code harder to maintain and the results harder to compare. BenchmarkId::new("parameter_name", value) creates unique identifiers that Criterion uses to organize results hierarchically: first by benchmark group, then by parameter name, then by parameter value. This structure enables Criterion's HTML reports to generate comparison charts showing how different algorithms scale across input sizes, making it easy to see that Algorithm A is faster for small inputs while Algorithm B is faster for large inputs, or to identify the crossover point. The bench_with_input method passes the parameter value to your benchmark closure, ensuring the same input is used consistently for warmup, measurement, and iteration counting. Combined with Throughput, BenchmarkId enables Criterion to show not just execution time, but throughput (elements/second, bytes/second), giving a clearer picture of actual performance characteristics across different workloads.
