# How does criterion::BenchmarkId parameterize benchmarks for comparing different input sizes?
BenchmarkId identifies individual benchmark runs within a benchmark group, letting you compare performance across input sizes, configurations, or parameters while keeping results organized. Each BenchmarkId combines a function name with a parameter value, producing a unique identifier that Criterion uses to group, compare, and display results. You can benchmark the same algorithm at several input sizes to see how it scales, or compare different implementations on the same inputs. Reach for BenchmarkId when you need to measure performance across a parameter space, not just for a single fixed input.
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("sorting");

    for size in [100, 1000, 10000].iter() {
        group.bench_with_input(
            BenchmarkId::new("sort", size),
            size,
            |b, &size| {
                b.iter(|| {
                    let mut data: Vec<i32> = (0..size).collect();
                    data.sort();
                    black_box(data)
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId::new()` creates a unique identifier combining the function name and parameter.
## Comparing Algorithms Across Input Sizes
```rust
use criterion::{BenchmarkId, Criterion};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("search");

    for size in [100, 1000, 10000, 100000] {
        let data: Vec<i32> = (0..size).collect();
        let target = size / 2; // Middle element

        group.bench_with_input(
            BenchmarkId::new("linear_search", size),
            &(&data, target),
            |b, &(data, target)| {
                b.iter(|| data.iter().position(|&x| x == target));
            },
        );

        group.bench_with_input(
            BenchmarkId::new("binary_search", size),
            &(&data, target),
            |b, &(data, target)| {
                b.iter(|| data.binary_search(&target));
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` distinguishes between different algorithms at each input size.
## Parameterized Benchmark Structure
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn fibonacci(n: u64) -> u64 {
    if n < 2 { n } else { fibonacci(n - 1) + fibonacci(n - 2) }
}

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("fibonacci");

    // Parameterize by input value
    for n in [10, 15, 20, 25, 30] {
        group.bench_with_input(
            BenchmarkId::new("recursive", n),
            &n,
            |b, &n| {
                b.iter(|| fibonacci(black_box(n)));
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` labels each benchmark with its parameter value for clear output.
## Multiple Parameters with BenchmarkId
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn process_chunk(data: &[i32], chunk_size: usize) -> i32 {
    data.chunks(chunk_size)
        .map(|chunk| chunk.iter().sum::<i32>())
        .sum()
}

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("chunk_processing");

    let data_size = 10000;
    let data: Vec<i32> = (0..data_size).collect();

    // Compare different chunk sizes
    for chunk_size in [10, 100, 1000] {
        group.bench_with_input(
            BenchmarkId::new("chunks", chunk_size),
            &chunk_size,
            |b, &chunk_size| {
                b.iter(|| black_box(process_chunk(&data, chunk_size)));
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` parameterizes by chunk size, showing how processing time varies.
## Formatting Parameter Values
```rust
use criterion::{BenchmarkId, Criterion};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("hash_maps");

    // Custom parameter formatting: label sizes as 1K, 10K, 100K
    for size in [1_000, 10_000, 100_000] {
        group.bench_with_input(
            BenchmarkId::new("insert", format!("{}K", size / 1000)),
            &size,
            |b, &size| {
                b.iter(|| {
                    let mut map = std::collections::HashMap::new();
                    for i in 0..size {
                        map.insert(i, i * 2);
                    }
                    map
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId::new()` accepts any `Display` type for custom parameter formatting.
## Comparing Different Implementations
```rust
use criterion::{BenchmarkId, Criterion, black_box};
use std::collections::{HashMap, BTreeMap};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("map_insert");

    for size in [100, 1000, 10000] {
        // HashMap insertion
        group.bench_with_input(
            BenchmarkId::new("HashMap", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    let mut map = HashMap::new();
                    for i in 0..size {
                        map.insert(i, i);
                    }
                    black_box(map)
                });
            },
        );

        // BTreeMap insertion
        group.bench_with_input(
            BenchmarkId::new("BTreeMap", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    let mut map = BTreeMap::new();
                    for i in 0..size {
                        map.insert(i, i);
                    }
                    black_box(map)
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` distinguishes between implementations at each size, enabling fair comparison.
## Throughput Measurement
```rust
use criterion::{BenchmarkId, Criterion, Throughput, black_box};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("file_parsing");

    for size in [100u64, 1_000, 10_000] {
        // Report results as elements per second
        group.throughput(Throughput::Elements(size));

        group.bench_with_input(
            BenchmarkId::new("parse_items", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    let items: Vec<String> =
                        (0..size).map(|i| format!("item{}", i)).collect();
                    black_box(items)
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` with throughput enables rate-based measurements (elements/second).
## Benchmark Groups and IDs
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn main() {
    let mut c = Criterion::default();

    // Group 1: String operations
    let mut group = c.benchmark_group("string_ops");

    for len in [10, 100, 1000] {
        group.bench_with_input(
            BenchmarkId::new("concat", len),
            &len,
            |b, &len| {
                b.iter(|| {
                    let s1 = "x".repeat(len);
                    let s2 = "y".repeat(len);
                    black_box(format!("{}{}", s1, s2))
                });
            },
        );
    }

    group.finish();

    // Group 2: Number operations
    let mut group = c.benchmark_group("number_ops");

    for count in [100, 1000, 10000] {
        group.bench_with_input(
            BenchmarkId::new("sum", count),
            &count,
            |b, &count| {
                b.iter(|| black_box((0..count).sum::<u64>()));
            },
        );
    }

    group.finish();
}
```
Benchmark groups organize related `BenchmarkId`s together for clear reporting.
## Complex Parameter Types
```rust
use criterion::{BenchmarkId, Criterion, black_box};

#[derive(Debug, Clone)]
struct Config {
    size: usize,
    threads: usize,
}

impl std::fmt::Display for Config {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "size={}-threads={}", self.size, self.threads)
    }
}

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("parallel_processing");

    let configs = vec![
        Config { size: 1000, threads: 1 },
        Config { size: 1000, threads: 2 },
        Config { size: 1000, threads: 4 },
        Config { size: 10000, threads: 1 },
        Config { size: 10000, threads: 4 },
    ];

    for config in configs {
        group.bench_with_input(
            BenchmarkId::new("process", config.clone()),
            &config,
            |b, config| {
                b.iter(|| {
                    // Simulate processing
                    black_box((0..config.size).map(|x| x * config.threads).sum::<usize>())
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` works with custom types implementing `Display`.
## Warm-up and Sample Configuration
```rust
use criterion::{BenchmarkId, Criterion};
use std::time::Duration;

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("matrix_multiply");

    // Configure warm-up time and sample count for every parameter in the group
    group.warm_up_time(Duration::from_millis(500));
    group.sample_size(50);

    for size in [10, 50, 100] {
        group.bench_with_input(
            BenchmarkId::new("multiply", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    // Simulate the matrix-multiplication workload (builds the matrices)
                    let matrix: Vec<Vec<i32>> = (0..size)
                        .map(|_| (0..size).collect())
                        .collect();
                    matrix
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` works with all Criterion configuration options.
## Size-Dependent Performance Analysis
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("lookup_performance");

    // Measure how lookup time scales with size
    let sizes = [10, 100, 1_000, 10_000, 100_000, 1_000_000];

    for size in sizes {
        let data: Vec<i32> = (0..size).collect();
        let target = size / 2; // Middle element

        // Linear lookup in Vec
        group.bench_with_input(
            BenchmarkId::new("vec_lookup", size),
            &size,
            |b, _| {
                b.iter(|| black_box(data.iter().position(|&x| x == target)));
            },
        );

        // Hash map lookup
        let map: std::collections::HashMap<i32, i32> =
            (0..size).map(|i| (i, i)).collect();
        group.bench_with_input(
            BenchmarkId::new("hashmap_lookup", size),
            &size,
            |b, _| {
                b.iter(|| black_box(map.get(&target)));
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` enables scaling analysis across multiple orders of magnitude.
## Memory-Related Benchmarking
```rust
use criterion::{BenchmarkId, Criterion, black_box};

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("allocation");

    // Compare allocation strategies
    for size in [100, 1_000, 10_000] {
        // Without pre-allocation
        group.bench_with_input(
            BenchmarkId::new("push_no_capacity", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    let mut vec = Vec::new();
                    for i in 0..size {
                        vec.push(i);
                    }
                    black_box(vec)
                });
            },
        );

        // With pre-allocation
        group.bench_with_input(
            BenchmarkId::new("push_with_capacity", size),
            &size,
            |b, &size| {
                b.iter(|| {
                    let mut vec = Vec::with_capacity(size);
                    for i in 0..size {
                        vec.push(i);
                    }
                    black_box(vec)
                });
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` labels different allocation strategies at each size.
## Benchmarking with Input Data Generation
```rust
use criterion::{BenchmarkId, Criterion, black_box};
use rand::seq::SliceRandom;
use rand::thread_rng;

fn main() {
    let mut c = Criterion::default();

    let mut group = c.benchmark_group("sorting_algorithms");

    for size in [100, 1_000, 10_000] {
        // Pre-generate random data outside the benchmark
        let mut data: Vec<i32> = (0..size).collect();
        data.shuffle(&mut thread_rng());

        group.bench_with_input(
            BenchmarkId::new("sort", size),
            &data,
            |b, data| {
                b.iter_batched(
                    || data.clone(), // Setup: clone the unsorted data each batch
                    |mut data| {
                        data.sort();
                        black_box(data)
                    },
                    criterion::BatchSize::SmallInput,
                );
            },
        );

        group.bench_with_input(
            BenchmarkId::new("sort_unstable", size),
            &data,
            |b, data| {
                b.iter_batched(
                    || data.clone(),
                    |mut data| {
                        data.sort_unstable();
                        black_box(data)
                    },
                    criterion::BatchSize::SmallInput,
                );
            },
        );
    }

    group.finish();
}
```
`BenchmarkId` works with `iter_batched` for setup that depends on parameters.
## Synthesis
**BenchmarkId purpose**:
- Identifies individual benchmark runs within a group
- Combines function name with parameter value
- Creates unique labels for organized output
- Enables comparison across parameter space
**Key components**:
- `BenchmarkId::new(name, parameter)` creates a new ID
- `name`: Function or algorithm identifier
- `parameter`: Any type implementing `Display`
- Used with `group.bench_with_input()`
**Common patterns**:
- Compare algorithms at each input size
- Measure scaling across orders of magnitude
- Compare implementations with same parameters
- Track performance across configurations
**Integration with Criterion**:
- Works with benchmark groups
- Compatible with throughput configuration
- Supports all iteration methods (iter, iter_batched, etc.)
- Enables statistical comparison across IDs
**Parameter formatting**:
- Can use integers directly: `BenchmarkId::new("name", 1000)`
- Custom formatting: `BenchmarkId::new("name", format!("{}K", size))`
- Custom types with `Display` implementation
- Complex parameters with detailed formatting
**Output organization**:
- Groups organize related benchmarks
- IDs provide clear labels in reports
- HTML reports show comparisons across IDs
- Statistical analysis across parameter values
**Use cases**:
- Algorithm complexity analysis (O(n), O(n log n))
- Implementation comparison (HashMap vs BTreeMap)
- Configuration impact (chunk size, buffer size)
- Scaling analysis (memory, time across sizes)
**Key insight**: `BenchmarkId` transforms benchmarks from single-point measurements into multi-dimensional analysis. Instead of answering "how fast is sort?" you answer "how does sort performance scale with input size?" The parameter becomes a dimension of the benchmark, and Criterion can plot, compare, and analyze performance across this dimension. This enables algorithmic complexity measurement (showing O(n²) vs O(n log n) behavior), configuration optimization (finding the best buffer size), and scaling analysis (determining when an algorithm becomes impractical). `BenchmarkId` is essential for any benchmark whose performance depends on a varying parameter.