Rust walkthroughs
criterion::Throughput for normalizing benchmark results by data size

criterion::Throughput tells the benchmark harness how much data is processed during each iteration, allowing Criterion to report throughput metrics such as bytes per second or elements per second instead of just time per iteration. This normalizes results across different input sizes and enables meaningful comparisons between benchmarks that operate on different data volumes. When you set Throughput::Bytes(n), Criterion calculates throughput as n / iteration_time, giving you metrics like "processed 1.2 GB/s" that are more meaningful for data-processing code than raw iteration times. This is essential when benchmarking functions whose work scales with input size, because absolute time varies with data volume while throughput remains comparable.
use criterion::{Criterion, Throughput};

fn basic_throughput() {
    let mut c = Criterion::default();
    // Throughput is configured on a benchmark group, not on the Bencher
    let mut group = c.benchmark_group("basic");
    group.throughput(Throughput::Bytes(1024));
    group.bench_function("process_1kb", |b| {
        let data = vec![0u8; 1024];
        b.iter(|| process_data(&data));
    });
    group.finish();
}

fn process_data(data: &[u8]) -> u8 {
    data.iter().sum()
}

Throughput::Bytes(1024) tells Criterion the benchmark processes 1024 bytes per iteration.
use criterion::Throughput;

fn throughput_types() {
    // Bytes processed
    let _bytes = Throughput::Bytes(1024);
    // Elements/iterations
    let _elements = Throughput::Elements(1000);
    // Recent Criterion versions also offer Throughput::BytesDecimal
    // for base-10 units (MB/s rather than MiB/s)
    // These variants let Criterion calculate rate metrics:
    // Bytes: MB/s, GB/s
    // Elements: items/s, iterations/s
}

Throughput supports both byte-based and element-based measurements.
use criterion::{Criterion, BenchmarkId, Throughput};

fn compare_sizes() {
    let mut c = Criterion::default();
    let sizes = [64, 256, 1024, 4096, 16384];
    let mut group = c.benchmark_group("data_processing");
    for size in sizes {
        let data = vec![0u8; size];
        group.throughput(Throughput::Bytes(size as u64));
        group.bench_with_input(BenchmarkId::new("process", size), &data, |b, data| {
            b.iter(|| process_data(data));
        });
    }
    group.finish();
}

Setting throughput before each bench_with_input call normalizes the results for that input size.
use criterion::{Criterion, Throughput};

fn output_comparison() {
    let mut c = Criterion::default();
    // Without throughput - only time is reported
    c.bench_function("without_throughput", |b| {
        let data = vec![0u8; 1024];
        b.iter(|| process_data(&data));
    });
    // Output: 1.23 µs per iteration

    // With throughput - time AND rate are reported
    let mut group = c.benchmark_group("with_throughput");
    group.throughput(Throughput::Bytes(1024));
    group.bench_function("process_1kb", |b| {
        let data = vec![0u8; 1024];
        b.iter(|| process_data(&data));
    });
    group.finish();
    // Output: 1.23 µs per iteration
    //         833 MB/s
}

Without throughput, only time metrics are reported; with throughput, rate metrics appear as well.
use criterion::Throughput;

fn throughput_values() {
    // Throughput is the amount processed PER ITERATION,
    // not the total amount processed in the benchmark.
    // If one iteration processes 1 KB:
    Throughput::Bytes(1024);
    // If one iteration processes 1 million items:
    Throughput::Elements(1_000_000);
    // The throughput value must match what one iter() call processes.
    // Common patterns:
    // Reading 1 MB from a file:
    Throughput::Bytes(1024 * 1024);
    // Processing 1000 JSON objects:
    Throughput::Elements(1000);
    // Parsing 500 lines:
    Throughput::Elements(500);
}

The throughput value reflects what a single iter() call processes.
fn why_throughput_matters() {
    // Consider two implementations:
    // Implementation A: 100 µs for 1 KB
    // Implementation B: 1 ms for 10 MB
    // Without throughput, A looks faster (100 µs vs 1 ms).
    // With throughput:
    // A: 1 KB / 100 µs = 10 MB/s
    // B: 10 MB / 1 ms  = 10 GB/s
    // B actually processes data about 1000x faster per byte.
    // The point: comparing raw times is misleading
    // when the work being done differs.
}

Throughput metrics reveal the true performance of data-processing workloads.
use criterion::{Criterion, BenchmarkId, Throughput};

fn data_pipeline() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("json_parser");
    let sizes = [100, 1000, 10000];
    for size in sizes {
        let json_data = generate_json_array(size);
        let bytes = json_data.len() as u64;
        group.throughput(Throughput::Bytes(bytes));
        group.bench_with_input(BenchmarkId::new("parse", size), &json_data, |b, data| {
            b.iter(|| parse_json(data));
        });
    }
    group.finish();
}

fn generate_json_array(n: usize) -> String {
    format!("[{}]", (0..n).map(|_| "null").collect::<Vec<_>>().join(","))
}

fn parse_json(data: &str) -> serde_json::Value {
    serde_json::from_str(data).unwrap()
}

For parsers, byte-based throughput derived from the actual input length normalizes results across document sizes.
use criterion::{Criterion, Throughput};

fn elements_vs_bytes() {
    let mut c = Criterion::default();
    // Bytes: for I/O, memory operations, serialization
    let mut bytes_group = c.benchmark_group("copy_bytes");
    bytes_group.throughput(Throughput::Bytes(1024 * 1024));
    bytes_group.bench_function("clone_1mb", |b| {
        let data = vec![0u8; 1024 * 1024];
        b.iter(|| data.clone());
    });
    bytes_group.finish();

    // Elements: for collections, algorithms, iteration
    let mut elem_group = c.benchmark_group("process_items");
    elem_group.throughput(Throughput::Elements(10_000));
    elem_group.bench_function("sum_10k", |b| {
        let items: Vec<i32> = (0..10_000).collect();
        b.iter(|| items.iter().sum::<i32>());
    });
    elem_group.finish();
}

Use bytes for I/O-bound work; elements for algorithm-bound work.
use criterion::{Criterion, Throughput};

fn multiple_metrics() {
    // Sometimes you want both bytes and elements.
    // Criterion only supports one throughput per benchmark,
    // so the workaround is to report each in a separate benchmark.
    let mut c = Criterion::default();
    let data = vec![0u8; 1024];
    let elements = (data.len() / 4) as u64; // Treat each element as 4 bytes

    let mut by_bytes = c.benchmark_group("by_bytes");
    by_bytes.throughput(Throughput::Bytes(1024));
    by_bytes.bench_function("process", |b| b.iter(|| process(&data)));
    by_bytes.finish();

    let mut by_elements = c.benchmark_group("by_elements");
    by_elements.throughput(Throughput::Elements(elements));
    by_elements.bench_function("process", |b| b.iter(|| process(&data)));
    by_elements.finish();
}

Criterion supports one throughput type per benchmark; create separate benchmarks for different metrics.
use criterion::{Criterion, Throughput, BenchmarkId};

fn group_throughput() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("comparison");
    // Set throughput once at group level;
    // all benchmarks in the group use the same throughput
    group.throughput(Throughput::Bytes(1024));
    for algorithm in ["algo_a", "algo_b", "algo_c"] {
        group.bench_with_input(BenchmarkId::new("sort", algorithm), &algorithm, |b, algo| {
            let data = vec![0u8; 1024];
            b.iter(|| sort_data(&data, algo));
        });
    }
    group.finish();
}

Group-level throughput applies to all benchmarks in the group.
use criterion::{Criterion, Throughput, BenchmarkId};

fn dynamic_throughput() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("variable_input");
    let test_cases = vec![
        (100, 100),
        (500, 500),
        (1000, 1000),
    ];
    for (size, expected_elements) in test_cases {
        group.throughput(Throughput::Elements(expected_elements));
        group.bench_with_input(BenchmarkId::new("process", size), &size, |b, &size| {
            let data = generate_data(size);
            b.iter(|| process(&data));
        });
    }
    group.finish();
}

Set throughput dynamically based on each input's characteristics.
fn interpreting_results() {
    // Criterion output with throughput:
    //
    // process/1000
    //     time:   [1.2345 ms 1.2500 ms 1.2655 ms]
    //     thrpt:  [790.00 MiB/s 800.00 MiB/s 810.00 MiB/s]
    //
    // The thrpt line shows data processed per second
    // (consistent with a 1 MiB input here), which is more
    // useful than raw time when comparing across sizes.

    // Example: linear scaling
    // Size 1000: 1 ms, 1 MB/s
    // Size 2000: 2 ms, 1 MB/s (same throughput, larger input)
    // Size 4000: 4 ms, 1 MB/s (constant throughput = linear scaling)

    // Example: sublinear throughput
    // Size 1000: 1 ms, 1 MB/s
    // Size 2000: 3 ms, 0.67 MB/s (slower per byte)
    // Could indicate an O(n²) or worse algorithm
}

Throughput reveals scaling characteristics across input sizes.
use criterion::{Criterion, Throughput};

fn regression_detection() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("serial_data");
    group.throughput(Throughput::Bytes(1024 * 1024));
    group.bench_function("process", |b| {
        let data = vec![0u8; 1024 * 1024];
        b.iter(|| process_serial(&data));
    });
    group.finish();
    // Criterion tracks both time and throughput across runs.
    // A regression in throughput is often more meaningful:
    // - 10% slower time might be due to input size variation
    // - 10% slower throughput is a real performance loss
    // With throughput:
    // Baseline: 100 MB/s
    // New:       80 MB/s  <- 20% regression, actionable
}

Throughput-based regression detection catches real performance issues.
use criterion::{Criterion, Throughput, BenchmarkId};

fn hash_benchmark() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("hash_functions");
    let sizes = [32, 64, 256, 1024, 4096];
    for size in sizes {
        let data = vec![0u8; size];
        group.throughput(Throughput::Bytes(size as u64));
        group.bench_with_input(BenchmarkId::new("blake3", size), &data, |b, data| {
            b.iter(|| blake3::hash(data))
        });
        group.bench_with_input(BenchmarkId::new("sha256", size), &data, |b, data| {
            b.iter(|| sha256(data))
        });
    }
    group.finish();
    // Output shows throughput for each algorithm at each size, so you can:
    // - Compare algorithm efficiency at the same size
    // - Compare scaling characteristics across sizes
    // - Identify the optimal algorithm for a use case
}

fn sha256(data: &[u8]) -> [u8; 32] {
    use sha2::{Sha256, Digest};
    let mut hasher = Sha256::new();
    hasher.update(data);
    hasher.finalize().into()
}

Throughput enables fair comparison of algorithms across input sizes.
use criterion::{Criterion, Throughput};

fn throughput_vs_time() {
    let mut c = Criterion::default();
    let mut group = c.benchmark_group("time_vs_throughput");
    group.throughput(Throughput::Bytes(1024));
    group.bench_function("process", |b| {
        let data = vec![0u8; 1024];
        b.iter(|| process(&data));
    });
    group.finish();
    // Both metrics are reported:
    // time:  [12.345 µs 12.500 µs 12.655 µs]
    // thrpt: [80.0 MiB/s 81.0 MiB/s 82.0 MiB/s]
    // Time answers: how long does it take?
    // Throughput answers: how much work per second?
    // Prefer time when:
    // - Latency matters (request handling)
    // - Absolute speed matters (game loop)
    // Prefer throughput when:
    // - Comparing across sizes
    // - Data processing rate matters (file I/O)
    // - Doing scaling analysis (algorithm complexity)
}

Both time and throughput provide valuable information; choose based on context.
use criterion::{Criterion, Throughput};

fn complex_workload() {
    let mut c = Criterion::default();
    // Database query benchmark
    let mut group = c.benchmark_group("query_processing");
    group.throughput(Throughput::Elements(100)); // 100 rows per iteration
    group.bench_function("select_100", |b| {
        b.iter(|| {
            let rows = query_database("SELECT * FROM users LIMIT 100");
            process_rows(&rows);
        });
    });
    group.finish();
    // What if iterations vary?
    // The throughput value must represent the amount processed per iteration.
    // If iterations process variable amounts, either fix the amount
    // per iteration or use an average/representative value.
    // Best practice: make iterations deterministic.
}

For consistent results, each iteration should process a known amount of data.
use criterion::{Criterion, Throughput};

fn html_reports() {
    // HTML reports are generated under target/criterion by default
    // (when the html_reports feature and a plotting backend are available)
    let mut c = Criterion::default();
    // Throughput appears in the generated HTML reports.
    // Charts show:
    // - Throughput over time (for regression tracking)
    // - Throughput comparison between versions
    // - Throughput across input sizes (scaling charts)
    let mut group = c.benchmark_group("data_processing");
    group.throughput(Throughput::Bytes(1024 * 1024));
    group.bench_function("process_1mb", |b| {
        let data = vec![0u8; 1024 * 1024];
        b.iter(|| process(&data));
    });
    group.finish();
    // The HTML report includes throughput graphs,
    // useful for presentations and regression analysis
}

Criterion's HTML reports visualize throughput trends over time.
What Throughput provides:
// Without throughput:
// Benchmark: "process 1 KB"
// Output: 100 µs
// Interpretation: "takes 100 microseconds"
// With throughput:
// Benchmark: "process 1 KB"
// Output: 100 µs, 10 MB/s
// Interpretation: "takes 100 microseconds, processes 10 MB/s"
// Throughput adds context about the work done

When to use Throughput:
// Use Throughput when:
// - Benchmark processes data (I/O, serialization, parsing)
// - Comparing across input sizes
// - Measuring algorithm scaling
// - Reporting data processing rate
// Skip Throughput when:
// - Benchmark doesn't have clear "amount processed"
// - Measuring pure latency (single operation)
// - Input size is fixed and comparison across sizes is not needed

Throughput types:
// Throughput::Bytes(n):
// - File I/O, network, memory operations
// - Reports: B/s, KB/s, MB/s, GB/s
// Throughput::Elements(n):
// - Collection processing, algorithms
// - Reports: items/s, iterations/s

Key insight: criterion::Throughput normalizes benchmark results by telling Criterion how much work each iteration performs, enabling meaningful throughput metrics (bytes/second, elements/second) alongside time measurements. This is essential when benchmarking data-processing code because throughput remains comparable across different input sizes: comparing "100 MB/s" between algorithms is more meaningful than comparing raw times for different data volumes. Set Throughput::Bytes(n) for I/O operations and Throughput::Elements(n) for collection processing, and Criterion will report both time and rate metrics, enabling fair comparisons and regression detection for data-intensive workloads.
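The examples above construct Criterion::default() inline for brevity. In a real project, benchmarks are typically wired up with the criterion_group! and criterion_main! macros plus a [[bench]] entry in Cargo.toml. A minimal sketch (the bench name throughput_bench and the crate version are illustrative):

```rust
// Cargo.toml:
//   [dev-dependencies]
//   criterion = "0.5"
//
//   [[bench]]
//   name = "throughput_bench"
//   harness = false
//
// benches/throughput_bench.rs:
use criterion::{criterion_group, criterion_main, Criterion, Throughput};

fn process_data(data: &[u8]) -> u8 {
    data.iter().sum()
}

fn bench_process(c: &mut Criterion) {
    let mut group = c.benchmark_group("process");
    group.throughput(Throughput::Bytes(1024));
    group.bench_function("1kb", |b| {
        let data = vec![0u8; 1024];
        b.iter(|| process_data(&data));
    });
    group.finish();
}

criterion_group!(benches, bench_process);
criterion_main!(benches);
```

Run with cargo bench; time and throughput results, plus any HTML reports, appear under target/criterion.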