The Cargo Guide
Testing, Benchmarking, and Documentation Workflows
Why Cargo Owns More Than Building
Cargo is not only a package manager and build tool. It is also the front door for testing, benchmarking, examples, and documentation generation. In practice, this means a large part of a Rust developer's workflow runs through Cargo commands rather than through ad hoc scripts.
A useful mental model is:
- cargo build compiles
- cargo test validates behavior
- cargo bench measures performance
- cargo doc generates documentation
These are not separate ecosystems. They are coordinated parts of one Cargo-driven workflow.
A Small Example Package
Suppose we start with a small package:
cargo new workflow_lab --lib
cd workflow_lab

Manifest:
[package]
name = "workflow_lab"
version = "0.1.0"
edition = "2024"

Library code:
/// Returns the square of a number.
///
/// # Examples
///
/// ```
/// assert_eq!(workflow_lab::square(4), 16);
/// ```
pub fn square(x: i32) -> i32 {
x * x
}

This is enough to demonstrate unit tests, integration tests, doctests, and documentation generation.
Unit Tests
Unit tests usually live inside the crate source itself, typically in a #[cfg(test)] module.
Example:
pub fn square(x: i32) -> i32 {
x * x
}
#[cfg(test)]
mod tests {
use super::square;
#[test]
fn squares_correctly() {
assert_eq!(square(5), 25);
}
}

Run them with:

cargo test

A useful mental model is:
- unit tests test a crate from the inside
- they have access to internal module structure and private items through normal Rust testing patterns
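To make that concrete, here is a self-contained sketch in which a unit test calls a private function directly (the halve and quarter names are hypothetical):

```rust
// A private helper: external crates cannot call it, but the #[cfg(test)]
// module below lives in the same crate and can.
fn halve(x: i32) -> i32 {
    x / 2
}

pub fn quarter(x: i32) -> i32 {
    halve(halve(x))
}

#[cfg(test)]
mod tests {
    use super::{halve, quarter};

    #[test]
    fn private_helper_is_reachable() {
        assert_eq!(halve(10), 5);   // private item, reachable from a unit test
        assert_eq!(quarter(16), 4); // public API works too
    }
}

fn main() {
    // In a real library this file would have no main; it is here only so
    // the sketch runs standalone.
    println!("quarter(16) = {}", quarter(16));
}
```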
Integration Tests
Integration tests usually live under the tests/ directory and use the crate more like an external consumer would.
Example layout:
workflow_lab/
├── Cargo.toml
├── src/
│   └── lib.rs
└── tests/
    └── api.rs

Example integration test:
use workflow_lab::square;
#[test]
fn squares_correctly() {
assert_eq!(square(6), 36);
}

Run all tests with:

cargo test

Or run just that integration test target:

cargo test --test api

A useful mental model is:
- unit tests test from inside the crate
- integration tests test from outside the crate boundary
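The boundary can be sketched in one self-contained file, where a module stands in for the library crate (the internal_helper name is hypothetical):

```rust
// The `workflow_lab` module plays the role of the library crate; the code
// in main plays the role of an integration test in tests/.
mod workflow_lab {
    // Private: invisible across the module boundary, just as private items
    // are invisible to integration tests across the crate boundary.
    fn internal_helper(x: i32) -> i32 {
        x.abs()
    }

    pub fn square(x: i32) -> i32 {
        let a = internal_helper(x);
        a * a
    }
}

fn main() {
    // Only the public API is reachable from outside.
    assert_eq!(workflow_lab::square(-4), 16);
    // workflow_lab::internal_helper(-4); // would not compile: private item
    println!("integration-style check passed");
}
```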
Examples as Part of the Workflow
Examples live under examples/ and are a normal part of package structure.
Example:
workflow_lab/
└── examples/
    └── quickstart.rs

Example source:
use workflow_lab::square;
fn main() {
println!("{}", square(7));
}

Run an example with:

cargo run --example quickstart

Examples are useful because they act as executable documentation and usage sketches.
A subtle but important point: cargo test builds examples by default so that they keep compiling, but it does not run them as test targets unless they are configured that way in the manifest.
Benchmark Targets
Benchmark targets usually live under benches/.
Example layout:
workflow_lab/
└── benches/
    └── perf.rs

A very small illustrative benchmark-like file might look like this:
fn main() {
println!("benchmark target placeholder");
}

Run benchmark targets with:

cargo bench

Or run one named bench target:

cargo bench --bench perf

A useful mental model is:

- cargo bench is the benchmark-oriented peer of cargo test
- benchmark targets are part of the package's target surface, not a separate build system
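Stable Rust does not ship a stable built-in #[bench] attribute (it is nightly-only), so a bench target often supplies its own main. Here is a minimal hand-rolled timing sketch, not a statistical harness; run_bench and the iteration count are illustrative choices:

```rust
use std::time::Instant;

fn square(x: i32) -> i32 {
    x * x
}

// A plain timing loop: it returns a checksum so the compiler cannot
// optimize the work away entirely.
fn run_bench() -> i64 {
    let mut acc: i64 = 0;
    for i in 0..1_000_000 {
        acc += square(i % 1000) as i64;
    }
    acc
}

fn main() {
    let start = Instant::now();
    let acc = run_bench();
    println!("1,000,000 squares in {:?} (checksum {})", start.elapsed(), acc);
}
```

A bench file with its own main would typically need harness = false on its [[bench]] target so Cargo does not wrap it in the default test harness.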
Documentation Tests
Documentation tests, often called doctests, come from Rust code blocks embedded in documentation comments.
Example:
/// Returns the square of a number.
///
/// # Examples
///
/// ```
/// assert_eq!(workflow_lab::square(4), 16);
/// ```
pub fn square(x: i32) -> i32 {
x * x
}

Then cargo test will also run that documentation example as a test by default for the library target.
A useful mental model is:
- doctests live in docs, not in tests/
- they validate that examples shown to users still compile and behave as documented
Doctests vs Test Targets
Doctests are different from unit tests and integration tests in an important way. Unit and integration tests are ordinary Rust test targets. Doctests are extracted from documentation comments and tested through rustdoc.
That means a crate can have both:
- source-based test targets under src/ and tests/
- documentation-based tests inside comments
A useful distinction is:
- test targets validate dedicated test code
- doctests validate user-facing documentation examples
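Doctests also offer rustdoc-specific conveniences that ordinary test targets lack: lines prefixed with `# ` execute during the doctest but are hidden from the rendered docs, and fence attributes such as no_run or should_panic adjust how the example is executed. A small sketch (add_one is a hypothetical function):

````rust
/// Adds one to its argument.
///
/// The `# ` line below runs as part of the doctest during `cargo test`,
/// but does not appear in the HTML that `cargo doc` renders:
///
/// ```
/// # let input = 1; // hidden setup line
/// assert_eq!(workflow_lab::add_one(input), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // Present only so the sketch runs standalone; a library file omits main.
    println!("add_one(41) = {}", add_one(41));
}
````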
A Concrete Doctest and Integration Test Comparison
Suppose src/lib.rs contains:
/// Returns the square of a number.
///
/// ```
/// assert_eq!(workflow_lab::square(3), 9);
/// ```
pub fn square(x: i32) -> i32 {
x * x
}

And tests/api.rs contains:
use workflow_lab::square;
#[test]
fn api_squares_correctly() {
assert_eq!(square(3), 9);
}

Then cargo test checks both the API behavior and the documentation example, but the tests come from different parts of the package workflow.
Generating Documentation
Cargo generates documentation with cargo doc.
Example:
cargo doc

And a common interactive variant is:

cargo doc --open

Suppose the library contains:
/// Returns the square of a number.
pub fn square(x: i32) -> i32 {
x * x
}

Then cargo doc builds HTML documentation into the target directory, usually under target/doc/.
A useful mental model is:
- cargo doc is the normal documentation build workflow
- doctests and doc generation are related, but not the same operation
Documentation Generation vs Documentation Testing
It is easy to confuse cargo doc and doctests because both involve documentation.
The distinction is:
- cargo doc generates documentation output
- cargo test runs doctests by default for the library target
This means documentation can be both a publishable artifact and a validation surface.
Selective Test Execution
Cargo supports narrowing test execution in several ways.
Run only tests whose names match a filter:
cargo test squareRun only one integration test target:
cargo test --test apiRun only library tests:
cargo test --libPass extra arguments to the test harness after --:
cargo test -- --nocaptureA useful mental model is:
- Cargo first decides which test targets to build and run
- the test harness can then be given extra execution options after --
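For instance, output printed from a passing test is captured and hidden by the harness on a normal run; the -- --nocapture flag above makes it visible. A small sketch (double is a hypothetical function):

```rust
pub fn double(x: i32) -> i32 {
    x * 2
}

#[cfg(test)]
mod tests {
    #[test]
    fn doubles_and_logs() {
        let result = super::double(21);
        // Hidden on a plain `cargo test`; shown with `cargo test -- --nocapture`.
        println!("intermediate result: {result}");
        assert_eq!(result, 42);
    }
}

fn main() {
    // Present only so the sketch runs standalone; a library file omits main.
    println!("double(21) = {}", double(21));
}
```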
Selective Example and Benchmark Execution
Cargo also supports selective execution for examples and benchmark targets.
Examples:
cargo run --example quickstart
cargo build --examples
cargo bench --bench perf
cargo bench --benches

This matters in larger packages where examples or benches are numerous and you do not want every command to act on all of them.
Feature-Conditioned Test Matrices
Once a crate has features, testing often becomes a matrix problem rather than a single-command workflow.
Suppose the manifest contains:
[dependencies]
serde = { version = "1", optional = true, features = ["derive"] }
[features]
default = ["text"]
text = []
json = []
serde_support = ["dep:serde"]

Then a realistic test matrix might include:
cargo test
cargo test --no-default-features
cargo test --features json
cargo test --features "json serde_support"
cargo test --all-features

A useful mental model is:
- default feature testing is not enough once optional capability surfaces exist
- testing representative feature combinations is part of package quality
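A minimal local runner for such a matrix can be a short shell loop. The feature names match the manifest above; the actual cargo invocation is left commented out so the sketch is safe to run anywhere:

```shell
set -e
echo "would run: cargo test"
# Representative non-default combinations for this hypothetical manifest.
for flags in "--no-default-features" "--features json" "--all-features"; do
  echo "would run: cargo test $flags"
  # cargo test $flags   # uncomment inside the real crate
done
```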
Why Feature Matrices Matter
A crate can compile and test successfully under default features while still failing under another supported feature combination.
For example, code guarded by:
#[cfg(feature = "json")]
pub fn output_mode() -> &'static str {
"json"
}

Code like this may never be compiled at all in ordinary default-only test runs.
That is why feature-conditioned testing is an important Cargo workflow rather than just an advanced curiosity.
A Small Feature-Test Example
Suppose src/lib.rs contains:
#[cfg(feature = "json")]
pub fn output_mode() -> &'static str {
"json"
}
#[cfg(not(feature = "json"))]
pub fn output_mode() -> &'static str {
"text"
}
#[cfg(test)]
mod tests {
use super::output_mode;
#[test]
fn mode_is_valid() {
assert!(matches!(output_mode(), "text" | "json"));
}
}

Then these commands exercise different build conditions:

cargo test
cargo test --features json

Workspace-Wide Testing Strategies
In a workspace, testing often happens at more than one scope.
Suppose the workspace root manifest contains:
[workspace]
members = ["app", "core", "tools"]
default-members = ["app", "core"]
resolver = "3"

Then from the workspace root you can run:
cargo test
cargo test --workspace
cargo test -p core
cargo test --workspace --exclude tools

A useful mental model is:
- workspace root without flags uses workspace selection rules
- --workspace means all members
- -p narrows to one package
- --exclude trims a wide selection
Why Workspace-Wide Testing Needs Strategy
In a small workspace, cargo test --workspace may be enough. In a large monorepo, testing strategy often becomes more layered.
A practical approach might include:
- fast default-member tests for common changes
- package-targeted tests for specific crates
- periodic full-workspace validation
- feature-matrix testing for shared library crates
This is especially important when the workspace contains both internal libraries and top-level binaries.
Examples, Tests, and Docs in One Package
A more complete package layout might look like this:
workflow_lab/
├── Cargo.toml
├── src/
│   └── lib.rs
├── examples/
│   └── quickstart.rs
├── tests/
│   └── api.rs
└── benches/
    └── perf.rs

With commands like:
cargo test
cargo run --example quickstart
cargo bench --bench perf
cargo doc --open

This makes it clear that Cargo coordinates many validation and teaching surfaces from one package structure.
Target Flags That Influence Default Workflows
Cargo target definitions can influence default testing, benchmarking, and documentation behavior.
For example, target settings such as test, doctest, bench, doc, and harness can change whether a target participates by default in cargo test, cargo bench, or cargo doc.
Example target declaration:
[[example]]
name = "quickstart"
path = "examples/quickstart.rs"
test = true

And a library target can control doctest participation:
[lib]
path = "src/lib.rs"
doctest = true

This means the manifest can shape the workflow surface, not just the filesystem layout.
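A common use of these flags is disabling the default harness for a bench target that supplies its own main function, for example when driving benchmarks through an external crate. A sketch, matching the earlier perf bench layout:

```toml
[[bench]]
name = "perf"
harness = false   # the file provides its own main() instead of #[bench] functions
```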
A Simple Documentation-Centered Workflow
A crate that values documentation quality often uses a workflow like this:
cargo doc
cargo test

The first ensures docs build correctly. The second ensures code examples embedded in docs still work.
Suppose a function is documented like this:
/// Returns the square of a number.
///
/// ```
/// assert_eq!(workflow_lab::square(8), 64);
/// ```
pub fn square(x: i32) -> i32 {
x * x
}

Then the documentation is not only readable. It is also validated.
A Practical CI Workflow
A simple CI workflow for one package might look like this:
cargo test
cargo test --all-features
cargo doc

And for a workspace:
cargo test --workspace
cargo doc --workspace

A more selective CI for a featureful library crate might add:
cargo test --no-default-features
cargo test --features json
cargo test --all-features

This illustrates how testing, documentation, and feature coverage can be composed into one workflow.
Common Beginner Mistakes
Mistake 1: thinking cargo test only means unit tests.
Mistake 2: forgetting that doctests are part of the default library test workflow.
Mistake 3: assuming examples are only teaching files and never part of validation.
Mistake 4: never testing non-default feature combinations.
Mistake 5: using workspace roots without understanding how package selection changes test scope.
Mistake 6: confusing cargo doc with doctest execution.
Hands-On Exercise
Create a small crate with unit tests, an integration test, an example, and a doctest.
Start here:
cargo new test_doc_lab --lib
cd test_doc_lab
mkdir -p tests examples benches

Use this library source:
/// Returns the square of a number.
///
/// ```
/// assert_eq!(test_doc_lab::square(4), 16);
/// ```
pub fn square(x: i32) -> i32 {
x * x
}
#[cfg(test)]
mod tests {
use super::square;
#[test]
fn unit_square() {
assert_eq!(square(5), 25);
}
}

Add an integration test in tests/api.rs:
use test_doc_lab::square;
#[test]
fn integration_square() {
assert_eq!(square(6), 36);
}

Add an example in examples/quickstart.rs:
use test_doc_lab::square;
fn main() {
println!("{}", square(7));
}

Then run:
cargo test
cargo test --lib
cargo test --test api
cargo run --example quickstart
cargo doc --open

This makes Cargo's testing and documentation workflow surfaces concrete.
Mental Model Summary
A strong mental model for testing, benchmarking, and documentation workflows in Cargo is:
- cargo test coordinates unit tests, integration tests, and doctests
- examples are part of the package workflow and are built by cargo test by default
- cargo bench handles benchmark targets
- cargo doc generates documentation artifacts
- doctests validate documentation examples, while test targets validate dedicated test code
- featureful crates need representative feature-matrix testing
- workspaces need explicit package-selection strategy for testing scope
Once this model is stable, Cargo's testing and documentation commands become much easier to use as one coherent workflow rather than as separate tools.
