The Cargo Guide
Troubleshooting and Diagnostics
Why Cargo Troubleshooting Needs a Mental Model
Cargo errors often look like one-line failures, but they usually come from one of a few deeper layers: dependency resolution, feature selection, target and linker setup, build scripts, configuration precedence, registry or authentication behavior, cache state, or network access. A useful mental model is:
- resolver problems mean Cargo could not build a valid dependency graph
- build problems mean Cargo found a graph but could not compile or link it
- environment problems mean Cargo's graph and commands are being shaped by config, caches, auth, or network state in ways you may not have noticed
Troubleshooting gets much easier when you first identify which layer the failure belongs to.
The Main Diagnostic Surfaces
Cargo has several especially important diagnostic surfaces:
- terminal error messages from the failing command
- `cargo tree` for graph inspection
- `cargo metadata --format-version 1` for machine-readable structure
- `.cargo/config.toml` and environment variables for configuration state
- `build.rs` output and emitted directives for build-script-related issues
- lockfile and source configuration for dependency provenance and graph stability
A useful mental model is:
- use the narrowest diagnostic surface that matches the likely failure layer
- do not start by changing random manifest lines when the problem may actually be config, cache, or environment
A Small Example Project
Suppose you start with a small package:
```sh
cargo new troubleshoot_demo --lib
cd troubleshoot_demo
```

Manifest:
```toml
[package]
name = "troubleshoot_demo"
version = "0.1.0"
edition = "2024"

[dependencies]
serde = { version = "1", optional = true, features = ["derive"] }
regex = "1"

[features]
default = []
json = []
serde_support = ["dep:serde"]
```

Source:
```rust
#[cfg(feature = "serde_support")]
use serde::Serialize;

#[cfg_attr(feature = "serde_support", derive(Serialize))]
pub struct Item {
    pub name: String,
}

pub fn has_number(s: &str) -> bool {
    regex::Regex::new(r"\d").unwrap().is_match(s)
}
```

This is enough to demonstrate several common failure modes.
Reading Resolver Errors
Resolver errors happen when Cargo cannot choose a dependency graph that satisfies the declared requirements. These often involve incompatible version requirements, source mismatches, or feature expectations that cannot be satisfied together.
A useful mental model is:
- resolver errors happen before actual compilation begins
- they are graph-construction failures, not code-generation failures
When you see a resolver error, the first questions to ask are:
- which crate names appear in the conflict?
- are the constraints version-based, source-based, or feature-based?
- is the lockfile part of the problem, or is the manifest itself inconsistent?
A Small Version-Conflict Example
Suppose one part of your graph wants one version line and another part wants an incompatible one.
Conceptually:
```toml
[dependencies]
crate_a = "1"
crate_b = "1"
```

Even though your manifest looks simple, `crate_a` and `crate_b` may pull incompatible versions of some shared transitive crate. When Cargo cannot find a valid graph, the right next move is often inspection rather than editing blindly.
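The unification rule behind such conflicts can be sketched in plain Rust. This is a simplified model of semver "caret" compatibility, not Cargo's actual resolver code, and it ignores pre-release rules:

```rust
// Simplified sketch: two resolved versions can be unified into one copy
// only if they share a compatible line. For major >= 1 that means the
// same major; for 0.x lines the minor acts as the "major".
fn compatible(a: (u64, u64, u64), b: (u64, u64, u64)) -> bool {
    match (a.0, b.0) {
        (0, 0) => a.1 == b.1, // 0.x: minor bump is a breaking change
        _ => a.0 == b.0,      // otherwise: same major line required
    }
}

fn main() {
    assert!(compatible((1, 2, 3), (1, 9, 0)));  // both satisfy "1"
    assert!(!compatible((1, 2, 3), (2, 0, 0))); // major bump: two copies coexist
    assert!(!compatible((0, 3, 1), (0, 4, 0))); // 0.x minor bump: incompatible
    println!("ok");
}
```

This is why "simple" manifests can still conflict: two dependencies may each be correct in isolation while demanding incompatible lines of the same transitive crate.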
Useful commands:
```sh
cargo tree
cargo tree -d
```

Why cargo tree Is Often the First Resolver Tool
cargo tree is one of the best first tools for understanding why Cargo chose or could not choose a graph.
Examples:
```sh
cargo tree
cargo tree -d
cargo tree -i some_crate
cargo tree -e features
```

A useful mental model is:
- `cargo tree` shows the resolved or partially resolved graph shape you need to reason about
- `-d` highlights duplicate versions
- `-i` answers who pulled in a crate
- `-e features` helps explain feature-related graph shape
Why Cargo Chose a Given Version
When you want to know why a particular crate version appears, the key questions are:
- who depends on it?
- what version requirements or sources constrained it?
- did the lockfile pin it?
Useful commands include:
```sh
cargo tree -i some_crate
cargo tree -d
cargo update -p some_crate
```

A useful mental model is:
- a chosen version is usually the result of direct constraints, transitive constraints, and lockfile state together
- you need graph inspection before you can decide whether to change the manifest, the lockfile, or neither
Feature Mismatch Errors
Feature mismatch errors often happen when code assumes a feature is enabled but the active build configuration does not actually enable it, or when multiple parts of the graph activate features in ways that surprise you.
A useful mental model is:
- features only affect code and dependencies that are active in the current build configuration
- Cargo's feature system is generally additive, not exclusive
That means many feature problems are really problems of build scope and graph visibility.
A Simple Feature-Scope Example
Suppose your code contains:
```rust
#[cfg(feature = "json")]
pub fn output_mode() -> &'static str {
    "json"
}
```

If you build without that feature:

```sh
cargo build
```

then that code is not part of the active compilation. If you need to diagnose feature-specific behavior, activate the relevant feature set explicitly:
```sh
cargo build --features json
cargo test --all-features
cargo tree -e features
```

Mutually Exclusive Feature Traps
A common feature problem is designing features as if exactly one will be active, even though Cargo may unify them additively. If the code assumes only one of two features can ever be enabled, the graph may still activate both in real builds.
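If exclusivity is genuinely required, one defensive sketch is to turn the impossible combination into a compile-time error. The `yaml` feature name here is hypothetical, for illustration:

```rust
// Hypothetical lib.rs guard: if additive unification ever enables both
// features at once, fail compilation with a clear message instead of
// silently producing surprising behavior.
#[cfg(all(feature = "json", feature = "yaml"))]
compile_error!("features `json` and `yaml` are mutually exclusive; enable only one");

// Report the active mode, defaulting when neither feature is enabled.
pub fn output_mode() -> &'static str {
    if cfg!(feature = "json") {
        "json"
    } else if cfg!(feature = "yaml") {
        "yaml"
    } else {
        "plain"
    }
}

fn main() {
    // Compiled with no feature flags, this reports the default mode.
    println!("{}", output_mode());
}
```

The guard does not make the features exclusive; it makes the violation loud, which is usually the best Cargo's additive model allows.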
When feature behavior seems contradictory, inspect both:
```sh
cargo tree -e features
cargo metadata --format-version 1
```

A useful mental model is:
- if features look inconsistent, the active graph is often broader than the local manifest layout suggested
Target and Linker Failures
Target and linker failures usually happen after dependency resolution has succeeded. In these cases Cargo knows what to build, but the target toolchain, linker, runner, or native dependencies are not configured correctly.
A useful mental model is:
- resolver failures are about graph selection
- linker failures are about producing a final artifact for the chosen target
These often appear in cross-compilation or native-library workflows.
A Simple Linker Configuration Example
Suppose .cargo/config.toml contains:
```toml
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
runner = "qemu-aarch64"
```

If that linker is missing or incorrect, a command like:

```sh
cargo build --target aarch64-unknown-linux-gnu
```

may fail even though the Rust code itself is fine. In these cases, inspect:
- target triple
- linker path or name
- whether host and target have been confused
- whether native libraries or headers are available for the target
Host vs Target Confusion
A major source of cross-compilation confusion is mixing up host and target. Build scripts and proc macros are host-side artifacts, while the final crate may be target-side.
A useful debugging build script is:
```rust
fn main() {
    let host = std::env::var("HOST").unwrap();
    let target = std::env::var("TARGET").unwrap();
    println!("cargo::warning=host={host} target={target}");
}
```

This is often the fastest way to see whether a cross-build failure is really about the target, or about host-side tooling used during the build.
Build-Script Debugging
Build-script problems are common because build.rs can inspect the environment, emit Cargo directives, compile native code, generate source, and influence later compilation.
A useful mental model is:
- build scripts are executable build-time programs
- they are often the first place to inspect when a crate with native integration or generated code fails unexpectedly
A Minimal Diagnostic build.rs
A small diagnostic build script can make build-time context visible:
```rust
fn main() {
    println!("cargo::warning=OUT_DIR={}", std::env::var("OUT_DIR").unwrap());
    println!("cargo::warning=TARGET={}", std::env::var("TARGET").unwrap());
    println!("cargo::warning=PROFILE={}", std::env::var("PROFILE").unwrap());
}
```

Then run:

```sh
cargo build -v
```

A useful mental model is:
- if build scripts are part of the problem, make their inputs and assumptions visible first
Rerun Trigger Confusion
Sometimes build scripts rerun too often or not when expected. This usually means the rerun triggers do not accurately describe the script's real inputs.
Example:
```rust
fn main() {
    println!("cargo::rerun-if-changed=build.rs");
    println!("cargo::rerun-if-changed=schema/input.txt");
    println!("cargo::rerun-if-env-changed=MY_NATIVE_LIB_DIR");
}
```

A useful mental model is:
- build-script freshness depends on declared inputs plus Cargo's own build tracking
- when reruns look strange, suspect undeclared or misdeclared inputs first
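One way to keep declarations honest is to derive them from the inputs the script actually reads, so the two cannot drift apart. A sketch (the `schema` directory is hypothetical; note that declaring a directory already tracks its contents, and the per-file lines just make the intent explicit):

```rust
use std::fs;

// Build the rerun directives from the script's real inputs: the same
// directory the script reads is the directory it declares.
fn rerun_directives(dir: &str) -> Vec<String> {
    let mut out = vec![format!("cargo::rerun-if-changed={}", dir)];
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            out.push(format!("cargo::rerun-if-changed={}", entry.path().display()));
        }
    }
    out
}

fn main() {
    for directive in rerun_directives("schema") {
        println!("{}", directive);
    }
}
```

If the script later reads a different directory, the declared triggers move with it automatically, which removes one common source of stale or over-eager reruns.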
Config Precedence Confusion
Cargo configuration is hierarchical and unified from multiple locations. That means configuration surprises are often precedence surprises.
A useful mental model is:
- local config can override broader config
- environment variables can override config values
- command-line flags can override both when applicable
When Cargo behaves unexpectedly, the right question is often not only "what is configured?" but "which layer is winning?"
A Small Config Example
Suppose you have project config:
```toml
[build]
target-dir = "target-local"

[alias]
xtest = "test --workspace"
```

But the shell also sets:

```sh
CARGO_TARGET_DIR=/tmp/cargo-target
```

Then a build may use the environment-provided target directory instead of the project-local one. A useful debugging pattern is to make configuration explicit and reduce the number of active override layers temporarily.
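A quick way to test which layer is winning is to remove one override at a time with `env -u` and retry; in practice you would re-run `cargo build` under `env -u`. A sketch of the mechanics:

```sh
# List the Cargo-related environment overrides active in this shell.
env | grep '^CARGO' || echo "no CARGO_* overrides set"

# Re-run a command with one override removed; if behavior changes,
# that environment layer was winning over the config file.
env -u CARGO_TARGET_DIR sh -c 'echo "CARGO_TARGET_DIR=${CARGO_TARGET_DIR:-<unset>}"'
```

Removing layers one at a time turns a precedence mystery into a controlled experiment.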
When to Inspect Config
Inspect configuration when the problem involves:
- unexpected target directories
- surprising target or linker behavior
- registry source or mirror behavior
- aliases doing unexpected things
- network policy or offline behavior
- environment-dependent behavior that differs across machines
A useful mental model is:
- if Cargo seems to be doing something you did not ask for, config and environment are often the missing explanation
Registry and Authentication Failures
Registry failures often look like one of three categories:
- wrong registry source or index configuration
- missing or invalid credentials
- policy mismatch between the package and the intended publish or fetch source
A useful mental model is:
- dependency source problems are often config problems
- publish failures are often auth or registry-policy problems
A Small Registry Config Example
Suppose .cargo/config.toml contains:
```toml
[registries.company]
index = "sparse+https://packages.example.com/index/"
```

And the shell injects a token:

```sh
export CARGO_REGISTRIES_COMPANY_TOKEN="$COMPANY_REGISTRY_TOKEN"
```

If `cargo publish --registry company` fails, inspect:
- whether the registry name in config matches the manifest or command
- whether the index URL is correct
- whether the token is present and scoped correctly
- whether the package's `publish` restrictions allow that registry
Token and Auth Debugging Mindset
A practical auth-debugging mindset is:
- verify the registry name first
- verify the credential injection path second
- avoid assuming a stored local token and a CI-injected token behave the same way
A useful mental model is:
- registry configuration identifies the destination
- auth configuration proves permission to use it
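When the token lookup path itself is in question, newer Cargo versions (1.74+) let config state the credential source explicitly. A sketch, assuming the built-in token-based provider:

```toml
# .cargo/config.toml: make the credential source explicit so a stored
# local token and a CI-injected token cannot silently diverge.
[registry]
global-credential-providers = ["cargo:token"]
```

With the provider pinned, an auth failure narrows to "the token is wrong or missing" rather than "some credential source I forgot about answered first."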
Cache Corruption or Cache-State Suspicion
Sometimes the issue is not the manifest or graph, but local state in the target directory or Cargo home caches.
A practical debugging pattern is to isolate one cache realm at a time.
Examples:
```sh
CARGO_TARGET_DIR=/tmp/clean-target cargo build
CARGO_HOME=/tmp/fresh-cargo-home cargo build
```

A useful mental model is:
- if changing the target dir fixes it, suspect local build artifact state
- if changing Cargo home fixes it, suspect registry, git, or source cache state
Cleaning Strategically
Cargo supports cargo clean, but it should be used strategically, not automatically.
Example:
```sh
cargo clean
```

A useful mental model is:
- `cargo clean` resets local build artifacts
- it may solve stale-artifact problems, but it also throws away useful cache state
If a problem disappears after a clean build, that suggests the failure was tied to local artifact freshness rather than to the manifest itself.
Network Issues
Network failures can appear as registry access failures, timeout problems, mirror failures, or git dependency fetch problems.
A useful mental model is:
- not all dependency failures are resolver failures
- some are simply source-availability or network-policy failures
Useful debugging moves include:
```sh
cargo fetch
cargo build --offline
```

This helps separate dependency-availability preparation from the build itself.
Offline and Frozen as Diagnostics
Offline and frozen modes can be useful diagnostic tools as well as operational modes.
Examples:
```sh
cargo build --offline
cargo build --frozen
```

A useful mental model is:
- if a normal build succeeds but `--offline` fails, dependency availability is incomplete locally
- if `--locked` or `--frozen` fails, lockfile drift or network dependence may have been hidden in the ordinary workflow
When to Inspect Environment Variables
Inspect environment variables when the failure involves:
- target directory changes
- Cargo home changes
- debug logging requests
- registry tokens
- build-script behavior that depends on `HOST`, `TARGET`, `OUT_DIR`, or feature indicators
Useful examples include:
```sh
CARGO_LOG=debug cargo build
CARGO_TARGET_DIR=/tmp/test-target cargo build
CARGO_HOME=/tmp/test-home cargo fetch
```

A useful mental model is:
- environment variables are one of Cargo's strongest override layers
- they are also one of the easiest ways to isolate a problem
When to Inspect cargo metadata
Inspect cargo metadata when you need structured answers about:
- which manifest Cargo is using
- which packages are in the workspace
- what targets exist
- what feature declarations or package structure the project has
Examples:
```sh
cargo metadata --format-version 1 --no-deps
cargo metadata --format-version 1 --manifest-path Cargo.toml
```

A useful mental model is:
- `cargo metadata` is the right tool when the question is about project structure, not build output
When to Inspect cargo tree
Inspect cargo tree when the question is about the dependency graph.
Examples:
```sh
cargo tree
cargo tree -d
cargo tree -i some_crate
cargo tree -e features
```

A useful mental model is:
- use `cargo metadata` for package and workspace structure
- use `cargo tree` for dependency and feature graph shape
When to Inspect Build Output Itself
Sometimes the right move is simply to make Cargo and the build more verbose.
Examples:
```sh
cargo build -v
CARGO_LOG=debug cargo build
```

This is especially useful for:
- build-script investigation
- linker command visibility
- source fetch behavior
- configuration surprises that do not show up clearly in ordinary error text
A Compact Diagnostic Workflow
A good general-purpose diagnostic sequence is:
```sh
cargo tree
cargo tree -e features
cargo metadata --format-version 1 --no-deps
cargo build -v
CARGO_LOG=debug cargo build
```

And if environment or cache state is suspected:

```sh
CARGO_TARGET_DIR=/tmp/clean-target cargo build
CARGO_HOME=/tmp/fresh-home cargo build
```

This sequence helps you move from graph questions to structure questions to execution questions in a controlled way.
A Small End-to-End Example
Suppose a crate fails only when building with a feature and a non-default target.
A disciplined sequence might be:
```sh
cargo tree -e features
cargo build --features json --target aarch64-unknown-linux-gnu -v
CARGO_LOG=debug cargo build --features json --target aarch64-unknown-linux-gnu
```

If the crate has a build script, add diagnostic output there:

```rust
fn main() {
    println!("cargo::warning=HOST={}", std::env::var("HOST").unwrap());
    println!("cargo::warning=TARGET={}", std::env::var("TARGET").unwrap());
}
```

This usually narrows the problem much faster than rewriting manifest entries by guesswork.
Common Beginner Mistakes
Mistake 1: changing dependency declarations before checking whether the problem is really config, cache, or environment.
Mistake 2: treating feature failures as if the default feature graph explained everything.
Mistake 3: debugging target and linker failures without checking host versus target assumptions.
Mistake 4: ignoring build scripts even when the crate obviously uses native integration or generated code.
Mistake 5: using cargo clean as the first response instead of a diagnostic step.
Mistake 6: forgetting that cargo tree, cargo metadata, environment overrides, and verbose output each answer different kinds of questions.
Hands-On Exercise
Take a small crate and practice diagnosing three different classes of failure.
First, introduce a feature-gated code path and inspect it with:
```sh
cargo tree -e features
cargo build --features json
```

Second, create a simple `.cargo/config.toml` with a custom target-dir and then override it from the shell:

```toml
[build]
target-dir = "target-local"
```

```sh
CARGO_TARGET_DIR=/tmp/test-target cargo build
```

Third, add a diagnostic `build.rs` that prints `HOST`, `TARGET`, and `OUT_DIR`, then run:

```sh
cargo build -v
```

This exercise helps separate graph problems, config-precedence problems, and build-script-context problems, which is the core skill in Cargo troubleshooting.
Mental Model Summary
A strong mental model for Cargo troubleshooting is:
- first identify the layer: resolver, features, target/linker, build script, config, auth, cache, or network
- use `cargo tree` for dependency and feature graph questions
- use `cargo metadata` for project and workspace structure questions
- use verbose output and `CARGO_LOG` for execution and configuration tracing
- inspect environment and config when Cargo's behavior feels surprising rather than merely broken
- isolate cache and source state when stale artifacts or local machine differences are suspected
Once this model is stable, Cargo diagnostics become much easier to approach as structured investigation instead of trial-and-error manifest editing.
