The Cargo Guide
Build Caching, Target Directories, and Performance
Why Build Caching Matters
Cargo performance is not only about compiler speed. It is also about what Cargo can reuse between builds, where artifacts live, how much disk they consume, and when cached data stops being useful.
A useful mental model is:
- Cargo stores build artifacts in target/
- Cargo stores downloaded and global dependency cache data in Cargo home
- performance depends heavily on whether Cargo can reuse prior work instead of rebuilding or re-downloading everything
This is why build caching is both a performance topic and an operational topic.
The Two Main Cache Realms
Cargo caching is easiest to understand if you separate it into two broad realms.
First, project-local build artifacts:
my_project/
└── target/

Second, global Cargo home caches used across projects:

$CARGO_HOME/

A practical mental model is:
- target/ is where compiled outputs and related local build state live
- Cargo home is where downloaded crates, indexes, git dependency data, and other global cache material live
What Goes Into target/
The target/ directory is Cargo's local artifact workspace for a project or workspace.
A typical shape looks like this:
target/
├── debug/
├── release/
├── doc/
├── tmp/
└── CACHEDIR.TAG

And inside a profile-specific directory you often see subdirectories like:
target/debug/
├── my_binary
├── deps/
├── incremental/
├── build/
└── examples/

A useful mental model is:
- top-level profile directories like debug/ and release/ hold build outputs for those profiles
- deps/ contains dependency artifacts
- incremental/ contains incremental compilation state
- build/ contains build-script-related outputs
- doc/ contains generated documentation
A Small Example Project
Suppose you create a small package:
cargo new cache_demo
cd cache_demo

Source:
pub fn sum_up_to(n: u64) -> u64 {
    (0..=n).sum()
}

fn main() {
    println!("{}", sum_up_to(1_000_000));
}

Build it:
cargo build

Now inspect the local artifact tree:

find target -maxdepth 2 -type d | sort

This is one of the fastest ways to make Cargo's local caching model feel concrete.
Shared target Directories
In a workspace, Cargo uses a shared target/ directory at the workspace root by default.
Example:
my_workspace/
├── Cargo.toml
├── Cargo.lock
├── target/
├── app/
└── core/

This matters because workspace members often depend on each other and share parts of the same build graph.
A shared target/ directory helps Cargo reuse artifacts across members instead of each member maintaining its own isolated build output tree.
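The shared layout above is declared in the workspace's root Cargo.toml. A minimal sketch, assuming the member directories from the example (app and core):

```toml
# Root Cargo.toml of the workspace. By default, all members build
# into the shared target/ directory next to this file.
[workspace]
members = ["app", "core"]
resolver = "2"
```

Because both members resolve and build through this one root, their overlapping dependencies are compiled once and reused.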
Why Shared target Directories Improve Performance
A shared target/ directory improves performance because it avoids unnecessary duplication of build outputs and gives Cargo one coordinated place to reuse artifacts across packages in the same workspace.
A practical intuition is:
- shared target directory means more opportunities for reuse
- isolated target directories mean more repeated work
How Profiles Shape the target Directory
Cargo organizes many artifacts by profile.
For example:
cargo build
cargo build --release
cargo doc

These commonly populate different areas such as:
target/debug/
target/release/
target/doc/

This matters because a debug build and a release build are not just different commands. They often produce separate artifact sets with different reuse characteristics.
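Profile behavior can also be tuned in Cargo.toml. For instance, incremental compilation is on by default for dev builds and off by default for release builds; an illustrative configuration that simply makes those defaults explicit:

```toml
# Cargo.toml profile settings. These values mirror Cargo's defaults:
# dev builds favor fast rebuilds, release builds favor optimized output.
[profile.dev]
incremental = true

[profile.release]
incremental = false
```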
Incremental Compilation Basics
Incremental compilation means the compiler and Cargo try to reuse prior compilation work when only part of the code changed.
A typical experience looks like this:
cargo build
# first build
cargo build
# second build, often much faster if little changed

Now change a small function:
pub fn sum_up_to(n: u64) -> u64 {
    (0..=n).sum::<u64>()
}

Then build again:
cargo build

Cargo can often reuse a large portion of prior work instead of starting from zero.
Where Incremental State Lives
Incremental compilation state typically lives under the incremental/ subdirectory inside the profile-specific build output area.
For example:
target/debug/incremental/

This is one reason cargo clean has such a strong effect: it removes more than the final executable. It also removes the accumulated state that makes repeated builds faster.
When Incremental Compilation Helps Most
Incremental compilation is most valuable in the normal edit-build-check loop during development.
Typical pattern:
cargo check
cargo build
cargo test

When code changes are small and frequent, reuse matters a great deal. That is why development-oriented profiles usually benefit strongly from incremental behavior.
Cache Invalidation Basics
Cache invalidation in Cargo means that some prior build outputs can no longer be trusted and must be recomputed.
This can happen when you change things like:
- source files
- feature sets
- compiler version
- profile settings
- dependency versions
- target platform
- build script outputs
A practical mental model is:
- small local edits often invalidate only part of the build
- broader changes can invalidate much more of the artifact tree
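As a concrete sketch of the feature-set case, consider a crate with a hypothetical fast-path feature. Toggling it changes which function body is compiled, so artifacts built under the old feature set cannot be reused:

```rust
// Hypothetical feature flag: enabling or disabling `fast-path`
// changes which body gets compiled, so Cargo must rebuild this
// crate (and its dependents) whenever the feature set changes.
#[cfg(feature = "fast-path")]
pub fn sum_up_to(n: u64) -> u64 {
    n * (n + 1) / 2 // closed-form sum
}

#[cfg(not(feature = "fast-path"))]
pub fn sum_up_to(n: u64) -> u64 {
    (0..=n).sum() // iterative sum
}
```

Both versions compute the same result, but from Cargo's perspective they are different compilation inputs.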
Why Some Tiny Changes Cause Bigger Rebuilds
Not all edits are equal from the build system's perspective.
For example, changing one function body may be cheap, while changing a public API used by many crates or changing features may affect much more of the graph.
This is why Cargo performance can sometimes feel uneven even when the source edit looked small.
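A minimal illustration of the difference (the function and its types are hypothetical):

```rust
// Editing only this body is a local change: crates compiled against
// the same signature can often keep their cached artifacts.
pub fn area(width: u32, height: u32) -> u32 {
    width * height
}

// Changing the public signature instead, e.g.
//
//     pub fn area(width: u64, height: u64) -> u64 { ... }
//
// alters the interface itself, so every crate that calls `area`
// must be recompiled.
```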
Build Scripts and Extra Rebuild Pressure
Build scripts can add extra rebuild pressure because their outputs affect downstream compilation.
For example:
// build.rs
fn main() {
    println!("cargo:rerun-if-changed=build.rs");
}

If build-script inputs change, Cargo may need to regenerate build-script outputs and then rebuild dependent code.
That means build caching is not only about Rust source files. Build scripts are part of the invalidation story too.
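A slightly fuller sketch makes this concrete: a build script that generates source code consumed by the main crate. The generated constant and the fallback directory are illustrative; under Cargo, OUT_DIR always points inside target/:

```rust
// build.rs — sketch of a build script whose output feeds downstream
// compilation. If the emitted code changes, dependent code must rebuild.
use std::env;
use std::fs;
use std::path::PathBuf;

// The source text a build script generates is itself an input to
// later compilation stages.
fn generated_source(value: u32) -> String {
    format!("pub const GENERATED: u32 = {};\n", value)
}

fn main() {
    // Re-run this script only when it changes, not on every build.
    println!("cargo:rerun-if-changed=build.rs");

    // Under Cargo, OUT_DIR points inside target/; fall back to a
    // temp dir so this sketch also runs standalone.
    let out_dir = env::var("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|_| env::temp_dir());
    fs::write(out_dir.join("generated.rs"), generated_source(42)).unwrap();
}
```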
Cleaning Strategically
Cargo provides cargo clean to remove local build artifacts.
cargo clean

This is useful when:
- you want to reclaim space in target/
- you suspect stale artifact state
- you want to force a fully fresh local build
But cleaning aggressively has a cost: it throws away reuse.
A good mental model is:
- cargo clean is a reset button, not a routine optimization
Why Not to Clean Too Casually
Beginners sometimes use cargo clean whenever a build feels odd. That can work, but it also discards the very cache state that makes Cargo fast.
A healthier habit is:
- clean when there is a reason
- do not treat cleaning as a default step after ordinary edits
A Practical Cleaning Workflow
A simple strategic cleaning workflow might look like this:
cargo build
cargo test
cargo clean
cargo build

This makes the performance difference concrete. The build after cleaning usually takes longer because incremental and artifact reuse have been removed.
Selecting a Different target Directory
Cargo supports changing the target directory, which can be useful in CI, scripted environments, or storage-constrained setups.
Example command-line override:
cargo build --target-dir /tmp/cargo-target

Example environment variable:

CARGO_TARGET_DIR=/tmp/cargo-target cargo build

This can help when:
- multiple related invocations should share the same target dir intentionally
- local disk layout makes the default inconvenient
- CI wants a cacheable or isolated artifact location
Why target Directory Location Affects Performance
The target directory location affects performance because reuse depends on stable access to previous artifacts. If builds constantly switch target directories, reuse is reduced. If a CI system preserves a target directory cache effectively, repeated builds may be much faster.
That means target directory policy is often a performance decision, not just a filesystem preference.
Global Cache Basics
Beyond target/, Cargo also maintains global caches in Cargo home for downloaded dependency material.
This commonly includes things like:
- registry index data
- downloaded .crate archives
- extracted source cache data
- git dependency clones and checkouts
A useful mental model is:
- target/ is local compiled output
- Cargo home caches are shared source and download infrastructure across projects
Why Global Caches Matter for Performance
Global caches matter because they determine whether Cargo must re-download dependencies or can reuse data already available on the machine.
That affects:
- first build cost on a machine
- repeated builds across many projects
- offline behavior
- CI cache efficiency
A machine with a warm Cargo home usually feels much faster than a completely cold environment.
Disk Pressure and Cache Growth
Cargo's caches can consume substantial disk space over time.
There are two main growth areas:
- project-local target/ directories
- global Cargo home caches that accumulate downloads and source data across many builds and projects
This means Cargo performance and disk usage are linked. Large caches may be good for reuse, but bad for storage pressure.
Recognizing Local vs Global Disk Pressure
A helpful distinction is:
- if one repository is huge, its target/ directory may be the main culprit
- if many repositories have been built over time, Cargo home may be the larger long-term storage burden
That distinction helps you decide whether the right response is local cleaning, global cache cleanup, or better CI caching strategy.
Automatic Global Cache Garbage Collection
Cargo now includes automatic garbage collection for global caches. This was stabilized in Cargo 1.88. The goal is to prevent Cargo home from growing without bound over time. Cargo can automatically remove old cached files, with different age behavior depending on whether the content came from the network or the local system. Automatic garbage collection does not run when Cargo is invoked with --offline or --frozen.
Why Automatic Cache GC Changes Operational Assumptions
Before automatic cache garbage collection, it was easier to assume that old globally cached dependency data would simply remain present forever unless manually deleted. That assumption is no longer safe.
A practical implication is:
- long-lived machines may eventually lose old cache entries automatically
- offline or frozen workflows should not rely on ancient cache state continuing to exist forever
This is especially important for build hosts, developer laptops, and CI runners that are expected to behave predictably over long periods.
Current Automatic GC Timing Defaults
Cargo 1.88's stabilized automatic garbage collection removes files downloaded from the network if they have not been accessed in 3 months, and removes files obtained from the local system if they have not been accessed in 1 month. Cargo versions 1.78 and newer track the access information needed for this behavior.
Configuring Automatic Cache Cleanup Frequency
Cargo supports a cache.auto-clean-frequency configuration setting for controlling automatic cache cleanup frequency.
Illustrative configuration:
[cache]
auto-clean-frequency = "never"

This kind of configuration is relevant when a team has a specific reason to keep caches around longer or wants to avoid automatic cleanup on certain long-lived machines.
Manual Global Cache Cleanup
Cargo also has unstable manual garbage-collection support for global caches through cargo clean gc -Zgc.
Example:
cargo clean gc -Zgc

Example with a limit:

cargo clean gc -Zgc --max-download-age=1week

This is a more advanced mechanism than ordinary project-local cargo clean, and it applies to global cache material rather than just one project's target/ directory.
Global Cache Cleanup Configuration
Cargo's unstable cache GC configuration supports more detailed limits through settings under [cache.global-clean].
Illustrative configuration:
[cache.global-clean]
max-src-age = "1 month"
max-crate-age = "3 months"
max-index-age = "3 months"
max-git-co-age = "1 month"
max-git-db-age = "3 months"

This gives fine-grained control over which classes of cache data are cleaned and how old they can become before removal.
What Offline and Frozen Mean for Cache Behavior
Because automatic garbage collection does not run when Cargo is invoked with --offline or --frozen, these modes can interact differently with cache lifecycle assumptions. They also mean Cargo cannot rely on the network to refill missing cache entries.
A practical consequence is that strict offline workflows are healthiest when they are prepared intentionally rather than depending on old cache state by accident.
A Practical Performance Workflow
A performance-aware local workflow often looks like this:
cargo check
cargo build
cargo test

And only occasionally:

cargo clean

This keeps incremental state warm and avoids throwing away useful artifact reuse unnecessarily.
A Practical CI Caching Workflow
In CI, cache strategy is often a major performance factor.
A useful pattern is:
cargo fetch
cargo build --locked
cargo test --locked

And where appropriate, preserve:
- Cargo home caches
- selected target-directory artifacts
The right balance depends on how expensive the build is and how stable the cache keys are, but the general principle is simple: cache what is expensive and stable enough to reuse.
A Full Example Session
Here is a small sequence that makes several cache concepts concrete.
Create a package:
cargo new perf_cache_lab
cd perf_cache_lab

Use this code:
pub fn work(n: u64) -> u64 {
    (0..=n).sum()
}

fn main() {
    println!("{}", work(2_000_000));
}

Build several times:
cargo build
cargo build
cargo test

Inspect the target directory:

find target -maxdepth 2 -type d | sort

Now clean and rebuild:

cargo clean
cargo build

Then compare the experience with and without retained local artifacts.
Common Beginner Mistakes
Mistake 1: thinking Cargo performance is only about compiler speed rather than reuse of prior work.
Mistake 2: treating cargo clean as a routine optimization.
Mistake 3: not distinguishing project-local target/ artifacts from global Cargo home caches.
Mistake 4: assuming global caches live forever on long-lived machines.
Mistake 5: changing target directory strategy casually and then wondering why reuse disappears.
Mistake 6: relying on offline workflows without making sure the needed caches are actually present.
Mental Model Summary
A strong mental model for Cargo build caching and performance is:
- target/ is the local artifact workspace for a project or workspace
- Cargo home stores shared global cache data like downloaded crates, index data, and git dependency state
- shared target directories improve reuse across workspace members
- incremental compilation speeds up repeated development builds by reusing prior work
- cache invalidation happens when enough build-relevant inputs change
- cargo clean should be used strategically because it removes useful reuse state
- disk pressure comes from both local target directories and long-lived global caches
- automatic global cache garbage collection means old cached files may disappear over time, especially on long-lived machines
Once this model is stable, Cargo performance becomes much easier to reason about as a caching and reuse problem rather than a black box.
