What are the trade-offs between lru::LruCache::get_mut and get for cache entry mutation?

In the lru crate, both get and get_mut promote the accessed entry to the most-recently-used (MRU) position; the difference between them is mutability, not promotion. get returns an immutable reference, get_mut returns a mutable one, and both count as an access that moves the entry to the head of the LRU list. The methods that do not promote are peek and peek_mut: peek_mut returns a mutable reference without touching the LRU order, which is useful when updating cached data shouldn't extend its lifetime in the cache. The trade-off, then: use get_mut when a mutation should also signal that the entry is still relevant; use peek_mut when you need to modify without affecting eviction priority.

Basic LruCache Access Patterns

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<String, Vec<u8>> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a".to_string(), vec![1]);
    cache.put("b".to_string(), vec![2]);
    cache.put("c".to_string(), vec![3]);
    
    // Cache order (most to least recent): c, b, a
    
    // get() returns an immutable reference AND promotes to MRU
    let _value = cache.get(&"a".to_string());
    // Cache order now: a, c, b
    // "a" is now most recently used
    
    // get_mut() returns a mutable reference AND also promotes
    let _value = cache.get_mut(&"b".to_string());
    // Cache order now: b, a, c
    
    // peek_mut() returns a mutable reference WITHOUT promoting
    let _value = cache.peek_mut(&"c".to_string());
    // Cache order remains: b, a, c
}

The key difference: get and get_mut both promote the entry; peek and peek_mut do not.

Understanding LRU Promotion

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<i32, String> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put(1, "one".to_string());
    cache.put(2, "two".to_string());
    cache.put(3, "three".to_string());
    
    // Order: 3 (MRU), 2, 1 (LRU)
    
    // get() promotes entry to most recent
    cache.get(&1);
    // Order: 1 (MRU), 3, 2
    
    // peek_mut() does NOT promote
    if let Some(val) = cache.peek_mut(&2) {
        val.push_str(" modified");
    }
    // Order: 1 (MRU), 3, 2
    // "2" is still LRU despite being modified
    
    // When we add a new entry:
    cache.put(4, "four".to_string());
    // Order: 4 (MRU), 1, 3
    // "2" was evicted because it was LRU
}

get and get_mut signal "this entry is still being used"; peek_mut allows mutation without that signal.

When to Use peek_mut: Preserving Eviction Order

use lru::LruCache;
use std::num::NonZeroUsize;
 
struct CacheManager {
    cache: LruCache<String, Data>,
}
 
struct Data {
    value: i32,
    last_modified: std::time::Instant,
}
 
impl CacheManager {
    fn update_value(&mut self, key: &str, new_value: i32) {
        // Use peek_mut when updating shouldn't extend the entry's lifetime
        if let Some(data) = self.cache.peek_mut(&key.to_string()) {
            data.value = new_value;
            data.last_modified = std::time::Instant::now();
        }
        // The entry's LRU position is unchanged
        // If it was about to be evicted, it still will be
    }
    
    fn access_value(&mut self, key: &str) -> Option<i32> {
        // Use get when access should extend lifetime
        self.cache.get(&key.to_string()).map(|d| d.value)
    }
}
 
// Scenario: Background update task modifies entries
// but shouldn't prevent eviction of stale data
 
fn background_refresh(cache: &mut LruCache<String, Data>) {
    // Collect keys first so the iterator's borrow ends before we mutate
    let keys: Vec<String> = cache.iter().map(|(k, _)| k.clone()).collect();
    for key in keys {
        if let Some(data) = cache.peek_mut(&key) {
            // Refresh the value in place
            data.value = fetch_latest(&key);
            // But don't promote - let old entries age out naturally
        }
    }
}

Use peek_mut when modifications are maintenance, not access.

When to Use get: Signaling Continued Relevance

use lru::LruCache;
use std::num::NonZeroUsize;
 
struct WebCache {
    pages: LruCache<String, PageContent>,
}
 
struct PageContent {
    html: String,
    fetched_at: std::time::Instant,
}
 
impl WebCache {
    fn get_page(&mut self, url: &str) -> Option<&String> {
        // get() promotes because user access means it's relevant
        self.pages.get(&url.to_string()).map(|p| &p.html)
        // Popular pages stay in cache; unpopular ones get evicted
    }
    
    fn refresh_page(&mut self, url: &str) {
        // peek_mut() doesn't promote because a refresh isn't user access
        if let Some(page) = self.pages.peek_mut(&url.to_string()) {
            page.html = fetch_page(url);
            page.fetched_at = std::time::Instant::now();
        }
    }
}
 
// User access -> get() -> entry promoted
// Background refresh -> peek_mut() -> entry position unchanged

Access patterns determine cache retention; use get for user-driven access.

Combining Promotion and Mutation

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(5).unwrap());
    
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    
    // Scenario: Want to promote AND mutate
    
    // Option 1: get_mut() does both in one call
    if let Some(val) = cache.get_mut(&"a".to_string()) {
        *val += 10;
    }
    // This promotes "a" to MRU and updates the value in place
    
    // Option 2: put() replaces the value and promotes
    cache.put("b".to_string(), 20);
    
    // Option 3: peek_mut() to mutate, then promote() to reorder
    if let Some(val) = cache.peek_mut(&"c".to_string()) {
        *val += 10;
    }
    cache.promote(&"c".to_string());
}

When you need both promotion and mutation, a single get_mut() call does both.

peek and peek_mut: No Promotion

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    // Order: c (MRU), b, a (LRU)
    
    // peek() returns reference WITHOUT promoting
    let value = cache.peek(&"a".to_string());
    // Order unchanged: c, b, a
    
    // peek_mut() returns mutable reference WITHOUT promoting
    if let Some(val) = cache.peek_mut(&"b".to_string()) {
        *val += 10;
    }
    // Order unchanged: c, b, a
    
    // get() would promote:
    cache.get(&"a".to_string());
    // Order now: a (MRU), c, b
    
    // get_mut() would ALSO promote, just like get():
    cache.get_mut(&"b".to_string());
    // Order now: b (MRU), a, c
}

peek and peek_mut leave the order untouched; get and get_mut both promote.

Difference Between peek_mut and get_mut

use lru::LruCache;
use std::num::NonZeroUsize;
 
// Key behavior from the documentation:
// - peek() / peek_mut(): never promote, used for inspection and maintenance
// - get(): promotes, returns an immutable reference
// - get_mut(): promotes, returns a mutable reference
 
fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    // Order: c (MRU), b, a (LRU)
    
    // get_mut(): promotes AND returns a mutable reference
    if let Some(val) = cache.get_mut(&"a".to_string()) {
        *val += 10;
    }
    // Order now: a (MRU), c, b
    
    // peek_mut(): returns a mutable reference WITHOUT promoting
    if let Some(val) = cache.peek_mut(&"b".to_string()) {
        *val += 10;
    }
    // Order unchanged: a, c, b
}
 
// The difference is promotion, not mutability:
// - get_mut: "this is a real access; keep the entry alive"
// - peek_mut: "this is maintenance; don't reset the entry's age"

get_mut promotes on every access; peek_mut mutates without promotion.

Reference Lifetime Considerations

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<String, Vec<i32>> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("data".to_string(), vec![1, 2, 3]);
    
    // get_mut returns Option<&mut V>
    // The reference is valid as long as you hold it
    
    if let Some(data) = cache.get_mut(&"data".to_string()) {
        data.push(4);  // Modify in place
        
        // You can make multiple modifications
        data.push(5);
        data.push(6);
        
        // The reference is borrowed from the cache
        // You cannot call other cache methods while holding it
        // cache.put("other".to_string(), vec![]);  // Won't compile!
    }
    
    // After the reference is dropped, cache is usable again
    cache.put("other".to_string(), vec![10]);
}
 
// The mutable reference borrows the cache mutably
// This prevents concurrent modifications (safe by design)

Mutable references borrow the cache, preventing conflicting operations while held.

Performance Implications

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    // Promotion has a cost:
    // - get() and get_mut() must unlink the node from its current position
    // - and relink it at the head
    // - This is a few pointer writes in the internal linked list
    
    // peek() and peek_mut() avoid this cost:
    // - Just a hash lookup, then return the reference
    // - No linked list manipulation
    
    let mut cache: LruCache<i32, Data> = LruCache::new(NonZeroUsize::new(10_000).unwrap());
    
    for i in 0..1000 {
        cache.put(i, Data { counter: 0 });
    }
    
    // With get_mut() - promotion cost (and promotion semantics) on every access
    for i in 0..1000 {
        if let Some(data) = cache.get_mut(&i) {
            data.counter += 1;
        }
    }
    
    // With peek_mut() - no promotion cost, order untouched
    for i in 0..1000 {
        if let Some(data) = cache.peek_mut(&i) {
            data.counter += 1;
        }
    }
    
    // For write-heavy maintenance where access order shouldn't change,
    // peek_mut avoids the promotion overhead
}
 
struct Data {
    counter: u64,
}

peek_mut avoids the linked-list manipulation cost of promotion; get and get_mut both pay it.

Memory Layout and Cache Structure

use lru::LruCache;
use std::num::NonZeroUsize;
 
// LruCache pairs a HashMap with an intrusive doubly-linked list:
// - HashMap for O(1) lookup by key
// - Linked list for O(1) promotion/eviction
 
// Operation complexity:
// - get() / get_mut(): O(1) lookup + O(1) promotion
// - peek() / peek_mut(): O(1) lookup (no promotion)
// - put(): O(1) insert/promote + possible O(1) eviction
// - pop_lru(): O(1) removal of the least-recently-used entry
 
fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(100).unwrap());
    
    // The cache maintains:
    // 1. A HashMap from keys to values
    // 2. A doubly-linked list of keys for LRU ordering
    
    // When you call get() or get_mut():
    // 1. HashMap lookup: O(1)
    // 2. Unlink the node from its current position: O(1)
    // 3. Relink it at the head: O(1)
    
    // When you call peek() or peek_mut():
    // 1. HashMap lookup: O(1)
    // 2. Return the reference
    // That's it - no linked list manipulation
}

All four accessors are O(1), but peek and peek_mut skip the pointer operations.

Iterating and Mutating

use lru::LruCache;
use std::num::NonZeroUsize;
 
fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(5).unwrap());
    
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    
    // Iterating over entries
    
    // Immutable iteration (no promotion)
    for (key, value) in cache.iter() {
        println!("{}: {}", key, value);
    }
    // Order: c, b, a (MRU to LRU)
    
    // Mutable iteration (doesn't affect cache structure)
    for (key, value) in cache.iter_mut() {
        *value += 10;
    }
    // Values modified but order unchanged
    
    // If you need to promote during iteration:
    // You can't directly - would require modifying the cache structure
    // Standard pattern: collect keys, then promote
    let keys: Vec<_> = cache.iter().map(|(k, _)| k.clone()).collect();
    for key in keys {
        cache.get(&key);  // Promotes
    }
}

iter() and iter_mut() don't affect LRU order; they're for bulk operations.

Contention in Concurrent Scenarios

use lru::LruCache;
use std::num::NonZeroUsize;
use std::sync::Mutex;
 
// LruCache is not thread-safe by default
// For concurrent access, wrap in Mutex
 
struct SharedCache {
    cache: Mutex<LruCache<String, i32>>,
}
 
impl SharedCache {
    fn get_value(&self, key: &str) -> Option<i32> {
        let mut cache = self.cache.lock().unwrap();
        cache.get(&key.to_string()).copied()
        // Lock held during get() AND promotion
    }
    
    fn update_value(&self, key: &str, delta: i32) -> bool {
        let mut cache = self.cache.lock().unwrap();
        if let Some(value) = cache.peek_mut(&key.to_string()) {
            *value += delta;
            true
        } else {
            false
        }
        // Lock still held, but no promotion work inside the critical section
    }
}
 
// In high-contention scenarios, peek_mut() slightly reduces lock hold time:
// - No linked list manipulation while the lock is held
// - Faster operation = shorter critical section
 
// For even better concurrency, consider:
// - Sharded LRU caches
// - Lock-free LRU implementations
// - Using a different cache strategy (e.g., TTL-based)

In concurrent scenarios, peek_mut's simpler operation shortens the critical section.

Practical Use Cases

use lru::LruCache;
use std::num::NonZeroUsize;
 
// Use case 1: Access tracking
fn user_access(cache: &mut LruCache<String, Page>, user: &str, page: &str) -> Option<&str> {
    // Use get() - user access should promote
    cache.get(&format!("{}/{}", user, page)).map(|p| p.content.as_str())
}
 
// Use case 2: Background refresh
fn refresh_cache(cache: &mut LruCache<String, Data>) {
    // Use peek_mut() - a refresh shouldn't affect eviction
    for key in get_stale_keys() {
        if let Some(data) = cache.peek_mut(&key) {
            data.refresh();
        }
    }
}
 
// Use case 3: Metrics collection
fn record_hit(cache: &mut LruCache<String, Data>, key: &str) {
    // Use peek_mut() - recording a hit shouldn't promote
    if let Some(data) = cache.peek_mut(key) {
        data.hit_count += 1;
    }
}
 
// Use case 4: Cache warm-up
fn warm_cache(cache: &mut LruCache<String, Data>, keys: Vec<String>) {
    // Use put() to insert with MRU position
    for key in keys {
        if !cache.contains(&key) {
            let data = fetch_data(&key);
            cache.put(key, data);
        }
    }
}
 
// Use case 5: Debug/inspection (peek)
fn debug_cache(cache: &LruCache<String, Data>) {
    // Use peek() - inspection shouldn't affect order
    for (key, value) in cache.iter() {
        println!("{}: {:?}", key, value);
    }
}

Choose based on whether the operation should affect eviction priority.

Synthesis

Method comparison:

Method      Promotes  Mutable  Use Case
get()       Yes       No       Access that should extend lifetime
get_mut()   Yes       Yes      Mutation that also counts as access
peek()      No        No       Inspection without side effects
peek_mut()  No        Yes      Mutation without affecting eviction

When to use each:

  • get(): User-driven reads where access should affect retention
  • get_mut(): User-driven mutations that should also refresh the entry's position
  • peek()/peek_mut(): Maintenance, metrics, debugging that shouldn't extend lifetime

Key trade-off: Promotion affects which entries get evicted. Using get() or get_mut() on an entry keeps it in the cache; using peek_mut() lets it age out naturally. Consider whether the operation represents "real usage" (should promote) or "maintenance" (shouldn't promote).

Performance consideration: peek_mut() avoids the linked-list manipulation that promotion requires, making it marginally faster than get_mut(). In tight maintenance loops where promotion isn't wanted, prefer peek_mut().

The fundamental insight: LRU promotion is about access patterns, not just data retrieval. get() and get_mut() signal "this entry matters" by moving it away from eviction; peek() and peek_mut() treat the cache as pure storage where the operation doesn't affect lifetime. Choose based on whether the operation represents user-driven access (promote) or background maintenance (don't promote).