How does lru::LruCache::promote differ from get for manual cache entry reordering?

promote moves a cache entry to the most-recently-used position without returning its value, while get returns a reference to the value and promotes the entry as a side effect. For ordinary reads the two have the same effect on ordering; they differ when you want to reorder without touching the value. This separation allows manual cache management: you can adjust LRU ordering without the overhead or borrowing constraints of value access.

How LRU Cache Ordering Works

use lru::LruCache;
 
fn lru_basics() {
    // LRU = Least Recently Used
    // Cache tracks access order
    // "Most recent" = accessed last
    // "Least recent" = accessed longest ago
    
    // Since lru 0.8, capacity is a NonZeroUsize
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(3).unwrap());
    
    cache.put("a", 1);  // Order: [a]
    cache.put("b", 2);  // Order: [a, b]
    cache.put("c", 3);  // Order: [a, b, c]
    
    // Accessing "a" moves it to most recent
    let _ = cache.get(&"a");  // Order: [b, c, a]
    
    // On eviction, "b" is removed (least recent)
    cache.put("d", 4);  // Order: [c, a, d], "b" evicted
}

LRU ordering determines which entries are evicted when capacity is exceeded—least recently used first.

The get Method

use lru::LruCache;
 
fn get_method() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    // Order: [a, b, c]
    
    // get returns a reference AND promotes
    let value = cache.get(&"a");
    // Returns: Option<&i32> = Some(&1)
    // Order: [b, c, a]  <- "a" moved to most recent
    
    // get is the common pattern for read + promote
    assert_eq!(value, Some(&1));
    
    // For non-existent keys:
    let missing = cache.get(&"z");
    assert_eq!(missing, None);
    // Order unchanged
}

get returns the value and moves the entry to most-recent position—a combined operation.

The promote Method

use lru::LruCache;
 
fn promote_method() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    // Order: [a, b, c]
    
    // promote moves to most recent WITHOUT returning the value
    cache.promote(&"a");
    // Returns: () (in lru 0.12; promoting a missing key is a no-op)
    // Order: [b, c, a]  <- "a" moved to most recent
    
    // For non-existent keys nothing happens:
    cache.promote(&"z");
    // Order unchanged
}

promote only reorders, without returning the value. Pair it with contains when you also need to know whether the key exists.

Key Differences: Return Type and Intent

use lru::LruCache;
 
fn return_differences() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    // Order: [a]
    
    // get: Returns reference, promotes
    if let Some(value) = cache.get(&"a") {
        // Have access to the value
        println!("Value: {}", value);  // 1
    }
    // Limitation: cache is borrowed until value is dropped
    
    // promote: Reorders, returns ()
    cache.promote(&"a");
    // No value reference is created
    // A missing key would simply be a no-op
    // Benefit: No borrow held, cache immediately usable
    
    // When to use which:
    // - Need value? Use get
    // - Only need to reorder? Use promote
}

get returns Option<&V> (borrowing the cache); promote returns () and holds no borrow.

Borrowing Implications

use lru::LruCache;
 
fn borrowing_differences() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    cache.put("b", 2);
    
    // With get: cache is borrowed while the value reference exists
    if let Some(value) = cache.get(&"a") {
        // get takes &mut self, so cache is exclusively borrowed here
        // Cannot call other cache methods while value is alive
        // cache.put("c", 3);  // ERROR: cache already borrowed
        
        println!("{}", value);
    }
    // Now cache is unborrowed
    
    // With promote: no value reference, cache immediately usable
    cache.promote(&"b");  // Reorders; returns (), no borrow held
    cache.put("c", 3);    // OK, no borrow held
}

promote doesn't hold a borrow, allowing immediate mutation after reordering.

Manual Cache Reordering Use Cases

use lru::LruCache;
 
fn use_case_manual_reorder() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    cache.put("d", 4);
    cache.put("e", 5);
    // Order: [a, b, c, d, e]
    
    // Scenario: Pre-populate LRU order from external knowledge
    // E.g., we know "a" and "b" will be needed soon
    
    // Without promote (get each):
    let _ = cache.get(&"a");  // Returns value we might not need
    let _ = cache.get(&"b");  // Returns value we might not need
    
    // With promote:
    cache.promote(&"a");  // Just reorder
    cache.promote(&"b");  // Just reorder
    // Order: [c, d, e, a, b]
    // "a" and "b" now least likely to be evicted
    
    // Use case: Batch reordering from access prediction
    // When you have predicted access patterns but don't need values
}

Use promote when reordering without needing values—prediction, prefetching, pre-positioning.

Performance Difference

use lru::LruCache;
 
fn performance_diff() {
    // Both operations are O(1): a hash lookup plus a
    // detach/attach on the internal linked list
    
    // promote skips only the construction of the Option<&V>
    // return value, so any speed difference is negligible;
    // its real advantages are stated intent and borrow hygiene
    
    let mut cache: LruCache<u64, i32> = LruCache::new(std::num::NonZeroUsize::new(1000).unwrap());
    for i in 0..1000 {
        cache.put(i, i as i32);
    }
    
    // Scenario: Promote all even keys without accessing values
    for i in (0u64..1000).step_by(2) {
        cache.promote(&i);
        // No value reference created, no borrow to manage
    }
}

Both are O(1); skipping the value reference saves little. Choose promote for clarity of intent and borrow hygiene, not speed.

Peek vs Get vs Promote

use lru::LruCache;
 
fn three_methods_comparison() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    // Order: [a, b, c]
    
    // peek: Get value, NO promotion
    let _ = cache.peek(&"a");
    // Returns: Option<&i32>
    // Order: [a, b, c]  <- UNCHANGED
    
    // get: Get value, WITH promotion
    let _ = cache.get(&"a");
    // Returns: Option<&i32>
    // Order: [b, c, a]  <- "a" moved to most recent
    
    // promote: NO value, WITH promotion
    cache.promote(&"a");
    // Returns: ()
    // Order: [b, c, a]  <- (same ordering effect as get)
    
    // Summary:
    // peek:   Access only, no reorder
    // get:    Access + reorder
    // promote: Reorder only, no access
}

Three operations with different combinations: peek (no reorder), get (access + reorder), promote (reorder only).

Contains and Promote Pattern

use lru::LruCache;
 
fn contains_promote_pattern() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    
    // contains: Check existence, NO promotion
    if cache.contains(&"a") {
        // Key exists, but order unchanged
    }
    
    // Pattern: Promote if present
    // No guard is needed: promoting a missing key is a no-op,
    // so unconditional promotion is always safe
    cache.promote(&"a");
    
    // Branch on existence only when the outcome matters:
    if cache.contains(&"a") {
        cache.promote(&"a");
        // Equivalent to get when you don't need the value
    }
}

Promoting a missing key is a no-op, so promote can be called unconditionally; use contains when you need to branch on existence.

Working with Mutable References

use lru::LruCache;
 
fn mutable_operations() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    cache.put("b", 2);
    
    // get_mut: Returns mutable reference, promotes
    if let Some(value) = cache.get_mut(&"a") {
        *value += 10;  // Modify value
        // Order: [b, a] after get_mut
    }
    
    // promote: Cannot modify value, only reorders
    cache.promote(&"b");
    // Order: [a, b]
    // No way to access value
    
    // get_mut is for read-modify-write with promotion
    // promote is for reorder-only scenarios
    
    // Pattern: Check + modify without promote
    if let Some(value) = cache.peek_mut(&"a") {
        *value += 10;  // Modify without reorder
    }
    // peek_mut: Modify value, keep order
}

For value modification with promotion, use get_mut; for reordering only, use promote.

Batch Promotion Pattern

use lru::LruCache;
 
fn batch_promotion() {
    let mut cache: LruCache<String, i32> = LruCache::new(std::num::NonZeroUsize::new(100).unwrap());
    
    // Populate cache
    for i in 0..100 {
        cache.put(format!("key{}", i), i);
    }
    
    // Scenario: Known access pattern for next operations
    // Keys ["key0", "key1", "key2"] will be accessed
    // Promote them to most-recent position
    
    // Using promote for batch reordering:
    for key in ["key0", "key1", "key2"] {
        cache.promote(key);
    }
    // All three are now at most-recent positions
    // No value references were created along the way
    
    // Compare to using get:
    // let _ = cache.get("key0");  // works, but builds an Option<&V>
    //                             // that is immediately discarded
}

Batch promotion is cleaner with promote—no need to manage value references.

Prediction-Based Reordering

use lru::LruCache;
 
fn prediction_reordering() {
    let mut cache: LruCache<&str, Vec<u8>> = LruCache::new(std::num::NonZeroUsize::new(100).unwrap());
    
    // Assume cache is populated with file data
    // cache.put("file1", data1);
    // ...
    
    // Scenario: File access prediction from read-ahead pattern
    // "We'll likely need files 5-10 soon"
    
    // Promote predicted files to most recent
    fn predict_and_promote(cache: &mut LruCache<&str, Vec<u8>>, predicted_keys: &[&str]) {
        for key in predicted_keys {
            cache.promote(key);
        }
    }
    
    // This pattern is useful when:
    // - You have external knowledge about access patterns
    // - Prediction logic suggests future accesses
    // - You want to pre-position entries for better cache hit rate
    
    // Why promote instead of get:
    // - Don't need the file contents yet
    // - No value reference to hold or discard
    // - Just reorder the LRU list
}

Prediction systems can use promote to pre-position entries without accessing heavy values.

Iterating with Promotion

use lru::LruCache;
 
fn iterate_with_promotion() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    
    // Promote while iterating keys
    let keys_to_promote = ["a", "c"];
    
    for key in &keys_to_promote {
        cache.promote(key);
        // No value reference held, can continue immediately
    }
    
    // get also works here, since its borrow ends at the end of
    // each statement, but it creates an Option<&V> that is
    // immediately discarded:
    // for key in &keys_to_promote {
    //     let _ = cache.get(key);
    // }
}

promote is cleaner for iteration-based reordering—no value references to manage.

Comparing Operations Side by Side

use lru::LruCache;
 
fn side_by_side() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    // Order: [a, b, c]
    
    // peek: Check only, no reorder
    let _ = cache.peek(&"a");
    // Returns: Option<&V>
    // Order after: [a, b, c] (unchanged)
    
    // get: Check + promote
    let _ = cache.get(&"a");
    // Returns: Option<&V>
    // Order after: [b, c, a]
    
    // promote: Promote only, no value
    cache.promote(&"b");
    // Returns: ()
    // Order after: [c, a, b]
    
    // contains: Check only, no value, no reorder
    let _ = cache.contains(&"c");
    // Returns: bool
    // Order after: [c, a, b] (unchanged)
}

| Operation | Returns    | Reorders | Accesses Value | Holds Borrow |
|-----------|------------|----------|----------------|--------------|
| peek      | Option<&V> | No       | Yes            | Yes          |
| get       | Option<&V> | Yes      | Yes            | Yes          |
| promote   | ()         | Yes      | No             | No           |
| contains  | bool       | No       | No             | No           |

When to Use Each Method

use lru::LruCache;
 
fn choosing_method() {
    // Use peek when:
    // - Need to read value without affecting LRU order
    // - Example: Debugging, inspection, stats
    
    // Use get when:
    // - Need value AND want to mark as recently used
    // - This is the common cache access pattern
    
    // Use promote when:
    // - Want to affect LRU order without reading value
    // - Example: Prediction, pre-positioning, batch reorder
    // - Want to avoid borrow for subsequent mutations
    
    // Use contains when:
    // - Only checking existence
    // - Don't need value
    // - Don't want to affect order
    
    // Use peek_mut when:
    // - Need to modify value without affecting order
    
    // Use get_mut when:
    // - Need to modify value AND promote
}

Choose based on whether you need value access and whether you want reordering.

Real Example: Web Cache Prediction

use lru::LruCache;
 
fn web_cache_example() {
    // Web cache with predictive pre-promotion
    
    struct WebCache {
        cache: LruCache<String, Vec<u8>>,
    }
    
    impl WebCache {
        fn pre_warm(&mut self, likely_urls: &[String]) {
            // Pre-promote URLs that will likely be accessed
            // Based on user behavior prediction
            
            for url in likely_urls {
                // Use promote: don't need content, just reorder
                self.cache.promote(url);
            }
        }
        
        fn get_content(&mut self, url: &str) -> Option<&Vec<u8>> {
            // Use get: need content AND want to promote
            self.cache.get(url)
        }
        
        fn check_cached(&self, url: &str) -> bool {
            // Use contains: just checking, no order change
            self.cache.contains(url)
        }
        
        fn get_stats(&self, url: &str) -> Option<usize> {
            // Use peek: need size, no order change
            self.cache.peek(url).map(|data| data.len())
        }
    }
}

Different operations for different needs—promote for prediction, get for access, peek for inspection.

Summary Table

fn summary_table() {
    // | Method | Returns | Reorders | Borrow Held |
    // |--------|---------|----------|-------------|
    // | peek | Option<&V> | No | Yes |
    // | get | Option<&V> | Yes | Yes |
    // | promote | () | Yes | No |
    // | contains | bool | No | No |
    // | peek_mut | Option<&mut V> | No | Yes |
    // | get_mut | Option<&mut V> | Yes | Yes |
    
    // | Use Case | Method |
    // |----------|--------|
    // | Need value + reorder | get |
    // | Need value + no reorder | peek |
    // | Reorder only | promote |
    // | Check existence only | contains |
    // | Modify + reorder | get_mut |
    // | Modify + no reorder | peek_mut |
}

Synthesis

Quick reference:

use lru::LruCache;
 
fn quick_reference() {
    let mut cache: LruCache<&str, i32> = LruCache::new(std::num::NonZeroUsize::new(5).unwrap());
    cache.put("a", 1);
    cache.put("b", 2);
    
    // get: Returns value, promotes
    if let Some(value) = cache.get(&"a") {
        println!("Value: {}", value);
    }
    
    // promote: Promotes, returns (), holds no borrow
    cache.promote(&"b");
    // "b" is now most recent (a missing key would be a no-op)
    
    // Choose:
    // - get when you need the value
    // - promote when you only want to reorder
}

Key insight: promote and get both move an entry to the most-recently-used position in the LRU ordering, but promote returns nothing while get returns Option<&V>, a reference to the value. This distinction matters in two ways: (1) promote avoids creating a value reference when you only want to affect cache ordering, and (2) promote holds no borrow on the cache, allowing immediate subsequent mutations. The method you choose should match your intent: get for read plus reorder (the common case), promote for reorder only (prediction, pre-positioning, batch reordering), peek for read without reorder (inspection), and contains for an existence check without reorder. This separation gives you fine-grained control over LRU cache behavior without forcing value access when you don't need it.