How does lru::LruCache::promote enable manual cache entry prioritization without reinsertion?

lru::LruCache::promote moves an existing cache entry to the most-recently-used position in the LRU ordering without requiring removal and reinsertion, allowing applications to adjust cache priority based on access patterns that may not align with simple get operations. In an LRU cache, entries are ordered by recency of use: the most recently accessed entry is safest from eviction, while the least recently used entry is evicted when capacity is reached. Normally, calling get promotes an entry automatically, but promote provides explicit control: useful when an entry should be prioritized without actually retrieving its value, when warming up a cache with predicted hot entries, or when implementing custom eviction policies that consider factors beyond simple access patterns. The method returns nothing and is a silent no-op when the key is absent, so callers who need to distinguish "key not found" from "promotion succeeded" can pair it with contains.

LRU Cache Basics

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    // Create a cache with capacity for 3 entries. Recent versions
    // of the lru crate take the capacity as a NonZeroUsize.
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    // Insert entries
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    
    // Order (most recent -> least recent): c, b, a
    
    // Access "a" - promotes to most recent
    let val = cache.get(&"a".to_string());
    println!("Got a: {:?}", val);
    
    // Order now: a, c, b
    
    // Insert new entry - evicts least recently used ("b")
    cache.put("d".to_string(), 4);
    
    println!("Cache contains:");
    for (key, value) in cache.iter() {
        println!("  {} -> {}", key, value);
    }
    // Contains: d, a, c (b was evicted)
}

LRU caches maintain access order; the least recently used entry is evicted when full.

The promote Method

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    // Order: c, b, a (most recent -> least recent)
    
    // Promote "a" without getting its value (promote returns nothing)
    cache.promote(&"a");
    // Order now: a, c, b
    
    // Promoting a key that isn't in the cache is a silent no-op
    cache.promote(&"d");
    
    println!("Order after promote:");
    for (key, _) in cache.iter() {
        println!("  {}", key);
    }
    // Order: a, c, b
}

promote moves an entry to the most-recently-used position without returning its value.

Difference Between get and promote

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    
    // get returns the value AND promotes
    let val = cache.get(&"b");
    println!("get returned: {:?}", val);
    // b is now most recently used
    
    // Reset
    cache.put("d", 4);  // evicts least recent ("a")
    
    let mut cache2: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    cache2.put("a", 1);
    cache2.put("b", 2);
    cache2.put("c", 3);
    
    // promote only moves the entry; it returns nothing
    cache2.promote(&"b");
    // b is now most recently used, but we never touched the value
    
    // Useful when you don't need the value
    // but want to affect eviction order
}

get returns a reference to the value; promote moves the entry without returning anything.

Use Case: Cache Warmup

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(10).unwrap());
    
    // Fill cache with initial data
    for i in 0..10 {
        cache.put(format!("key_{}", i), i);
    }
    
    // We know certain keys will be hot soon
    // Promote them without retrieving values
    let hot_keys = ["key_0", "key_5", "key_9"];
    
    for key in hot_keys {
        cache.promote(&key.to_string());
    }
    
    // Now these keys are least likely to be evicted
    println!("Promoted hot keys to most recent positions");
    
    // Add a new entry - will evict least recently used
    cache.put("new_key".to_string(), 100);
    
    // The hot keys we promoted are safe
    // Some other key was evicted instead
}

promote allows pre-positioning entries for expected access without reading values.

Use Case: Custom Priority Adjustment

use lru::LruCache;
use std::num::NonZeroUsize;

struct PrioritizedCache {
    cache: LruCache<String, String>,
}

impl PrioritizedCache {
    fn new(capacity: usize) -> Self {
        Self {
            // Assumes capacity > 0; NonZeroUsize::new returns None for 0
            cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
        }
    }
    
    fn insert(&mut self, key: String, value: String, priority: bool) {
        self.cache.put(key.clone(), value);
        
        if priority {
            // Note: put already placed the entry at the most-recently-
            // used position, so this promote is a no-op right now.
            // Promotion matters later, once other insertions have
            // pushed the entry toward the eviction end.
            self.cache.promote(&key);
        }
    }
    
    fn get(&mut self, key: &str) -> Option<&String> {
        self.cache.get(&key.to_string())
    }
    
    fn get_without_promote(&self, key: &str) -> Option<&String> {
        // Peek without affecting order
        self.cache.peek(key)
    }
}
 
fn main() {
    let mut cache = PrioritizedCache::new(3);
    
    // Insert with different priorities
    cache.insert("low".to_string(), "data1".to_string(), false);
    cache.insert("high".to_string(), "data2".to_string(), true);
    cache.insert("medium".to_string(), "data3".to_string(), false);
    
    // Order: medium, high, low (most to least recent).
    // Inserting "medium" still moved it ahead of "high", so a real
    // priority scheme must re-promote "high" periodically.
}

promote enables custom prioritization beyond simple access recency.

Preserving Values Without Copying

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<String, Vec<u8>> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    // Large values
    cache.put("a".to_string(), vec![0; 1_000_000]);
    cache.put("b".to_string(), vec![0; 1_000_000]);
    cache.put("c".to_string(), vec![0; 1_000_000]);
    
    // If we use get, we'd need to handle the reference
    // promote doesn't touch the value at all
    
    // Promote "a" to keep it in cache
    // No value copying or moving
    cache.promote(&"a".to_string());
    
    // "a" is now most recently used
    // No data was copied or modified
}

promote reorders entries without touching the values.

Checking Entry Existence

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    
    // promote is a no-op for missing keys, so pair it with
    // contains when you need to know whether the key was there
    if cache.contains(&"a") {
        cache.promote(&"a");
        println!("'a' exists and is now most recent");
    } else {
        println!("'a' doesn't exist");
    }
    
    if cache.contains(&"c") {
        cache.promote(&"c");
        println!("'c' exists and is now most recent");
    } else {
        println!("'c' doesn't exist");
    }
}

promote silently ignores missing keys; combine it with contains when the surrounding logic needs an existence check.

Working with Iterators

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(5).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    cache.put("d", 4);
    cache.put("e", 5);
    // Order: e, d, c, b, a
    
    // Promote multiple keys
    let to_promote = ["a", "c", "e"];
    for key in to_promote {
        cache.promote(&key);
    }
    
    // Check final order
    println!("Final order (most to least recent):");
    for (key, value) in cache.iter() {
        println!("  {} = {}", key, value);
    }
    // Last promoted ("e") is most recent
    // Then "c", then "a"
}

Each promotion moves its entry to the most-recent position in turn, so the last key promoted ends up first.

Comparison with Reinsertion

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    // Method 1: Remove and reinsert
    let mut cache1: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    cache1.put("a", 1);
    cache1.put("b", 2);
    cache1.put("c", 3);
    
    // pop returns Option<V> (pop_entry returns the key as well)
    if let Some(value) = cache1.pop(&"a") {
        cache1.put("a", value);  // Reinsert at most recent
    }
    // Requires removing, holding the value, and reinserting
    
    // Method 2: promote
    let mut cache2: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    cache2.put("a", 1);
    cache2.put("b", 2);
    cache2.put("c", 3);
    
    cache2.promote(&"a");
    // Simpler, doesn't touch value
    // No potential for value modification during reinsertion
    
    println!("Both methods result in same ordering");
}

promote is cleaner than remove-and-reinsert for reordering.

Batch Promotion Pattern

use lru::LruCache;
use std::num::NonZeroUsize;

fn promote_batch(cache: &mut LruCache<String, i32>, keys: &[&str]) {
    // Promote in order - the last key ends up most recent
    for key in keys {
        cache.promote(&key.to_string());
    }
}

fn main() {
    let mut cache: LruCache<String, i32> = LruCache::new(NonZeroUsize::new(5).unwrap());
    
    cache.put("a".to_string(), 1);
    cache.put("b".to_string(), 2);
    cache.put("c".to_string(), 3);
    cache.put("d".to_string(), 4);
    cache.put("e".to_string(), 5);
    
    // Promote specific keys
    promote_batch(&mut cache, &["a", "c", "e"]);
    
    // Final order: e, c, a, d, b (most to least recent)
    // Last promoted ("e") is most recent
    
    println!("After batch promotion:");
    for (key, _) in cache.iter() {
        println!("  {}", key);
    }
}

Batch promotion allows prioritizing multiple entries at once.

Promote with Peek Pattern

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    
    // Peek at value without promoting
    if let Some(value) = cache.peek(&"a") {
        println!("Peeked at 'a': {}", value);
        // Order unchanged
    }
    
    // Conditionally promote based on value
    if let Some(value) = cache.peek(&"b") {
        if *value > 1 {
            cache.promote(&"b");
            println!("Promoted 'b' because value > 1");
        }
    }
    
    // Or promote without looking
    cache.promote(&"c");
}

Combine peek and promote for conditional prioritization without affecting order on peek.

Cache Eviction Behavior

use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<&str, &str> = LruCache::new(NonZeroUsize::new(3).unwrap());
    
    cache.put("a", "keep");
    cache.put("b", "evict_me");
    cache.put("c", "normal");
    // Order: c, b, a
    
    // Promote "a" to protect it from eviction
    cache.promote(&"a");
    // Order: a, c, b
    
    // Add new entry - evicts least recently used ("b")
    cache.put("d", "new");
    
    println!("After adding 'd':");
    for (key, value) in cache.iter() {
        println!("  {} = {}", key, value);
    }
    // Contains: d, a, c
    // "b" was evicted, "a" was protected by promotion
    
    assert!(cache.contains(&"a"));
    assert!(!cache.contains(&"b"));
}

promote protects entries from eviction by moving them away from the eviction end.

Performance Considerations

use lru::LruCache;
use std::num::NonZeroUsize;
use std::time::Instant;

fn main() {
    let mut cache: LruCache<u64, u64> = LruCache::new(NonZeroUsize::new(100_000).unwrap());
    
    // Fill cache
    for i in 0..100_000 {
        cache.put(i, i);
    }
    
    // Promote random entry
    let start = Instant::now();
    cache.promote(&50_000);
    let promote_time = start.elapsed();
    
    // Compare with get (which also promotes)
    let start = Instant::now();
    let _ = cache.get(&50_001);
    let get_time = start.elapsed();
    
    println!("promote: {:?}", promote_time);
    println!("get: {:?}", get_time);
    
    // Both operations are O(1): a hash lookup plus a linked-list
    // splice. Any difference measured in a single call like this is
    // noise; promote simply skips returning a reference to the value
}

promote is O(1), like get and put; any speed difference between promote and get is negligible.

Implementing Priority-Based Eviction

use lru::LruCache;
use std::collections::HashSet;
use std::num::NonZeroUsize;

struct PriorityCache {
    cache: LruCache<String, String>,
    high_priority: HashSet<String>,
}

impl PriorityCache {
    fn new(capacity: usize) -> Self {
        Self {
            // Assumes capacity > 0
            cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
            high_priority: HashSet::new(),
        }
    }
    
    fn insert(&mut self, key: String, value: String, high_priority: bool) {
        self.cache.put(key.clone(), value);
        if high_priority {
            self.high_priority.insert(key.clone());
            // put already made this entry most recent, so this promote
            // is a no-op here; re-promotion happens on later accesses
            self.cache.promote(&key);
        }
    }
    
    fn get(&mut self, key: &str) -> Option<&String> {
        // Re-promote high-priority items before the lookup; doing it
        // after would conflict with the borrow returned by get
        if self.high_priority.contains(key) {
            self.cache.promote(&key.to_string());
        }
        self.cache.get(&key.to_string())
    }
    
    fn ensure_high_priority(&mut self, key: &str) {
        if self.cache.contains(key) {
            self.high_priority.insert(key.to_string());
            self.cache.promote(&key.to_string());
        }
    }
}
 
fn main() {
    let mut cache = PriorityCache::new(3);
    
    cache.insert("low".to_string(), "a".to_string(), false);
    cache.insert("high".to_string(), "b".to_string(), true);
    cache.insert("medium".to_string(), "c".to_string(), false);
    
    // "high" was promoted during insert, but inserting "medium"
    // afterwards still made "medium" most recent; order is now
    // medium, high, low (most to least recent)
    
    cache.get("low");                      // get promotes "low"
    cache.ensure_high_priority("medium");  // promote without getting
}

promote enables custom priority systems on top of LRU.

Synthesis

Comparison of methods:

Method    Returns          Side effect
get       Option<&V>       Promotes the entry
peek      Option<&V>       No reordering
promote   () (nothing)     Promotes the entry; no-op if key absent
pop       Option<V>        Removes the entry

When to use promote:

Scenario                                       Method
Need the value, normal promotion on access     get
Need the value without promoting               peek
Don't need the value, want to protect entry    promote
Cache warmup for predicted access              promote
Conditional promotion based on the value       peek + promote

Key insight: lru::LruCache::promote exists because the LRU eviction policy (evict the least recently used entry) doesn't always align with actual importance. Sometimes an entry should be protected from eviction even though it hasn't been accessed recently, and sometimes you want to affect the eviction order without paying the cost of retrieving the value. The method provides fine-grained control over cache ordering: you can pre-position entries you expect will be needed (cache warmup), implement priority tiers where high-priority entries are promoted beyond regular access, or build custom eviction policies that consider factors beyond recency. Because promote silently ignores missing keys, pair it with contains when the surrounding logic needs to know whether the entry existed. Combined with peek for reading without promoting, these methods give you complete control over how access patterns affect cache ordering, going beyond the simple "access promotes" behavior that basic LRU provides.