With lru::LruCache, what happens to evicted entries, and how can you implement custom eviction callbacks?
LRU caches evict entries once they reach capacity, dropping the least recently used items. Understanding what happens to evicted entries, and how to react to evictions, enables proper resource cleanup and cache instrumentation.
The lru::LruCache maintains entries in access order, evicting the least recently used when capacity is exceeded:
use lru::LruCache;
use std::num::NonZeroUsize;
fn basic_eviction() {
let mut cache: LruCache<&str, &str> = LruCache::new(NonZeroUsize::new(3).unwrap());
cache.put("a", "value a");
cache.put("b", "value b");
cache.put("c", "value c");
// Cache is now at capacity: [a, b, c] (least to most recent)
cache.put("d", "value d"); // "a" is evicted
assert_eq!(cache.get(&"a"), None); // "a" is gone
assert_eq!(cache.get(&"b"), Some(&"value b"));
assert_eq!(cache.get(&"c"), Some(&"value c"));
assert_eq!(cache.get(&"d"), Some(&"value d"));
}
When put exceeds capacity, the least recently used entry is silently dropped.
By default, evicted entries are simply dropped:
use lru::LruCache;
use std::num::NonZeroUsize;
struct Resource {
name: String,
}
impl Drop for Resource {
fn drop(&mut self) {
println!("Dropping resource: {}", self.name);
}
}
fn eviction_drops() {
let mut cache: LruCache<String, Resource> = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a".to_string(), Resource { name: "A".to_string() });
cache.put("b".to_string(), Resource { name: "B".to_string() });
println!("Adding C...");
cache.put("c".to_string(), Resource { name: "C".to_string() });
// Prints: "Dropping resource: A"
// Entry "a" was evicted and Resource was dropped
}
The Drop implementation is called, but you have no opportunity to inspect or process the evicted value.
The put method returns the old value if the key existed, and evicts an entry if capacity is exceeded:
use lru::LruCache;
use std::num::NonZeroUsize;
fn put_returns_evicted() {
let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a", 1);
cache.put("b", 2);
// put returns Option<V> - the old value for this key
let old = cache.put("a", 10);
assert_eq!(old, Some(1)); // Previous value for "a"
// But put doesn't tell us about evicted entries
cache.put("c", 3); // "b" is evicted, but put returns None
assert_eq!(cache.get(&"b"), None);
}
The put method only returns the previous value for the same key, not evicted entries.
The push method returns evicted entries:
use lru::LruCache;
use std::num::NonZeroUsize;
fn push_captures_evicted() {
let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a", 1);
cache.put("b", 2);
// push returns Option<(K, V)> - the evicted entry
let evicted = cache.push("c", 3);
assert_eq!(evicted, Some(("a", 1))); // "a" was least recently used
// If nothing is displaced, push returns None
let evicted = cache.push("d", 4);
assert_eq!(evicted, Some(("b", 2)));
// Cache now contains: c -> 3, d -> 4
}
Using push instead of put lets you capture and process evicted entries.
Wrap the cache to provide callback functionality:
use lru::LruCache;
use std::num::NonZeroUsize;
struct LruCacheWithCallback<K, V, F>
where
F: Fn(&K, &V),
{
cache: LruCache<K, V>,
on_evict: F,
}
impl<K, V, F> LruCacheWithCallback<K, V, F>
where
K: std::hash::Hash + Eq + Clone,
V: Clone,
F: Fn(&K, &V),
{
fn new(capacity: usize, on_evict: F) -> Self {
Self {
cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
on_evict,
}
}
fn put(&mut self, key: K, value: V) {
// push returns the displaced entry: the evicted LRU pair, or the
// old pair if the key already existed. The callback therefore
// also fires on same-key replacement.
if let Some((k, v)) = self.cache.push(key, value) {
(self.on_evict)(&k, &v);
}
}
fn get(&mut self, key: &K) -> Option<&V> {
self.cache.get(key)
}
fn len(&self) -> usize {
self.cache.len()
}
}
fn eviction_callback_example() {
let mut cache = LruCacheWithCallback::new(2, |key: &&str, value: &i32| {
println!("Evicted: {} -> {}", key, value);
});
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3); // Prints: "Evicted: a -> 1"
cache.put("d", 4); // Prints: "Evicted: b -> 2"
}
When cache values hold resources, use eviction callbacks for cleanup:
use lru::LruCache;
use std::num::NonZeroUsize;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
struct FileCache {
cache: LruCache<String, (File, PathBuf)>,
capacity: usize,
}
impl FileCache {
fn new(capacity: usize) -> Self {
Self {
cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
capacity,
}
}
fn get_or_open(&mut self, name: &str) -> std::io::Result<&mut File> {
// Check if already cached
if self.cache.contains(&name.to_string()) {
return Ok(&mut self.cache.get_mut(&name.to_string()).unwrap().0);
}
// Open new file
let path = PathBuf::from(name);
let file = File::create(&path)?;
// Insert and handle eviction
let evicted = self.cache.push(name.to_string(), (file, path));
if let Some((_, (mut file, _path))) = evicted {
// Flush the evicted file before its handle is closed on drop
file.flush()?;
// Optionally delete it:
// std::fs::remove_file(&_path)?;
}
Ok(&mut self.cache.get_mut(&name.to_string()).unwrap().0)
}
}
Track evictions for monitoring:
use lru::LruCache;
use std::num::NonZeroUsize;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
struct CacheStats {
evictions: AtomicU64,
hits: AtomicU64,
misses: AtomicU64,
}
struct InstrumentedCache<K, V> {
cache: LruCache<K, V>,
stats: Arc<CacheStats>,
}
impl<K, V> InstrumentedCache<K, V>
where
K: std::hash::Hash + Eq + Clone,
{
fn new(capacity: usize) -> Self {
Self {
cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
stats: Arc::new(CacheStats {
evictions: AtomicU64::new(0),
hits: AtomicU64::new(0),
misses: AtomicU64::new(0),
}),
}
}
fn get(&mut self, key: &K) -> Option<&V> {
if self.cache.contains(key) {
self.stats.hits.fetch_add(1, Ordering::Relaxed);
self.cache.get(key)
} else {
self.stats.misses.fetch_add(1, Ordering::Relaxed);
None
}
}
fn put(&mut self, key: K, value: V) {
let evicted = self.cache.push(key, value);
if evicted.is_some() {
self.stats.evictions.fetch_add(1, Ordering::Relaxed);
}
}
fn stats(&self) -> &Arc<CacheStats> {
&self.stats
}
}
fn stats_example() {
let mut cache = InstrumentedCache::new(3);
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
cache.get(&"a");
cache.get(&"b");
cache.get(&"d"); // miss
cache.put("e", 5); // eviction
cache.put("f", 6); // eviction
let stats = cache.stats();
println!("Hits: {}", stats.hits.load(Ordering::Relaxed));
println!("Misses: {}", stats.misses.load(Ordering::Relaxed));
println!("Evictions: {}", stats.evictions.load(Ordering::Relaxed));
}
Evict the least recently used entry explicitly:
use lru::LruCache;
use std::num::NonZeroUsize;
fn manual_eviction() {
let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(3).unwrap());
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
// Manually evict LRU entry
let evicted = cache.pop_lru();
assert_eq!(evicted, Some(("a", 1)));
// Manually evict most recently used
let evicted = cache.pop_mru();
assert_eq!(evicted, Some(("c", 3)));
// Cache now contains only "b" -> 2
assert_eq!(cache.len(), 1);
}
Resizing the cache can evict multiple entries:
use lru::LruCache;
use std::num::NonZeroUsize;
fn resize_eviction() {
let mut cache: LruCache<&str, i32> = LruCache::new(NonZeroUsize::new(5).unwrap());
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
cache.put("d", 4);
cache.put("e", 5);
// Resize to a smaller capacity. In the lru crate, resize
// silently drops entries past the new capacity, so drain
// them with pop_lru first if you need to process them.
let new_cap = NonZeroUsize::new(2).unwrap();
let mut evicted = Vec::new();
while cache.len() > new_cap.get() {
evicted.push(cache.pop_lru().unwrap());
}
cache.resize(new_cap);
assert_eq!(evicted.len(), 3);
assert_eq!(evicted[0], ("a", 1)); // Least recently used first
assert_eq!(evicted[1], ("b", 2));
assert_eq!(evicted[2], ("c", 3));
// Cache now contains only "d" and "e"
assert_eq!(cache.len(), 2);
}
Resizing discards excess entries without returning them, so drain them with pop_lru first when evicted entries must be processed.
Process entries before they're evicted:
use lru::LruCache;
use std::num::NonZeroUsize;
fn process_before_eviction() {
let mut cache: LruCache<String, Vec<u8>> = LruCache::new(NonZeroUsize::new(3).unwrap());
// Fill cache with data
for i in 0..3 {
let key = format!("key_{}", i);
let data = vec![i as u8; 100];
cache.put(key.clone(), data);
}
// Before adding new entries, process potential evictions
if cache.len() >= cache.cap().get() {
// Evict oldest entry and process it
if let Some((key, data)) = cache.pop_lru() {
println!("Processing evicted entry: {} ({} bytes)", key, data.len());
// Maybe write to disk or forward to another system
}
}
// Now safe to add new entry
cache.put("new_key".to_string(), vec![99; 100]);
}
Use interior mutability for complex callback scenarios:
use lru::LruCache;
use std::num::NonZeroUsize;
use std::cell::RefCell;
use std::rc::Rc;
struct CacheWithListener<K, V> {
cache: LruCache<K, V>,
// FnMut justifies the RefCell: the listener may mutate its own state
listener: Rc<RefCell<dyn FnMut(K, V)>>,
}
impl<K, V> CacheWithListener<K, V>
where
K: std::hash::Hash + Eq + Clone,
{
fn new(capacity: usize, listener: Rc<RefCell<dyn FnMut(K, V)>>) -> Self {
Self {
cache: LruCache::new(NonZeroUsize::new(capacity).unwrap()),
listener,
}
}
fn put(&mut self, key: K, value: V) {
let evicted = self.cache.push(key, value);
if let Some((k, v)) = evicted {
// Explicitly reborrow through the RefMut to call the closure
(&mut *self.listener.borrow_mut())(k, v);
}
}
fn get(&mut self, key: &K) -> Option<&V> {
self.cache.get(key)
}
}
fn listener_example() {
let listener = Rc::new(RefCell::new(|key: String, value: i32| {
println!("Evicted: {} -> {}", key, value);
}));
let mut cache = CacheWithListener::new(2, listener);
cache.put("a".to_string(), 1);
cache.put("b".to_string(), 2);
cache.put("c".to_string(), 3); // Triggers listener
}
For multi-threaded contexts, wrap in a mutex:
use lru::LruCache;
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};
struct ThreadSafeCache<K, V> {
inner: Mutex<LruCache<K, V>>,
on_evict: Mutex<Box<dyn Fn(&K, &V) + Send>>,
}
impl<K, V> ThreadSafeCache<K, V>
where
K: std::hash::Hash + Eq + Clone + Send,
V: Send,
{
fn new(capacity: usize, on_evict: Box<dyn Fn(&K, &V) + Send>) -> Self {
Self {
inner: Mutex::new(LruCache::new(NonZeroUsize::new(capacity).unwrap())),
on_evict: Mutex::new(on_evict),
}
}
fn put(&self, key: K, value: V) {
let mut cache = self.inner.lock().unwrap();
let evicted = cache.push(key, value);
drop(cache); // Release lock before callback
if let Some((k, v)) = evicted {
let callback = self.on_evict.lock().unwrap();
(*callback)(&k, &v); // deref the guard to call the boxed closure
}
}
fn get(&self, key: &K) -> Option<V>
where
V: Clone,
{
let mut cache = self.inner.lock().unwrap();
cache.get(key).cloned()
}
}
fn thread_safe_example() {
let cache = Arc::new(ThreadSafeCache::new(
100,
Box::new(|key: &String, value: &i32| {
println!("Evicted: {} -> {}", key, value);
}),
));
let cache_clone = cache.clone();
let handle = std::thread::spawn(move || {
cache_clone.put("a".to_string(), 1);
});
handle.join().unwrap();
}
By default, evicted entries in lru::LruCache are simply dropped with no notification. To implement eviction callbacks:
Use push instead of put: The push method returns Option<(K, V)> containing the evicted entry, allowing you to process it before it's dropped.
Wrap the cache: Create a wrapper type that captures eviction callbacks and processes evicted entries appropriately.
Handle resource cleanup: When cache values hold resources (file handles, connections, etc.), use eviction callbacks to properly release those resources.
Track statistics: Use atomic counters in eviction callbacks to monitor cache behavior and tune capacity.
Drain before resizing: resize discards entries past the new capacity without returning them, so pop_lru the excess entries first if they must be processed.
Consider thread safety: For multi-threaded scenarios, wrap the cache and callback in mutexes, being careful about lock ordering to avoid deadlocks.
The key insight is that lru::LruCache doesn't have built-in eviction callbacks: you must use the return value of push (or manually call pop_lru) to intercept evictions. This design keeps the core library simple while allowing you to build whatever eviction handling you need on top.