Rust walkthroughs
How does dashmap::DashMap::iter handle concurrent iteration with mutation safety?

dashmap::DashMap::iter provides concurrency-safe iteration by leveraging DashMap's sharded architecture: each shard maintains its own RwLock, and iteration acquires read locks on shards one at a time as it traverses them. This design allows concurrent reads and writes from other threads while iterating: iteration doesn't block writers to other shards, and writers to other shards don't block iteration. However, the iterator yields guarded references (RefMulti from iter(), RefMutMulti from iter_mut()) that hold their shard's lock, blocking writers to that shard for as long as a guard is alive, while entries in other shards remain freely modifiable.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
// Insert some data
for i in 0..10 {
map.insert(i, format!("value-{}", i));
}
// iter returns an iterator over all entries
for entry in map.iter() {
println!("Key: {}, Value: {}", entry.key(), entry.value());
}
// The iterator yields RefMulti<K, V> guards
// These guards hold the shard's read lock
}
iter() returns an iterator that yields RefMulti<K, V> guards, each holding a read lock on its shard.
use dashmap::DashMap;
fn main() {
// DashMap is divided into multiple shards
// Default shard count is based on CPU cores
let map = DashMap::<i32, String>::new();
// Note: shards() is only available with dashmap's `raw-api` feature
println!("Shard count: {}", map.shards().len());
// Each shard has its own RwLock
// iter() acquires locks one shard at a time
// Insert data
for i in 0..20 {
map.insert(i, format!("value-{}", i));
}
// Iteration visits each shard
let mut count = 0;
for _entry in map.iter() {
count += 1;
// While iterating, other shards can be modified
}
println!("Iterated {} entries", count);
}
DashMap's sharding allows concurrent operations on different shards during iteration.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
fn main() {
let map = Arc::new(DashMap::new());
// Populate map
for i in 0..100 {
map.insert(i, i);
}
let map_clone = Arc::clone(&map);
// Thread that iterates
let iter_thread = thread::spawn(move || {
let mut sum = 0;
for entry in map_clone.iter() {
// entry is a RefMulti<i32, i32> guard
// It holds a read lock on its shard
sum += *entry.value();
// Yield to encourage interleaving with the writer thread
thread::yield_now();
}
sum
});
// Thread that modifies (clone the Arc so `map` stays usable below)
let map_modify = Arc::clone(&map);
let modify_thread = thread::spawn(move || {
for i in 0..100 {
// Insertions may go to different shards
// Some shards may be locked during iteration
map_modify.insert(i, i + 1);
map_modify.insert(i + 100, i);
thread::yield_now();
}
});
modify_thread.join().unwrap();
let sum = iter_thread.join().unwrap();
println!("Sum during iteration: {}", sum);
println!("Final map size: {}", map.len());
}
Iteration can proceed concurrently with modifications to other shards.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
map.insert("key1", "value1");
map.insert("key2", "value2");
// iter() yields Ref<K, V>
for ref_guard in map.iter() {
// ref_guard is a RefMulti<&str, &str> guard
// It implements Deref to V
let key: &&str = ref_guard.key();
let value: &&str = ref_guard.value();
println!("Key: {}, Value: {}", key, value);
// The guard holds a read lock
// Other threads trying to write to this shard will block
}
// iter_mut() yields RefMut<K, V>
for mut ref_mut in map.iter_mut() {
// ref_mut is a RefMutMulti<&str, &str> guard
// It holds a write lock on the shard
*ref_mut.value_mut() = "modified";
}
// After iteration, check results
for entry in map.iter() {
println!("Modified: {} = {}", entry.key(), entry.value());
}
}
iter() yields RefMulti guards holding read locks; iter_mut() yields RefMutMulti guards holding write locks.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
use std::time::Duration;
fn main() {
let map = Arc::new(DashMap::new());
// Create enough entries to span multiple shards
for i in 0..100 {
map.insert(i, format!("value-{}", i));
}
let map_iter = Arc::clone(&map);
let map_write = Arc::clone(&map);
// Iteration thread
let iter_handle = thread::spawn(move || {
println!("Starting iteration");
for entry in map_iter.iter() {
// The iterator:
// 1. Acquires a read lock on the current shard
// 2. Yields guards for that shard's entries
// 3. Releases the read lock when it moves on to the next shard
let key = *entry.key();
if key % 10 == 0 {
println!("Iterating key: {}", key);
}
thread::sleep(Duration::from_micros(100));
}
println!("Iteration complete");
});
// Modification thread
let write_handle = thread::spawn(move || {
thread::sleep(Duration::from_millis(1));
println!("Starting modifications");
for i in (0..100).step_by(10) {
// May succeed immediately or block briefly
// depending on which shard is locked
map_write.insert(i, format!("modified-{}", i));
println!("Modified key: {}", i);
}
});
iter_handle.join().unwrap();
write_handle.join().unwrap();
}
Locks are acquired per shard during iteration, allowing concurrent modifications elsewhere.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
fn main() {
let map1 = Arc::new(DashMap::new());
let map2 = Arc::new(DashMap::new());
// Populate
for i in 0..10 {
map1.insert(i, i);
map2.insert(i, i * 2);
}
let m1 = Arc::clone(&map1);
let m2 = Arc::clone(&map2);
// Potential deadlock scenario
// Thread 1: iterate map1, access map2
// Thread 2: iterate map2, access map1
let h1 = thread::spawn(move || {
// Cross-map reads cannot deadlock here:
// both threads only take read locks, which never exclude each other
for entry in m1.iter() {
let key = *entry.key();
// Accessing another map during iteration is safe
// because shard locks are short-lived
if let Some(_v2) = m2.get(&key) {
// This acquires a read lock on one of map2's shards
// Read locks never block other read locks, so no deadlock
}
}
});
let h2 = thread::spawn(move || {
for entry in map2.iter() {
let key = *entry.key();
if let Some(_v1) = map1.get(&key) {
// Safe: only short-lived, per-shard read locks are taken
}
}
});
h1.join().unwrap();
h2.join().unwrap();
println!("No deadlock occurred");
}
DashMap's shard-level locking and short-lived guards prevent most deadlock scenarios; the main remaining hazard is writing to a map while holding one of its own guards.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
fn main() {
let map = Arc::new(DashMap::new());
for i in 0..20 {
map.insert(i, format!("initial-{}", i));
}
let map_iter = Arc::clone(&map);
let map_modify = Arc::clone(&map);
// Iteration does NOT provide a snapshot
// You may see:
// - Entries added before the iterator reaches their shard
// - Entries missing that were removed during iteration
// - Entries with modified values
let iter_handle = thread::spawn(move || {
let mut seen = Vec::new();
for entry in map_iter.iter() {
seen.push(*entry.key());
// Yield to allow modifications
thread::yield_now();
}
seen
});
let modify_handle = thread::spawn(move || {
for i in 0..20 {
// Modify entries during iteration
map_modify.insert(i, format!("modified-{}", i));
// Add new entries
map_modify.insert(i + 100, format!("new-{}", i));
thread::yield_now();
}
});
let seen = iter_handle.join().unwrap();
modify_handle.join().unwrap();
println!("Keys seen during iteration: {:?}", seen);
println!("Final map size: {}", map.len());
// Note: iteration may see inconsistent state
// Not all new entries may be seen
// Some modifications may or may not be visible
}
Iteration is not atomic: modifications made during iteration may or may not be visible.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
for i in 0..10 {
map.insert(i, format!("value-{}", i));
}
// Safe pattern: collect what you need, then process
let entries: Vec<(i32, String)> = map.iter()
.map(|entry| (*entry.key(), entry.value().clone()))
.collect();
// Now process without holding locks
for (key, value) in entries {
println!("Key: {}, Value: {}", key, value);
// Safe to modify map here
map.insert(key, format!("processed-{}", value));
}
// Alternative: use retain to modify in place
map.retain(|key, _value| {
// This holds shard locks briefly
key % 2 == 0
});
println!("Remaining entries: {}", map.len());
}
Collect data during iteration, then process after releasing locks for maximum concurrency.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
map.insert("a", 1);
map.insert("b", 2);
map.insert("c", 3);
// iter() yields RefMulti guards
// Each guard derefs to the value
for entry in map.iter() {
// entry.key() -> &K
// entry.value() -> &V
// Deref allows direct access
let value: &i32 = &*entry;
println!("Value: {}", value);
// Or explicit methods
println!("Key: {}, Value: {}", entry.key(), entry.value());
}
// Convert to owned values if needed
let owned_values: Vec<i32> = map.iter()
.map(|entry| *entry.value())
.collect();
println!("Owned values: {:?}", owned_values);
}
RefMulti implements Deref<Target = V>, providing convenient value access.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
// Insert in order
for i in 0..20 {
map.insert(i, format!("value-{}", i));
}
// Iteration order depends on shard distribution
// and internal hash table layout
println!("Iteration order:");
for entry in map.iter() {
print!("{} ", entry.key());
}
println!();
// Keys may not appear in insertion order
// Order is not guaranteed
// If you need ordered iteration:
// 1. Collect and sort
let mut entries: Vec<_> = map.iter()
.map(|e| (*e.key(), e.value().clone()))
.collect();
entries.sort_by_key(|(k, _)| *k);
println!("Sorted order:");
for (k, _v) in entries {
print!("{} ", k);
}
println!();
}
DashMap iteration order is not defined: if order matters, collect and sort.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
fn main() {
let map = Arc::new(DashMap::new());
for i in 0..100 {
map.insert(i, i * 2);
}
let mut handles = vec![];
// Multiple concurrent readers
for _ in 0..4 {
let map = Arc::clone(&map);
handles.push(thread::spawn(move || {
let mut sum = 0;
for entry in map.iter() {
sum += *entry.value();
}
sum
}));
}
let mut total = 0;
for handle in handles {
total += handle.join().unwrap();
}
println!("Total sum from all threads: {}", total);
// Concurrent reads don't block each other
// Each shard's RwLock allows multiple readers
}
Multiple threads can iterate concurrently: iter() acquires shared read locks on shards.
use dashmap::DashMap;
fn main() {
let map = DashMap::new();
for i in 0..10 {
map.insert(i, format!("value-{}", i));
}
// iter() doesn't allow removal during iteration
// Use retain for conditional removal
// Remove entries matching a condition
map.retain(|key, value| {
// This holds shard locks briefly
key % 2 == 0 && !value.contains("skip")
});
println!("After retain: {} entries", map.len());
// Alternative: collect keys to remove, then remove
let keys_to_remove: Vec<i32> = map.iter()
.filter(|entry| *entry.key() % 3 == 0)
.map(|entry| *entry.key())
.collect();
for key in keys_to_remove {
map.remove(&key);
}
println!("After selective removal: {} entries", map.len());
// Verify
for entry in map.iter() {
println!("Remaining: {} = {}", entry.key(), entry.value());
}
}
Use retain() or collect-then-remove patterns for safe removal during iteration.
use dashmap::DashMap;
use std::sync::Arc;
use std::thread;
use std::time::Instant;
fn main() {
let map = DashMap::new();
// Large dataset
for i in 0..100_000 {
map.insert(i, i);
}
// Iteration is O(n) but with good cache behavior
let start = Instant::now();
// Widen to i64: the sum of 0..100_000 overflows i32
let mut sum: i64 = 0;
for entry in map.iter() {
sum += i64::from(*entry.value());
}
let iter_time = start.elapsed();
println!("Iteration time: {:?}", iter_time);
// Parallel iteration with multiple threads
let map = Arc::new(DashMap::new());
for i in 0..100_000 {
map.insert(i, i);
}
let start = Instant::now();
let threads: Vec<_> = (0..4)
.map(|_| {
let map = Arc::clone(&map);
thread::spawn(move || {
let mut local_sum: i64 = 0;
for entry in map.iter() {
local_sum += i64::from(*entry.value());
}
local_sum
})
})
.collect();
let total: i64 = threads.into_iter()
.map(|h| h.join().unwrap())
.sum();
let parallel_time = start.elapsed();
println!("Parallel iteration time: {:?}", parallel_time);
println!("Total sum: {}", total);
// Note: Multiple concurrent iterations work well because
// shards use RwLock allowing multiple readers
}
Sharded RwLock design allows efficient concurrent reads and iterations.
Lock behavior during iteration:
| Phase | Lock Type | Duration | Blocks |
|-------|-----------|----------|--------|
| Traversing a shard | Read lock | While that shard's entries are yielded | Writers to that shard |
| Holding a yielded guard | Read lock (shared with the iterator) | Until the guard drops | Writers to that shard |
| Moving between shards | Release, then acquire | Per shard | Minimal contention |
Guard types:
| Method | Guard Type | Lock | Allowed Operations |
|--------|------------|------|-------------------|
| iter() | RefMulti<K, V> | Read | Read key/value |
| iter_mut() | RefMutMulti<K, V> | Write | Read and modify |
Consistency guarantees:
| Property | Behavior |
|----------|----------|
| Atomic snapshot | No |
| See new entries | Possible |
| Miss removed entries | Possible |
| See modified values | Possible |
| Ordered iteration | No |
Key insight: DashMap::iter achieves concurrency-safe iteration through its sharded architecture, where each shard has an independent RwLock. The iterator takes a read lock on one shard at a time and yields guards that share that lock until they are dropped, so other threads can read and write other shards concurrently: iteration never globally blocks modifications. This also means iteration provides no snapshot guarantee: entries added, removed, or modified during iteration may or may not be visible. Note that the RwLock protects a whole shard, not a single entry, so while any guard into a shard is alive, writers to every entry in that shard block; drop guards promptly. Use retain() for safe in-place filtering, or collect-then-process patterns when you need to avoid holding locks during computation.