How does parking_lot::RwLock::read differ from std::sync::RwLock::read regarding potential writer starvation?
parking_lot::RwLock uses a fair locking policy that prevents writer starvation: writers eventually acquire the lock even under continuous reader pressure. std::sync::RwLock, by contrast, can block writers indefinitely when readers keep arriving, because each new reader may acquire the lock before any waiting writer gets a chance, so a writer can wait forever. The key difference is the fairness policy: parking_lot tracks waiting writers and blocks new readers while a writer is queued, whereas std::sync::RwLock on many platforms only checks whether the lock is currently held for writing, allowing new readers to bypass waiting writers.
Understanding Writer Starvation
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn writer_starvation_demonstration() {
    let lock = Arc::new(RwLock::<i32>::new(0));

    // Scenario: a continuous stream of readers.
    // A writer wants to write but never gets the chance.
    let mut reader_handles = vec![];

    // Start multiple readers that continuously read.
    for _ in 0..5 {
        let lock_clone = Arc::clone(&lock);
        let handle = thread::spawn(move || {
            for _ in 0..1000 {
                // Each reader releases and immediately re-acquires.
                let _guard = lock_clone.read().unwrap();
                // A small sleep simulates some work.
                thread::sleep(Duration::from_micros(1));
            }
        });
        reader_handles.push(handle);
    }

    // Writer thread.
    let lock_clone = Arc::clone(&lock);
    let writer_handle = thread::spawn(move || {
        let start = std::time::Instant::now();
        // Try to acquire the write lock.
        let _guard = lock_clone.write().unwrap();
        println!("Writer acquired lock after {:?}", start.elapsed());
        // With std::sync::RwLock, this might take very long
        // or never happen if readers keep arriving.
    });

    // In pathological cases the writer could wait indefinitely,
    // because new readers can acquire while the writer is waiting.
    for handle in reader_handles {
        let _ = handle.join();
    }
    let _ = writer_handle.join();
}

With std::sync::RwLock, new readers can acquire the lock while a writer is waiting, potentially blocking the writer indefinitely.
The std::sync::RwLock Behavior
use std::sync::{Arc, RwLock};
use std::thread;

fn std_rwlock_behavior() {
    let lock = Arc::new(RwLock::new(0));

    // std::sync::RwLock delegates to a platform-dependent implementation.
    // On many platforms it is "reader-preferred" (reader-biased).
    //
    // Timeline:
    // T0: Reader 1 acquires the read lock
    // T1: Writer W wants to write, blocks (waiting for zero readers)
    // T2: Reader 2 wants to read. Can it acquire?
    //     - On a reader-preferred implementation: YES, immediately
    //     - Reader 1 releases, but Writer W still waits (Reader 2 holds)
    // T3: Reader 3 wants to read
    //     - Can acquire while Writer W waits
    // This continues until there is a moment with NO active readers.
    //
    // The problem: if readers arrive frequently enough,
    // there is never a moment with zero readers.

    let lock_clone = Arc::clone(&lock);
    // Reader holds the lock.
    let r1 = lock_clone.read().unwrap();

    // Writer starts waiting.
    let lock_clone2 = Arc::clone(&lock);
    let writer_thread = thread::spawn(move || {
        println!("Writer attempting to acquire...");
        let _w = lock_clone2.write().unwrap();
        println!("Writer acquired!");
    });
    thread::sleep(std::time::Duration::from_millis(10));

    // Another reader arrives. On a reader-preferred implementation
    // it can acquire even though the writer is waiting.
    // NOTE: std's documentation warns that acquiring a second read lock
    // on a thread that already holds one may deadlock on platforms
    // that prioritize writers.
    let r2 = lock.read().unwrap();
    println!("Reader 2 acquired while writer waiting");
    // The writer is still blocked at this point.
    drop(r2);
    drop(r1);
    let _ = writer_thread.join();
}

std::sync::RwLock is often reader-preferred, allowing new readers while writers wait.
The parking_lot::RwLock Solution
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;

fn parking_lot_fairness() {
    let lock = Arc::new(RwLock::new(0));

    // parking_lot::RwLock uses a fair locking policy.
    //
    // Timeline:
    // T0: Reader 1 acquires the read lock
    // T1: Writer W wants to write, starts waiting
    //     - the lock marks "writer waiting"
    // T2: Reader 2 wants to read
    //     - sees "writer waiting"
    //     - BLOCKS (even though no writer holds the lock)
    // T3: Reader 1 releases
    // T4: Writer W acquires immediately (it was first in line)
    // T5: Writer W releases
    // T6: Reader 2 can now acquire
    //
    // This prevents writer starvation because
    // new readers wait for pending writers.

    let lock_clone = Arc::clone(&lock);
    // Reader holds the lock.
    let r1 = lock_clone.read();

    // Writer starts waiting.
    let lock_clone2 = Arc::clone(&lock);
    let writer_thread = thread::spawn(move || {
        println!("Writer attempting to acquire...");
        let _w = lock_clone2.write();
        println!("Writer acquired!");
    });
    thread::sleep(std::time::Duration::from_millis(10));

    // If another reader tried to acquire now, it would BLOCK because a
    // writer is waiting; parking_lot holds new readers back until the
    // waiting writer gets its turn.
    println!("Reader 2 would block here until the writer completes");

    drop(r1); // Release the read lock so the writer can proceed.
    let _ = writer_thread.join();
}

parking_lot::RwLock tracks waiting writers and blocks new readers to ensure fairness.
Demonstrating Writer Acquisition Timing
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

fn std_rwlock_timing() {
    use std::sync::RwLock;
    let lock = Arc::new(RwLock::new(0i32));
    let mut handles = vec![];

    // Continuous readers.
    for _ in 0..10 {
        let lock_clone = Arc::clone(&lock);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                let _r = lock_clone.read().unwrap();
                thread::sleep(Duration::from_micros(10));
            }
        }));
    }

    // Single writer.
    let lock_clone = Arc::clone(&lock);
    let start = Instant::now();
    thread::spawn(move || {
        let _w = lock_clone.write().unwrap();
        println!("std::sync::RwLock: Writer waited {:?}", start.elapsed());
    })
    .join()
    .unwrap();

    for handle in handles {
        let _ = handle.join();
    }
}

fn parking_lot_rwlock_timing() {
    use parking_lot::RwLock;
    let lock = Arc::new(RwLock::new(0i32));
    let mut handles = vec![];

    // Continuous readers.
    for _ in 0..10 {
        let lock_clone = Arc::clone(&lock);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                let _r = lock_clone.read();
                thread::sleep(Duration::from_micros(10));
            }
        }));
    }

    // Single writer.
    let lock_clone = Arc::clone(&lock);
    let start = Instant::now();
    thread::spawn(move || {
        let _w = lock_clone.write();
        println!("parking_lot::RwLock: Writer waited {:?}", start.elapsed());
    })
    .join()
    .unwrap();

    // parking_lot::RwLock typically shows faster writer acquisition
    // because writers do not starve waiting for readers.
    for handle in handles {
        let _ = handle.join();
    }
}

Under reader pressure, parking_lot writers typically acquire the lock sooner than std::sync writers.
Fairness Policy Implementation
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn fairness_implementation() {
    // parking_lot::RwLock maintains internal state:
    // - count of active readers
    // - flag for an active writer
    // - count of waiting writers
    // - queue of waiting threads (both readers and writers)
    //
    // The fairness algorithm, roughly:
    // 1. When a writer requests the write lock:
    //    - if no readers and no writer: acquire immediately
    //    - otherwise: block and increment the "waiting writers" count
    // 2. When a reader requests the read lock:
    //    - if no active writer and NO waiting writers: acquire immediately
    //    - if waiting writers exist: block (fairness to writers)
    //    - if a writer is active: block
    // 3. When a writer releases:
    //    - wake waiting threads in FIFO order
    //    - if the next is a writer: it acquires
    //    - if the next are readers: they all acquire together
    //
    // This ensures:
    // - writers are not starved by continuous readers
    // - roughly FIFO ordering
    // - readers still get parallelism when no writers are waiting

    let lock = Arc::new(RwLock::new(0));

    // Demonstrate the FIFO fairness: Reader 1 acquires first.
    let r1 = lock.read();
    println!("Reader 1 acquired");

    // Writer waits.
    let lock_clone = Arc::clone(&lock);
    let writer_thread = thread::spawn(move || {
        println!("Writer requesting lock...");
        let _w = lock_clone.write();
        println!("Writer acquired!");
    });
    thread::sleep(Duration::from_millis(10));

    // Reader 2 arrives and will wait for the writer.
    let lock_clone2 = Arc::clone(&lock);
    let reader2_thread = thread::spawn(move || {
        println!("Reader 2 requesting lock...");
        let _r = lock_clone2.read();
        println!("Reader 2 acquired!");
    });

    // Order: Reader 1 holds, then the writer acquires, then Reader 2.
    // parking_lot ensures the writer gets its turn before Reader 2.
    drop(r1);
    let _ = writer_thread.join();
    let _ = reader2_thread.join();
}

The fairness policy tracks waiting writers and blocks new readers until pending writers complete.
Comparison Summary
std::sync::RwLock characteristics:
- Reader-preferred on most platforms
- Writers can starve under continuous reader load
- Platform-dependent behavior
- Wraps OS primitives (may add overhead)
- Poisons the lock when a holder panics

parking_lot::RwLock characteristics:
- Fair locking policy; writers cannot starve
- Consistent cross-platform behavior
- Custom userspace implementation (often faster)
- No poisoning (lock remains usable after a panic)
- Compact lock state (an atomic word plus a shared wait table) versus an often larger OS rwlock

The implementations differ in fairness guarantees and platform consistency.
Reader-Preferred vs Fair Policies
Reader-preferred (std::sync::RwLock on most platforms):
- New readers can acquire while a writer waits
- Maximizes reader throughput
- Can starve writers
- Good for read-heavy workloads with infrequent writes

Fair (parking_lot::RwLock):
- New readers wait while a writer is waiting
- Writers get a fair chance
- May reduce reader throughput under writer pressure
- Good when writes must not be delayed indefinitely

Example: a cache with frequent reads and occasional writes. Reader-preferred may be fine if writes are infrequent, missing a write for a few seconds is acceptable, and read throughput is critical. Fair is better if writers must eventually run, stale reads are problematic, or write latency matters.

Choose based on whether reader throughput or writer fairness is more important.
No Poisoning in parking_lot
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
use std::thread;

fn poisoning_comparison() {
    // std::sync::RwLock has poisoning.
    let std_lock = Arc::new(StdRwLock::new(0));
    let lock_clone = Arc::clone(&std_lock);
    let handle = thread::spawn(move || {
        let _guard = lock_clone.write().unwrap();
        panic!("Writer panicked while holding lock!");
    });
    let _ = handle.join();

    // Now the lock is poisoned.
    match std_lock.read() {
        Ok(_) => println!("Got lock"),
        Err(poisoned) => {
            // The lock is poisoned, but we can still recover the guard.
            println!("Lock is poisoned: {}", poisoned);
            let _guard = poisoned.into_inner();
        }
    }

    // parking_lot::RwLock has no poisoning.
    let pl_lock = Arc::new(PlRwLock::new(0));
    let lock_clone = Arc::clone(&pl_lock);
    let handle = thread::spawn(move || {
        let _guard = lock_clone.write();
        panic!("Writer panicked while holding lock!");
    });
    let _ = handle.join();

    // The lock is NOT poisoned and is still usable.
    {
        let _guard = pl_lock.read();
        println!("parking_lot lock still works after panic");
    }

    // This is a trade-off:
    // - poisoning surfaces potential data corruption at the next access
    // - without poisoning, you must reason about consistency yourself
}

std::sync::RwLock poisons on panic; parking_lot::RwLock doesn't poison but continues working.
Performance Characteristics
std::sync::RwLock:
- Wraps OS primitives (e.g. pthread_rwlock on many Unix platforms)
- Contended acquisitions may require system calls
- Performance varies across platforms

parking_lot::RwLock:
- Userspace implementation built on atomic operations plus a wait queue
- Consistent cross-platform performance
- Often faster for uncontended and lightly contended locks (no syscall on the fast path)

Contended performance depends on the pattern: heavy reader contention behaves similarly, mixed reader/writer workloads often favor parking_lot (fairness avoids long writer stalls), and heavy writer contention is platform-dependent. Exact sizes vary by platform and Rust version, but parking_lot::RwLock keeps its state in a single machine word and tracks waiting threads in a global table, so its footprint is typically no larger than, and often smaller than, an OS-backed lock.

parking_lot often performs better thanks to its userspace implementation and lower overhead.
When Writer Starvation Matters
Scenarios where writer starvation matters:

1. Configuration updates: many threads read the config constantly; an occasional write applies an update. If the writer starves, updates never land.
2. Metrics/caching: continuous reads serve requests; a periodic write invalidates or refreshes the cache. Starvation means the cache stays stale indefinitely.
3. Feature flags: flags are checked on every request; a write flips a flag. Starvation means the change never takes effect.
4. Log flush/rotation: shared logging state is read frequently; an occasional flush or rotate needs the write lock. Starvation means the flush never happens.

Writer starvation may be acceptable when writes are truly optional, read throughput is critical, temporary inconsistency is fine, or the platform documents a fairness guarantee.

Writer starvation matters when delayed writes cause correctness or staleness issues.
Practical Example: Cache with Updates
use parking_lot::RwLock;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{Duration, Instant};

struct Cache<K, V> {
    data: RwLock<HashMap<K, V>>,
    last_update: RwLock<Instant>,
}

impl<K: Eq + std::hash::Hash + Clone, V: Clone> Cache<K, V> {
    fn new() -> Self {
        Cache {
            data: RwLock::new(HashMap::new()),
            last_update: RwLock::new(Instant::now()),
        }
    }

    fn get(&self, key: &K) -> Option<V> {
        // Read lock: many concurrent readers.
        self.data.read().get(key).cloned()
    }

    fn update(&self, updates: HashMap<K, V>) {
        // Write lock: this needs to happen in a timely fashion.
        let mut data = self.data.write();
        let mut last = self.last_update.write();
        data.extend(updates);
        *last = Instant::now();
    }

    fn last_updated(&self) -> Duration {
        self.last_update.read().elapsed()
    }
}

fn cache_example() {
    let cache = Arc::new(Cache::<String, String>::new());

    // Many readers run continuously.
    let mut reader_handles = vec![];
    for i in 0..10 {
        let cache_clone = Arc::clone(&cache);
        let handle = std::thread::spawn(move || loop {
            if let Some(_value) = cache_clone.get(&format!("key_{}", i % 5)) {
                // Use the cached value.
            }
            // Check whether the cache has gone stale.
            if cache_clone.last_updated() > Duration::from_secs(60) {
                println!("Thread {}: Cache is stale!", i);
            }
        });
        reader_handles.push(handle);
    }

    // Periodic updater.
    let cache_clone = Arc::clone(&cache);
    let _updater = std::thread::spawn(move || loop {
        std::thread::sleep(Duration::from_secs(30));
        // With parking_lot, this write WILL eventually happen,
        // even with continuous readers.
        let mut updates = HashMap::new();
        updates.insert("key_0".to_string(), "value_0".to_string());
        cache_clone.update(updates);
    });

    // With std::sync::RwLock, the update might starve indefinitely
    // under heavy continuous read load.
}

A cache that needs timely updates benefits from parking_lot's fairness.
Choosing Between Implementations
Use parking_lot::RwLock when:
- Writer starvation is a concern
- You need cross-platform consistency
- You want no poisoning behavior
- Uncontended lock performance matters
- You need fair (roughly FIFO) ordering

Use std::sync::RwLock when:
- You must use the standard library only (no extra dependencies)
- Poisoning behavior is desired
- Platform primitives are preferred
- Reader throughput is critical and writes can wait

Additional parking_lot benefits: smaller lock state, no allocation for the lock itself, const construction (though std::sync::RwLock::new is also const as of Rust 1.63), and consistent behavior across platforms. When in doubt for general use, parking_lot::RwLock is often the better default due to its fairness and consistent behavior.

Choose parking_lot for fairness; std::sync for standard library compatibility.
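parking_lot is an external crate, so using it means adding a dependency; a minimal Cargo.toml entry (the 0.12 version line is current at the time of writing; check crates.io for the latest):

```toml
[dependencies]
parking_lot = "0.12"
```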
Synthesis
Core difference:
- std::sync::RwLock: reader-preferred on most platforms. New readers can acquire while writers wait, so writers may starve under continuous reader pressure.
- parking_lot::RwLock: fair policy. New readers wait while writers are queued, so a waiting writer is guaranteed to eventually acquire.

Writer starvation mechanism:

std::sync::RwLock timeline:
R1 acquires -> W waits -> R2 acquires -> R1 releases -> R3 acquires -> ...
The writer never sees a moment with zero readers.

parking_lot::RwLock timeline:
R1 acquires -> W waits (marked as waiting) -> R2 blocked -> R1 releases -> W acquires -> ...
The writer gets its fair turn.

Key trade-offs:
| Aspect | std::sync::RwLock | parking_lot::RwLock |
|---|---|---|
| Writer fairness | May starve | Guaranteed |
| Reader throughput | Maximum | Reduced under writer wait |
| Poisoning | Yes | No |
| Platform consistency | Varies | Consistent |
| Memory overhead | Platform-dependent | Small (about one word) |
| Const construction | Yes (Rust 1.63+) | Yes |
| Uncontended speed | Often slower | Often faster |
Key insight: parking_lot::RwLock::read differs from std::sync::RwLock::read in how it handles waiting writers. parking_lot blocks new read acquisitions while a writer is waiting, preventing writer starvation; std::sync::RwLock on most platforms allows new readers to acquire even when writers are queued, which maximizes reader throughput but can delay writers indefinitely under continuous reader pressure. parking_lot's fairness policy ensures that write operations eventually complete, which is critical when delayed writes cause correctness issues (configuration updates, cache invalidation, feature flag changes). Beyond fairness, parking_lot::RwLock offers consistent cross-platform behavior, no poisoning, a smaller lock footprint, and often better performance for uncontended locks. Choose parking_lot when writer latency or fairness matters; choose std::sync when maximum reader throughput is the only concern and writes can tolerate indefinite delay.
