Rust walkthroughs
How does parking_lot::RwLock differ from std::sync::RwLock in terms of fairness and potential for writer starvation?

parking_lot::RwLock uses a fair locking policy by default, ensuring that writers waiting for the lock eventually acquire it even in the presence of continuous readers, while std::sync::RwLock provides no fairness guarantees and can starve writers indefinitely when readers continuously acquire and release the lock. The standard library's RwLock allows new readers to acquire the lock even while a writer is waiting, which can prevent the writer from ever gaining access if the reader arrival rate exceeds reader hold time. parking_lot::RwLock prevents this by blocking new readers when a writer is waiting, ensuring the writer acquires the lock once existing readers release it. This fairness comes at a potential throughput cost in read-heavy workloads, as readers must wait for queued writers even when the lock could technically be acquired for reading.
```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(42);
    // Multiple readers can hold the lock simultaneously
    {
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap();
        println!("Readers: {}, {}", *r1, *r2);
    }
    // Only one writer can hold the lock
    {
        let mut w = lock.write().unwrap();
        *w = 100;
    }
}
```

Both std::sync::RwLock and parking_lot::RwLock provide the same basic API.
```rust
use parking_lot::RwLock;

fn main() {
    let lock = RwLock::new(42);
    // Same API, but the guard is returned directly (no Result)
    {
        let r1 = lock.read();
        let r2 = lock.read();
        println!("Readers: {}, {}", *r1, *r2);
    }
    {
        let mut w = lock.write();
        *w = 100;
    }
}
```

parking_lot::RwLock doesn't return Result, as it doesn't use poisoning.
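The parking_lot examples throughout this walkthrough assume the crate is declared in Cargo.toml; a minimal entry (version current as of writing, adjust as needed):

```toml
[dependencies]
parking_lot = "0.12"
```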
```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let lock = Arc::new(RwLock::new(0));
    // Writer thread
    let _writer = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            // write() blocks until no readers hold the lock
            let mut guard = lock.write().unwrap();
            *guard += 1;
        }
    });
    // Continuous readers
    for _ in 0..100 {
        let lock = Arc::clone(&lock);
        thread::spawn(move || loop {
            if let Ok(_guard) = lock.read() {
                // Readers can continuously acquire the lock;
                // the writer keeps waiting
            }
        });
    }
    // With std::sync::RwLock, if readers arrive frequently enough,
    // the writer may NEVER acquire the lock.
    // This is writer starvation.
    thread::sleep(Duration::from_millis(10));
}
```

With std::sync::RwLock, continuous readers can block a writer indefinitely.
```rust
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(0));
    // Writer thread
    let writer = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            // While this writer waits, new readers are blocked
            let mut guard = lock.write();
            *guard += 1;
        }
    });
    // Readers
    for _ in 0..100 {
        let lock = Arc::clone(&lock);
        thread::spawn(move || {
            // If the writer is waiting, this read blocks until it finishes
            let _guard = lock.read();
        });
    }
    // parking_lot ensures the writer eventually acquires the lock
    writer.join().unwrap();
}
```

parking_lot::RwLock blocks new readers when a writer is waiting.
std::sync::RwLock behavior:

- Readers don't block other readers
- An active writer blocks all new readers AND writers
- BUT: new readers can acquire the lock even while a writer is waiting
- This is "reader-preferred" (read-preferring)

Timeline example:

- T1: Reader acquires lock
- T2: Writer tries to acquire: blocks (waits)
- T3: New reader tries to acquire: succeeds (bypasses the waiting writer)
- T4: Another reader tries: succeeds
- ...and so on: the writer may wait indefinitely if readers keep arriving

Standard RwLock favors readers over waiting writers.
parking_lot::RwLock behavior:

- Readers don't block other readers (until a writer waits)
- An active writer blocks all new readers AND writers
- When a writer is waiting, new readers BLOCK (wait for the writer)
- This prevents writer starvation

Timeline example:

- T1: Reader acquires lock
- T2: Writer tries to acquire: blocks (waits)
- T3: New reader tries to acquire: blocks (waits for the writer)
- T4: Original reader releases lock
- T5: Writer acquires lock (waiting writers get priority)
- T6: After the writer finishes, waiting readers proceed

parking_lot::RwLock is fair: waiting writers block new readers.
```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::RwLock;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Test std::sync::RwLock under continuous read pressure
    let std_lock = RwLock::new(0u64);
    let read_count = AtomicU64::new(0);
    thread::scope(|s| {
        // Start continuous readers
        for _ in 0..4 {
            s.spawn(|| {
                for _ in 0..100_000 {
                    if std_lock.read().is_ok() {
                        read_count.fetch_add(1, Ordering::Relaxed);
                    }
                }
            });
        }
        // Try to write for 5 seconds
        let start = Instant::now();
        let mut writes = 0u64;
        while start.elapsed() < Duration::from_secs(5) {
            if let Ok(mut guard) = std_lock.write() {
                *guard += 1;
                writes += 1;
            }
        }
        println!("std::sync::RwLock - Writes: {}", writes);
    });
    println!("std::sync::RwLock - Reads: {}", read_count.load(Ordering::Relaxed));
    // While the readers are active, the writer may acquire the lock
    // only rarely despite trying continuously.
    // parking_lot::RwLock would give the writer fair access.
}
```

In read-heavy workloads, std::sync::RwLock can starve writers.
Fair locking (parking_lot) trade-offs:

- PRO: Writers are guaranteed eventual access
- CON: Readers may wait even when the lock is read-available
- CON: Lower throughput in read-heavy workloads

Reader-preferred (std::sync) trade-offs:

- PRO: Higher read throughput
- PRO: Readers never wait for other readers
- CON: Writers can starve indefinitely

Example scenario: 99% reads, 1% writes. With std::sync::RwLock you get very high read throughput; with parking_lot::RwLock readers occasionally wait for a writer.

Fairness reduces read throughput but prevents starvation.
```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};

fn main() {
    let lock = RwLock::new(vec![1, 2, 3]);
    // parking_lot supports upgradable read locks
    let upgradable = lock.upgradable_read();
    // Check a condition under the read lock
    if upgradable.len() < 5 {
        // Upgrade to a write lock without releasing
        let mut write = RwLockUpgradableReadGuard::upgrade(upgradable);
        write.push(4);
    }
    // std::sync::RwLock does NOT support upgradable reads:
    // you'd have to release the read lock, then acquire the write lock
}
```

An upgradable read can observe the data, then upgrade to a write lock atomically: there is no race window between releasing the read lock and acquiring the write lock.

parking_lot::RwLock supports upgradable reads; the standard library doesn't.
```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};
use std::collections::HashMap;

fn main() {
    let lock = RwLock::new(HashMap::new());
    // Common pattern: check, then update
    {
        let r = lock.upgradable_read();
        if !r.contains_key("key") {
            let mut w = RwLockUpgradableReadGuard::upgrade(r);
            w.insert("key".to_string(), "value".to_string());
        }
    }
    // Without upgradable reads:
    // {
    //     let r = lock.read();
    //     if !r.contains_key("key") {
    //         drop(r); // Release read
    //         let mut w = lock.write(); // Race window here!
    //         w.insert("key".to_string(), "value".to_string());
    //     }
    // }
}
```

Upgradable reads prevent race conditions in check-then-modify patterns.
```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(42));
    // If a thread panics while holding a write lock...
    let handle = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _guard = lock.write().unwrap();
            panic!("oops");
        }
    });
    handle.join().unwrap_err();
    // ...the lock is now "poisoned"
    match lock.read() {
        Ok(guard) => println!("Got lock: {}", *guard),
        Err(poisoned) => {
            // The data is still accessible via the poison error
            let guard = poisoned.into_inner();
            println!("Lock was poisoned, data: {}", *guard);
        }
    }
}
```

std::sync::RwLock uses poisoning to signal a panic while the lock was held.
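When poisoning should simply be ignored because the data is still consistent, a common idiom with std::sync::RwLock is to extract the guard from the PoisonError:

```rust
use std::sync::{Arc, PoisonError, RwLock};
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(42));
    // Poison the lock by panicking while holding the write guard
    let handle = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _guard = lock.write().unwrap();
            panic!("poison it");
        }
    });
    let _ = handle.join();
    // Treat poisoning as recoverable and take the guard anyway
    let guard = lock.read().unwrap_or_else(PoisonError::into_inner);
    assert_eq!(*guard, 42);
}
```

This keeps call sites as terse as parking_lot's while still opting in to std's poisoning semantics.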
```rust
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(42));
    // parking_lot does NOT use poisoning
    let handle = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _guard = lock.write();
            panic!("oops");
        }
    });
    handle.join().unwrap_err();
    // The lock is NOT poisoned; it was simply released during unwinding
    let guard = lock.read(); // No Result, just the guard
    println!("Data: {}", *guard);
}
```

parking_lot::RwLock doesn't poison on panic, simplifying error handling.
```rust
use parking_lot::RwLock as PlRwLock;
use std::sync::RwLock as StdRwLock;

fn main() {
    let std_lock = StdRwLock::new(42);
    let pl_lock = PlRwLock::new(42);
    // std::sync::RwLock returns a Result
    match std_lock.read() {
        Ok(guard) => println!("std: {}", *guard),
        Err(_poisoned) => println!("std: poisoned"),
    }
    // parking_lot::RwLock returns the guard directly
    let pl_guard = pl_lock.read(); // Just RwLockReadGuard
    println!("parking_lot: {}", *pl_guard);
}
```

parking_lot doesn't return Result, avoiding unwrap boilerplate.
std::sync::RwLock implementation (conceptual):

- Maintains a count of active readers
- A writer waits for the count to reach 0
- New readers can increment the count even while a writer is waiting
- Readers are only blocked while a writer is ACTIVE, not while one is WAITING

parking_lot::RwLock implementation (conceptual):

- Maintains a queue of waiting threads
- When a writer enters the queue, it marks that new readers should wait
- New readers check for waiting writers before acquiring
- When the last reader releases, the writer acquires
- After the writer releases, waiting readers acquire

The key difference: does a waiting writer block new readers?
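The write-preferring policy described above can be sketched with a Mutex and Condvar. This is a simplified, hypothetical model (names like FairRwLock are invented for illustration; no guards, no poisoning), not parking_lot's actual parking-lot-based implementation:

```rust
use std::sync::{Condvar, Mutex};

// Shared lock state: the counts the conceptual description talks about
struct State {
    active_readers: usize,
    writer_active: bool,
    writers_waiting: usize,
}

struct FairRwLock {
    state: Mutex<State>,
    cond: Condvar,
}

impl FairRwLock {
    fn new() -> Self {
        FairRwLock {
            state: Mutex::new(State {
                active_readers: 0,
                writer_active: false,
                writers_waiting: 0,
            }),
            cond: Condvar::new(),
        }
    }

    fn lock_read(&self) {
        let mut s = self.state.lock().unwrap();
        // Key line: new readers also wait while a writer is *waiting*,
        // not just while one is active. This is what prevents starvation.
        while s.writer_active || s.writers_waiting > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.active_readers += 1;
    }

    fn unlock_read(&self) {
        let mut s = self.state.lock().unwrap();
        s.active_readers -= 1;
        if s.active_readers == 0 {
            self.cond.notify_all(); // wake a waiting writer
        }
    }

    fn lock_write(&self) {
        let mut s = self.state.lock().unwrap();
        s.writers_waiting += 1; // from here on, new readers block
        while s.writer_active || s.active_readers > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.writers_waiting -= 1;
        s.writer_active = true;
    }

    fn unlock_write(&self) {
        let mut s = self.state.lock().unwrap();
        s.writer_active = false;
        self.cond.notify_all(); // wake waiting readers and writers
    }
}

fn main() {
    let l = FairRwLock::new();
    l.lock_read();
    l.unlock_read();
    l.lock_write();
    l.unlock_write();
    println!("ok");
}
```

A reader-preferring lock would drop the `s.writers_waiting > 0` check in `lock_read`; that single condition is the entire fairness difference.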
In practice, performance depends on workload:

- Read-heavy, low contention: both perform similarly; std::sync may have a slight edge (no fairness overhead)
- Read-heavy, high contention: std::sync lets more readers through; parking_lot may slow readers waiting for writers
- Write-heavy: both perform similarly; parking_lot ensures writes happen fairly
- Mixed read-write, high contention: parking_lot provides more predictable latency; std::sync may starve writers (unpredictable write latency)

parking_lot also typically has:

- A smaller memory footprint
- Faster uncontended operations
- Better performance on many-core systems

Performance depends on contention patterns and the read/write ratio.
Writer starvation matters when:

1. Write operations are critical: configuration updates, status changes, leader election
2. Write latency affects system correctness: timeouts on writes could cause failures, and state changes may have deadlines
3. There is high read concurrency with frequent writes: many reader threads, and writers need guaranteed progress

Example: configuration reload.

```rust
use parking_lot::RwLock;

struct Config {
    value: String,
}

// Stand-in for whatever actually loads the new configuration
fn load_new_config() -> Config {
    Config { value: "reloaded".to_string() }
}

fn reload_config(lock: &RwLock<Config>) {
    // This MUST complete eventually; continuous reads
    // can't be allowed to block it forever
    let mut config = lock.write();
    *config = load_new_config();
}

fn main() {
    let lock = RwLock::new(Config { value: "initial".to_string() });
    reload_config(&lock);
    println!("config: {}", lock.read().value);
}
```

Use parking_lot::RwLock when writes must eventually complete.
Reader throughput matters more when:

1. Reads vastly outnumber writes: logging systems, metrics collection, cache lookups
2. Writes are rare and not time-sensitive: occasional configuration updates, periodic data refresh
3. Read latency is critical: low-latency read paths, high-frequency lookups

In these cases, std::sync::RwLock may be acceptable. But consider: will continuous reads ever block a write completely?

Use std::sync::RwLock when read throughput is critical and writes can tolerate delay.
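As a sketch of the read-heavy case, consider a hypothetical in-memory cache where lookups vastly outnumber a single rare refresh. With this little write pressure, std::sync::RwLock serves readers concurrently and the one write still gets through:

```rust
use std::collections::HashMap;
use std::sync::RwLock;
use std::thread;

fn main() {
    // Read-mostly cache shared across threads (names are illustrative)
    let cache = RwLock::new(HashMap::from([("a", 1), ("b", 2)]));
    thread::scope(|s| {
        // Many lookup threads: readers never block each other
        for _ in 0..4 {
            s.spawn(|| {
                for _ in 0..1_000 {
                    let map = cache.read().unwrap();
                    let _ = map.get("a"); // cheap, concurrent lookup
                }
            });
        }
        // One infrequent write; with this few readers it completes promptly
        cache.write().unwrap().insert("c", 3);
    });
    assert_eq!(cache.read().unwrap().len(), 3);
}
```

The question to ask of such a design is whether the reader pattern can ever become dense enough that the rare write stops making progress; if yes, prefer the fair lock.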
Use parking_lot::RwLock when:

- Fairness is required (writers must complete)
- You need upgradable read locks
- You want a simpler API (no Result, no poisoning)
- You need predictable write latency
- There is high contention with mixed reads and writes

Use std::sync::RwLock when:

- The workload is read-heavy with rare writes
- Maximum read throughput is critical
- Writer starvation is acceptable or impossible
- You want to stay on the standard library (no dependencies)
- Poisoning behavior is desired

Choose based on fairness requirements and workload characteristics.
| Feature | std::sync::RwLock | parking_lot::RwLock |
|---------|---------------------|----------------------|
| Fairness | No guarantee | Writers get fair access |
| Writer starvation | Possible | Prevented |
| Upgradable reads | No | Yes |
| Poisoning | Yes | No |
| Return type | Result<Guard, PoisonError> | Guard directly |
| Reader overhead | Lower under contention | Higher (wait for writers) |
| Write latency | Unbounded | Bounded (fair) |
The difference between std::sync::RwLock and parking_lot::RwLock centers on fairness:
Standard library RwLock is reader-preferred. When a writer waits, new readers can still acquire the lock. This maximizes read throughput (readers don't wait for waiting writers) but can starve writers indefinitely if the reader arrival rate exceeds reader hold time. If a continuous stream of readers arrives, the writer's write() call may never return.
parking_lot::RwLock is fair by default. When a writer waits, new readers are blocked until the writer acquires and releases the lock. This guarantees writer progress (no starvation) but reduces read throughput, because readers must wait for queued writers even when the lock could technically be acquired for reading.
Key insight: Fairness is a trade-off, not purely better. In read-heavy workloads where writes are rare and time-insensitive, the standard library's reader-preference can achieve higher throughput. But for correctness-critical writes, or when write latency must be bounded, parking_lot::RwLock ensures writers make progress. The upgradable read feature in parking_lot is also valuable for atomic check-then-modify patterns that would otherwise need a write lock for the entire operation.
Consider your workload: if you have high read concurrency and writes must eventually happen, use parking_lot::RwLock. If reads vastly dominate and write latency is acceptable, std::sync::RwLock may suffice, but verify that writes can't be starved indefinitely by your specific reader pattern.