Rust walkthroughs
How does parking_lot::RwLock differ from std::sync::RwLock in terms of fairness and performance characteristics?

parking_lot::RwLock uses a fair, queue-based locking algorithm: waiting threads acquire the lock roughly in order of arrival, so writers cannot be starved indefinitely by a continuous stream of readers. std::sync::RwLock, by contrast, wraps an OS-provided primitive on many platforms, and some of those primitives prefer readers, allowing continuous readers to starve writers. parking_lot also avoids syscalls in the uncontended case: the fast path uses only atomic operations, and threads are parked in the kernel only when they actually need to wait. As a result, parking_lot::RwLock typically performs better for uncontended access and provides stronger fairness guarantees, though its latency under contention can differ from the standard library's platform-dependent implementation.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;

fn main() {
    // std::sync::RwLock
    let std_lock = StdRwLock::new(42);

    // Read lock (blocking)
    {
        let guard = std_lock.read().unwrap();
        println!("std read: {}", *guard);
    }

    // Write lock (blocking)
    {
        let mut guard = std_lock.write().unwrap();
        *guard += 1;
        println!("std write: {}", *guard);
    }

    // parking_lot::RwLock
    let pl_lock = PlRwLock::new(42);

    // Read lock (blocking)
    {
        let guard = pl_lock.read();
        println!("parking_lot read: {}", *guard);
    }

    // Write lock (blocking)
    {
        let mut guard = pl_lock.write();
        *guard += 1;
        println!("parking_lot write: {}", *guard);
    }
}

Both APIs are similar, but parking_lot doesn't return a Result for lock operations.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;

fn main() {
    // std::sync::RwLock has poisoning
    let _std_lock = StdRwLock::new(42);
    // If a thread panics while holding the lock, it becomes "poisoned"
    // Subsequent lock attempts return Err(PoisonError)

    // parking_lot::RwLock does NOT have poisoning
    let _pl_lock = PlRwLock::new(42);
    // Lock methods return guards directly, not Result
    // If a thread panics while holding the lock:
    // - The lock is released
    // - Other threads can still acquire it
    // - No poisoning mechanism

    // This means:
    // 1. Simpler API (no .unwrap() needed)
    // 2. Data may be in an inconsistent state after a panic
    // 3. Different trade-off: safety vs simplicity
}

parking_lot doesn't poison on panic; std does, requiring error handling.
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(0));

    // parking_lot uses fair scheduling:
    // writers are not starved by continuous readers.
    // When a writer is waiting:
    // - New readers queue up instead of acquiring immediately
    // - This prevents writer starvation

    // std::sync::RwLock behavior is platform-dependent:
    // - On Linux (glibc pthreads): may allow reader preference
    // - On Windows: different behavior (SRWLock)
    // - Writers can be starved by continuous readers

    // parking_lot fairness model:
    // 1. A thread arrives and gets a "ticket"
    // 2. Threads acquire the lock in ticket order
    // 3. Readers can share when it's their turn
    // 4. Writers get exclusive access when it's their turn
    let lock_clone = Arc::clone(&lock);
    let handle = thread::spawn(move || {
        // This will not be starved by readers
        let _guard = lock_clone.write();
    });
    handle.join().unwrap();
}

parking_lot guarantees fairness; std behavior varies by platform.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;

fn main() {
    // Uncontended lock performance
    //
    // std::sync::RwLock on many platforms:
    // - May use OS primitives (syscalls) even for uncontended cases
    // - Platform-dependent overhead
    // - read()/write() return Result, adding branch overhead
    //
    // parking_lot::RwLock:
    // - Fast path uses only atomic operations (no syscall)
    // - Only enters the kernel when contention is detected
    // - No Result wrapping means simpler code generation
    let std_lock = StdRwLock::new(0);
    let pl_lock = PlRwLock::new(0);

    // A benchmark would show:
    // - parking_lot faster for uncontended reads/writes
    // - parking_lot avoids syscall overhead in the common case

    // std::sync::RwLock read (uncontended)
    for _ in 0..1000 {
        let _guard = std_lock.read().unwrap();
    }
    // parking_lot::RwLock read (uncontended)
    for _ in 0..1000 {
        let _guard = pl_lock.read();
    }
}

parking_lot avoids syscalls in uncontended cases; std may not.
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;

fn main() {
    // Under contention, behavior differs.
    //
    // parking_lot uses a "parking" mechanism:
    // - Contended threads are put to sleep (parked)
    // - Woken up when the lock becomes available
    // - A fair queue ensures ordering
    //
    // std::sync::RwLock:
    // - Platform-dependent parking mechanism
    // - May use the OS scheduler
    // - Fairness depends on the OS implementation
    //
    // When to prefer parking_lot:
    // - Need consistent fairness across platforms
    // - Want to avoid writer starvation
    // - High-contention scenarios
    //
    // When std::sync::RwLock might be fine:
    // - Low-contention workloads
    // - Platform-specific behavior is acceptable
    // - Need poisoning behavior

    // parking_lot contention handling:
    let lock = Arc::new(RwLock::new(Vec::new()));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let lock = Arc::clone(&lock);
            thread::spawn(move || {
                for _ in 0..100 {
                    // Fair scheduling: each thread gets a fair chance
                    let mut guard = lock.write();
                    guard.push(1);
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}

parking_lot uses fair parking; std relies on OS scheduler behavior.
use parking_lot::{RwLock, RwLockWriteGuard};
use std::sync::RwLock as StdRwLock;

fn main() {
    // parking_lot supports downgrading a write lock to a read lock
    let pl_lock = RwLock::new(42);
    {
        let write_guard = pl_lock.write();
        // ... write operations ...
        // Downgrade without releasing the lock
        let read_guard = RwLockWriteGuard::downgrade(write_guard);
        // Now we have a read guard:
        // - Other readers can now acquire
        // - But no writer can acquire until we're done
        println!("Downgraded value: {}", *read_guard);
    }

    // std::sync::RwLock does NOT support downgrade.
    // You must release the write lock and acquire a read lock separately.
    let std_lock = StdRwLock::new(42);
    {
        let mut write_guard = std_lock.write().unwrap();
        *write_guard += 1;
        // Must release the write lock
        drop(write_guard);
        // Then acquire a read lock
        let read_guard = std_lock.read().unwrap();
        println!("Value: {}", *read_guard);
    }
    // The gap between release and acquire is a race window:
    // another writer could acquire in between.
}

parking_lot supports downgrading; std requires releasing and re-acquiring.
use parking_lot::{RwLock, RwLockUpgradableReadGuard};

fn main() {
    // parking_lot has upgradable read locks
    let lock = RwLock::new(0);

    // Upgradable read: can read now, upgrade to write later
    let upgradable = lock.upgradable_read();

    // Read the current value
    println!("Current: {}", *upgradable);

    // Decide if we need to write
    if *upgradable < 10 {
        // Upgrade to a write lock
        let mut write_guard = RwLockUpgradableReadGuard::upgrade(upgradable);
        *write_guard = 10;
        println!("Upgraded and wrote: {}", *write_guard);
    }

    // std::sync::RwLock does NOT have upgradable reads.
    // You'd need to:
    // 1. Read with a read lock
    // 2. Release the read lock
    // 3. Acquire a write lock
    // 4. Re-read (the value might have changed!)
    // 5. Write
    // This pattern is error-prone and inefficient.
}

parking_lot supports upgradable reads; std doesn't have this feature.
use parking_lot::RwLock;
use std::sync::RwLock as StdRwLock;

fn main() {
    // Both support try_lock operations
    let pl_lock = RwLock::new(42);
    let std_lock = StdRwLock::new(42);

    // parking_lot: returns Option<Guard>
    if let Some(guard) = pl_lock.try_read() {
        println!("Got read lock: {}", *guard);
    } else {
        println!("Read lock not available");
    }
    if let Some(mut guard) = pl_lock.try_write() {
        *guard += 1;
        println!("Got write lock: {}", *guard);
    } else {
        println!("Write lock not available");
    }

    // std::sync::RwLock: returns TryLockResult
    if let Ok(guard) = std_lock.try_read() {
        println!("Got std read lock: {}", *guard);
    }
    if let Ok(mut guard) = std_lock.try_write() {
        *guard += 1;
        println!("Got std write lock: {}", *guard);
    }

    // Try upgradable (parking_lot only)
    if let Some(guard) = pl_lock.try_upgradable_read() {
        println!("Got upgradable read: {}", *guard);
        // Can later upgrade with RwLockUpgradableReadGuard::upgrade(guard)
    }
}

parking_lot returns Option for try operations; std returns Result.
use std::mem::size_of;

fn main() {
    // Memory comparison
    use parking_lot::RwLock as PlRwLock;
    use std::sync::RwLock as StdRwLock;

    println!("parking_lot::RwLock<usize>: {} bytes", size_of::<PlRwLock<usize>>());
    println!("std::sync::RwLock<usize>: {} bytes", size_of::<StdRwLock<usize>>());

    // parking_lot typically has a smaller footprint:
    // - No poisoning state
    // - Uses atomic state instead of a heavy OS primitive
    // - Single word of state for the lock status
    //
    // std::sync::RwLock wraps a platform primitive:
    // - Historically pthread_rwlock_t on Unix (larger); newer std
    //   versions use a compact futex-based lock on Linux
    // - SRWLock on Windows (smaller)
    // - May have additional bookkeeping
    //
    // On Linux, pthread_rwlock_t can be ~56 bytes;
    // parking_lot::RwLock is typically a single word plus the data.
    //
    // This matters for:
    // - Many small RwLock-protected values
    // - Memory-constrained environments
}

parking_lot typically has lower memory overhead.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;

fn main() {
    // std::sync::RwLock behavior varies by platform:
    //
    // Linux (glibc pthreads):
    // - Writer preference on some versions, reader preference on others
    // - Behavior depends on the glibc version and kernel
    //
    // macOS:
    // - Different pthread implementation
    // - Different fairness characteristics
    //
    // Windows:
    // - Uses SRWLock (Slim Reader-Writer Lock)
    // - Different performance characteristics
    //
    // parking_lot::RwLock:
    // - Same behavior on all platforms
    // - Fair, ticket-based scheduling
    // - Predictable performance characteristics
    //
    // When choosing:
    // - Cross-platform code: parking_lot gives consistent behavior
    // - Platform-specific tuning: std might leverage OS optimizations
    // - Need predictable fairness: parking_lot guarantees it
    let _std_lock = StdRwLock::new(0); // Platform-dependent
    let _pl_lock = PlRwLock::new(0); // Consistent everywhere
}

parking_lot behavior is consistent across platforms; std varies.
fn main() {
    // Feature comparison:
    //
    // std::sync::RwLock:
    // ✓ Poisoning on panic
    // ✓ Platform-native implementation
    // ✗ No upgradable reads
    // ✗ No lock downgrade
    // ✗ Platform-dependent fairness
    // ✗ Result-based API (unwrap needed)
    //
    // parking_lot::RwLock:
    // ✓ Fair, ticket-based scheduling
    // ✓ Upgradable read locks
    // ✓ Lock downgrade
    // ✓ Consistent cross-platform behavior
    // ✓ Smaller memory footprint
    // ✓ No syscall for the uncontended case
    // ✓ Simpler API (no Result)
    // ✗ No poisoning
    // ✗ Additional dependency
    //
    // Use std::sync::RwLock when:
    // - You want poisoning behavior for panic recovery
    // - You rely on platform-specific optimizations
    // - You don't need upgradable/downgrade features
    // - Minimal dependencies are preferred
    //
    // Use parking_lot::RwLock when:
    // - You need fair scheduling to prevent starvation
    // - You need upgradable reads or downgrades
    // - You want consistent cross-platform behavior
    // - Performance matters for the uncontended case
    // - You need predictable behavior under contention
}

The choice depends on specific requirements: fairness, features, and portability.
use parking_lot::{RwLock, RwLockUpgradableReadGuard};
use std::collections::HashMap;
use std::sync::Arc;

struct Cache<K, V> {
    data: RwLock<HashMap<K, V>>,
}

impl<K: std::hash::Hash + Eq + Clone, V: Clone> Cache<K, V> {
    fn new() -> Self {
        Cache {
            data: RwLock::new(HashMap::new()),
        }
    }

    fn get(&self, key: &K) -> Option<V> {
        // Read lock: multiple readers can access simultaneously
        let guard = self.data.read();
        guard.get(key).cloned()
    }

    fn insert(&self, key: K, value: V) {
        // Write lock: exclusive access
        let mut guard = self.data.write();
        guard.insert(key, value);
    }

    fn get_or_insert<F>(&self, key: K, f: F) -> V
    where
        F: FnOnce() -> V,
    {
        // Try a plain read lock first
        {
            let read_guard = self.data.read();
            if let Some(v) = read_guard.get(&key) {
                return v.clone();
            }
        }
        // Need to write: use an upgradable read
        let upgradable = self.data.upgradable_read();
        // Check again (another thread might have inserted)
        if let Some(v) = upgradable.get(&key) {
            return v.clone();
        }
        // Upgrade to a write lock and insert
        let mut write_guard = RwLockUpgradableReadGuard::upgrade(upgradable);
        let value = f();
        write_guard.insert(key, value.clone());
        value
    }
}

fn main() {
    let cache: Arc<Cache<i32, String>> = Arc::new(Cache::new());
    // Multiple readers can access simultaneously
    let readers: Vec<_> = (0..4)
        .map(|i| {
            let cache = Arc::clone(&cache);
            std::thread::spawn(move || {
                for j in 0..100 {
                    cache.get(&(i * 100 + j));
                }
            })
        })
        .collect();
    for r in readers {
        r.join().unwrap();
    }
}

parking_lot's upgradable reads are valuable for read-heavy caches with occasional writes.
Fairness differences:
- parking_lot: Fair, ticket-based scheduling; writers are not starved
- std: Platform-dependent; some platforms allow reader preference

Performance characteristics:
- parking_lot: Atomic operations for the uncontended case; no syscall
- std: May use syscalls even for uncontended locks
- parking_lot: Smaller memory footprint
- std: May leverage platform-specific optimizations

API differences:
- parking_lot: Returns guards directly (no Result)
- std: Returns LockResult (poisoning support)
- parking_lot: try_read and try_write return Option
- std: Returns TryLockResult for try operations

Unique features:
- parking_lot: Upgradable read locks (read → write)
- parking_lot: Lock downgrade (write → read)
- std: Poisoning for panic detection
- std: No additional dependencies

When to use parking_lot:
- You need fair scheduling to prevent writer starvation
- You need upgradable reads or lock downgrade
- You want consistent cross-platform behavior
- Uncontended performance matters

When to use std:
- You want poisoning behavior for panic recovery
- You rely on platform-specific optimizations
- You prefer minimal dependencies
Key insight: parking_lot::RwLock is designed for predictable, fair behavior across all platforms, while std::sync::RwLock is a thin wrapper around OS-provided primitives with varying semantics. The fair scheduling of parking_lot means that when a writer is waiting, new readers must wait too, preventing the writer starvation that can occur with reader-preference locks. The upgradable read feature is particularly valuable for "check-then-modify" patterns where you want to read and then conditionally write without the race window of releasing the read lock before acquiring the write lock. The lack of poisoning in parking_lot is a deliberate trade-off: a simpler API and no poisoning overhead, but no automatic detection of data left inconsistent by a panic.