What is the difference between parking_lot::RwLock and std::sync::RwLock regarding writer starvation?

Writer starvation occurs in reader-writer locks when continuous reader acquisitions prevent writers from ever obtaining the lock. The standard library's std::sync::RwLock can suffer from this problem on some platforms because new readers may acquire the lock even while a writer is waiting, potentially blocking writers indefinitely. parking_lot::RwLock addresses this with fair scheduling: once a writer begins waiting, new readers must wait behind it, ensuring the writer eventually acquires the lock. This fairness guarantee comes with slightly different performance characteristics and is particularly important in read-heavy workloads where writers would otherwise starve.

Basic RwLock Usage Comparison

use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as ParkingRwLock;
 
fn basic_usage() {
    // std::sync::RwLock
    let std_lock = StdRwLock::new(42);
    {
        let read_guard = std_lock.read().unwrap();
        println!("std value: {}", *read_guard);
    }
    {
        let mut write_guard = std_lock.write().unwrap();
        *write_guard += 1;
    }
    
    // parking_lot::RwLock
    let parking_lock = ParkingRwLock::new(42);
    {
        let read_guard = parking_lock.read();
        println!("parking_lot value: {}", *read_guard);
    }
    {
        let mut write_guard = parking_lock.write();
        *write_guard += 1;
    }
}

Both APIs are similar, but parking_lot returns guards directly without Result since it doesn't poison.

The Writer Starvation Problem

use std::sync::RwLock;
 
fn demonstrate_potential_starvation() {
    let _lock = RwLock::new(0);
    // Note: RwLock does not implement Clone; to share it across threads,
    // wrap it in Arc, as the runnable examples below do.
    
    // Scenario: continuous readers can prevent a writer from acquiring the lock.
    // Reader 1 acquires the read lock.
    // Writer tries to acquire the write lock, must wait.
    // Reader 2 tries to acquire the read lock.
    // On std::sync::RwLock (some platforms), Reader 2 might succeed.
    // Reader 3 arrives while Reader 2 holds the lock.
    // Writer still waiting...
    // This can continue indefinitely.
}

Writer starvation occurs when new readers continuously acquire the lock while a writer waits.

std::sync::RwLock Behavior

use std::sync::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
 
fn std_rwlock_behavior() {
    let lock = Arc::new(RwLock::new(0));
    let mut reader_handles = Vec::new();
    
    // Start multiple readers
    for _ in 0..5 {
        let lock = Arc::clone(&lock);
        let handle = thread::spawn(move || {
            for _ in 0..1000 {
                let _guard = lock.read().unwrap();
                // Continuous reader acquisitions
            }
        });
        reader_handles.push(handle);
    }
    
    // Writer attempts to acquire
    let writer_lock = Arc::clone(&lock);
    let writer_start = Instant::now();
    let writer_handle = thread::spawn(move || {
        let _guard = writer_lock.write().unwrap();
        writer_start.elapsed()
    });
    
    // On some platforms, writer might wait indefinitely
    // as new readers can acquire while writer is waiting
    
    for handle in reader_handles {
        handle.join().unwrap();
    }
    
    let writer_wait = writer_handle.join().unwrap();
    println!("Writer waited: {:?}", writer_wait);
}

The standard library's RwLock behavior varies by platform—some implementations allow readers to jump the queue.
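If you must stay on std::sync::RwLock but need to bound a writer's wait, one workaround is to retry try_write against a deadline instead of blocking in write(). The write_with_deadline helper below is illustrative, not a std API:

```rust
use std::sync::RwLock;
use std::thread;
use std::time::{Duration, Instant};

// Illustrative helper: retry try_write until a deadline instead of
// blocking in write(), so a starved writer fails loudly rather than
// waiting forever.
fn write_with_deadline<T>(
    lock: &RwLock<T>,
    deadline: Duration,
    f: impl FnOnce(&mut T),
) -> bool {
    let start = Instant::now();
    loop {
        if let Ok(mut guard) = lock.try_write() {
            f(&mut guard);
            return true;
        }
        if start.elapsed() >= deadline {
            return false; // give up instead of starving silently
        }
        thread::yield_now(); // brief backoff before retrying
    }
}

fn main() {
    let lock = RwLock::new(0);
    let ok = write_with_deadline(&lock, Duration::from_millis(100), |v| *v += 1);
    assert!(ok);
    assert_eq!(*lock.read().unwrap(), 1);
}
```

This trades a blocking wait for polling, so it costs CPU under contention; it is a stopgap, not a substitute for a fair lock.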

parking_lot::RwLock Fairness

use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
 
fn parking_lot_fairness() {
    let lock = Arc::new(RwLock::new(0));
    let mut reader_handles = Vec::new();
    
    // Start continuous readers
    for _ in 0..5 {
        let lock = Arc::clone(&lock);
        let handle = thread::spawn(move || {
            for _ in 0..1000 {
                let _guard = lock.read();
                // Readers that arrive after writer is waiting
                // must wait behind the writer
            }
        });
        reader_handles.push(handle);
    }
    
    // Writer starts waiting
    let writer_lock = Arc::clone(&lock);
    let writer_start = Instant::now();
    let writer_handle = thread::spawn(move || {
        let _guard = writer_lock.write();
        writer_start.elapsed()
    });
    
    // New readers after writer starts waiting
    // will queue BEHIND the writer
    
    for handle in reader_handles {
        handle.join().unwrap();
    }
    
    let writer_wait = writer_handle.join().unwrap();
    println!("Writer waited: {:?}", writer_wait);
}

parking_lot::RwLock ensures writers get fair access by blocking new readers when a writer is waiting.

Demonstrating the Difference

use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as ParkingRwLock;
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
 
fn starvation_comparison() {
    // Test std::sync::RwLock
    let std_lock = Arc::new(StdRwLock::new(0u64));
    
    // Acquire read lock
    let _read1 = std_lock.read().unwrap();
    
    // In another thread, writer waits
    let std_clone = Arc::clone(&std_lock);
    let writer_handle = thread::spawn(move || {
        let _write = std_clone.write().unwrap();
        println!("std writer acquired");
    });
    
    // Allow writer to start waiting
    thread::sleep(Duration::from_millis(10));
    
    // On platforms where new readers may jump the queue, this read
    // succeeds while the writer waits. (On a write-preferring platform,
    // this same call could block behind the writer and deadlock this
    // toy example, since _read1 is still held.)
    let _read2 = std_lock.read().unwrap();
    println!("std reader acquired while writer waiting");
    
    drop(_read1);
    drop(_read2);
    writer_handle.join().unwrap();
    
    // Test parking_lot::RwLock
    let park_lock = Arc::new(ParkingRwLock::new(0u64));
    
    // Acquire read lock
    let _read1 = park_lock.read();
    
    // Writer starts waiting
    let park_clone = Arc::clone(&park_lock);
    let writer_handle = thread::spawn(move || {
        let _write = park_clone.write();
        println!("parking_lot writer acquired");
    });
    
    // Allow writer to start waiting
    thread::sleep(Duration::from_millis(10));
    
    // New reader must wait behind writer
    // This would block:
    // let _read2 = park_lock.read(); // Would block until writer completes
    
    drop(_read1);
    writer_handle.join().unwrap();
}

The key difference: parking_lot blocks new readers when a writer is waiting.

Reader-Writer Queue Ordering

use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
 
fn queue_ordering() {
    let lock = Arc::new(RwLock::new(vec![]));
    
    // Reader 1 acquires
    let lock1 = Arc::clone(&lock);
    let r1 = thread::spawn(move || {
        let _guard = lock1.read();
        // Caution: calling lock1.read() again here while a writer waits
        // would deadlock under parking_lot's fair policy; use
        // read_recursive() if recursive read access is needed.
        "R1 done"
    });
    
    // Writer arrives (after reader 1)
    let lock_w = Arc::clone(&lock);
    let w = thread::spawn(move || {
        let _guard = lock_w.write();
        "Writer done"
    });
    
    // Reader 2 arrives (after writer)
    let lock2 = Arc::clone(&lock);
    let r2 = thread::spawn(move || {
        let _guard = lock2.read();
        "R2 done"
    });
    
    // With parking_lot's task-fair policy, if the threads reach the lock
    // in spawn order (not guaranteed by thread::spawn), they are served:
    // 1. R1 completes (already held the lock)
    // 2. Writer acquires (was waiting before R2)
    // 3. R2 acquires (after the writer)
    
    r1.join().unwrap();
    w.join().unwrap();
    r2.join().unwrap();
}

parking_lot's task-fair policy serves waiting threads in arrival order, so later readers cannot bypass a queued writer.

Fairness Implementation Details

// parking_lot::RwLock uses a fair queue internally
// The implementation:
// 
// 1. Maintains a queue of waiting threads
// 2. Readers and writers are added to queue
// 3. When lock is released, first waiting thread wakes
// 4. If writer is first in queue, it gets exclusive access
// 5. New readers that arrive while writer is queued
//    are added behind the writer
//
// std::sync::RwLock (platform dependent):
// 
// 1. May allow new readers while writer is waiting
// 2. Prioritizes throughput over fairness
// 3. Writer can be indefinitely delayed
 
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::Duration;
 
fn fairness_mechanism() {
    let lock = Arc::new(RwLock::new(0));
    
    // First reader holds lock
    let lock1 = Arc::clone(&lock);
    let _guard1 = lock1.read();
    
    // Writer starts waiting
    let lock_w = Arc::clone(&lock);
    thread::spawn(move || {
        let _guard = lock_w.write();
        // Writer waits until ALL readers release
        // New readers queue BEHIND this writer
    });
    
    thread::sleep(Duration::from_millis(10)); // Let writer start waiting
    
    drop(_guard1); // Release the first read lock so the writer can proceed
    
    // A new reader queues behind the waiting writer in parking_lot
    // (in std::sync it might succeed immediately instead). Without the
    // drop above, this line would deadlock: the reader waits on the
    // writer, which waits on _guard1.
    let lock2 = Arc::clone(&lock);
    let _guard2 = lock2.read(); // Blocks until the writer completes
}

Internally, parking_lot parks waiting threads in a queue (the "parking lot" the crate is named after) and wakes them in a fair order.

Performance Characteristics

// Rough expectations (measure your own workload before deciding):
//
// Read-heavy, no writers:
//   std::sync::RwLock may be slightly faster (no fairness bookkeeping).
// Read-heavy with writers:
//   parking_lot::RwLock ensures writers make progress.
// Write-heavy:
//   Similar performance; both serialize writes.
// Mixed workload:
//   parking_lot::RwLock provides more predictable latency.

Performance trade-offs depend on read/write ratio and fairness requirements.
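The expectations above can be turned into measurements. This std-only skeleton (timed_reads is a name invented here, not a library function) times concurrent reads; swap in parking_lot::RwLock and add a writer thread to compare the two under your own workload:

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::{Duration, Instant};

// Spawns `threads` reader threads, each taking the read lock `iters`
// times, and returns (total reads, wall-clock time). Absolute numbers
// are machine-dependent; treat this as a measurement skeleton.
fn timed_reads(threads: usize, iters: usize) -> (usize, Duration) {
    let lock = Arc::new(RwLock::new(0u64));
    let start = Instant::now();
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let lock = Arc::clone(&lock);
            thread::spawn(move || {
                for _ in 0..iters {
                    let _guard = lock.read().unwrap();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    (threads * iters, start.elapsed())
}

fn main() {
    let (reads, elapsed) = timed_reads(4, 10_000);
    println!("{} reads in {:?}", reads, elapsed);
}
```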

Read-Heavy Workload Impact

use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
 
fn read_heavy_with_writer() {
    let lock = Arc::new(RwLock::new(0u64));
    let mut handles = Vec::new();
    
    // Many continuous readers
    for _ in 0..100 {
        let lock = Arc::clone(&lock);
        handles.push(thread::spawn(move || {
            let mut sum = 0u64;
            for _ in 0..10_000 {
                {
                    let guard = lock.read();
                    sum += *guard;
                }
            }
            sum
        }));
    }
    
    // Writer needs to update
    let lock_w = Arc::clone(&lock);
    let writer = thread::spawn(move || {
        for _ in 0..100 {
            // Writer will acquire within bounded time
            // because parking_lot prevents indefinite starvation
            let mut guard = lock_w.write();
            *guard += 1;
            drop(guard);
        }
    });
    
    for handle in handles {
        handle.join().unwrap();
    }
    writer.join().unwrap();
}

In read-heavy workloads, writers still make progress with parking_lot.

Poisoning Behavior Difference

use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as ParkingRwLock;
use std::sync::Arc;
use std::thread;
 
fn poisoning_difference() {
    // std::sync::RwLock can become poisoned if a thread panics while
    // holding the lock. RwLock is not Clone, so share it via Arc:
    let std_lock = Arc::new(StdRwLock::new(42));
    let std_clone = Arc::clone(&std_lock);
    
    thread::spawn(move || {
        let _guard = std_clone.write().unwrap();
        panic!("Writer panics!");
    }).join().unwrap_err();
    
    // Now the lock is poisoned
    let result = std_lock.read();
    assert!(result.is_err()); // Returns Err(PoisonError)
    
    // parking_lot::RwLock does not poison
    let park_lock = Arc::new(ParkingRwLock::new(42));
    let park_clone = Arc::clone(&park_lock);
    
    thread::spawn(move || {
        let _guard = park_clone.write();
        panic!("Writer panics!");
    }).join().unwrap_err();
    
    // Lock still usable
    let guard = park_lock.read(); // Returns guard, not Result
    assert_eq!(*guard, 42);
}

parking_lot::RwLock doesn't implement poisoning—locks remain usable after panics.
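If you want std's lock to approximate parking_lot's no-poisoning behavior, you can recover the guard from a PoisonError with into_inner. A minimal std-only sketch (read_ignoring_poison is an illustrative helper):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Recover a guard even if the lock was poisoned by a panicking thread,
// approximating parking_lot's "no poisoning" behavior on std.
fn read_ignoring_poison(lock: &RwLock<i32>) -> i32 {
    let guard = lock.read().unwrap_or_else(|poisoned| poisoned.into_inner());
    *guard
}

fn main() {
    let lock = Arc::new(RwLock::new(7));
    let l2 = Arc::clone(&lock);
    // Poison the lock by panicking while holding the write guard
    let _ = thread::spawn(move || {
        let _g = l2.write().unwrap();
        panic!("poisoning the lock");
    })
    .join();
    assert!(lock.read().is_err()); // lock is poisoned
    assert_eq!(read_ignoring_poison(&lock), 7); // but still readable
}
```

This deliberately discards the signal that an invariant may have been violated mid-write, which is exactly the trade-off parking_lot makes by default.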

API Differences Summary

use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as ParkingRwLock;
 
fn api_differences() {
    // std::sync::RwLock
    let std_lock = StdRwLock::new(42);
    
    // Returns Result (poisoning)
    let read_guard = std_lock.read().unwrap();
    drop(read_guard);
    let write_guard = std_lock.write().unwrap();
    drop(write_guard);
    
    // try_read/try_write return Result
    let try_read = std_lock.try_read().unwrap();
    drop(try_read);
    
    // parking_lot::RwLock
    let park_lock = ParkingRwLock::new(42);
    
    // Returns guard directly (no poisoning)
    let read_guard = park_lock.read();
    drop(read_guard);
    let write_guard = park_lock.write();
    drop(write_guard);
    
    // try_read/try_write return Option
    let try_read = park_lock.try_read();
    assert!(try_read.is_some());
    
    // Additional methods in parking_lot
    let _ = park_lock.try_upgradable_read(); // Option<upgradable guard>
    let _ = park_lock.upgradable_read();     // Upgradable read lock
}

parking_lot provides additional lock types and returns guards directly.

Upgradable Read Locks

use parking_lot::{RwLock, RwLockUpgradableReadGuard};
 
fn upgradable_read() {
    let lock = RwLock::new(vec![1, 2, 3]);
    
    // Acquire an upgradable read lock
    let upgradable = lock.upgradable_read();
    
    // Can read while holding the upgradable guard
    println!("Initial: {:?}", *upgradable);
    
    // Multiple threads can hold regular read locks alongside it,
    // but only one upgradable guard exists at a time
    
    // Upgrade to a write lock. upgrade() is an associated function
    // on the guard type, not a method:
    {
        let mut write = RwLockUpgradableReadGuard::upgrade(upgradable);
        write.push(4);
        // Write lock released when the guard is dropped
    }
    
    // Alternative: try_upgrade returns Err(original guard) if it
    // cannot upgrade immediately
    let upgradable = lock.upgradable_read();
    match RwLockUpgradableReadGuard::try_upgrade(upgradable) {
        Ok(mut write) => {
            write.push(5);
        }
        Err(_upgradable) => {
            // Couldn't upgrade; the upgradable read guard is returned
        }
    }
}

Upgradable reads allow reading first, then atomically upgrading to write.

Upgradable Read vs Write Lock

use parking_lot::RwLock;
 
fn upgradable_vs_write() {
    let lock = RwLock::new(0);
    
    // Scenario: check if value needs update, then update
    // With regular write lock:
    {
        let mut guard = lock.write();
        if *guard < 10 {
            *guard = 10; // Only writes when needed
        }
    }
    // Problem: exclusive access entire time
    
    // With upgradable read:
    {
        let guard = lock.upgradable_read();
        if *guard < 10 {
            // Upgrade only when we need to write
            let mut write_guard = guard.upgrade();
            *write_guard = 10;
        }
        // If value >= 10, we never upgraded
        // Other readers could read during our check
    }
}

Upgradable reads optimize read-then-write patterns.
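std::sync::RwLock has no upgradable read, so the usual substitute for this pattern is: check under a read lock, release it, reacquire as a writer, and re-check, because another writer may slip in between the two acquisitions. A std-only sketch with an illustrative raise_to_min helper:

```rust
use std::sync::RwLock;

// Read-then-write without upgradable locks: check under a read lock,
// release it, reacquire as a writer, and re-check. The re-check matters
// because another thread can modify the value between the two locks.
fn raise_to_min(lock: &RwLock<i32>, min: i32) -> bool {
    {
        let guard = lock.read().unwrap();
        if *guard >= min {
            return false; // fast path: no write needed
        }
    } // read lock released here
    let mut guard = lock.write().unwrap();
    if *guard < min {
        *guard = min; // still below min after reacquiring, so update
        true
    } else {
        false // another writer raised it first
    }
}

fn main() {
    let lock = RwLock::new(3);
    assert!(raise_to_min(&lock, 10));  // 3 < 10, so we write
    assert!(!raise_to_min(&lock, 10)); // already 10, fast path
    assert_eq!(*lock.read().unwrap(), 10);
}
```

The upgradable-read version avoids both the gap between the two acquisitions and the second check, which is exactly what it buys you.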

Deadlock Prevention

use parking_lot::RwLock;
use std::sync::Arc;
 
fn deadlock_considerations() {
    // Both std::sync::RwLock and parking_lot::RwLock can deadlock
    
    let lock1 = Arc::new(RwLock::new(0));
    let lock2 = Arc::new(RwLock::new(0));
    
    // Deadlock scenario: two threads acquire in different order
    // Thread 1: lock1.write(), then lock2.write()
    // Thread 2: lock2.write(), then lock1.write()
    
    // parking_lot has an optional deadlock_detection Cargo feature
    // (disabled by default) that reports deadlock cycles at runtime
    
    // Best practice: always acquire locks in a consistent order,
    // or bound waits with parking_lot's try_write_for/try_write_until
    // (std offers try_write for non-blocking attempts)
}

Fairness doesn't prevent deadlock—consistent ordering does.
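The consistent-ordering rule looks like this in practice (std-only sketch; transfer is an illustrative helper). Every thread takes lock a before lock b, so no wait cycle can form:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Deadlock avoidance by ordering: every thread acquires `a` before `b`,
// so a cycle of waits can never form. The same discipline applies to
// std::sync and parking_lot locks alike.
fn transfer(a: &RwLock<i64>, b: &RwLock<i64>, amount: i64) {
    let mut from = a.write().unwrap(); // always a first...
    let mut to = b.write().unwrap();   // ...then b
    *from -= amount;
    *to += amount;
}

fn main() {
    let a = Arc::new(RwLock::new(100));
    let b = Arc::new(RwLock::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let (a, b) = (Arc::clone(&a), Arc::clone(&b));
            thread::spawn(move || {
                for _ in 0..25 {
                    transfer(&a, &b, 1); // same order in every thread
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*a.read().unwrap(), 0);
    assert_eq!(*b.read().unwrap(), 100);
}
```

If two locks have no natural order, pick an arbitrary but global one (for example, by address or by an ID field) and stick to it everywhere.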

When to Use Each

// Use std::sync::RwLock when:
// 1. Writers are rare and never time-sensitive
// 2. Maximum read throughput is priority
// 3. Poisoning semantics are desired
// 4. No external dependencies preferred
// 5. Platform-specific fairness is acceptable
 
// Use parking_lot::RwLock when:
// 1. Writers must not be starved
// 2. Predictable writer latency is required
// 3. Upgradable read locks are needed
// 4. Poisoning is not desired
// 5. Consistent cross-platform behavior needed
// 6. Additional features (try_upgradable, etc.) useful

Choose based on fairness requirements and feature needs.

Real-World Example: Cache with Updates

use parking_lot::RwLock;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{Duration, Instant};
 
struct Cache<K, V> {
    data: RwLock<HashMap<K, V>>,
    last_update: RwLock<Instant>,
}
 
impl<K: Eq + std::hash::Hash + Clone, V: Clone> Cache<K, V> {
    fn new() -> Self {
        Cache {
            data: RwLock::new(HashMap::new()),
            last_update: RwLock::new(Instant::now()),
        }
    }
    
    fn get(&self, key: &K) -> Option<V> {
        // Fast read path
        self.data.read().get(key).cloned()
    }
    
    fn insert(&self, key: K, value: V) {
        // Writer must eventually succeed
        // Even with continuous reads
        let mut guard = self.data.write();
        guard.insert(key, value);
        
        let mut time_guard = self.last_update.write();
        *time_guard = Instant::now();
    }
    
    fn refresh_needed(&self) -> bool {
        // Read-then-write pattern
        let time_guard = self.last_update.upgradable_read();
        
        if time_guard.elapsed() > Duration::from_secs(60) {
            // Upgrade to write (associated function on the guard type)
            let mut write_guard =
                parking_lot::RwLockUpgradableReadGuard::upgrade(time_guard);
            *write_guard = Instant::now();
            true
        } else {
            false
        }
    }
}
 
fn cache_usage() {
    let cache = Arc::new(Cache::<String, String>::new());
    
    // Many readers
    let mut reader_handles = Vec::new();
    for _ in 0..100 {
        let cache = Arc::clone(&cache);
        reader_handles.push(std::thread::spawn(move || {
            for _ in 0..1000 {
                let _ = cache.get(&"key".to_string());
            }
        }));
    }
    
    // Writers still make progress
    let cache_writer = Arc::clone(&cache);
    let writer = std::thread::spawn(move || {
        for i in 0..100 {
            cache_writer.insert(format!("key{}", i), format!("value{}", i));
        }
    });
    
    for handle in reader_handles {
        handle.join().unwrap();
    }
    writer.join().unwrap();
}

Cache pattern benefits from fair writer scheduling to ensure updates happen.

Real-World Example: Statistics Collection

use parking_lot::RwLock;
use std::sync::Arc;
 
struct Statistics {
    request_count: u64,
    error_count: u64,
    avg_latency_ms: f64,
}
 
struct StatsCollector {
    stats: RwLock<Statistics>,
}
 
impl StatsCollector {
    fn new() -> Self {
        StatsCollector {
            stats: RwLock::new(Statistics {
                request_count: 0,
                error_count: 0,
                avg_latency_ms: 0.0,
            }),
        }
    }
    
    fn record_request(&self, latency_ms: f64, is_error: bool) {
        // Writer must not be starved by continuous readers
        // Otherwise stats would never update
        let mut stats = self.stats.write();
        stats.request_count += 1;
        if is_error {
            stats.error_count += 1;
        }
        // Running average
        let n = stats.request_count as f64;
        stats.avg_latency_ms = stats.avg_latency_ms * (n - 1.0) / n + latency_ms / n;
    }
    
    fn get_snapshot(&self) -> Statistics {
        // Many readers can access concurrently
        let stats = self.stats.read();
        Statistics {
            request_count: stats.request_count,
            error_count: stats.error_count,
            avg_latency_ms: stats.avg_latency_ms,
        }
    }
}
 
fn stats_usage() {
    let collector = Arc::new(StatsCollector::new());
    
    // High read volume for monitoring
    let collector_read = Arc::clone(&collector);
    std::thread::spawn(move || {
        loop {
            let stats = collector_read.get_snapshot();
            println!("Requests: {}, Errors: {}", stats.request_count, stats.error_count);
            std::thread::sleep(std::time::Duration::from_millis(1));
        }
    });
    
    // Writers from application
    for _ in 0..1000 {
        collector.record_request(10.0, false);
    }
}

Statistics collection requires writers to eventually succeed for accuracy.
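The running average in record_request is the standard incremental mean, new_avg = old_avg * (n - 1) / n + sample / n. A quick std-only check that it matches the plain mean (incremental_mean is an illustrative helper, not part of the collector above):

```rust
// Incremental mean, the same update rule used in record_request:
// new_avg = old_avg * (n - 1) / n + sample / n
fn incremental_mean(samples: &[f64]) -> f64 {
    let mut avg = 0.0;
    for (i, &x) in samples.iter().enumerate() {
        let n = (i + 1) as f64;
        avg = avg * (n - 1.0) / n + x / n;
    }
    avg
}

fn main() {
    let avg = incremental_mean(&[10.0, 20.0, 30.0]);
    // matches the plain mean to floating-point precision
    assert!((avg - 20.0).abs() < 1e-9);
}
```

This form avoids storing all samples or risking overflow of a running sum, at the cost of a small amount of floating-point rounding per update.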

Synthesis

Writer starvation comparison:

Lock Type            | Writer Fairness    | Behavior
---------------------|--------------------|-------------------------------------------
std::sync::RwLock    | Platform-dependent | May allow new readers while a writer waits
parking_lot::RwLock  | Guaranteed         | New readers queue behind waiting writers

Key API differences:

Feature         | std::sync::RwLock            | parking_lot::RwLock
----------------|------------------------------|--------------------
Read result     | LockResult<RwLockReadGuard>  | RwLockReadGuard
Write result    | LockResult<RwLockWriteGuard> | RwLockWriteGuard
Poisoning       | Yes                          | No
Upgradable read | No                           | Yes
try_read        | TryLockResult                | Option

When fairness matters:

Scenario                          | Fairness Need | Recommendation
----------------------------------|---------------|---------------
Read-heavy, rare writes           | Low           | Either works
Read-heavy, time-sensitive writes | High          | parking_lot
Write-heavy                       | Medium        | Either works
Mixed, unknown ratio              | High          | parking_lot
Cross-platform consistency        | High          | parking_lot

Key insight: The fundamental difference between std::sync::RwLock and parking_lot::RwLock is fairness scheduling for writers. The standard library's implementation varies by platform—some allow new readers to acquire the lock while a writer waits, which can starve writers indefinitely in read-heavy workloads. parking_lot::RwLock implements a fair queue where waiting threads are served in order: once a writer begins waiting, new readers queue behind it, guaranteeing eventual writer acquisition. This is critical when writers perform essential work like cache updates, statistics collection, or configuration changes. The additional parking_lot features—upgradable reads, no poisoning, consistent cross-platform behavior—make it suitable for production systems where predictable latency and writer progress are requirements. Use std::sync::RwLock when maximum read throughput is the priority and writers can tolerate potentially unbounded wait times, or when you want poisoning semantics and prefer to stay in the standard library.