Rust walkthroughs
How do parking_lot::RwLock and std::sync::RwLock compare for read-heavy workloads?

parking_lot::RwLock uses a compact, task-fair lock with a userspace parking mechanism and no system calls in the uncontended case, making it faster than std::sync::RwLock for most read-heavy workloads. The standard library's RwLock has historically delegated to the operating system's reader-writer primitive (pthread_rwlock_t on Linux and macOS, SRWLOCK on Windows); since Rust 1.62 the Linux implementation has been futex-based, but behavior still varies by platform. parking_lot::RwLock instead implements its own lock queue in userspace, parking blocked threads in a global table of wait queues rather than on kernel objects. For read-heavy workloads where multiple readers run concurrently, parking_lot typically wins because acquiring a read lock is a single atomic update to the lock word, and writers are parked separately from readers.
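parking_lot is an external crate, so the examples below assume it is declared as a dependency; a typical Cargo.toml entry looks like this (the version shown is illustrative, not prescriptive):

```toml
[dependencies]
parking_lot = "0.12"
```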
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
fn main() {
// Standard library RwLock
let std_lock = StdRwLock::new(vec![1, 2, 3]);
{
let mut write = std_lock.write().unwrap();
write.push(4);
}
{
let read = std_lock.read().unwrap();
println!("Std RwLock: {:?}", *read);
}
// parking_lot RwLock
let pl_lock = PlRwLock::new(vec![1, 2, 3]);
{
let mut write = pl_lock.write();
write.push(4);
}
{
let read = pl_lock.read();
println!("PL RwLock: {:?}", *read);
}
}

parking_lot::RwLock returns guards directly; std::sync::RwLock returns a Result because of lock poisoning.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
use std::panic;
fn main() {
// Standard library: lock poisoning
let std_lock = StdRwLock::new(42);
// If a panic occurs while holding the lock, it becomes "poisoned"
// and subsequent lock attempts return Err
let _ = panic::catch_unwind(|| {
let _guard = std_lock.write().unwrap();
panic!("intentional panic");
});
// Lock is poisoned
match std_lock.read() {
Ok(guard) => println!("Got lock: {}", *guard),
Err(e) => println!("Lock poisoned: {}", e),
}
// parking_lot: no lock poisoning
let pl_lock = PlRwLock::new(42);
// AssertUnwindSafe is used because the lock is captured by reference
let _ = panic::catch_unwind(panic::AssertUnwindSafe(|| {
let _guard = pl_lock.write();
panic!("intentional panic");
}));
// Lock is still usable
let guard = pl_lock.read();
println!("PL lock still works: {}", *guard);
}

std::sync::RwLock tracks lock poisoning; parking_lot::RwLock does not, which simplifies error handling.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
use std::mem;
fn main() {
// Size of the lock structures themselves
println!("Std RwLock size: {} bytes", mem::size_of::<StdRwLock<()>>());
println!("PL RwLock size: {} bytes", mem::size_of::<PlRwLock<()>>());
// parking_lot::RwLock<()> is one word: a single atomic for the lock
// state, with wait queues kept in a global table allocated on demand.
// std::sync::RwLock adds poison tracking on top of a platform-specific
// primitive; on modern Rust (1.62+) Linux uses a futex-based lock,
// Windows uses a pointer-sized SRWLOCK-style lock, and exact sizes
// vary by platform and toolchain version
// Example structure overhead
struct Data {
values: Vec<u64>,
metadata: String,
}
println!("Std RwLock<Data> size: {} bytes", mem::size_of::<StdRwLock<Data>>());
println!("PL RwLock<Data> size: {} bytes", mem::size_of::<PlRwLock<Data>>());
}

parking_lot::RwLock typically has smaller memory overhead than std::sync::RwLock.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
use std::thread;
fn read_heavy_std() -> u64 {
let lock = Arc::new(StdRwLock::new(0u64));
let mut handles = vec![];
// Multiple readers
for _ in 0..8 {
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
let mut sum = 0u64;
for _ in 0..100_000 {
let guard = lock.read().unwrap();
sum += *guard;
}
sum
}));
}
handles.into_iter().map(|h| h.join().unwrap()).sum()
}
fn read_heavy_pl() -> u64 {
let lock = Arc::new(PlRwLock::new(0u64));
let mut handles = vec![];
// Multiple readers - parking_lot handles concurrent reads efficiently
for _ in 0..8 {
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
let mut sum = 0u64;
for _ in 0..100_000 {
let guard = lock.read();
sum += *guard;
}
sum
}));
}
handles.into_iter().map(|h| h.join().unwrap()).sum()
}
fn main() {
let start = std::time::Instant::now();
read_heavy_std();
println!("Std RwLock read-heavy: {:?}", start.elapsed());
let start = std::time::Instant::now();
read_heavy_pl();
println!("PL RwLock read-heavy: {:?}", start.elapsed());
}

parking_lot typically performs better in read-heavy scenarios due to cheaper read acquisition.
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
fn main() {
// parking_lot's RwLock is task-fair - writers won't starve
let lock = Arc::new(RwLock::new(vec![0; 100]));
let mut handles = vec![];
// Multiple readers running continuously
for _ in 0..4 {
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
for _ in 0..1000 {
let guard = lock.read();
let _ = guard.len();
}
}));
}
// Writer - will eventually get the lock despite continuous readers
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
for i in 0..10 {
let mut guard = lock.write();
guard.push(i);
}
}));
for handle in handles {
handle.join().unwrap();
}
// parking_lot's task-fair policy ensures writers make progress;
// std::sync may allow readers to starve writers
// (behavior depends on the OS implementation)
}

parking_lot is task-fair: blocked writers are queued and will not be starved indefinitely by readers.
use parking_lot::RwLock;
fn main() {
let lock = RwLock::new(42);
// Acquire write lock
let mut write_guard = lock.write();
*write_guard += 1;
// Downgrade to a read lock without releasing
// (a parking_lot feature - std RwLock doesn't support this;
// downgrade is an associated function on the write guard)
let read_guard = parking_lot::RwLockWriteGuard::downgrade(write_guard);
// Now we have a read guard
println!("Value: {}", *read_guard);
// Can no longer modify
}

RwLockWriteGuard::downgrade converts a write guard to a read guard atomically.
use parking_lot::RwLock;
fn main() {
let lock = RwLock::new(42);
// Upgradable read lock - can later upgrade to write
let upgradable = lock.upgradable_read();
// Can read while holding upgradable
println!("Current value: {}", *upgradable);
// Upgrade to a write lock
// (blocks until all other readers release; upgrade is an
// associated function on the upgradable read guard)
let mut write_guard = parking_lot::RwLockUpgradableReadGuard::upgrade(upgradable);
*write_guard = 100;
println!("New value: {}", *write_guard);
// std::sync::RwLock doesn't have upgradable reads:
// you'd need to release the read lock and acquire the write lock
// separately, which creates a window for another writer to change the data
}

parking_lot supports upgradable reads: read locks that can be upgraded to write locks atomically.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
fn main() {
// Standard library try operations
let std_lock = StdRwLock::new(42);
match std_lock.try_read() {
Ok(guard) => println!("Got read lock: {}", *guard),
Err(_) => println!("Read lock busy"),
}
match std_lock.try_write() {
Ok(_guard) => println!("Got write lock"),
Err(_) => println!("Write lock busy"),
}
// parking_lot try operations return Option
let pl_lock = PlRwLock::new(42);
if let Some(guard) = pl_lock.try_read() {
println!("Got read lock: {}", *guard);
}
if let Some(_guard) = pl_lock.try_write() {
println!("Got write lock");
}
// parking_lot also has try_upgradable_read; try_upgrade returns
// Result, handing the upgradable guard back on failure
if let Some(guard) = pl_lock.try_upgradable_read() {
if let Ok(mut write) = parking_lot::RwLockUpgradableReadGuard::try_upgrade(guard) {
*write = 100;
}
}
}

Both support try-lock; parking_lot returns Option and supports try_upgrade from upgradable reads.
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::Instant;
fn main() {
let lock = Arc::new(RwLock::new(0u64));
let mut handles = vec![];
// High contention scenario: many readers, occasional writers
for i in 0..10 {
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
if i < 8 {
// Readers
let mut sum = 0u64;
for _ in 0..10_000 {
let guard = lock.read();
sum += *guard;
}
sum
} else {
// Writers
for _ in 0..100 {
let mut guard = lock.write();
*guard += 1;
}
0
}
}));
}
let start = Instant::now();
let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
println!("Total: {}, Time: {:?}", total, start.elapsed());
// parking_lot uses parking and unparking to handle contention
// Threads are put to sleep and woken efficiently
}

Under contention, parking_lot parks threads efficiently rather than spinning.
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
use std::thread;
fn demonstrate_fairness() {
// parking_lot uses a task-fair policy: blocked threads queue in order
let lock = Arc::new(PlRwLock::new(0));
let mut handles = vec![];
// First writer
{
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
let _guard = lock.write();
thread::sleep(std::time::Duration::from_millis(10));
}));
}
// Readers queue up after first writer
for _ in 0..3 {
let lock = Arc::clone(&lock);
handles.push(thread::spawn(move || {
let _guard = lock.read();
}));
}
// Second writer - should wait for first writer and all queued readers
let lock2 = Arc::clone(&lock);
let writer2 = thread::spawn(move || {
let _guard = lock2.write();
});
// parking_lot ensures proper ordering
for h in handles {
h.join().unwrap();
}
writer2.join().unwrap();
}
fn main() {
demonstrate_fairness();
println!("Fairness demonstration complete");
}

parking_lot's task-fair wait queue keeps blocked readers and writers in order.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
fn main() {
// Both allow any number of simultaneous read guards
let std_lock = StdRwLock::new(vec![1, 2, 3]);
let pl_lock = PlRwLock::new(vec![1, 2, 3]);
// Multiple std read guards at once
{
let read1 = std_lock.read().unwrap();
let read2 = std_lock.read().unwrap(); // second reader doesn't block
println!("Std: {} {}", read1.len(), read2.len());
}
// Multiple parking_lot read guards at once
{
let read1 = pl_lock.read();
let read2 = pl_lock.read();
println!("PL: {} {}", read1.len(), read2.len());
}
}

Both allow unlimited concurrent readers; note that neither library's read guard implements Clone, so each reader acquires its own guard.
use parking_lot::RwLock;
use std::sync::RwLock as StdRwLock;
// parking_lot supports const construction
static GLOBAL_LOCK: RwLock<u64> = RwLock::new(0);
// The standard library also supports const construction since Rust 1.63
static STD_GLOBAL: StdRwLock<u64> = StdRwLock::new(0);
// (note: a `const` item of a lock type would create a fresh lock at
// each use site, so global locks should be `static`)
fn main() {
// Use global lock
{
let mut guard = GLOBAL_LOCK.write();
*guard += 1;
}
{
let guard = GLOBAL_LOCK.read();
println!("Global value: {}", *guard);
}
{
let guard = STD_GLOBAL.read().unwrap();
println!("Std global value: {}", *guard);
}
}

Both support const construction for static global locks.
use std::sync::RwLock as StdRwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
use std::thread;
use std::time::Instant;
fn benchmark_read_heavy(iterations: usize) {
// Benchmark parameters
let readers = 8;
let writers = 2;
// std::sync::RwLock benchmark
let std_lock = Arc::new(StdRwLock::new(0u64));
let mut handles = vec![];
let start = Instant::now();
for _ in 0..readers {
let lock = Arc::clone(&std_lock);
handles.push(thread::spawn(move || {
let mut sum = 0u64;
for _ in 0..iterations {
if let Ok(guard) = lock.try_read() {
sum += *guard;
}
}
sum
}));
}
for _ in 0..writers {
let lock = Arc::clone(&std_lock);
handles.push(thread::spawn(move || {
for _ in 0..(iterations / 100) {
if let Ok(mut guard) = lock.write() {
*guard += 1;
}
}
}));
}
for h in handles {
h.join().unwrap();
}
let std_time = start.elapsed();
// parking_lot::RwLock benchmark
let pl_lock = Arc::new(PlRwLock::new(0u64));
let mut handles = vec![];
let start = Instant::now();
for _ in 0..readers {
let lock = Arc::clone(&pl_lock);
handles.push(thread::spawn(move || {
let mut sum = 0u64;
for _ in 0..iterations {
if let Some(guard) = lock.try_read() {
sum += *guard;
}
}
sum
}));
}
for _ in 0..writers {
let lock = Arc::clone(&pl_lock);
handles.push(thread::spawn(move || {
for _ in 0..(iterations / 100) {
// blocking write, matching the std benchmark above
let mut guard = lock.write();
*guard += 1;
}
}));
}
for h in handles {
h.join().unwrap();
}
let pl_time = start.elapsed();
println!("Std RwLock: {:?}", std_time);
println!("PL RwLock: {:?}", pl_time);
}
fn main() {
benchmark_read_heavy(100_000);
}

Performance varies by workload; parking_lot typically wins in read-heavy scenarios.
fn main() {
// std::sync::RwLock behavior varies by platform:
// - Linux: futex-based since Rust 1.62 (pthread_rwlock_t before that)
// - Windows: SRWLOCK-style lock (no recursive acquisition)
// - macOS: implementation differs by toolchain version
// parking_lot::RwLock behavior is consistent across platforms:
// - Same fairness guarantees
// - Same memory layout
// - Same performance characteristics
println!("Platform consistency:");
println!("std::sync::RwLock behavior depends on OS");
println!("parking_lot::RwLock is consistent across platforms");
}

parking_lot provides consistent behavior across platforms; std::sync::RwLock varies by OS.
use parking_lot::RwLock;
use std::sync::Arc;
struct Cache {
data: RwLock<Vec<String>>,
}
impl Cache {
fn new() -> Self {
Cache {
data: RwLock::new(Vec::new()),
}
}
fn get(&self, index: usize) -> Option<String> {
let guard = self.data.read();
guard.get(index).cloned()
}
fn insert(&self, value: String) {
let mut guard = self.data.write();
guard.push(value);
}
fn update_if_needed(&self, index: usize, new_value: String) -> bool {
// Use upgradable read to avoid race
let guard = self.data.upgradable_read();
if guard.len() > index {
let mut write_guard = parking_lot::RwLockUpgradableReadGuard::upgrade(guard);
write_guard[index] = new_value;
return true;
}
false
}
}
fn main() {
let cache = Arc::new(Cache::new());
// Multiple threads can read
let mut handles = vec![];
for i in 0..5 {
let cache = Arc::clone(&cache);
handles.push(std::thread::spawn(move || {
cache.insert(format!("Item {}", i));
}));
}
for h in handles {
h.join().unwrap();
}
println!("Cache contents: {:?}", *cache.data.read());
}

Upgradable reads enable atomic check-then-modify patterns without TOCTOU races.
// Use std::sync::RwLock when:
// - You need lock poisoning for error recovery
// - You want standard library without dependencies
// - Platform-specific behavior is acceptable
// - Simplicity is more important than performance
// Use parking_lot::RwLock when:
// - Read-heavy workloads benefit from cheaper reads
// - You need upgradable reads or downgradable writes
// - Cross-platform consistency matters
// - Smaller memory footprint helps
// - Fairness guarantees are important
// - You don't want lock poisoning complexity
fn main() {
println!("Choose based on your requirements:");
println!("- std::sync::RwLock: portability, lock poisoning");
println!("- parking_lot::RwLock: performance, features, consistency");
}

Choose based on requirements: std for simplicity and lock poisoning; parking_lot for performance.
Key differences:
| Feature | std::sync::RwLock | parking_lot::RwLock |
|---------|---------------------|----------------------|
| Lock poisoning | Yes (returns Result) | No (returns guard directly) |
| Memory overhead | Larger (OS-dependent) | Smaller (userspace queue) |
| Upgradable read | No | Yes |
| Downgrade write | No | Yes |
| Fairness | OS-dependent | Task-fair (no starvation) |
| Cross-platform | Varies by OS | Consistent |
| Dependency | Standard library | External crate |
| Const init | Yes (1.63+) | Yes |
Performance characteristics:
| Scenario | Recommendation | Reason |
|----------|---------------|--------|
| Read-heavy | parking_lot | Cheaper read acquisition |
| Write-heavy | Similar | Both handle writes similarly |
| High contention | parking_lot | Better fairness, less overhead |
| Low contention | Similar | Both fast in uncontended case |
| Need lock poisoning | std | Only std supports it |
| Need upgrade/downgrade | parking_lot | Only parking_lot supports it |
Key insight: parking_lot::RwLock is designed for high-performance concurrent access, implementing its own wait queue in userspace rather than delegating to OS primitives. For read-heavy workloads this translates to cheaper read lock acquisition: an uncontended read is a single atomic update, and blocked threads are parked without a system call on the fast path. The absence of lock poisoning removes Result handling and simplifies code, at the cost of not flagging data that a panicking thread may have left half-modified. The additional features (upgradable reads, downgradable writes, and a task-fair wait queue) enable patterns that std::sync::RwLock cannot express atomically, such as checking a value under a read lock and upgrading to a write lock without releasing, which in std would open a race window. Cross-platform consistency matters for reproducible behavior: std::sync::RwLock's fairness can vary between Linux, macOS, and Windows, while parking_lot::RwLock applies the same task-fair policy everywhere. Choose std::sync::RwLock when you need lock poisoning semantics or zero external dependencies; choose parking_lot::RwLock when read-heavy performance, fairness guarantees, or atomic upgrade operations justify adding a dependency.