How does parking_lot::RwLock::write differ from parking_lot::Mutex::lock for exclusive access semantics?
Both RwLock::write and Mutex::lock provide exclusive access, but they differ in what they block against and how they communicate intent. RwLock::write blocks until all readers AND any other writer release their locks, allowing the lock to distinguish between shared read access and exclusive write access. Mutex::lock blocks until the single mutex holder releases, with no distinction between read and write operations; every access is treated as exclusive. The choice between them depends on whether your data benefits from concurrent read access, and whether you want to encode read vs. write intent in the API.
Basic Exclusive Access Comparison
use parking_lot::{RwLock, Mutex};
fn main() {
let rwlock = RwLock::new(42);
let mutex = Mutex::new(42);
// RwLock::write() for exclusive write access
{
let mut write_guard = rwlock.write();
*write_guard = 100;
// No other reader or writer can access during this scope
}
// Mutex::lock() for exclusive access (no read/write distinction)
{
let mut mutex_guard = mutex.lock();
*mutex_guard = 100;
// No other thread can access during this scope
}
// Both provide &mut T through DerefMut
// Both block until exclusive access is available
}
Both methods return guards that provide mutable access, but RwLock has additional semantics.
Blocking Behavior Differences
use parking_lot::{RwLock, Mutex};
use std::sync::Arc;
use std::thread;
fn main() {
// RwLock: write() blocks on both readers and writers
let rwlock = Arc::new(RwLock::new(0));
{
// A read lock exists
let _read_guard = rwlock.read();
// write() would block until read_guard is dropped.
// This compiles, but calling it here would deadlock at runtime,
// since the same thread already holds a read lock:
// let _write_guard = rwlock.write(); // BLOCKS!
}
// Now no locks held
let _write_guard = rwlock.write(); // Succeeds immediately
// Mutex: lock() blocks on any holder
let mutex = Arc::new(Mutex::new(0));
{
let _guard1 = mutex.lock();
// lock() would block until _guard1 is dropped
// let _guard2 = mutex.lock(); // BLOCKS!
}
// Mutex makes no distinction - every access is exclusive
}
RwLock::write must wait for ALL readers and writers to finish; Mutex::lock waits for ONE holder.
Reader-Writer Distinction
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
fn main() {
let rwlock = Arc::new(RwLock::new(vec![1, 2, 3]));
// Multiple readers can hold read locks simultaneously
let handles: Vec<_> = (0..3)
.map(|i| {
let lock = Arc::clone(&rwlock);
thread::spawn(move || {
let read_guard = lock.read();
println!("Reader {}: {:?}", i, *read_guard);
// Multiple threads can read concurrently
})
})
.collect();
for handle in handles {
handle.join().unwrap();
}
// But write() requires exclusive access
{
let mut write_guard = rwlock.write();
write_guard.push(4);
// No readers or other writers can access now
}
// Mutex doesn't allow this distinction
let mutex = Arc::new(parking_lot::Mutex::new(vec![1, 2, 3]));
// Every lock() is exclusive - even if you only read
{
let guard = mutex.lock();
// You're holding exclusive access even though just reading
let _first = guard.first();
}
}
RwLock allows concurrent reads when no writer holds the lock; Mutex serializes all access.
Intent Communication
use parking_lot::{RwLock, Mutex};
struct Data {
values: Vec<i32>,
}
// With RwLock, the API communicates read vs. write intent
impl Data {
fn with_rwlock() {
let lock = RwLock::new(Self { values: vec![] });
// Clear intent: we're reading
{
let read_guard = lock.read();
let _sum: i32 = read_guard.values.iter().sum();
}
// Clear intent: we're writing
{
let mut write_guard = lock.write();
write_guard.values.push(42);
}
}
// With Mutex, intent is opaque
fn with_mutex() {
let lock = Mutex::new(Self { values: vec![] });
// Intent unclear - are we reading or writing?
{
let guard = lock.lock();
let _sum: i32 = guard.values.iter().sum();
// We only read, but held exclusive access
}
{
let mut guard = lock.lock();
guard.values.push(42);
// Same lock() call, but writing
}
}
}
fn main() {}
RwLock encodes read/write intent in the method call; Mutex requires external documentation.
When RwLock Writers Can Starve
use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
use std::time::Duration;
fn potential_writer_starvation() {
let lock = Arc::new(RwLock::new(0));
// Continuous readers can prevent writers
let reader_handles: Vec<_> = (0..10)
.map(|_| {
let lock = Arc::clone(&lock);
thread::spawn(move || {
for _ in 0..100 {
let _guard = lock.read();
// Hold read lock briefly
thread::sleep(Duration::from_micros(10));
}
})
})
.collect();
// Writer trying to get write lock
let writer_handle = {
let lock = Arc::clone(&lock);
thread::spawn(move || {
// May wait a long time if readers keep arriving
let mut guard = lock.write();
*guard = 42;
})
};
writer_handle.join().unwrap();
for handle in reader_handles {
handle.join().unwrap();
}
// parking_lot's RwLock has fairness mechanisms to prevent
// indefinite writer starvation, but writers can still wait
// longer than with a Mutex
}
fn main() {
potential_writer_starvation();
}
RwLock can allow continuous readers to delay writers, though parking_lot mitigates this.
Performance Characteristics
use parking_lot::{RwLock, Mutex};
use std::sync::Arc;
use std::thread;
fn benchmark_read_heavy() {
// Read-heavy workload: RwLock typically faster
let rwlock = Arc::new(RwLock::new(0u64));
let readers: Vec<_> = (0..8)
.map(|_| {
let lock = Arc::clone(&rwlock);
thread::spawn(move || {
for _ in 0..10000 {
let guard = lock.read();
let _ = *guard; // Just reading
}
})
})
.collect();
let writers: Vec<_> = (0..2)
.map(|_| {
let lock = Arc::clone(&rwlock);
thread::spawn(move || {
for _ in 0..100 {
let mut guard = lock.write();
*guard += 1;
}
})
})
.collect();
for handle in readers.into_iter().chain(writers) {
handle.join().unwrap();
}
// RwLock allows the 8 readers to work concurrently;
// Mutex would serialize all 10 threads
}
fn benchmark_write_heavy() {
// Write-heavy or equal read/write: Mutex often faster
let mutex = Arc::new(Mutex::new(0u64));
let threads: Vec<_> = (0..10)
.map(|_| {
let lock = Arc::clone(&mutex);
thread::spawn(move || {
for _ in 0..1000 {
let mut guard = lock.lock();
*guard += 1; // Writing every time
}
})
})
.collect();
for handle in threads {
handle.join().unwrap();
}
// With Mutex, every access is exclusive anyway;
// RwLock would add overhead for no benefit
}
fn main() {
benchmark_read_heavy();
benchmark_write_heavy();
}
RwLock shines with read-heavy workloads; Mutex is simpler for write-heavy or unknown patterns.
Guard Types and Functionality
use parking_lot::{RwLock, Mutex, RwLockReadGuard, RwLockWriteGuard, MutexGuard};
fn guard_types() {
let rwlock = RwLock::new(42);
let mutex = Mutex::new(42);
// RwLock::write() returns RwLockWriteGuard
let write_guard: RwLockWriteGuard<'_, i32> = rwlock.write();
// Implements Deref<Target = i32> and DerefMut
drop(write_guard); // must release before read(), or this thread deadlocks
// RwLock::read() returns RwLockReadGuard
let _read_guard: RwLockReadGuard<'_, i32> = rwlock.read();
// Implements Deref<Target = i32> only (no DerefMut)
// Mutex::lock() returns MutexGuard
let _mutex_guard: MutexGuard<'_, i32> = mutex.lock();
// Implements Deref<Target = i32> and DerefMut
// Key difference: RwLock has two guard types with different capabilities
// Mutex has one guard type that's always "write capable"
}
fn main() {
guard_types();
}
RwLock has separate guard types for read and write access; Mutex has one guard type.
Recursive Locking Behavior
use parking_lot::{RwLock, Mutex};
fn main() {
// RwLock: recursive read() on one thread is risky in parking_lot
let rwlock = RwLock::new(42);
{
let _guard1 = rwlock.read();
// A second read() can deadlock if a writer is waiting, because
// parking_lot queues new readers behind waiting writers.
// read_recursive() is the supported way to nest reads:
let _guard2 = rwlock.read_recursive();
}
// RwLock: write() is not reentrant
{
let _write_guard = rwlock.write();
// Calling read() or write() again here would deadlock
}
// Mutex: A thread holding lock() cannot call lock() again
let mutex = Mutex::new(42);
{
let _guard1 = mutex.lock();
// let guard2 = mutex.lock(); // DEADLOCK - not reentrant
}
// parking_lot mutexes are NOT reentrant by default
// Use parking_lot::ReentrantMutex for reentrant locking
}
Neither RwLock::write nor Mutex::lock is reentrant; a thread cannot acquire the same lock twice.
Downgrade and Upgrade Patterns
use parking_lot::RwLock;
fn main() {
let lock = RwLock::new(42);
// RwLock supports downgrade: write -> read
let write_guard = lock.write();
// Modify data
// Then downgrade to read for continued access without blocking writers forever
let _read_guard = parking_lot::RwLockWriteGuard::downgrade(write_guard);
// Now holding only a read lock; other readers can proceed
// A plain read guard cannot be upgraded to write (deadlock risk);
// release it and acquire write(), or use upgradable_read() instead
downgrade_example();
}
fn downgrade_example() {
let lock = RwLock::new(vec![1, 2, 3]);
// Common pattern: write, then downgrade to read
{
let mut write_guard = lock.write();
write_guard.push(4); // Write while holding exclusive
// Downgrade to read lock atomically
let read_guard = parking_lot::RwLockWriteGuard::downgrade(write_guard);
// Can still read, but not write
assert_eq!(read_guard.len(), 4);
// Other readers can now join
}
}
RwLock supports atomic downgrade from write to read; Mutex has no equivalent.
Fairness Guarantees
use parking_lot::{RwLock, Mutex};
fn main() {
// parking_lot provides fairness guarantees for both
// RwLock fairness:
// - Writers can be prioritized to prevent writer starvation
// - Readers that arrive after a waiting writer may be queued
// - This prevents continuous readers from blocking writers forever
// Mutex fairness:
// - parking_lot's Mutex is unfair by default for throughput, but is
//   "eventually fair": a fair handoff is forced periodically so that
//   no thread is starved indefinitely
// - A fair unlock can also be requested explicitly via unlock_fair()
// Both use parking_lot's efficient thread parking mechanism
// rather than OS mutexes and condition variables
}
parking_lot implements fairness for both RwLock and Mutex, preventing indefinite starvation.
When to Use Each
use parking_lot::{RwLock, Mutex};
// Use RwLock::write when:
// 1. Data is read more often than written
// 2. Multiple threads can safely read concurrently
// 3. You want to encode read vs. write intent in the API
// 4. Read operations are substantial enough to benefit from concurrency
struct Cache {
data: RwLock<Vec<String>>,
}
impl Cache {
fn get(&self, index: usize) -> Option<String> {
let guard = self.data.read(); // Concurrent reads OK
guard.get(index).cloned()
}
fn insert(&self, value: String) {
let mut guard = self.data.write(); // Exclusive write
guard.push(value);
}
}
// Use Mutex::lock when:
// 1. Every access modifies the data (no read-only access)
// 2. Access pattern is unknown or mixed
// 3. Simplicity is more important than concurrent reads
// 4. The overhead of RwLock isn't justified
struct Counter {
value: Mutex<i64>,
}
impl Counter {
fn increment(&self) {
let mut guard = self.value.lock();
*guard += 1; // Always writing - Mutex is appropriate
}
fn get(&self) -> i64 {
let guard = self.value.lock();
*guard // Even this read needs exclusive access
}
}
fn main() {}
Choose based on read/write ratio and whether concurrent reads provide meaningful benefit.
Memory Overhead
use parking_lot::{RwLock, Mutex};
use std::mem::size_of;
fn main() {
// Mutex is simpler and has lower overhead
println!("Mutex<i32> size: {}", size_of::<Mutex<i32>>());
// RwLock needs to track readers, so has more overhead
println!("RwLock<i32> size: {}", size_of::<RwLock<i32>>());
// parking_lot's implementations are optimized:
// - Mutex: typically pointer-sized (atomic state + wait queue pointer)
// - RwLock: tracks reader count and waiting writers
// For small data, overhead may be significant proportion
// For large data structures, overhead is negligible
}
RwLock has higher memory overhead than Mutex due to reader tracking.
Synthesis
Quick reference:
use parking_lot::{RwLock, Mutex};
// RwLock::write()
// - Blocks until all readers AND writers release
// - Signals write intent explicitly
// - Returns RwLockWriteGuard (Deref + DerefMut)
// - Allows concurrent reads when no writer holds lock
// - Higher overhead, more complex
// - Best for read-heavy workloads
// Mutex::lock()
// - Blocks until the single holder releases
// - No distinction between read/write intent
// - Returns MutexGuard (Deref + DerefMut)
// - Every access is exclusive, no concurrency
// - Lower overhead, simpler
// - Best for write-heavy or unknown access patterns
// Decision matrix:
// - Read-heavy, known pattern: RwLock
// - Write-heavy or mixed: Mutex
// - Need to encode intent in API: RwLock
// - Simple is better: Mutex
// - Downgrade pattern needed: RwLock (unique feature)
Key insight: The difference isn't just implementation; it's semantics. RwLock::write explicitly signals "I'm modifying data" and cooperates with a reader-writer protocol that allows concurrent reads when no writers are active. Mutex::lock signals "I need exclusive access" with no information about what you'll do. Use RwLock when the distinction between read and write matters for your access patterns: when reads are frequent and can safely happen concurrently, and writes are relatively rare. Use Mutex when you don't have clear read/write separation, when writes are as common as reads, or when the additional complexity of reader-writer semantics isn't justified. The downgrade capability (RwLockWriteGuard::downgrade) is unique to RwLock and useful when you need to modify and then read atomically while allowing other readers to proceed.
