What is the difference between tokio::sync::RwLock and parking_lot::RwLock for read-heavy vs write-heavy workloads?

tokio::sync::RwLock is an async-aware read-write lock designed for async code: its methods are async, so when the lock is contended the runtime can schedule other tasks while the current task waits. parking_lot::RwLock is a synchronous read-write lock optimized for minimal overhead and fair scheduling in thread-based concurrency. The critical difference is execution context: tokio::sync::RwLock yields the task on contention and is safe to hold across await points, while parking_lot::RwLock blocks the calling thread and must never be held across an await. For read-heavy workloads both perform well, but parking_lot has lower per-operation overhead. For write-heavy workloads, tokio::sync::RwLock is preferable in async contexts to avoid blocking the executor, though in either case it is worth asking whether a reader-writer lock is the right structure at all.

Basic tokio::sync::RwLock Usage

use tokio::sync::RwLock;
use std::sync::Arc;
 
#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    
    // Async read lock
    let r1 = lock.read().await;
    println!("Read: {}", *r1);
    drop(r1);
    
    // Async write lock
    let mut w = lock.write().await;
    *w += 1;
    println!("Write: {}", *w);
}

tokio::sync::RwLock requires .await because it may yield to the runtime.

Basic parking_lot::RwLock Usage

use parking_lot::RwLock;
use std::sync::Arc;
 
fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    
    // Synchronous read lock
    let r1 = lock.read();
    println!("Read: {}", *r1);
    drop(r1);
    
    // Synchronous write lock
    let mut w = lock.write();
    *w += 1;
    println!("Write: {}", *w);
}

parking_lot::RwLock blocks synchronously without .await.

Async Context Requirement

use tokio::sync::RwLock;
use parking_lot::RwLock as PlRwLock;
 
// CORRECT: tokio::sync::RwLock in async context
#[tokio::main]
async fn main() {
    let tokio_lock = RwLock::new(0u32);
    
    // This yields the task, allowing other tasks to run
    let value = tokio_lock.read().await;
    println!("Value: {}", *value);
}
 
// CORRECT: parking_lot::RwLock in sync context
fn sync_function() {
    let pl_lock = PlRwLock::new(0u32);
    
    // This blocks the thread
    let value = pl_lock.read();
    println!("Value: {}", *value);
}

Match the lock type to your execution context.

Never Hold parking_lot Lock Across Await

use parking_lot::RwLock;
use std::sync::Arc;
 
// WRONG: Holding parking_lot lock across await
#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    
    let read_guard = lock.read();  // Blocks thread!
    
    // This await while holding the lock is DANGEROUS
    some_async_function().await;  // Lock still held!
    
    drop(read_guard);
}
 
async fn some_async_function() {
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
}

Holding a sync lock across .await can block the executor and cause deadlocks.

Correct Async Pattern with tokio::RwLock

use tokio::sync::RwLock;
use std::sync::Arc;
 
#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(vec![1, 2, 3]));
    
    // Read lock is async
    {
        let data = lock.read().await;
        println!("Data: {:?}", *data);
        // Lock released when dropped
    }
    
    // Write lock is async
    {
        let mut data = lock.write().await;
        data.push(4);
        println!("Modified: {:?}", *data);
    }
    
    // Multiple concurrent reads
    let lock_clone = lock.clone();
    let task1 = tokio::spawn(async move {
        let data = lock_clone.read().await;
        println!("Task 1 reading: {:?}", *data);
    });
    
    let lock_clone = lock.clone();
    let task2 = tokio::spawn(async move {
        let data = lock_clone.read().await;
        println!("Task 2 reading: {:?}", *data);
    });
    
    task1.await.unwrap();
    task2.await.unwrap();
}

tokio::sync::RwLock allows multiple concurrent readers in async code.

Read-Heavy Workloads

use tokio::sync::RwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
 
// Read-heavy: Many more reads than writes
// Both locks perform well, but parking_lot has lower overhead
 
fn read_heavy_sync() {
    let lock = Arc::new(PlRwLock::new(vec![1, 2, 3]));
    
    // Many reads - parking_lot excels here
    let mut handles = vec![];
    for _ in 0..100 {
        let lock = lock.clone();
        handles.push(std::thread::spawn(move || {
            let data = lock.read();
            data.len()  // Fast read operation
        }));
    }
    
    // Occasional write (contends with the reader threads)
    let mut w = lock.write();
    w.push(4);
    drop(w);
    
    handles.into_iter().for_each(|h| { h.join().unwrap(); });
}
 
#[tokio::main]
async fn read_heavy_async() {
    let lock = Arc::new(RwLock::new(vec![1, 2, 3]));
    
    // Many concurrent reads
    let mut tasks = vec![];
    for _ in 0..100 {
        let lock = lock.clone();
        tasks.push(tokio::spawn(async move {
            let data = lock.read().await;
            data.len()
        }));
    }
    
    // Occasional write
    let mut w = lock.write().await;
    w.push(4);
    drop(w);
    
    for task in tasks {
        task.await.unwrap();
    }
}

For read-heavy workloads, both work well; parking_lot has less overhead.

Write-Heavy Workloads

use tokio::sync::RwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
 
// Write-heavy: Many writes, fewer reads
// Consider if RwLock is the right data structure
 
#[tokio::main]
async fn write_heavy_async() {
    let lock = Arc::new(RwLock::new(0u32));
    
    // Many writers - RwLock may not be ideal
    let mut tasks = vec![];
    for _ in 0..100 {
        let lock = lock.clone();
        tasks.push(tokio::spawn(async move {
            let mut w = lock.write().await;  // Exclusive access
            *w += 1;
        }));
    }
    
    for task in tasks {
        task.await.unwrap();
    }
    
    println!("Final value: {}", *lock.read().await);
    // 100 - all writes completed
}
 
// For write-heavy workloads, consider:
// - tokio::sync::Mutex (simpler, no reader/writer distinction)
// - Lock-free data structures
// - Message passing (channels)

For write-heavy workloads, RwLock may not be the best choice.
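
As one concrete alternative for the counter above, a lock-free atomic sidesteps the lock entirely. A minimal sketch using only the standard library (`concurrent_count` is an illustrative helper, not an API from either crate):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

// Spawn `n` writer threads that each increment the shared counter once
fn concurrent_count(n: u32) -> u32 {
    let counter = Arc::new(AtomicU32::new(0));
    let mut handles = vec![];
    for _ in 0..n {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // A single atomic instruction: no lock, no wait queue
            c.fetch_add(1, Ordering::Relaxed);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("Final value: {}", concurrent_count(100)); // 100
}
```

This only works because the shared state is a plain integer; for compound data, a Mutex or message passing remains the simpler choice.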

Blocking vs Yielding Behavior

use tokio::sync::RwLock;
use parking_lot::RwLock as PlRwLock;
 
// parking_lot: Blocks thread
fn sync_example() {
    let lock = PlRwLock::new(0);
    
    // If the lock is contended, the thread blocks until it is acquired
    let r = lock.read();
    println!("Read value: {}", *r);
}
 
// tokio: Yields task
#[tokio::main]
async fn async_example() {
    let lock = RwLock::new(0);
    
    // If the lock is contended, the task yields so other tasks can run
    let r = lock.read().await;
    println!("Read value: {}", *r);
}

parking_lot blocks the thread; tokio::sync::RwLock yields the task.

Lock Contention Scenarios

use tokio::sync::RwLock;
use std::sync::Arc;
 
#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    
    // Scenario 1: Long read hold, trying to write
    let lock_read = lock.clone();
    let read_task = tokio::spawn(async move {
        let r = lock_read.read().await;
        println!("Reader acquired");
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        // Writer must wait
        drop(r);
    });
    
    let lock_write = lock.clone();
    let write_task = tokio::spawn(async move {
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        println!("Writer waiting...");
        let mut w = lock_write.write().await;
        println!("Writer acquired");
        *w += 1;
    });
    
    read_task.await.unwrap();
    write_task.await.unwrap();
}

Writers wait for all readers to release their locks.

Fairness Policies

// parking_lot::RwLock: task-fair by default
// - When a writer is waiting, new readers queue behind it
// - This prevents writer starvation

// tokio::sync::RwLock: also fair (a write-preferring FIFO queue)
// - If a writer is at the head of the wait queue, new read locks are
//   not handed out until that writer has taken and released the lock
// - A steady stream of readers cannot starve writers

use tokio::sync::RwLock;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    
    // Demonstrate fairness: a queued writer blocks new readers
    let lock2 = lock.clone();
    
    // Start with a read lock
    let r1 = lock.read().await;
    
    // Start a writer (it queues behind the reader)
    let write_task = tokio::spawn(async move {
        println!("Writer trying to acquire...");
        let mut w = lock2.write().await;
        println!("Writer acquired!");
        *w += 1;
    });
    
    // Give the writer time to join the queue
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    
    // A new reader cannot jump the queue: it would wait behind the writer
    assert!(lock.try_read().is_err());
    println!("New reader blocked while writer waits (no starvation)");
    
    drop(r1);  // Releasing the read lock lets the writer proceed
    
    write_task.await.unwrap();
}

Both locks use fair policies that prevent writer starvation; it is std::sync::RwLock whose priority policy is platform-dependent.

Memory Overhead

use std::mem::size_of;
 
fn main() {
    // tokio::RwLock has higher overhead
    // - Async runtime integration
    // - Waker storage
    // - Queue for waiting tasks
    
    // parking_lot::RwLock is minimal
    // - Just the lock state
    // - Very small memory footprint
    
    println!(
        "parking_lot::RwLock<usize>: {} bytes",
        size_of::<parking_lot::RwLock<usize>>()
    );
    println!(
        "tokio::sync::RwLock<usize>: {} bytes",
        size_of::<tokio::sync::RwLock<usize>>()
    );
}

parking_lot::RwLock has lower memory overhead.

Performance Characteristics

use tokio::sync::RwLock;
use parking_lot::RwLock as PlRwLock;
use std::sync::Arc;
use std::time::Instant;
 
fn bench_parking_lot() {
    let lock = Arc::new(PlRwLock::new(0u32));
    let iterations = 1_000_000;
    
    let start = Instant::now();
    
    // Read-heavy workload
    for _ in 0..iterations {
        let r = lock.read();
        let _ = *r;
    }
    
    println!("parking_lot read: {:?}", start.elapsed());
    
    let start = Instant::now();
    
    // Write-heavy workload
    for _ in 0..iterations {
        let mut w = lock.write();
        *w += 1;
    }
    
    println!("parking_lot write: {:?}", start.elapsed());
}
 
#[tokio::main(flavor = "current_thread")]
async fn bench_tokio() {
    let lock = Arc::new(RwLock::new(0u32));
    let iterations = 1_000_000;
    
    let start = Instant::now();
    
    // Read-heavy workload
    for _ in 0..iterations {
        let r = lock.read().await;
        let _ = *r;
    }
    
    println!("tokio read: {:?}", start.elapsed());
    
    let start = Instant::now();
    
    // Write-heavy workload
    for _ in 0..iterations {
        let mut w = lock.write().await;
        *w += 1;
    }
    
    println!("tokio write: {:?}", start.elapsed());
}

fn main() {
    bench_parking_lot();
    bench_tokio();  // #[tokio::main] makes this callable as a sync fn
}

parking_lot typically has lower latency per operation.

Use with Tokio Tasks

use tokio::sync::RwLock;
use std::sync::Arc;
 
#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(vec![1, 2, 3]));
    
    // Spawn multiple tasks reading concurrently
    let mut handles = vec![];
    for i in 0..10 {
        let lock = lock.clone();
        handles.push(tokio::spawn(async move {
            let data = lock.read().await;
            println!("Task {} sees: {:?}", i, *data);
            data.len()
        }));
    }
    
    // All tasks can read concurrently
    for handle in handles {
        let result = handle.await.unwrap();
        println!("Result: {}", result);
    }
    
    // Write requires exclusive access
    let mut w = lock.write().await;
    w.push(4);
    drop(w);
    
    println!("Final: {:?}", *lock.read().await);
}

tokio::RwLock integrates naturally with tokio tasks.

Use with Threads

use parking_lot::RwLock;
use std::sync::Arc;
use std::thread;
 
fn main() {
    let lock = Arc::new(RwLock::new(vec![1, 2, 3]));
    
    // Spawn multiple threads reading concurrently
    let mut handles = vec![];
    for i in 0..10 {
        let lock = lock.clone();
        handles.push(thread::spawn(move || {
            let data = lock.read();
            println!("Thread {} sees: {:?}", i, *data);
            data.len()
        }));
    }
    
    // All threads can read concurrently
    for handle in handles {
        let result = handle.join().unwrap();
        println!("Result: {}", result);
    }
    
    // Write requires exclusive access
    let mut w = lock.write();
    w.push(4);
    drop(w);
    
    println!("Final: {:?}", *lock.read());
}

parking_lot::RwLock integrates naturally with threads.

Downgrade Pattern

use tokio::sync::RwLock;
 
#[tokio::main]
async fn main() {
    let lock = RwLock::new(0u32);
    
    // Acquire write lock
    let mut w = lock.write().await;
    *w += 1;
    
    // Downgrade to read lock (tokio supports this on the write guard)
    let r = w.downgrade();
    println!("Read after downgrade: {}", *r);
    
    // parking_lot also supports downgrade
    let lock_pl = parking_lot::RwLock::new(0u32);
    let mut w_pl = lock_pl.write();
    *w_pl += 1;
    let r_pl = parking_lot::RwLockWriteGuard::downgrade(w_pl);
    println!("Read after downgrade: {}", *r_pl);
}

Both locks support downgrading from write to read.

Choosing Between Them

// Use tokio::sync::RwLock when:
// 1. Code is async
// 2. Lock is held across await points
// 3. You want task yielding on contention
// 4. Working in a tokio runtime
 
// Use parking_lot::RwLock when:
// 1. Code is synchronous
// 2. Lock is held briefly (no awaits)
// 3. You need maximum performance
// 4. Working with threads, not tasks
 
// WRONG: using parking_lot in async code and holding it across an await
#[tokio::main]
async fn bad_example() {
    let lock = parking_lot::RwLock::new(0);
    let _guard = lock.read();  // Blocks thread!
    
    // This blocks the entire executor thread!
    some_async_work().await;  // BAD: holding sync lock
    
    drop(_guard);
}
 
async fn some_async_work() {
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
}
 
// CORRECT: Using tokio::sync::RwLock
#[tokio::main]
async fn good_example() {
    let lock = tokio::sync::RwLock::new(0);
    let _guard = lock.read().await;  // Yields task, not thread
    
    some_async_work().await;  // OK: async lock
    
    drop(_guard);
}

Match lock type to execution context.

Comparison Table

Aspect             tokio::sync::RwLock            parking_lot::RwLock
Async support      Native async/await             Blocks thread
Context            Async code                     Sync code
Hold across await  Safe                           Dangerous; avoid
Performance        Higher overhead                Lower overhead
Memory             Higher                         Lower
Fairness           Fair (write-preferring FIFO)   Task-fair by default
Read-heavy         Good                           Excellent
Write-heavy        Consider Mutex                 Consider Mutex

Synthesis

tokio::sync::RwLock characteristics:

  • Async-aware, uses .await
  • Yields task on contention
  • Safe to hold across await points
  • Integrates with tokio runtime
  • Higher overhead per operation

parking_lot::RwLock characteristics:

  • Synchronous, blocks thread
  • Lower overhead per operation
  • Must not hold across await points
  • Excellent for pure sync code
  • Minimal memory footprint

Read-heavy workloads:

  • Both perform well
  • parking_lot has lower overhead
  • Multiple readers can hold lock simultaneously
  • Consider lock granularity

Write-heavy workloads:

  • RwLock may not be optimal
  • Consider Mutex instead
  • Consider lock-free structures
  • Consider message passing (channels)

Best practices:

  • Use tokio::sync::RwLock only in async code
  • Use parking_lot::RwLock in sync code
  • Never hold sync locks across .await
  • Keep critical sections short
  • Consider Mutex for write-heavy workloads
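
The "keep critical sections short" advice often takes the form of copying data out of the lock before doing any slow work. A dependency-free sketch using std::sync::RwLock (the same pattern applies to both locks discussed here; `snapshot_sum` is an illustrative name):

```rust
use std::sync::{Arc, RwLock};

// Clone the data under the lock, then process after the lock is released
fn snapshot_sum(lock: &RwLock<Vec<i32>>) -> i32 {
    let snapshot = {
        let guard = lock.read().unwrap();
        guard.clone()
    }; // guard dropped here: the critical section is just the clone

    // "Slow" processing runs without holding the lock
    snapshot.iter().sum()
}

fn main() {
    let lock = Arc::new(RwLock::new(vec![1, 2, 3]));
    println!("Sum: {}", snapshot_sum(&lock)); // 6
}
```

The trade-off is one allocation per read in exchange for minimal lock hold time; for large values, an Arc swap or an immutable data structure may be the better fit.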

Key insight: The choice is primarily about execution context, not performance. In async code, use tokio::sync::RwLock to avoid blocking the executor. In sync code, use parking_lot::RwLock for better performance. The workload matters less than whether the lock will be held across suspension points.