How do tokio::sync::RwLock and tokio::sync::Mutex compare for read-heavy vs write-heavy workloads?

tokio::sync::RwLock allows multiple concurrent readers or one exclusive writer, while tokio::sync::Mutex allows only one holder regardless of operation type. For read-heavy workloads, RwLock provides better throughput because multiple readers can hold the lock simultaneously. For write-heavy or contested workloads, Mutex often performs better due to lower overhead: RwLock must track reader counts and manage writer queues, while Mutex uses simpler state management. The choice depends on the ratio of reads to writes, contention levels, and whether the overhead of RwLock's reader tracking is justified by the concurrency gains.
use tokio::sync::Mutex;
use std::sync::Arc;
#[tokio::main]
async fn main() {
let data = Arc::new(Mutex::new(vec![1, 2, 3]));
// Only one task can hold the lock at a time
let data1 = data.clone();
let data2 = data.clone();
let handle1 = tokio::spawn(async move {
let mut guard = data1.lock().await;
guard.push(4);
println!("Task 1: {:?}", *guard);
});
let handle2 = tokio::spawn(async move {
let guard = data2.lock().await;
println!("Task 2: {:?}", *guard);
});
handle1.await.unwrap();
handle2.await.unwrap();
}
Mutex provides exclusive access: only one task can hold the lock at a time.
use tokio::sync::RwLock;
use std::sync::Arc;
#[tokio::main]
async fn main() {
let data = Arc::new(RwLock::new(vec![1, 2, 3]));
// Multiple readers can hold the lock simultaneously
let data1 = data.clone();
let data2 = data.clone();
let data3 = data.clone();
let read1 = tokio::spawn(async move {
let guard = data1.read().await; // Shared read access
println!("Reader 1: {:?}", *guard);
});
let read2 = tokio::spawn(async move {
let guard = data2.read().await; // Shared read access
println!("Reader 2: {:?}", *guard);
});
let write = tokio::spawn(async move {
let mut guard = data3.write().await; // Exclusive write access
guard.push(4);
println!("Writer: {:?}", *guard);
});
// read1 and read2 can run concurrently
// write must wait for current readers to finish
// readers arriving after write queue behind it
read1.await.unwrap();
read2.await.unwrap();
write.await.unwrap();
}
RwLock allows concurrent readers but exclusive writers.
use tokio::sync::RwLock;
use std::sync::Arc;
use std::time::Instant;
#[tokio::main]
async fn main() {
let data = Arc::new(RwLock::new(vec![0u64; 1_000_000]));
let start = Instant::now();
// Multiple concurrent readers
let mut handles = vec![];
for _ in 0..10 {
let data = data.clone();
handles.push(tokio::spawn(async move {
let guard = data.read().await;
guard.iter().sum::<u64>()
}));
}
let results: Vec<u64> = futures::future::join_all(handles)
.await
.into_iter()
.map(|r| r.unwrap())
.collect();
println!("RwLock readers: {:?}", start.elapsed());
println!("Results: {:?}", results);
}
All readers execute concurrently with RwLock.
use tokio::sync::Mutex;
use std::sync::Arc;
use std::time::Instant;
#[tokio::main]
async fn main() {
let data = Arc::new(Mutex::new(vec![0u64; 1_000_000]));
let start = Instant::now();
// Mutex: readers execute one at a time
let mut handles = vec![];
for _ in 0..10 {
let data = data.clone();
handles.push(tokio::spawn(async move {
let guard = data.lock().await;
guard.iter().sum::<u64>()
}));
}
let results: Vec<u64> = futures::future::join_all(handles)
.await
.into_iter()
.map(|r| r.unwrap())
.collect();
println!("Mutex readers: {:?}", start.elapsed());
println!("Results: {:?}", results);
}
Readers execute sequentially with Mutex.
use tokio::sync::{Mutex, RwLock};
use std::sync::Arc;
use std::time::Instant;
#[tokio::main]
async fn main() {
// RwLock has more complex internal state:
// - Reader count
// - Writer waiting queue
// - Read/write mode tracking
// - Wake-up notifications for waiting tasks
let mutex = Arc::new(Mutex::new(0u64));
let rwlock = Arc::new(RwLock::new(0u64));
// Single reader: Mutex overhead
let start = Instant::now();
for _ in 0..100_000 {
let _ = mutex.lock().await;
}
let mutex_time = start.elapsed();
// Single reader: RwLock overhead
let start = Instant::now();
for _ in 0..100_000 {
let _ = rwlock.read().await;
}
let rwlock_time = start.elapsed();
println!("Mutex: {:?}", mutex_time);
println!("RwLock: {:?}", rwlock_time);
// RwLock typically slower for uncontended single reader
}
RwLock has higher per-operation overhead than Mutex.
use tokio::sync::{Mutex, RwLock};
use std::sync::Arc;
use std::time::Instant;
#[tokio::main]
async fn main() {
let mutex_data = Arc::new(Mutex::new(0u64));
let rwlock_data = Arc::new(RwLock::new(0u64));
// Write-heavy: 90% writes, 10% reads
let write_ratio = 0.9;
let operations = 10_000;
// Mutex benchmark
let start = Instant::now();
for i in 0..operations {
if (i as f64 / operations as f64) < write_ratio {
let mut guard = mutex_data.lock().await;
*guard += 1;
} else {
let guard = mutex_data.lock().await;
let _ = *guard;
}
}
let mutex_time = start.elapsed();
// RwLock benchmark
let start = Instant::now();
for i in 0..operations {
if (i as f64 / operations as f64) < write_ratio {
let mut guard = rwlock_data.write().await;
*guard += 1;
} else {
let guard = rwlock_data.read().await;
let _ = *guard;
}
}
let rwlock_time = start.elapsed();
println!("Write-heavy Mutex: {:?}", mutex_time);
println!("Write-heavy RwLock: {:?}", rwlock_time);
// Mutex often faster: RwLock overhead not justified
}
For write-heavy workloads, Mutex overhead is lower.
use tokio::sync::{Mutex, RwLock};
use std::sync::Arc;
use std::time::Instant;
#[tokio::main]
async fn main() {
let mutex_data = Arc::new(Mutex::new(vec![0u64; 10_000]));
let rwlock_data = Arc::new(RwLock::new(vec![0u64; 10_000]));
let concurrency = 10;
let operations_per_task = 1_000;
// Read-heavy: 95% reads, 5% writes
// Mutex: all operations serialized
let start = Instant::now();
let mut handles = vec![];
for _ in 0..concurrency {
let data = mutex_data.clone();
handles.push(tokio::spawn(async move {
for i in 0..operations_per_task {
if i % 20 == 0 {
let mut guard = data.lock().await;
guard[0] += 1;
} else {
let guard = data.lock().await;
let _ = guard.iter().sum::<u64>();
}
}
}));
}
futures::future::join_all(handles).await;
let mutex_time = start.elapsed();
// RwLock: reads can proceed concurrently
let start = Instant::now();
let mut handles = vec![];
for _ in 0..concurrency {
let data = rwlock_data.clone();
handles.push(tokio::spawn(async move {
for i in 0..operations_per_task {
if i % 20 == 0 {
let mut guard = data.write().await;
guard[0] += 1;
} else {
let guard = data.read().await;
let _ = guard.iter().sum::<u64>();
}
}
}));
}
futures::future::join_all(handles).await;
let rwlock_time = start.elapsed();
println!("Read-heavy Mutex: {:?}", mutex_time);
println!("Read-heavy RwLock: {:?}", rwlock_time);
// RwLock often faster: concurrent reads
}
For read-heavy workloads, RwLock concurrency wins.
use tokio::sync::RwLock;
use std::sync::Arc;
use std::time::Duration;
#[tokio::main]
async fn main() {
let data = Arc::new(RwLock::new(0));
// Tokio's RwLock is fair (write-preferring): once a writer is
// waiting, new read acquisitions queue behind it, so a stream
// of readers cannot starve the writer
let _readers: Vec<_> = (0..5)
.map(|i| {
let data = data.clone();
tokio::spawn(async move {
loop {
let _guard = data.read().await; // Held across the sleep
println!("Reader {} has lock", i);
tokio::time::sleep(Duration::from_millis(10)).await;
}
})
})
.collect();
// Writer waits only for readers that already hold the lock
let _writer = {
let data = data.clone();
tokio::spawn(async move {
loop {
println!("Writer attempting to acquire...");
let mut guard = data.write().await;
*guard += 1;
println!("Writer acquired! Value: {}", *guard);
drop(guard);
tokio::time::sleep(Duration::from_millis(100)).await;
}
})
};
tokio::time::sleep(Duration::from_secs(2)).await;
// The writer acquires regularly; long read holds still add latency
}
With tokio's fair policy, continuous readers delay but cannot starve writers.
use tokio::sync::RwLock;
use std::sync::Arc;
#[tokio::main]
async fn main() {
// Tokio's RwLock IS fair (write-preferring) by default,
// so readers cannot starve writers; long lock holds still
// hurt latency. Good practices either way:
// 1. Limit read lock hold time
let data = Arc::new(RwLock::new(vec![1, 2, 3]));
{
let guard = data.read().await;
// Do quick work, drop guard promptly
let _ = guard.len();
} // Guard dropped here
// 2. Use write locks for critical sections
{
let mut guard = data.write().await;
guard.push(4);
}
// 3. Consider Mutex for write-heavy or contested data,
// where reader concurrency provides no benefit
}
Keep lock holds short; even a fair RwLock makes writers wait out long reads.
use tokio::sync::RwLock;
#[tokio::main]
async fn main() {
let lock = RwLock::new(vec![1, 2, 3]);
// Read guard: shared reference
{
let guard1 = lock.read().await;
let guard2 = lock.read().await; // OK: multiple readers
println!("Reader 1: {:?}", *guard1);
println!("Reader 2: {:?}", *guard2);
// Cannot modify through read guard
// guard1.push(4); // Compile error
}
// Write guard: exclusive mutable reference
{
let mut guard = lock.write().await;
guard.push(4); // OK: exclusive access
println!("Writer: {:?}", *guard);
}
// Cannot mix reads and writes
// let read_guard = lock.read().await;
// let write_guard = lock.write().await; // Deadlock: waits forever
}
Read guards allow sharing; write guards require exclusivity.
use tokio::sync::Mutex;
#[tokio::main]
async fn main() {
let data = Mutex::new(vec![1, 2, 3]);
// BAD: Holding lock across await points
{
let mut guard = data.lock().await;
guard.push(4);
// Lock held across .await
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
// Other tasks blocked during sleep
}
// GOOD: Release lock before await
{
let _len = {
let mut guard = data.lock().await;
guard.push(5);
guard.len() // Get value before releasing
}; // Lock released here
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
// Lock not held during sleep
}
}
Hold locks for the shortest time possible, especially across .await points.
use tokio::sync::RwLock;
use std::sync::Arc;
struct Config {
database_url: String,
timeout_ms: u64,
max_connections: usize,
}
#[tokio::main]
async fn main() {
// Configuration is typically read-heavy
// Many components read, few updates
let config = Arc::new(RwLock::new(Config {
database_url: "localhost:5432".to_string(),
timeout_ms: 5000,
max_connections: 10,
}));
// Multiple readers can access concurrently
let mut handles = vec![];
for i in 0..5 {
let config = config.clone();
handles.push(tokio::spawn(async move {
let guard = config.read().await;
println!("Task {} using config: {}", i, guard.database_url);
}));
}
// Rare writer
{
let mut guard = config.write().await;
guard.timeout_ms = 10000;
}
futures::future::join_all(handles).await;
}
Read-heavy configuration benefits from RwLock.
use tokio::sync::Mutex;
use std::sync::Arc;
enum State {
Idle,
Processing(String),
Complete,
}
#[tokio::main]
async fn main() {
// State machines are often write-heavy
// State changes on each operation
let state = Arc::new(Mutex::new(State::Idle));
// Each operation modifies state
{
let mut guard = state.lock().await;
*guard = State::Processing("task1".to_string());
}
{
let mut guard = state.lock().await;
*guard = State::Complete;
}
// Reads also need lock
{
let guard = state.lock().await;
match &*guard {
State::Idle => println!("State: Idle"),
State::Processing(task) => println!("State: Processing {}", task),
State::Complete => println!("State: Complete"),
}
}
}
State machines with frequent updates suit Mutex.
use tokio::sync::{Mutex, RwLock};
#[tokio::main]
async fn main() {
// Non-blocking lock attempts
let mutex = Mutex::new(42);
let rwlock = RwLock::new(42);
// Mutex try_lock
match mutex.try_lock() {
Ok(guard) => println!("Mutex acquired: {}", *guard),
Err(_) => println!("Mutex locked by another task"),
}
// RwLock try_read and try_write
match rwlock.try_read() {
Ok(guard) => println!("Read lock acquired: {}", *guard),
Err(_) => println!("Read lock not available"),
}
match rwlock.try_write() {
Ok(mut guard) => {
*guard += 1;
println!("Write lock acquired: {}", *guard);
}
Err(_) => println!("Write lock not available"),
}
}
try_lock, try_read, and try_write avoid blocking.
use tokio::sync::RwLock;
#[tokio::main]
async fn main() {
let lock = RwLock::new(vec![1, 2, 3]);
// Acquire write lock for modification
let mut write_guard = lock.write().await;
write_guard.push(4);
// Atomically downgrade the write lock to a read lock:
// no other writer can slip in between the write and the read
let read_guard = write_guard.downgrade();
println!("Data: {:?}", *read_guard);
// (parking_lot's RwLock offers a similar downgrade)
}
Tokio's RwLockWriteGuard::downgrade atomically turns a write lock into a read lock.
use std::sync::RwLock as StdRwLock;
use tokio::sync::RwLock as AsyncRwLock;
#[tokio::main]
async fn main() {
// std::sync::RwLock: blocking; fine in sync code
let std_lock = StdRwLock::new(42);
{
let guard = std_lock.read().unwrap();
println!("std read: {}", *guard);
}
// tokio::sync::RwLock: acquisition is a future, must be .await-ed
let async_lock = AsyncRwLock::new(42);
{
let guard = async_lock.read().await;
println!("async read: {}", *guard);
}
// Blocking on std::sync::RwLock under contention in async code
// ties up a runtime thread; the async lock yields to the scheduler
}
Use async locks in async code; blocking locks can deadlock or stall the runtime.
// Use Mutex when:
// 1. Write-heavy workloads (> 30% writes)
// 2. Contention is high
// 3. Lock hold time is short
// 4. Reads are so brief that concurrent access gains nothing
// 5. Simplicity is preferred
// Use RwLock when:
// 1. Read-heavy workloads (> 70% reads)
// 2. Read operations take significant time
// 3. Many concurrent readers
// 4. Writers are infrequent
// Consider alternatives when:
// 1. Very high contention: lock-free data structures
// 2. Simple data: atomic types
// 3. Message passing: channels instead of shared state
Choose based on read/write ratio, contention, and hold time.
// Rough performance characteristics:
//
// Uncontended:
// Mutex: ~20-50ns per operation
// RwLock read: ~30-70ns per operation
// RwLock write: ~30-70ns per operation
//
// Contended (multiple tasks):
// Mutex: serialization overhead
// RwLock read: parallel reads possible
// RwLock write: serialization + waiting for readers
//
// Memory overhead:
// Mutex: minimal (just state)
// RwLock: reader count + waiting queue
// Break-even point:
// - RwLock pays overhead on every operation
// - RwLock gains when concurrent reads happen
// - If reads are very fast, Mutex may be faster even for read-heavy
// - If reads involve computation, RwLock wins
// Rule of thumb:
// - < 70% reads: prefer Mutex
// - > 90% reads: prefer RwLock
// - 70-90% reads: benchmark both
Consider overhead vs. concurrency gains.
| Aspect | Mutex | RwLock |
|--------|---------|---------|
| Concurrent readers | No | Yes |
| Concurrent reader + writer | No | No |
| Overhead per operation | Lower | Higher |
| Write-heavy performance | Better | Worse |
| Read-heavy performance | Worse | Better |
| Writer fairness | Equal priority | Fair (write-preferring) |
| Lock types | One | Two (read/write) |
| Downgrade support | N/A | Yes (downgrade) |
| Use case | General purpose | Read-heavy |
The choice between Mutex and RwLock is fundamentally about trading overhead for concurrency:
Mutex: Lower per-operation overhead, simpler state management, fair access for all operations. Every lock acquisition is the same: wait for exclusive access. For write-heavy workloads (where most operations need exclusive access anyway) or uncontended scenarios, Mutex wins because the overhead of RwLock's reader tracking isn't justified.
RwLock: Higher per-operation overhead, but enables concurrent reads. The lock maintains a reader count and manages writer queues. For read-heavy workloads with significant read hold times, concurrent reads provide throughput gains that outweigh the overhead. Note that tokio's RwLock uses a fair, write-preferring policy: a waiting writer blocks new read acquisitions, so readers cannot starve it, though long read holds still delay writes.
Key insight: The performance trade-off depends on the ratio of reads to writes, the duration of lock holds, and the level of contention. If reads are instant (just checking a value), Mutex may be faster even with 90% reads: the overhead of RwLock coordination exceeds the serialization cost. If reads involve computation or I/O while holding the lock, RwLock wins because concurrent reads provide real parallelism. Measure your specific workload rather than assuming one is universally better. For contested data with frequent writes, Mutex is often the right choice despite being "simpler": the complexity of RwLock's reader/writer coordination becomes overhead without benefit when writes dominate.