parking_lot::Mutex and std::sync::Mutex in async contexts?

Mutex selection in async Rust involves understanding how each implementation interacts with the runtime scheduler, handles lock contention, and affects system behavior under load. The choice between parking_lot::Mutex and std::sync::Mutex has implications that extend beyond simple API differences into the realm of async runtime semantics.
Both parking_lot::Mutex and std::sync::Mutex are synchronous, blocking mutexes. When a thread attempts to acquire a held lock, it blocks, period. This is the core issue in async contexts:
```rust
use std::sync::Mutex;
use parking_lot::Mutex as PlMutex;

// Both of these can cause problems in async code
async fn with_std_mutex(mutex: &Mutex<Data>) -> Data {
    let guard = mutex.lock().unwrap(); // Blocks the OS thread!
    guard.clone()
}

async fn with_parking_lot_mutex(mutex: &PlMutex<Data>) -> Data {
    let guard = mutex.lock(); // Also blocks the OS thread!
    guard.clone()
}
```

When an async task blocks on a synchronous mutex, it blocks the entire OS thread. In a multi-threaded async runtime like tokio, this means the runtime cannot schedule other tasks on that thread until the lock is acquired.
The standard library mutex uses OS-provided synchronization primitives:

```rust
use std::sync::Mutex;

let mutex = Mutex::new(42);
// On contention, this triggers a syscall to put the thread to sleep
let guard = mutex.lock().unwrap();
```

When contention occurs:

1. The fast-path atomic operation fails because another thread holds the lock.
2. The thread makes a syscall (a futex wait on Linux) and the kernel puts it to sleep.
3. When the lock is released, the holder makes another syscall to wake a waiting thread.

This process involves kernel transitions and is relatively expensive in terms of latency and CPU cycles.
The parking_lot implementation uses a different strategy optimized for common cases:

```rust
use parking_lot::Mutex;

let mutex = Mutex::new(42);
// On contention, this uses a spin-then-park strategy
let guard = mutex.lock();
```

When contention occurs:

1. The thread spins briefly in user space, hoping the lock is released quickly.
2. If spinning doesn't succeed, the thread parks itself in a compact user-space queue (the "parking lot").
3. The unlocking thread unparks one waiter, with occasional fair handoffs.

The spinning phase can improve latency for very short critical sections but consumes CPU cycles that could run other tasks.
In tokio or async-std, an OS thread runs multiple tasks cooperatively:

```rust
use std::sync::Mutex as StdMutex;

#[tokio::main]
async fn main() {
    // Problem scenario: holding a sync mutex across an await point
    let mutex = StdMutex::new(String::new());

    // This can deadlock or cause thread pool starvation
    let mut guard = mutex.lock().unwrap();
    some_async_operation().await; // BAD: guard held across await
    guard.push_str("data");
}
```

Holding a synchronous mutex across an .await point is the critical anti-pattern. Because std's MutexGuard is not Send, a future that holds one across an await cannot be moved between worker threads (tokio::spawn rejects it at compile time, which is the right call), but even where such code compiles, the blocking behavior before the await is equally problematic.
Consider a tokio runtime with 4 worker threads and synchronous mutexes:

```rust
use std::sync::{Arc, Mutex};
use tokio::task;

#[tokio::main(flavor = "multi_thread", worker_threads = 4)]
async fn main() {
    let mutex = Arc::new(Mutex::new(SharedState::new()));

    // Spawn 8 tasks that all need the mutex
    for i in 0..8 {
        let mutex = Arc::clone(&mutex); // spawned tasks need owned ('static) data
        task::spawn(async move {
            let guard = mutex.lock().unwrap(); // Blocks thread!
            process(i, &guard);
            // Guard dropped here
        });
    }
}
```

If task 0 acquires the mutex, tasks 1-7 all block. With only 4 threads, if 4 tasks are blocked waiting for the mutex, no threads are available for other work. The runtime becomes starved.
parking_lot::Mutex provides a fairness mechanism that std::sync::Mutex doesn't guarantee:

```rust
use parking_lot::Mutex;

let mutex = Mutex::new(0);

// parking_lot ensures eventual fairness: once a lock has been held long
// enough, it is handed directly to a waiting thread, so a waiter gets
// the lock even if new threads keep arriving.
// std::sync::Mutex behavior is platform-dependent; on some platforms,
// newly arriving threads can "steal" the lock.
let guard = mutex.lock();
```

In async contexts with high contention, fairness matters because it prevents any single task from being indefinitely starved of lock access.
```rust
use std::mem::size_of;

// std::sync::Mutex size varies by platform and Rust version: older
// releases boxed a platform mutex, while current releases use a
// futex-based implementation on Linux and are much smaller.
println!("std::sync::Mutex size: {}", size_of::<std::sync::Mutex<()>>());

// parking_lot::Mutex is consistent across platforms:
// one byte of lock state plus the protected data.
println!("parking_lot::Mutex size: {}", size_of::<parking_lot::Mutex<()>>());
```

parking_lot::Mutex is typically smaller and has consistent behavior across platforms. The std::sync::Mutex varies because it builds on platform-specific primitives.
Benchmarks generally show parking_lot::Mutex outperforming std::sync::Mutex in uncontended and lightly contended scenarios due to:

- An inlined fast path that is a single atomic operation, with no poison check on every lock
- A smaller, more cache-friendly memory footprint
- Adaptive spinning that avoids syscalls for short critical sections
For async code, neither std::sync::Mutex nor parking_lot::Mutex is ideal when held across await points:

```rust
use tokio::sync::Mutex;

async fn correct_async_pattern(mutex: &Mutex<Data>) -> Data {
    // This yields to the runtime if the lock isn't available
    let guard = mutex.lock().await;
    guard.clone()
}
```

The tokio::sync::Mutex (and async_std::sync::Mutex) yields the task instead of blocking the thread. Other tasks can run while waiting for the lock.
However, async mutexes have their own trade-offs:

- Each lock and unlock goes through the runtime, which costs more than a single uncontended atomic operation
- The guard is Send and can be held across await points, but holding it for long stretches still serializes tasks
- Shared state is typically wrapped in Arc<Mutex<T>> for sharing across tasks

Use std::sync::Mutex when:

- The critical section is short and synchronous, with no await while the guard is held
- You want poisoning semantics to detect panics that occurred while the lock was held

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Short, synchronous critical sections
fn increment_counter(mutex: &Mutex<u64>) {
    let mut guard = mutex.lock().unwrap();
    *guard += 1;
    // Guard dropped immediately, no async involved
}

// Protecting synchronous resources in async code
struct Cache {
    data: Mutex<HashMap<String, Vec<u8>>>,
}

impl Cache {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        let guard = self.data.lock().unwrap();
        guard.get(key).cloned()
    }
}
```

Use parking_lot::Mutex when:

- You want the same patterns as std::sync::Mutex with lower overhead and guaranteed eventual fairness
- You don't need poisoning semantics

```rust
use parking_lot::Mutex;
use std::collections::HashMap;

// Same patterns as std::sync::Mutex, but with better performance
// and guaranteed fairness
struct PerformanceSensitiveCache {
    data: Mutex<HashMap<String, Vec<u8>>>,
}

// parking_lot doesn't return a Result (no poisoning),
// which simplifies code when you don't need poisoning semantics
impl PerformanceSensitiveCache {
    fn insert(&self, key: String, value: Vec<u8>) {
        let mut guard = self.data.lock();
        guard.insert(key, value);
    }
}
```

Use tokio::sync::Mutex when:

- The guard must be held across an await point
- You need to perform async operations while holding the lock

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Guard must be held across an await point
async fn process_with_mutex(
    mutex: Arc<Mutex<DatabaseConnection>>,
    data: Data,
) -> Result<(), Error> {
    let conn = mutex.lock().await;
    conn.insert(&data).await?; // Async operation while holding lock
    conn.commit().await?;
    Ok(())
}

// Or when you need to limit concurrent async operations
async fn rate_limited_processing(
    semaphore: Arc<Mutex<()>>,
    items: Vec<Item>,
) {
    // Only one batch processed at a time, but yields during I/O
    // (tokio::sync::Semaphore is the more direct tool for this)
    let _guard = semaphore.lock().await;
    for item in items {
        process_item(&item).await;
    }
}
```

std::sync::Mutex implements poisoning: when a thread panics while holding the lock, subsequent lock attempts return an error:
```rust
use std::sync::Mutex;

let mutex = Mutex::new(0);
std::thread::scope(|s| {
    // Join the handle explicitly so the panic is captured
    // instead of propagating out of the scope.
    let _ = s
        .spawn(|| {
            let _guard = mutex.lock().unwrap();
            panic!("oops"); // Guard is poisoned
        })
        .join();
});

// This returns Err(PoisonError)
let result = mutex.lock();
assert!(result.is_err());
```

parking_lot::Mutex does not implement poisoning:
```rust
use parking_lot::Mutex;

let mutex = Mutex::new(0);
std::thread::scope(|s| {
    // Join the handle explicitly so the panic is captured.
    let _ = s
        .spawn(|| {
            let _guard = mutex.lock();
            panic!("oops"); // Lock is released, no poisoning
        })
        .join();
});

// This succeeds
let _guard = mutex.lock();
```

In async contexts, poisoning behavior matters for error handling strategy. If you need to know that data was potentially corrupted by a panic, use std::sync::Mutex. If you want simpler error handling and can tolerate potentially corrupted state, parking_lot::Mutex reduces boilerplate.
```rust
use parking_lot::Mutex as PlMutex;
use std::sync::Arc;
use tokio::sync::Mutex as AsyncMutex;

struct Application {
    // Synchronous data: use parking_lot for performance
    metrics: PlMutex<Metrics>,
    // Simple cache: std::sync or parking_lot, locked briefly
    cache: PlMutex<LruCache<String, Vec<u8>>>,
    // Shared async resource: use tokio::sync::Mutex
    db_pool: Arc<AsyncMutex<DatabasePool>>,
}

impl Application {
    // Sync method: blocking mutex is fine
    fn record_metric(&self, name: &str, value: f64) {
        let mut metrics = self.metrics.lock();
        metrics.record(name, value);
    }

    // Async method with short lock: blocking mutex is acceptable
    async fn get_cached(&self, key: &str) -> Option<Vec<u8>> {
        // Lock held for microseconds, no await while it is held
        let mut cache = self.cache.lock();
        cache.get(key).cloned()
    }

    // Async method with await under lock: must use async mutex
    async fn store_data(&self, data: &Data) -> Result<(), Error> {
        let pool = self.db_pool.lock().await;
        pool.insert(data).await?;
        Ok(())
    }
}
```

parking_lot offers a deadlock detection feature (enabled via feature flag):
```rust
// In Cargo.toml:
// parking_lot = { version = "0.12", features = ["deadlock_detection"] }
use std::time::Duration;

fn setup_deadlock_detection() {
    // Run the check periodically on a background thread
    std::thread::spawn(|| loop {
        std::thread::sleep(Duration::from_secs(10));
        let deadlocks = parking_lot::deadlock::check_deadlock();
        for threads in &deadlocks {
            eprintln!("deadlock detected involving {} threads", threads.len());
        }
    });
}
```

This can help identify deadlock bugs during development that might otherwise be difficult to diagnose in async contexts.
The choice between parking_lot::Mutex and std::sync::Mutex in async contexts is less important than understanding when to use synchronous mutexes at all. Both block OS threads, which can starve async runtimes.
For short, non-async critical sections, prefer parking_lot::Mutex for its:

- Lower overhead on the uncontended fast path
- Smaller memory footprint
- Eventual-fairness guarantee
- Simpler API without poisoning boilerplate
For guards held across .await points, use tokio::sync::Mutex or async_std::sync::Mutex instead. These yield to the runtime rather than blocking threads, preserving async responsiveness at the cost of some performance overhead.
The real insight is that mutex selection flows from understanding your locking pattern: how long is the lock held, does it span async boundaries, and what's the contention level? Answer those questions first, then select the appropriate tool.