How does parking_lot::Mutex handle potential interruption during lock acquisition?
The parking_lot::Mutex handles interruption scenarios differently from std::sync::Mutex due to its implementation approach. When a thread waiting to acquire a lock is woken unexpectedly (e.g., by a signal on Unix or through a custom park/unpark mechanism), parking_lot's queue-based parking system re-checks the lock state and re-parks the thread if the lock is still held, which prevents lost wakeups. The key difference is that waiting threads are queued and explicitly unparked by the unlocking thread, rather than relying directly on OS-level futex semantics that make interrupted waits harder to reason about. The lock is also eventually fair: parking_lot periodically forces a fair handoff to the longest waiter (roughly every 0.5ms) so that no thread starves. This design additionally eliminates the poisoning problem present in std::sync::Mutex.
Basic Mutex Usage Comparison
use std::sync::Mutex as StdMutex;
use parking_lot::Mutex as ParkingMutex;
fn basic_usage() {
// std::sync::Mutex
let std_mutex = StdMutex::new(42);
{
let mut guard = std_mutex.lock().unwrap();
*guard += 1;
}
// parking_lot::Mutex
let parking_mutex = ParkingMutex::new(42);
{
let mut guard = parking_mutex.lock();
*guard += 1;
}
}
The parking_lot version returns a guard directly, without a Result, since it doesn't poison.
The Poisoning Problem in std::sync::Mutex
use std::sync::{Mutex, Arc};
use std::thread;
fn poisoning_demonstration() {
let mutex = Arc::new(Mutex::new(42));
let mutex_clone = Arc::clone(&mutex);
// Thread panics while holding lock
let handle = thread::spawn(move || {
let _guard = mutex_clone.lock().unwrap();
panic!("Thread panicked while holding mutex");
});
// Wait for panic
handle.join().unwrap_err();
// Mutex is now poisoned
let result = mutex.lock();
// Result::Err because mutex is poisoned
match result {
Err(poison_error) => {
// Can still access data through into_inner()
let data = poison_error.into_inner();
println!("Data after panic: {}", *data);
}
Ok(_) => unreachable!(),
}
}
std::sync::Mutex becomes poisoned when a thread panics while holding it, forcing all subsequent acquisitions to handle the poisoned state.
parking_lot::Mutex and Panic Handling
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
fn parking_lot_no_poisoning() {
let mutex = Arc::new(Mutex::new(42));
let mutex_clone = Arc::clone(&mutex);
// Thread panics while holding lock
let handle = thread::spawn(move || {
let _guard = mutex_clone.lock();
panic!("Thread panicked while holding mutex");
});
handle.join().unwrap_err();
// Mutex is NOT poisoned - can still use normally
let guard = mutex.lock();
println!("Mutex still works: {}", *guard);
// Data is preserved
assert_eq!(*guard, 42);
}
parking_lot::Mutex doesn't poison; other threads can continue using it after a panic.
Lock Acquisition Mechanism
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
fn lock_acquisition() {
let mutex = Arc::new(Mutex::new(0));
// parking_lot uses a queue-based approach:
// 1. First thread acquires lock immediately
// 2. Subsequent threads add themselves to queue
// 3. Threads "park" (sleep) until explicitly unparked
// 4. Unlocking thread hands off to next in queue
let mutex_clone = Arc::clone(&mutex);
let guard = mutex.lock();
// Other threads queue up
let handle = thread::spawn(move || {
let _guard = mutex_clone.lock();
// This thread is parked until first thread releases
});
// First thread still holds lock
// Second thread is parked in queue
drop(guard); // Unlocks, unparks next thread
handle.join().unwrap();
}
The queue-based mechanism wakes waiting threads in order, and periodic fair handoffs ensure no thread is left waiting indefinitely.
Interruption During Lock Acquisition
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
fn interruption_handling() {
// Interruption can occur from:
// 1. OS signals (SIGINT, SIGTERM)
// 2. Thread::unpark() from another thread
// 3. Custom park/unpark implementations
// parking_lot's design handles this by:
// 1. Using explicit handoff (fair lock)
// 2. Checking for spurious wakeups
// 3. Re-queueing if unparked without lock ownership
let mutex = Mutex::new(42);
// When lock() is called and mutex is held:
// Thread adds itself to queue, then parks
// If interrupted (e.g., signal), parking_lot:
// 1. May return from park early
// 2. Checks if lock is actually available
// 3. If not, goes back to waiting
// Unlike some implementations, parking_lot ensures
// the thread is properly re-queued
}
Spurious wakeups are handled by checking conditions and re-parking if needed.
The Fair Lock Queue
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
use std::time::Duration;
fn fair_queue() {
let mutex = Arc::new(Mutex::new(0));
let mut handles = Vec::new();
// Thread 1 acquires
let m1 = Arc::clone(&mutex);
let h1 = thread::spawn(move || {
let _guard = m1.lock();
thread::sleep(Duration::from_millis(100));
"Thread 1"
});
// Give thread 1 time to acquire
thread::sleep(Duration::from_millis(10));
// Threads 2, 3, 4 queue up
for i in 2..=4 {
let m = Arc::clone(&mutex);
handles.push(thread::spawn(move || {
let _guard = m.lock();
format!("Thread {}", i)
}));
}
// parking_lot is eventually fair: a newly arriving thread may
// occasionally barge ahead of the queue, but the lock forces a
// fair handoff periodically (roughly every 0.5ms), so Threads
// 2-4 each acquire the lock in bounded time
for handle in handles {
handle.join().unwrap();
}
h1.join().unwrap();
}
The queue plus periodic fair handoffs prevents starvation, though strict FIFO ordering is not guaranteed on every unlock.
try_lock for Non-Blocking Acquisition
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
fn try_lock_usage() {
let mutex = Arc::new(Mutex::new(42));
// try_lock returns immediately with Option<MutexGuard>
match mutex.try_lock() {
Some(guard) => {
println!("Got lock: {}", *guard);
}
None => {
println!("Lock not available");
}
}
// Hold the lock so another thread's try_lock fails
let guard = mutex.lock();
let mutex_clone = Arc::clone(&mutex);
let handle = thread::spawn(move || {
if mutex_clone.try_lock().is_some() {
println!("Unexpected success");
} else {
println!("Lock held by another thread");
}
});
handle.join().unwrap();
drop(guard);
}
try_lock provides non-blocking acquisition that returns Option<MutexGuard>.
try_lock_for with Timeout
use parking_lot::Mutex;
use std::time::Duration;
fn try_lock_for_timeout() {
let mutex = Mutex::new(42);
let _guard = mutex.lock(); // hold the lock so the timed attempts below time out
// Try to acquire for up to 100ms
match mutex.try_lock_for(Duration::from_millis(100)) {
Some(guard) => {
println!("Got lock within timeout: {}", *guard);
}
None => {
println!("Timeout elapsed, lock not available");
}
}
// Another approach: try_lock_until with Instant
use std::time::Instant;
let deadline = Instant::now() + Duration::from_millis(50);
match mutex.try_lock_until(deadline) {
Some(guard) => {
println!("Got lock before deadline");
}
None => {
println!("Deadline passed");
}
}
}
Timed acquisition helps handle interruptions gracefully with bounded wait times.
Deadlock Detection (Debug Feature)
// parking_lot can detect deadlocks at runtime when the optional
// deadlock_detection cargo feature is enabled
use parking_lot::Mutex;
use std::sync::Arc;
fn deadlock_detection() {
// Note: this requires parking_lot's deadlock_detection feature
let m1 = Arc::new(Mutex::new(0));
let m2 = Arc::new(Mutex::new(0));
// Potential deadlock:
// Thread 1: locks m1, then m2
// Thread 2: locks m2, then m1
// With deadlock_detection enabled, a background thread can
// periodically call parking_lot::deadlock::check_deadlock(),
// which returns the threads involved in any detected cycle
// so they can be logged during development
}
Deadlock detection is opt-in: enable the feature and poll parking_lot::deadlock::check_deadlock() to report deadlocked threads.
Comparing Guard Types
use std::sync::Mutex as StdMutex;
use parking_lot::Mutex as ParkingMutex;
fn guard_comparison() {
// std::sync::MutexGuard
let std_mutex = StdMutex::new(42);
let mut std_guard = std_mutex.lock().unwrap();
// Implements Deref and DerefMut
let _value: &i32 = &*std_guard;
let _value_mut: &mut i32 = &mut *std_guard;
// parking_lot::MutexGuard
let parking_mutex = ParkingMutex::new(42);
let mut parking_guard = parking_mutex.lock();
// Also implements Deref and DerefMut
let _value: &i32 = &*parking_guard;
let _value_mut: &mut i32 = &mut *parking_guard;
// Main difference: the std guard must be unwrapped from a Result;
// the parking_lot guard is returned directly (no poisoning)
}
Both guard types implement Deref and DerefMut for transparent access.
Thread Parking Mechanism
use parking_lot::Mutex;
use parking_lot::Condvar;
use std::sync::Arc;
use std::thread;
fn parking_mechanism() {
// parking_lot builds on its own parking primitives
// (the parking_lot_core crate) rather than using OS futexes directly
// Thread parking is:
// 1. Cooperative - threads voluntarily park
// 2. Explicit - another thread must unpark them
// 3. Queued - parked threads wait in per-lock queues
// Internally, Mutex uses parking_lot_core::park and
// parking_lot_core::unpark_one to block and wake threads
let pair = Arc::new((Mutex::new(false), Condvar::new()));
let pair_clone = Arc::clone(&pair);
// Worker thread
let handle = thread::spawn(move || {
let (lock, cvar) = &*pair_clone;
let mut started = lock.lock();
*started = true;
cvar.notify_one();
// Thread releases lock, unblocks waiting threads
});
// Main thread waits
{
let (lock, cvar) = &*pair;
let mut started = lock.lock();
while !*started {
cvar.wait(&mut started);
}
}
handle.join().unwrap();
}
The parking mechanism is explicit and cooperative, not relying on OS signals.
Signal Handling and Locks
use parking_lot::Mutex;
use std::sync::Arc;
use std::thread;
fn signal_handling() {
// On Unix, signals can interrupt system calls
// including futex wait operations
// parking_lot handles this differently:
// 1. Uses its own parking implementation
// 2. Can be interrupted by thread unpark
// 3. Re-checks lock state after waking
// This makes signal handling more predictable
// compared to relying on OS futex semantics
let mutex = Arc::new(Mutex::new(0));
// If a signal arrives while thread is parked
// waiting for mutex, parking_lot ensures:
// 1. Thread wakes up (if signal handler unparks)
// 2. Lock state is rechecked
// 3. If still locked, thread goes back to waiting
// 4. No lost wakeup or missed signal
}
Explicit parking allows for more controlled interruption handling.
Const Initialization
use parking_lot::Mutex;
// Both support const initialization; use static, not const,
// so all code shares one mutex (a const item would be inlined
// as a fresh mutex at every use site)
static GLOBAL_MUTEX: Mutex<i32> = Mutex::new(42);
fn const_init() {
// Can use in static contexts
static STATIC_MUTEX: Mutex<i32> = Mutex::new(0);
// Useful for lazy initialization patterns
let guard = STATIC_MUTEX.lock();
println!("Static value: {}", *guard);
}
parking_lot::Mutex supports const initialization, matching std::sync::Mutex, whose constructor has been const-stable since Rust 1.63.
Performance Characteristics
use std::sync::Mutex as StdMutex;
use parking_lot::Mutex as ParkingMutex;
use std::sync::Arc;
use std::thread;
fn performance_notes() {
// std::sync::Mutex:
// - Uses OS futex on Linux
// - Platform-dependent behavior
// - Poisoning overhead
// - Result unwrapping required
// parking_lot::Mutex:
// - Custom parking implementation
// - Consistent cross-platform behavior
// - No poisoning overhead
// - Direct guard return
// - Generally faster for uncontended cases
// - Eventual fairness under contention
// Trade-off:
// - Periodic fair handoffs can cost some throughput vs purely unfair locks
// - But they prevent starvation
}
parking_lot prioritizes consistency and starvation-freedom alongside good performance.
Real-World Example: Shared State Manager
use parking_lot::Mutex;
use std::sync::Arc;
use std::collections::HashMap;
struct StateManager {
data: Mutex<HashMap<String, String>>,
}
impl StateManager {
fn new() -> Self {
StateManager {
data: Mutex::new(HashMap::new()),
}
}
fn get(&self, key: &str) -> Option<String> {
let data = self.data.lock();
data.get(key).cloned()
}
fn set(&self, key: String, value: String) {
let mut data = self.data.lock();
data.insert(key, value);
}
fn remove(&self, key: &str) -> Option<String> {
let mut data = self.data.lock();
data.remove(key)
}
}
fn state_manager_usage() {
let manager = Arc::new(StateManager::new());
// Multiple threads access shared state
let mut handles = Vec::new();
for i in 0..10 {
let m = Arc::clone(&manager);
handles.push(std::thread::spawn(move || {
m.set(format!("key{}", i), format!("value{}", i));
m.get(&format!("key{}", i));
}));
}
for handle in handles {
handle.join().unwrap();
}
// No poisoning concern if any thread panics
}
State management benefits from no-poison behavior: panics don't corrupt future access.
Real-World Example: Connection Pool
use parking_lot::Mutex;
use std::sync::Arc;
use std::collections::VecDeque;
struct Connection;
struct ConnectionPool {
connections: Mutex<VecDeque<Connection>>,
max_size: usize,
}
impl ConnectionPool {
fn new(max_size: usize) -> Self {
let mut connections = VecDeque::new();
for _ in 0..max_size {
connections.push_back(Connection);
}
ConnectionPool {
connections: Mutex::new(connections),
max_size,
}
}
fn acquire(&self) -> Option<Connection> {
let mut pool = self.connections.lock();
pool.pop_front()
}
fn release(&self, conn: Connection) {
let mut pool = self.connections.lock();
if pool.len() < self.max_size {
pool.push_back(conn);
}
}
fn available(&self) -> usize {
let pool = self.connections.lock();
pool.len()
}
}
fn pool_usage() {
let pool = Arc::new(ConnectionPool::new(5));
// Worker threads acquire/release connections
let mut handles = Vec::new();
for _ in 0..20 {
let p = Arc::clone(&pool);
handles.push(std::thread::spawn(move || {
if let Some(conn) = p.acquire() {
// Use connection
std::thread::sleep(std::time::Duration::from_millis(10));
p.release(conn);
}
}));
}
// If any thread panics while holding a connection,
// pool remains usable (no poisoning)
for handle in handles {
handle.join().unwrap();
}
}
Connection pools benefit from robust behavior during panics.
Synthesis
Key differences from std::sync::Mutex:
| Feature | std::sync::Mutex | parking_lot::Mutex |
|---|---|---|
| Poisoning | Yes (panic while held poisons) | No |
| Lock result | LockResult<MutexGuard> | MutexGuard directly |
| Fairness | Platform-dependent | Eventually fair (periodic fair handoff) |
| try_lock | TryLockResult<MutexGuard> | Option<MutexGuard> |
| Timeout methods | No | try_lock_for, try_lock_until |
| Const init | Yes (Rust 1.63+) | Yes |
| Deadlock detection | No | Yes (optional feature) |
Interruption handling:
| Scenario | Behavior |
|---|---|
| Signal during wait | Re-checks lock state, re-parks if still held |
| Panic while holding lock | No poisoning; other threads continue |
| Spurious wakeup | Handled internally; thread waits again |
| Process-wide termination | Locks are process-local; no persistent state to corrupt |
When to use parking_lot::Mutex:
| Use Case | Benefit |
|---|---|
| High concurrency | Eventual fairness prevents starvation |
| Panic recovery | No poisoning, continues operation |
| Timeout patterns | Built-in try_lock_for support |
| Cross-platform | Consistent behavior everywhere |
| Debugging deadlocks | Opt-in runtime detection feature |
Key insight: parking_lot::Mutex handles interruption through a robust queue-based design that ensures proper handoff between waiting threads. When a thread parked on the mutex is woken unexpectedly (by a signal, an unpark, or a spurious wakeup), it checks whether the lock is actually available and re-queues if not, eliminating the lost wakeups that can occur with raw OS-level futex operations. The absence of poisoning is a deliberate design choice: panics don't mark the mutex as tainted, so other threads can continue using it. This is particularly valuable in long-running services, where a single thread's panic shouldn't prevent future operations. Finally, the eventual-fairness mechanism prevents starvation: although a newly arriving thread may occasionally acquire the lock ahead of queued waiters, parking_lot periodically forces a fair handoff so every waiting thread acquires the lock in bounded time.
