How does parking_lot::Condvar::notify_all differ from notify_one for waking waiting threads?

notify_one wakes a single waiting thread, while notify_all wakes every thread waiting on the condition variable. Choosing between them depends on whether multiple threads can productively respond to the state change or only one should proceed. The distinction matters for both correctness and performance: waking too few threads can cause missed signals and deadlock, while waking too many creates unnecessary contention.

Condition Variables and Waiting Threads

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
fn basic_pattern() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let (lock, cvar) = &*pair;
    
    // Thread waiting for a condition
    let pair_clone = Arc::clone(&pair);
    let handle = thread::spawn(move || {
        let (lock, cvar) = &*pair_clone;
        let mut started = lock.lock();
        while !*started {
            cvar.wait(&mut started);
        }
        println!("Thread proceeding!");
    });
    
    // Main thread signals the condition
    thread::sleep(std::time::Duration::from_millis(10));
    {
        let mut started = lock.lock();
        *started = true;
        // Now we need to wake the waiting thread
        cvar.notify_one();  // or notify_all()
    }
    
    handle.join().unwrap();
}

Condition variables let threads wait for state changes; notify_one and notify_all control who wakes up.

The Core Difference

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
fn demonstrate_difference() {
    let pair = Arc::new((Mutex::new(0_usize), Condvar::new()));
    let (lock, cvar) = &*pair;
    
    // Spawn 5 threads all waiting on the same condition
    let handles: Vec<_> = (0..5)
        .map(|i| {
            let pair = Arc::clone(&pair);
            thread::spawn(move || {
                let (lock, cvar) = &*pair;
                let mut value = lock.lock();
                while *value == 0 {
                    cvar.wait(&mut value);
                }
                println!("Thread {} woke up, value = {}", i, *value);
            })
        })
        .collect();
    
    thread::sleep(std::time::Duration::from_millis(50));
    
    {
        let mut value = lock.lock();
        *value = 42;
        
        // notify_one: wakes exactly ONE (arbitrary) thread
        // cvar.notify_one();
        // Result: one thread wakes; the other four stay blocked,
        // and the join() calls below would hang forever
        
        // notify_all: wakes ALL threads
        cvar.notify_all();
        // Result: All 5 threads wake up and see value == 42
    }
    
    for h in handles {
        h.join().unwrap();
    }
}

notify_one wakes exactly one waiting thread; notify_all wakes every thread waiting on the condition variable.

When notify_one is Correct

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
// Pattern: Single resource, multiple consumers
// Only one thread can use the resource at a time
 
fn producer_consumer_single() {
    let state = Arc::new((
        Mutex::new(Vec::<u32>::new()),  // Queue
        Condvar::new(),
    ));
    
    // Producer
    let producer_state = Arc::clone(&state);
    let producer = thread::spawn(move || {
        let (queue, cvar) = &*producer_state;
        for i in 0..10 {
            let mut q = queue.lock();
            q.push(i);
            println!("Produced: {}", i);
            cvar.notify_one();  // Wake ONE consumer
            drop(q);
            thread::sleep(std::time::Duration::from_millis(10));
        }
    });
    
    // Multiple consumers - but each item only consumed once
    let consumers: Vec<_> = (0..3)
        .map(|id| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let (queue, cvar) = &*state;
                loop {
                    let mut q = queue.lock();
                    while q.is_empty() {
                        cvar.wait(&mut q);
                    }
                    let item = q.pop().unwrap();
                    if item == 9 {
                        // Poison pill: put it back and wake another
                        // consumer so every consumer eventually exits.
                        // (The original id-based exit deadlocked if
                        // consumer 0 popped the pill first.)
                        q.push(item);
                        cvar.notify_one();
                        break;
                    }
                    println!("Consumer {} got: {}", id, item);
                }
            })
        })
        .collect();
    
    producer.join().unwrap();
    for c in consumers {
        c.join().unwrap();
    }
}

Use notify_one when only one thread should act on the state change, like a work queue where each item is consumed once.

When notify_all is Required

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
// Pattern: State change affects all waiting threads
// All threads need to re-evaluate their conditions
 
fn state_change_broadcast() {
    let state = Arc::new((
        Mutex::new((false, 0)),  // (shutdown flag, counter)
        Condvar::new(),
    ));
    
    // Multiple worker threads waiting for work or shutdown
    let workers: Vec<_> = (0..3)
        .map(|id| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let (lock, cvar) = &*state;
                let mut guard = lock.lock();
                loop {
                    // Check conditions
                    if guard.0 {
                        println!("Worker {} shutting down", id);
                        return;
                    }
                    if guard.1 > 0 {
                        println!("Worker {} processing, count = {}", id, guard.1);
                        guard.1 -= 1;
                        continue;
                    }
                    // Wait for state change
                    cvar.wait(&mut guard);
                }
            })
        })
        .collect();
    
    // Add some work
    {
        let (lock, cvar) = &*state;
        let mut guard = lock.lock();
        guard.1 = 5;
        cvar.notify_one();  // Wake one worker to process
    }
    
    thread::sleep(std::time::Duration::from_millis(50));
    
    // Shutdown all workers
    {
        let (lock, cvar) = &*state;
        let mut guard = lock.lock();
        guard.0 = true;
        cvar.notify_all();  // ALL workers must see shutdown
    }
    
    for w in workers {
        w.join().unwrap();
    }
}

Use notify_all when state changes affect all waiters, like shutdown signals or state transitions.

The Missed Signal Problem

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
// Problem: notify_one can cause missed signals
 
fn missed_signal_problem() {
    let state = Arc::new((Mutex::new(false), Condvar::new()));
    
    // Thread 1: Waits for flag
    let t1_state = Arc::clone(&state);
    let t1 = thread::spawn(move || {
        let (lock, cvar) = &*t1_state;
        let mut flag = lock.lock();
        while !*flag {
            cvar.wait(&mut flag);
        }
        println!("Thread 1: Got signal");
    });
    
    // Thread 2: Also waits for flag
    let t2_state = Arc::clone(&state);
    let t2 = thread::spawn(move || {
        let (lock, cvar) = &*t2_state;
        thread::sleep(std::time::Duration::from_millis(5));
        let mut flag = lock.lock();
        while !*flag {
            cvar.wait(&mut flag);
        }
        println!("Thread 2: Got signal");
    });
    
    thread::sleep(std::time::Duration::from_millis(20));
    
    // Set flag and notify
    {
        let (lock, cvar) = &*state;
        let mut flag = lock.lock();
        *flag = true;
        
        // With notify_one, only ONE of the two threads would wake;
        // the other would wait forever and the joins below would
        // deadlock:
        // cvar.notify_one();
        
        // notify_all wakes both threads, and both see flag = true:
        cvar.notify_all();
    }
    
    t1.join().unwrap();
    t2.join().unwrap();
}
 
// Correct pattern for broadcast events:
fn broadcast_correct() {
    let state = Arc::new((Mutex::new(false), Condvar::new()));
    
    // Multiple threads waiting for the same condition
    let handles: Vec<_> = (0..5)
        .map(|i| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let (lock, cvar) = &*state;
                let mut ready = lock.lock();
                while !*ready {
                    cvar.wait(&mut ready);
                }
                println!("Thread {} is ready!", i);
            })
        })
        .collect();
    
    thread::sleep(std::time::Duration::from_millis(50));
    
    // Broadcast to ALL threads
    {
        let (lock, cvar) = &*state;
        let mut ready = lock.lock();
        *ready = true;
        cvar.notify_all();  // Required! All threads need to proceed
    }
    
    for h in handles {
        h.join().unwrap();
    }
}

When multiple threads wait on the same one-shot condition, notify_one leaves all but one of them blocked; notify_all ensures every thread sees the state change.

Thread Wake Order and Spurious Wakeups

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
fn wakeup_order() {
    let state = Arc::new((Mutex::new(0), Condvar::new()));
    
    // notify_one: which thread wakes?
    // Answer: unspecified! Implementation-dependent.
    // Could be FIFO, LIFO, or random.
    
    // This means you CANNOT rely on specific thread ordering
    let handles: Vec<_> = (0..3)
        .map(|i| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let (lock, cvar) = &*state;
                let mut v = lock.lock();
                while *v == 0 {
                    cvar.wait(&mut v);
                }
                println!("Thread {} woke", i);
            })
        })
        .collect();
    
    thread::sleep(std::time::Duration::from_millis(10));
    
    {
        let (lock, cvar) = &*state;
        let mut v = lock.lock();
        *v = 1;
        cvar.notify_one();  // An arbitrary thread wakes
        cvar.notify_one();  // Another arbitrary thread
        // Only two of the three waiters were notified; wake the
        // rest as well, or the join() calls below never return
        cvar.notify_all();
    }
    
    for h in handles {
        h.join().unwrap();
    }
}
 
// parking_lot's Condvar documents that it does not produce spurious
// wakeups (unlike std::sync::Condvar, which can). Even so, always
// check the condition in a loop: another woken thread may consume
// the condition before you re-acquire the lock.
 
fn correct_loop_pattern() {
    let state = Arc::new((Mutex::new(false), Condvar::new()));
    let (lock, cvar) = &*state;
    
    let handle = {
        let state = Arc::clone(&state);
        thread::spawn(move || {
            let (lock, cvar) = &*state;
            let mut ready = lock.lock();
            
            // CORRECT: Loop checks condition
            while !*ready {
                cvar.wait(&mut ready);
                // Thread wakes - but is ready true?
                // Another thread may have been woken first and
                // changed the state; the loop re-checks
            }
            println!("Ready!");
        })
    };
    
    // WRONG: One-time check
    // if !*ready { cvar.wait(&mut ready); }
    // Could proceed even if ready is false!
    
    thread::sleep(std::time::Duration::from_millis(10));
    {
        let mut ready = lock.lock();
        *ready = true;
        cvar.notify_one();
    }
    
    handle.join().unwrap();
}

Always check the condition in a while loop: notify_one doesn't guarantee which thread wakes, another waiter may invalidate the condition first, and std's Condvar (unlike parking_lot's) can also wake spuriously.

Performance Implications

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
use std::time::Instant;
 
fn performance_comparison() {
    // notify_one: wakes one thread, lower overhead
    // notify_all: wakes all threads, higher overhead
    
    // With many waiting threads:
    let state = Arc::new((Mutex::new(0), Condvar::new()));
    
    // Create 100 waiting threads
    let handles: Vec<_> = (0..100)
        .map(|_| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let (lock, cvar) = &*state;
                let mut v = lock.lock();
                while *v == 0 {
                    cvar.wait(&mut v);
                }
            })
        })
        .collect();
    
    thread::sleep(std::time::Duration::from_millis(100));
    
    // notify_one: Only one thread wakes
    // Cost: O(1) thread wakeup + lock contention for that thread
    let start = Instant::now();
    {
        let (lock, cvar) = &*state;
        let mut v = lock.lock();
        *v = 1;
        cvar.notify_one();
    }
    // Only one thread proceeds, 99 still wait
    
    // notify_all: all 100 waiters are released
    // Cost: O(n), though parking_lot softens the thundering herd:
    // it wakes one thread and requeues the rest onto the mutex,
    // so they wake one at a time as the lock is handed off
    // {
    //     let (lock, cvar) = &*state;
    //     let mut v = lock.lock();
    //     *v = 1;
    //     cvar.notify_all();
    // }
    // All 100 threads wake, contend for lock, check condition
    
    println!("notify_one completed in {:?}", start.elapsed());
    
    // Wake remaining threads
    {
        let (lock, cvar) = &*state;
        let mut v = lock.lock();
        *v = 2;
        cvar.notify_all();  // Now wake everyone
    }
    
    for h in handles {
        h.join().unwrap();
    }
    
    // Performance rule of thumb:
    // - notify_one: O(1) wakeup cost
    // - notify_all: O(n) wakeup cost where n = waiting threads
    // - But notify_all is necessary when all threads need to respond
}

notify_one has O(1) wakeup cost; notify_all is O(n) in the number of waiting threads, though parking_lot mitigates the thundering herd by requeuing all but one waiter onto the mutex rather than waking them simultaneously.

Common Patterns

use parking_lot::{Mutex, Condvar};
use std::sync::Arc;
use std::thread;
 
// Pattern 1: Work queue (notify_one)
fn work_queue() {
    let queue = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    
    // Multiple workers, one queue
    // Each item should be processed by exactly one worker
    // notify_one wakes one worker to claim the item
    
    let (lock, cvar) = &*queue;
    
    // Add work item
    {
        let mut q = lock.lock();
        q.push("task1".to_string());
        cvar.notify_one();  // Wake one worker
    }
    
    // Worker loop sketch:
    // let mut q = lock.lock();
    // while q.is_empty() { cvar.wait(&mut q); }
    // let item = q.pop();
    // notify_one wakes ONE worker to claim the item
}
 
// Pattern 2: Shutdown signal (notify_all)
fn shutdown_signal() {
    let state = Arc::new((Mutex::new(false), Condvar::new()));
    
    // All threads need to see shutdown and exit
    // notify_all wakes ALL workers
    
    let (lock, cvar) = &*state;
    
    // Shutdown
    {
        let mut shutdown = lock.lock();
        *shutdown = true;
        cvar.notify_all();  // All workers must see this
    }
}
 
// Pattern 3: State transition (notify_all)
fn state_transition() {
    let state = Arc::new((
        Mutex::new("idle".to_string()),
        Condvar::new(),
    ));
    
    // Multiple threads waiting for "running" state
    // When state changes to "running", ALL threads should proceed
    
    {
        let (lock, cvar) = &*state;
        let mut s = lock.lock();
        *s = "running".to_string();
        cvar.notify_all();  // State changed, all check it
    }
}
 
// Pattern 4: Barrier (notify_all)
fn barrier() {
    let barrier = Arc::new((
        Mutex::new(0_usize),  // Count of arrived threads
        Condvar::new(),
    ));
    
    let thread_count = 4_usize;
    
    // Each thread increments count
    // Last thread to arrive wakes everyone
    {
        let (lock, cvar) = &*barrier;
        let mut count = lock.lock();
        *count += 1;
        if *count == thread_count {
            cvar.notify_all();  // All threads can proceed
        } else {
            while *count < thread_count {
                cvar.wait(&mut count);
            }
        }
    }
}
 
// Pattern 5: Resource available (notify_one)
fn resource_available() {
    // Pool of resources (e.g., connections)
    // When resource returned, wake ONE waiting thread
    
    let pool = Arc::new((
        Mutex::new(vec!["conn2"]),  // conn1 is currently checked out
        Condvar::new(),
    ));
    
    // Return connection
    {
        let (lock, cvar) = &*pool;
        let mut conns = lock.lock();
        conns.push("conn1");
        cvar.notify_one();  // One waiter can use it
        // Don't need to wake all - only one connection available
    }
}

Use notify_one for single-resource availability; use notify_all for state broadcasts and barriers.

parking_lot vs std::sync

use parking_lot::{Mutex as PlMutex, Condvar as PlCondvar};
use std::sync::{Mutex as StdMutex, Condvar as StdCondvar};
 
fn compare_implementations() {
    // Both have notify_one and notify_all with same semantics
    
    // std::sync::Condvar:
    // - wait() returns MutexGuard
    // - wait_timeout() returns WaitTimeoutResult
    // - More verbose lock management
    
    let std_pair = (StdMutex::new(true), StdCondvar::new());  // true so the demo doesn't block
    {
        let (lock, cvar) = &std_pair;
        let mut guard = lock.lock().unwrap();
        while !*guard {
            guard = cvar.wait(guard).unwrap();  // Re-assign guard
        }
    }
    
    // parking_lot::Condvar:
    // - wait() takes &mut MutexGuard and updates it in place
    // - Simpler API, no Result unwrapping
    // - notify_one returns bool and notify_all returns the number
    //   of threads woken (std's versions return ())
    
    let pl_pair = (PlMutex::new(true), PlCondvar::new());  // true so the demo doesn't block
    {
        let (lock, cvar) = &pl_pair;
        let mut guard = lock.lock();
        while !*guard {
            cvar.wait(&mut guard);  // Simpler!
        }
    }
    
    // Both notify_one/notify_all work the same:
    // - notify_one: wake one thread
    // - notify_all: wake all threads
    
    // parking_lot advantages:
    // - No poisoning (no unwrap needed)
    // - Simpler API
    // - Better performance in some cases
    // - Smaller memory footprint
}

parking_lot::Condvar offers a cleaner API but same semantics for notify_one and notify_all.

Synthesis

Quick reference:

use parking_lot::{Mutex, Condvar};
 
// notify_one: wakes ONE waiting thread
cvar.notify_one();
 
// notify_all: wakes ALL waiting threads
cvar.notify_all();
 
// Decision matrix:
// Scenario                            | Use        | Why
// ------------------------------------+------------+-----------------
// Work queue (one consumer per item)  | notify_one | One can process
// Connection pool (item returned)     | notify_one | One can use it
// Single resource available           | notify_one | One can claim
// Shutdown signal                     | notify_all | All must exit
// State transition                    | notify_all | All must see
// Barrier release                     | notify_all | All proceed
// Condition might affect multiple     | notify_all | All re-check
// Unsure which thread should wake     | notify_all | Safe default
 
// Always use while loop for condition check:
let mut guard = lock.lock();
while !condition(&guard) {  // NOT: if !condition()
    cvar.wait(&mut guard);
}
 
// notify_one performance: O(1) wakeups
// notify_all performance: O(n) wakeups (n = waiting threads)
 
// Common bugs:
// ❌ Using notify_one when all threads need to respond
// ❌ Using if instead of while for condition check
// ❌ Not holding lock when calling notify_* (technically allowed but risky)
// ✅ Use notify_all for safety when in doubt
// ✅ Use notify_one when only one thread can act

Key insight: The choice between notify_one and notify_all is fundamentally about whether the state change is a unicast (one thread should act) or a broadcast (all threads should re-evaluate). Work queues, connection pools, and single-resource availability use notify_one because only one thread can consume the resource. Shutdown signals, state transitions, and barriers use notify_all because every waiting thread needs to respond. The performance difference, O(1) versus O(n) wakeup cost, matters at scale, but correctness comes first: a missed signal causes deadlock, while an unnecessary wakeup just causes brief contention. When in doubt, use notify_all; when you're certain only one thread should proceed, use notify_one.