Rust walkthroughs
What is std::sync::atomic::Ordering::SeqCst and when would you use weaker orderings like Acquire/Release?

std::sync::atomic::Ordering::SeqCst (sequentially consistent) is the strongest memory ordering for Rust's atomic operations: it guarantees a single total order of all SeqCst operations across all threads, consistent with each thread's program order. Weaker orderings like Acquire and Release provide partial guarantees that can perform better on weakly-ordered architectures such as ARM, but require careful reasoning about synchronization points. SeqCst is the safest choice because it behaves intuitively—operations appear to happen in the order written in the code—but on some architectures it requires expensive memory barriers. Acquire/Release semantics synchronize data between threads through specific points: a Release store makes prior writes visible to threads that perform an Acquire load of the same atomic, establishing a happens-before relationship.
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

fn main() {
    let counter = AtomicI32::new(0);
    // SeqCst is the safest ordering to reach for by default
    counter.fetch_add(1, Ordering::SeqCst);
    // Scoped threads let the closures borrow `counter` directly
    thread::scope(|s| {
        for _ in 0..10 {
            s.spawn(|| {
                // SeqCst guarantees this increment is visible
                // to all threads in a consistent order
                counter.fetch_add(1, Ordering::SeqCst);
            });
        }
    }); // all spawned threads are joined when the scope ends
    assert_eq!(counter.load(Ordering::SeqCst), 11);
}

SeqCst provides intuitive ordering: all threads see operations in the same total order.
use std::sync::atomic::Ordering;

// From weakest to strongest:
// Relaxed - No ordering guarantees, only atomicity
// Release - Writes before this operation stay before it
// Acquire - Reads after this operation stay after it
// AcqRel  - Combined Acquire and Release
// SeqCst  - Total order across all threads
fn demonstrate_orderings() {
    let atom = std::sync::atomic::AtomicI32::new(0);
    // Relaxed: just atomic, no ordering
    atom.load(Ordering::Relaxed);
    atom.store(1, Ordering::Relaxed);
    // Acquire: subsequent reads can't be reordered before this
    atom.load(Ordering::Acquire);
    // Release: prior writes can't be reordered after this
    atom.store(2, Ordering::Release);
    // AcqRel: both acquire and release semantics
    atom.fetch_add(1, Ordering::AcqRel);
    // SeqCst: global total order
    atom.load(Ordering::SeqCst);
}

Each ordering level provides stronger guarantees at potential performance cost.
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    // SeqCst ensures all threads see operations in the same order
    let x = AtomicBool::new(false);
    let y = AtomicBool::new(false);
    let (r1, r2) = thread::scope(|s| {
        let t1 = s.spawn(|| {
            x.store(true, Ordering::SeqCst); // A
            y.load(Ordering::SeqCst) // B
        });
        let t2 = s.spawn(|| {
            y.store(true, Ordering::SeqCst); // C
            x.load(Ordering::SeqCst) // D
        });
        (t1.join().unwrap(), t2.join().unwrap())
    });
    // With SeqCst, at least one of r1 or r2 must be true: it's impossible
    // for both (x stored before y loaded) and (y stored before x loaded)
    // to observe false.
    assert!(r1 || r2, "SeqCst guarantees consistent ordering");
}

SeqCst prevents counterintuitive reorderings that weaker orderings allow.
use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};
use std::thread;

fn main() {
    let data = AtomicI32::new(0);
    let ready = AtomicBool::new(false);
    thread::scope(|s| {
        // Producer thread
        s.spawn(|| {
            // Write data first
            data.store(42, Ordering::Relaxed);
            // Release: makes prior writes visible to acquiring threads
            ready.store(true, Ordering::Release);
        });
        // Consumer thread
        s.spawn(|| {
            // Spin until ready; Acquire synchronizes with the Release store.
            // Once we see true, we're guaranteed to see data = 42.
            while !ready.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
            // Safe to read data now
            assert_eq!(data.load(Ordering::Relaxed), 42);
        });
    }); // scope joins both threads
}

Acquire/Release establish happens-before relationships for message-passing patterns.
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;

struct Message {
    id: u32,
    content: String,
}

fn main() {
    let msg_ptr = AtomicPtr::new(std::ptr::null_mut());
    thread::scope(|s| {
        s.spawn(|| {
            // Create message
            let msg = Box::new(Message {
                id: 1,
                content: "Hello".to_string(),
            });
            // All writes above (to msg's fields) happen-before this store;
            // Release ensures the consuming thread sees complete data.
            msg_ptr.store(Box::into_raw(msg), Ordering::Release);
        });
        // Wait for the message on the main thread
        while msg_ptr.load(Ordering::Acquire).is_null() {
            std::hint::spin_loop();
        }
        let msg = unsafe { &*msg_ptr.load(Ordering::Acquire) };
        println!("Message {}: {}", msg.id, msg.content);
    });
    // Cleanup: reclaim the allocation
    unsafe { drop(Box::from_raw(msg_ptr.load(Ordering::Relaxed))) };
}

Release ensures all prior writes become visible to the acquiring thread.
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

fn main() {
    let ready = AtomicBool::new(false);
    let counter = AtomicUsize::new(0);
    std::thread::scope(|s| {
        // Thread 1: producer
        s.spawn(|| {
            counter.store(100, Ordering::Relaxed);
            ready.store(true, Ordering::Release);
        });
        // Thread 2: consumer
        s.spawn(|| {
            // Acquire synchronizes with Release
            while !ready.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
            // Guaranteed to see counter = 100; Acquire prevents this
            // load from being reordered before the flag load
            assert_eq!(counter.load(Ordering::Relaxed), 100);
        });
    });
}

Acquire prevents subsequent reads from being reordered before the acquire operation.
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

fn main() {
    let counter = AtomicI32::new(0);
    // Relaxed only guarantees atomicity, no ordering.
    // Useful when only the final value matters.
    thread::scope(|s| {
        for _ in 0..100 {
            s.spawn(|| {
                // Each thread increments 100 times
                for _ in 0..100 {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    }); // all threads joined here
    // Final count is correct (atomicity)
    assert_eq!(counter.load(Ordering::Relaxed), 10_000);
    // Intermediate observations might see different interleavings
    // across threads - that's okay for simple counters.
}

Relaxed is sufficient when only atomicity matters, not ordering.
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let counter = AtomicI32::new(0);
    // fetch_add is a read-modify-write (RMW) operation;
    // AcqRel combines Acquire (for the read) and Release (for the write)
    let _old = counter.fetch_add(1, Ordering::AcqRel);
    // This both:
    // 1. Acquires: sees writes from prior release operations
    // 2. Releases: makes this write visible to acquiring threads
    // For simple counters, Relaxed is often sufficient
    counter.fetch_add(1, Ordering::Relaxed);
}

AcqRel is appropriate for RMW operations that need both acquire and release semantics.
use std::sync::atomic::{AtomicI32, Ordering};
use std::time::Instant;

fn main() {
    let counter = AtomicI32::new(0);
    let iterations = 10_000_000;
    // SeqCst: strongest, potentially slowest
    let start = Instant::now();
    for _ in 0..iterations {
        counter.fetch_add(1, Ordering::SeqCst);
    }
    println!("SeqCst: {:?}", start.elapsed());
    // AcqRel: still strong, sometimes cheaper than SeqCst
    let start = Instant::now();
    for _ in 0..iterations {
        counter.fetch_add(1, Ordering::AcqRel);
    }
    println!("AcqRel: {:?}", start.elapsed());
    // Relaxed: weakest, typically fastest
    let start = Instant::now();
    for _ in 0..iterations {
        counter.fetch_add(1, Ordering::Relaxed);
    }
    println!("Relaxed: {:?}", start.elapsed());
    // Note: performance differences vary by architecture.
    // x86: SeqCst often similar to AcqRel (strong memory model)
    // ARM: SeqCst can be significantly slower (weak memory model)
}

Stronger orderings can be slower, especially on weakly-ordered architectures.
use std::sync::atomic::{AtomicPtr, Ordering};
use std::sync::Mutex;

struct LazyData {
    data: i32,
}

static INSTANCE: AtomicPtr<LazyData> = AtomicPtr::new(std::ptr::null_mut());
static LOCK: Mutex<()> = Mutex::new(());

fn get_instance() -> &'static LazyData {
    // First check: Acquire pairs with the Release store below, so a
    // non-null pointer guarantees the pointed-to data is fully visible
    let ptr = INSTANCE.load(Ordering::Acquire);
    if ptr.is_null() {
        let _lock = LOCK.lock().unwrap();
        // Second check: another thread may have initialized while we
        // waited for the lock; Acquire to see its write
        let ptr = INSTANCE.load(Ordering::Acquire);
        if ptr.is_null() {
            let boxed = Box::new(LazyData { data: 42 });
            let raw = Box::into_raw(boxed);
            // Release makes the boxed data visible to other threads
            INSTANCE.store(raw, Ordering::Release);
            return unsafe { &*raw };
        }
        return unsafe { &*ptr };
    }
    unsafe { &*ptr }
}

Acquire/Release is ideal for double-checked locking.
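A hand-rolled AtomicPtr singleton like this leaks its allocation and is easy to get subtly wrong. As a sketch of the safer alternative: since Rust 1.70 the standard library's `std::sync::OnceLock` encapsulates the same check-then-initialize pattern, including the Acquire/Release synchronization, internally:

```rust
use std::sync::OnceLock;

struct LazyData {
    data: i32,
}

// OnceLock performs the atomic fast-path check and the locked
// initialization internally, with the right Acquire/Release semantics.
static INSTANCE: OnceLock<LazyData> = OnceLock::new();

fn get_instance() -> &'static LazyData {
    // The closure runs at most once, even under contention
    INSTANCE.get_or_init(|| LazyData { data: 42 })
}

fn main() {
    assert_eq!(get_instance().data, 42);
    // Every call returns a reference to the same initialized value
    assert!(std::ptr::eq(get_instance(), get_instance()));
}
```

Prefer this in application code; reach for raw atomics only when a library type doesn't fit.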
use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};
use std::thread;

fn main() {
    // When multiple atomics must be seen in a consistent order
    // across all threads, SeqCst is necessary
    let flag1 = AtomicBool::new(false);
    let flag2 = AtomicBool::new(false);
    let data = AtomicI32::new(0);
    thread::scope(|s| {
        s.spawn(|| {
            data.store(42, Ordering::SeqCst);
            flag1.store(true, Ordering::SeqCst);
        });
        s.spawn(|| {
            flag2.store(true, Ordering::SeqCst);
        });
        s.spawn(|| {
            // With SeqCst, all threads see the same order:
            // if we see flag1 = true, we MUST see data = 42
            if flag1.load(Ordering::SeqCst) {
                assert_eq!(data.load(Ordering::SeqCst), 42);
            }
            // The order between flag1 and flag2 is also consistent:
            let _f1 = flag1.load(Ordering::SeqCst);
            let _f2 = flag2.load(Ordering::SeqCst);
            // all threads agree on which flag was set first
        });
    });
}

Use SeqCst when you need global ordering guarantees across multiple atomics.
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let done = AtomicBool::new(false);
    // WRONG: Relaxed establishes no happens-before relationship.
    // The flag itself becomes visible eventually in practice, but nothing
    // written before the store is guaranteed visible to the reader.
    thread::scope(|s| {
        s.spawn(|| {
            // Signal done
            done.store(true, Ordering::Relaxed); // No synchronization!
        });
        s.spawn(|| {
            while !done.load(Ordering::Relaxed) { // No synchronization!
                std::hint::spin_loop();
            }
        });
    });
    // CORRECT: Use Acquire/Release for synchronization
    let done2 = AtomicBool::new(false);
    thread::scope(|s| {
        s.spawn(|| {
            done2.store(true, Ordering::Release); // Release
        });
        s.spawn(|| {
            while !done2.load(Ordering::Acquire) { // Acquire
                std::hint::spin_loop();
            }
            // Everything written before the Release store is visible here
        });
    });
}

Relaxed is unsafe for synchronization; use Acquire/Release when threads must communicate.
use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};

// The happens-before relationship:
//
// Thread A:                     Thread B:
// data.store(42, Relaxed)       ready.load(Acquire) sees true
// ready.store(true, Release)    data.load(Relaxed) sees 42
//
// Release-Acquire establishes happens-before:
// - All writes before the Release in Thread A
// - Are visible after the Acquire in Thread B
//
// This does NOT establish ordering with a Thread C using Relaxed
fn main() {
    let data = AtomicI32::new(0);
    let ready = AtomicBool::new(false);
    std::thread::scope(|s| {
        // Thread A: writer
        s.spawn(|| {
            data.store(42, Ordering::Relaxed);
            // Release synchronizes with...
            ready.store(true, Ordering::Release);
        });
        // Thread B: reader
        s.spawn(|| {
            // ...Acquire here
            while !ready.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
            // Now we're guaranteed to see data = 42
            assert_eq!(data.load(Ordering::Relaxed), 42);
        });
    });
}

Release in one thread and Acquire in another creates a happens-before relationship.
use std::sync::atomic::{AtomicBool, Ordering};

// SeqCst is necessary when:
// 1. Multiple atomic variables interact
// 2. You need a global total order
// 3. Intuition about program order matters more than performance
fn main() {
    let _x = AtomicBool::new(false);
    let _y = AtomicBool::new(false);
    // With Acquire/Release only, this can fail:
    //   Thread 1: x.store(true, Release); r1 = y.load(Acquire);
    //   Thread 2: y.store(true, Release); r2 = x.load(Acquire);
    // Both r1 and r2 can be false!
    // With SeqCst, at least one must see true:
    //   Thread 1: x.store(true, SeqCst); r1 = y.load(SeqCst);
    //   Thread 2: y.store(true, SeqCst); r2 = x.load(SeqCst);
    // r1 || r2 is guaranteed to be true
}

Use SeqCst when operations on multiple atomics must have a consistent global order.
use std::sync::atomic::{AtomicI32, Ordering};

fn increment_to_max(atomic: &AtomicI32, max: i32) -> i32 {
    // Compare-and-swap loop with Acquire/Release
    loop {
        let current = atomic.load(Ordering::Acquire);
        if current >= max {
            return current;
        }
        match atomic.compare_exchange(
            current,
            current + 1,
            Ordering::AcqRel,  // Success: both Acquire and Release
            Ordering::Acquire, // Failure: still need Acquire for next iteration
        ) {
            Ok(v) => return v,  // returns the previous value
            Err(_) => continue, // Try again
        }
    }
}

fn main() {
    let counter = AtomicI32::new(0);
    let result = increment_to_max(&counter, 10);
    println!("Result: {}", result);
}

compare_exchange needs appropriate ordering for both success and failure cases.
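One refinement worth knowing: inside a retry loop, `compare_exchange_weak` is usually preferred. It is allowed to fail spuriously (return `Err` even when the value matched), which lets it compile to cheaper code on LL/SC architectures such as ARM, and its `Err` variant carries the observed value so the loop can retry without a fresh load. A sketch of the same logic using it (the name `increment_to_max_weak` is just for illustration):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn increment_to_max_weak(atomic: &AtomicI32, max: i32) -> i32 {
    let mut current = atomic.load(Ordering::Acquire);
    loop {
        if current >= max {
            return current;
        }
        match atomic.compare_exchange_weak(
            current,
            current + 1,
            Ordering::AcqRel,  // Success: Acquire and Release
            Ordering::Acquire, // Failure: Acquire for the retry
        ) {
            Ok(previous) => return previous,
            // Err carries the value actually observed, so the retry needs
            // no extra load; spurious failures simply loop again.
            Err(observed) => current = observed,
        }
    }
}

fn main() {
    let counter = AtomicI32::new(9);
    assert_eq!(increment_to_max_weak(&counter, 10), 9); // previous value
    assert_eq!(counter.load(Ordering::Relaxed), 10);
}
```

On x86, where compare-and-swap is a single instruction, the two variants generate the same code; the weak form only pays off on weakly-ordered hardware.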
use std::sync::atomic::Ordering;
// Relaxed:
// - Atomicity only
// - No synchronization
// - Use for counters, statistics
// Release (store operations):
// - Prior writes can't be reordered after
// - Use when publishing data
// Acquire (load operations):
// - Subsequent reads can't be reordered before
// - Use when consuming published data
// AcqRel (RMW operations):
// - Both Acquire and Release
// - Use for operations that both read and write
// SeqCst:
// - Global total order
// - Most intuitive, potentially slowest
// - Use when multiple atomics interact

Choose the weakest ordering that provides required guarantees.
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

// Rule 1: Default to SeqCst when unsure
// Rule 2: Use Acquire/Release for message passing
// Rule 3: Use Relaxed only for statistics/counters
// Rule 4: Document why weaker orderings are safe
fn guidelines() {
    let counter = AtomicUsize::new(0);
    let flag = AtomicBool::new(false);
    // Statistics: Relaxed is fine
    counter.fetch_add(1, Ordering::Relaxed);
    // Message passing: Acquire/Release
    flag.store(true, Ordering::Release); // Publisher side
    while !flag.load(Ordering::Acquire) { // Consumer side
        std::hint::spin_loop();
    }
    // Multiple atomics or complex invariants: SeqCst.
    // When in doubt, use SeqCst.
}

Start with SeqCst, optimize to weaker orderings only when necessary.
| Ordering | Guarantees | Use Case | Performance |
|----------|-----------|----------|-------------|
| Relaxed | Atomicity only | Counters, statistics | Fastest |
| Release | Prior writes visible | Publisher side | Good |
| Acquire | See prior writes | Consumer side | Good |
| AcqRel | Both above | Read-modify-write | Good |
| SeqCst | Global total order | Multiple atomics | Slowest (varies) |
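One tool the table omits: the same `Ordering` values also apply to standalone fences. `std::sync::atomic::fence` separates the ordering from any particular atomic operation, so the flag accesses themselves can stay Relaxed. A minimal sketch of the earlier producer/consumer pattern rewritten with fences:

```rust
use std::sync::atomic::{fence, AtomicBool, AtomicI32, Ordering};
use std::thread;

fn main() {
    let data = AtomicI32::new(0);
    let ready = AtomicBool::new(false);
    thread::scope(|s| {
        s.spawn(|| {
            data.store(42, Ordering::Relaxed);
            // Release fence: keeps the write above before the store below
            fence(Ordering::Release);
            ready.store(true, Ordering::Relaxed);
        });
        s.spawn(|| {
            while !ready.load(Ordering::Relaxed) {
                std::hint::spin_loop();
            }
            // Acquire fence: synchronizes with the release fence once the
            // relaxed load above has observed the relaxed store
            fence(Ordering::Acquire);
            assert_eq!(data.load(Ordering::Relaxed), 42);
        });
    });
}
```

The synchronization is established fence-to-fence through the Relaxed accesses to `ready`; this is mainly useful when one fence can cover several Relaxed operations at once.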
Memory orderings in Rust's atomics control how operations synchronize across threads:
SeqCst (sequentially consistent) provides the strongest guarantees: all SeqCst operations across all threads have a single total order that matches program order. This matches programmer intuition but may require expensive memory barriers. Use SeqCst as the default choice, especially when multiple atomic variables interact or when correctness depends on global ordering.
Acquire/Release semantics establish happens-before relationships: a Release store in one thread synchronizes with an Acquire load in another thread, making all writes before the Release visible after the Acquire. This is sufficient for most message-passing patterns and producer-consumer scenarios. Use Acquire for loads that must see prior writes, Release for stores that publish data.
Relaxed ordering guarantees only atomicity—no ordering or synchronization. Use for simple counters, statistics, and cases where only the final value matters, not the order of operations.
Key insight: Memory ordering is about controlling which writes become visible to which reads and in what order. SeqCst enforces a global order at potential performance cost. Acquire/Release establishes point-to-point synchronization through specific atomic variables. The right choice depends on what ordering guarantees your algorithm actually needs, not on what feels safest. Understanding happens-before relationships lets you use weaker orderings correctly while maintaining correctness.