How does futures::future::FutureExt::shared enable cloning future results across multiple consumers?
FutureExt::shared wraps a future in a Shared combinator that allows multiple consumers to await the same underlying future, all receiving the same result without executing the future multiple times. The shared future is cloneable, and each clone can be awaited independently. The underlying future runs exactly once, and its result is stored in an internal Arc for all clones to access. This is essential for scenarios where the same expensive computation or I/O operation should be shared across multiple parts of a system, such as a single network request whose result is needed by multiple tasks, or an expensive calculation that multiple consumers need.
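As a rough synchronous analogy (not how Shared is actually implemented), std::sync::OnceLock exhibits the same run-once, read-many contract: the first caller runs the computation, every other caller reads the cached value. The function name below is hypothetical, chosen for this sketch.

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, OnceLock};
use std::thread;

// Returns (the value every thread saw, how many times the closure ran).
fn run_once_across_threads() -> (u32, u32) {
    let runs = Arc::new(AtomicU32::new(0));
    // The OnceLock plays the role of Shared's internal result slot.
    let slot: Arc<OnceLock<u32>> = Arc::new(OnceLock::new());

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let slot = Arc::clone(&slot);
            let runs = Arc::clone(&runs);
            thread::spawn(move || {
                // get_or_init guarantees the closure runs at most once;
                // late callers just read the stored result.
                *slot.get_or_init(|| {
                    runs.fetch_add(1, Ordering::SeqCst);
                    42
                })
            })
        })
        .collect();

    let mut value = 0;
    for h in handles {
        value = h.join().unwrap();
    }
    (value, runs.load(Ordering::SeqCst))
}

fn main() {
    let (value, runs) = run_once_across_threads();
    println!("value = {}, computation ran {} time(s)", value, runs);
}
```

Shared adds the asynchronous half of this contract: consumers that arrive before completion wait by polling rather than blocking a thread.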
Basic shared Usage
use futures::future::FutureExt;
use std::time::Duration;
use tokio::time::sleep;
async fn basic_shared() {
// Create a future that simulates expensive work
let expensive_future = async {
sleep(Duration::from_millis(100)).await;
println!("Expensive computation executed");
42
};
// Make it shared - can be cloned and awaited multiple times
let shared = expensive_future.shared();
// Clone for multiple consumers
let shared1 = shared.clone();
let shared2 = shared.clone();
// Both can await the same result
let (result1, result2) = tokio::join!(shared1, shared2);
println!("Result 1: {}", result1); // 42
println!("Result 2: {}", result2); // 42
// "Expensive computation executed" prints only once
}
The future runs once, and all clones receive the same result.
Without shared: The Problem
use futures::future::FutureExt;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
async fn without_shared() {
let call_count = Arc::new(AtomicU32::new(0));
let count_clone = call_count.clone();
// A future that tracks how many times it runs
let compute = move || {
let count = count_clone.clone();
async move {
count.fetch_add(1, Ordering::SeqCst);
42
}
};
// Without shared, each await runs the future
let future1 = compute();
let future2 = compute();
let (r1, r2) = tokio::join!(future1, future2);
println!("Results: {}, {}", r1, r2);
println!("Call count: {}", call_count.load(Ordering::SeqCst));
// Call count: 2 - the computation ran twice
}
Without shared, each consumer needs its own future instance, executing the work multiple times.
With shared: Single Execution
use futures::future::FutureExt;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
async fn with_shared() {
let call_count = Arc::new(AtomicU32::new(0));
let count_clone = call_count.clone();
// A future that tracks how many times it runs
let compute = async move {
count_clone.fetch_add(1, Ordering::SeqCst);
42
};
// Make it shared
let shared = compute.shared();
// Clone for multiple consumers
let s1 = shared.clone();
let s2 = shared.clone();
let (r1, r2) = tokio::join!(s1, s2);
println!("Results: {}, {}", r1, r2);
println!("Call count: {}", call_count.load(Ordering::SeqCst));
// Call count: 1 - the computation ran only once
}
With shared, the computation runs exactly once regardless of how many clones await it.
The Shared Future Type
use futures::future::{FutureExt, Shared};
async fn shared_type() {
let future = async { 42u32 };
// .shared() returns a Shared<F> type
let shared: Shared<_> = future.shared();
// Shared implements Clone
let cloned = shared.clone();
// Shared implements Future
let result = shared.await;
println!("Result: {}", result);
// The cloned future can also be awaited
let result2 = cloned.await;
println!("Result2: {}", result2);
}
Shared<F> wraps the original future and implements both Clone and Future.
Cloning Before Polling
use futures::future::FutureExt;
async fn cloning_behavior() {
let future = async {
println!("Running the future");
42
};
let shared = future.shared();
// Can clone before or after polling starts
let s1 = shared.clone(); // Clone before await
let s2 = shared.clone(); // Another clone
// Even if the future completes
let result = s1.await;
println!("First result: {}", result);
// Clones created after completion still work
let s3 = shared.clone();
let result2 = s2.await;
let result3 = s3.await;
println!("Result2: {}, Result3: {}", result2, result3);
// All return 42, "Running the future" prints only once
}
Clones can be created at any time: before, during, or after the future completes.
Thread-Safety and Synchronization
use futures::future::FutureExt;
use tokio::task::JoinSet;
async fn thread_safety() {
let future = async {
println!("Starting expensive work...");
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
println!("Work complete");
42
};
let shared = future.shared();
let mut tasks = JoinSet::new();
// Spawn multiple tasks that all await the same future
for i in 0..5 {
let s = shared.clone();
tasks.spawn(async move {
let result = s.await;
println!("Task {} got result: {}", i, result);
result
});
}
// Wait for all tasks
let mut results = Vec::new();
while let Some(res) = tasks.join_next().await {
results.push(res.unwrap());
}
// Output:
// "Starting expensive work..." (once)
// "Work complete" (once)
// "Task X got result: 42" (5 times, one per task)
}
Shared provides internal synchronization so multiple tasks can safely await the same future.
Result Type Requirements
use futures::future::FutureExt;
use std::sync::Arc;
async fn result_requirements() {
// The output type must be Clone
// Most types that are Clone work
// Primitive types work
let int_future = async { 42u32 };
let shared_int = int_future.shared();
let _ = shared_int.clone();
// Arc works for non-Clone types
let expensive_result = Arc::new(vec![1, 2, 3, 4, 5]);
let arc_future = async { expensive_result };
let shared_arc = arc_future.shared();
// Strings work
let string_future = async { "hello".to_string() };
let shared_string = string_future.shared();
// Result types work if Ok and Err are Clone
let result_future = async { Ok::<_, String>(42) };
let shared_result = result_future.shared();
}
// Non-Clone types need wrapping
async fn non_clone_types() {
// This won't compile if Output isn't Clone:
// let future = async { std::fs::File::open("test").unwrap() };
// let shared = future.shared(); // Error: File doesn't implement Clone
// Wrap in Arc for non-Clone types:
use std::fs::File;
let file_future = async {
Arc::new(File::open("test").unwrap())
};
let shared = file_future.shared(); // Works
}
The output type must implement Clone. For non-Clone types, wrap in Arc.
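A minimal sketch of why the Arc wrapper satisfies the Clone requirement: cloning an Arc copies only a reference-counted pointer, so every consumer observes the same underlying value without the payload itself being Clone. Handle is a hypothetical stand-in for a non-Clone resource such as a file handle.

```rust
use std::sync::Arc;

// Stand-in for a non-Clone resource (e.g. a file handle).
struct Handle(u32);

// Returns (all clones point at one allocation, the shared value, strong count).
fn share_non_clone_value() -> (bool, u32, usize) {
    let original = Arc::new(Handle(7));
    // Arc::clone copies a pointer and bumps a counter; Handle is never cloned.
    let a = Arc::clone(&original);
    let b = Arc::clone(&original);
    (Arc::ptr_eq(&a, &b), a.0, Arc::strong_count(&original))
}

fn main() {
    let (same_alloc, value, count) = share_non_clone_value();
    println!("same allocation: {}, value: {}, strong_count: {}", same_alloc, value, count);
}
```

This is exactly the shape a shared future's Output takes when the underlying result type can't be duplicated: Arc<T> is Clone for any T.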
Error Handling with Shared
use futures::future::FutureExt;
async fn error_handling() {
// Errors are also shared - all consumers see the same error
let fallible_future = async {
Err::<u32, &str>("something went wrong")
};
let shared = fallible_future.shared();
let s1 = shared.clone();
let s2 = shared.clone();
let (r1, r2) = tokio::join!(s1, s2);
println!("Result 1: {:?}", r1); // Err("something went wrong")
println!("Result 2: {:?}", r2); // Err("something went wrong")
// Both get the same error, but error only "happened" once
}
Errors are also shared, which is useful for operations where the failure should be consistent across consumers.
Real-World Use Case: Shared Network Request
use futures::future::FutureExt;
use tokio::task::JoinSet;
async fn shared_network_request() {
// Simulate an expensive HTTP request
async fn fetch_data(id: u32) -> String {
println!("Making network request for id {}", id);
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
format!("Data for {}", id)
}
// Multiple parts of the app need the same data
let data_future = fetch_data(1).shared();
// Different subsystems await the same result
let mut tasks = JoinSet::new();
// Logging subsystem
{
let f = data_future.clone();
tasks.spawn(async move {
let data = f.await;
println!("Logger: received data");
});
}
// Processing subsystem
{
let f = data_future.clone();
tasks.spawn(async move {
let data = f.await;
println!("Processor: received data");
});
}
// Response handler
{
let f = data_future.clone();
tasks.spawn(async move {
let data = f.await;
println!("Handler: received data");
});
}
while tasks.join_next().await.is_some() {}
// "Making network request" prints only once
}
A common pattern is sharing expensive I/O operations across multiple consumers.
Caching with Shared
use futures::future::FutureExt;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
struct Cache {
entries: RwLock<HashMap<String, futures::future::Shared<futures::future::BoxFuture<'static, String>>>>,
}
impl Cache {
fn new() -> Self {
Self {
entries: RwLock::new(HashMap::new()),
}
}
async fn get(&self, key: &str) -> String {
// Check if we already have a pending/completed future
{
let entries = self.entries.read().await;
if let Some(future) = entries.get(key) {
// Clone the shared future and await it
return future.clone().await;
}
}
// Need to create the future; move an owned copy of the key into the
// async block so the boxed future can be 'static
let key_owned = key.to_string();
let future = async move {
println!("Computing for {}", key_owned);
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
format!("Result for {}", key_owned)
};
let shared = future.boxed().shared();
{
let mut entries = self.entries.write().await;
entries.insert(key.to_string(), shared.clone());
}
shared.await
}
}
Shared enables a simple cache where concurrent requests for the same key share the computation.
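For comparison, the same single-flight idea can be sketched synchronously with only the standard library (a hedged analogy, not the async cache above): a per-key OnceLock stands in for the stored Shared future, so concurrent callers for one key share one computation. The struct and field names here are illustrative.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;

struct Cache {
    // One OnceLock per key; it plays the role of the stored Shared future.
    entries: Mutex<HashMap<String, Arc<OnceLock<String>>>>,
    computations: AtomicU32, // counts how often the expensive work ran
}

impl Cache {
    fn new() -> Self {
        Self {
            entries: Mutex::new(HashMap::new()),
            computations: AtomicU32::new(0),
        }
    }

    fn get(&self, key: &str) -> String {
        // Insert the slot under the lock so only one slot exists per key.
        let slot = {
            let mut entries = self.entries.lock().unwrap();
            Arc::clone(entries.entry(key.to_string()).or_default())
        };
        // The first caller for this key runs the computation; others wait,
        // then read the cached result.
        slot.get_or_init(|| {
            self.computations.fetch_add(1, Ordering::SeqCst);
            format!("Result for {}", key)
        })
        .clone()
    }
}

fn main() {
    let cache = Arc::new(Cache::new());
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || cache.get("alpha"))
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), "Result for alpha");
    }
    // Four callers, one computation.
    println!("computations = {}", cache.computations.load(Ordering::SeqCst));
}
```

The async version trades OnceLock's thread-blocking wait for non-blocking polling, which is why the Shared future belongs in the map rather than a finished value.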
Comparison with Other Patterns
use futures::future::FutureExt;
use tokio::sync::OnceCell;
async fn comparison_patterns() {
// 1. shared: Multiple concurrent awaits, runs once
let future = async { 42u32 };
let shared = future.shared();
let (a, b) = tokio::join!(shared.clone(), shared.clone());
// Both get 42, computation runs once
// 2. OnceCell: Similar but for static/scope-based caching
let cell = OnceCell::new();
let result = cell.get_or_init(|| async { 42u32 }).await;
// Also runs once, but different API
// 3. tokio::sync::broadcast: Different pattern - sends values
// Not equivalent, broadcasts each value to all receivers
// Key differences:
// - shared: Future-level sharing, all await same result
// - OnceCell: Storage-level sharing, good for long-lived caches
// - broadcast: Channel pattern, different use case
}
Shared is for future-level sharing; OnceCell is for storage-level caching.
Lifetime and Ownership
use futures::future::FutureExt;
use std::sync::Arc;
async fn lifetimes_and_ownership() {
// Shared future must be 'static
let data = Arc::new(vec![1, 2, 3]);
let data_clone = data.clone();
let future = async move {
// data_clone is moved into the async block
data_clone.len()
};
let shared = future.shared();
// shared is 'static and can be sent across threads
let handle = tokio::spawn(async move {
shared.await
});
let result = handle.await.unwrap();
println!("Result: {}", result);
// If you need to share references, use Arc:
let shared_data = Arc::new(String::from("hello"));
let sd = shared_data.clone();
let future = async move { sd.clone() };
let shared = future.shared();
}
Shared futures must be 'static; use Arc for sharing references.
Cancellation Behavior
use futures::future::FutureExt;
use tokio::time::{timeout, Duration};
async fn cancellation() {
let future = async {
println!("Starting work...");
tokio::time::sleep(Duration::from_millis(200)).await;
println!("Work complete");
42
};
let shared = future.shared();
let s1 = shared.clone();
let s2 = shared.clone();
// One consumer times out
let result1 = timeout(Duration::from_millis(50), s1).await;
println!("Consumer 1 result: {:?}", result1); // Err(Elapsed)
// The other consumer can still await successfully; the shared future
// resumes from where the first consumer's polling left off
let result2 = s2.await;
println!("Consumer 2 result: {}", result2); // 42
// Key insight: dropping one clone doesn't cancel the underlying future.
// It makes progress whenever some clone is polled, and it is only
// dropped (cancelled) once every clone has been dropped.
}
Dropping one clone doesn't cancel the underlying future; it is cancelled only once every clone has been dropped.
Combining with Other Combinators
use futures::future::FutureExt;
async fn with_combinators() {
// shared works with other combinators
let future = async { 42u32 };
// Map, then share
let mapped = future.map(|x| x * 2).shared();
let result = mapped.await;
println!("Mapped: {}", result); // 84
// Shared implements Future, so a clone can be mapped as well,
// but the mapped future is no longer itself cloneable:
let shared = async { 42u32 }.shared();
let doubled = shared.clone().map(|x| x * 2);
println!("Then map: {}", doubled.await); // 84
// For combining multiple shared futures:
let f1 = async { 1u32 }.shared();
let f2 = async { 2u32 }.shared();
let s1 = f1.clone();
let s2 = f2.clone();
let combined = async move {
let (a, b) = tokio::join!(s1, s2);
a + b
};
println!("Combined: {}", combined.await); // 3
}
Apply transformations before shared for the most flexibility.
When to Use shared
use futures::future::FutureExt;
async fn when_to_use() {
// Use shared when:
// 1. Same expensive computation needed in multiple places
// and computation should run only once
// 2. Concurrent tasks need the same I/O result
// (network request, file read, etc.)
// 3. Result broadcasting to multiple consumers
// where all should see the same value
// 4. Implementing caching patterns with futures
// Don't use shared when:
// 1. The computation is cheap and runs quickly
// - Just create multiple futures
// 2. Each consumer needs independent execution
// - Don't share, use separate futures
// 3. The future's output type isn't Clone
// - Would need to wrap in Arc anyway
// 4. You need different results for different consumers
// - shared gives same result to all
}
Use shared for expensive operations that multiple consumers need; avoid it for cheap operations.
Performance Considerations
use futures::future::FutureExt;
async fn performance() {
// shared adds overhead:
// 1. Arc wrapper for the result
// 2. Internal synchronization
// For cheap operations, overhead may not be worth it:
let cheap = async { 1 + 1 }; // Very fast
// Just call it multiple times instead of sharing
// For expensive operations, overhead is negligible:
let expensive = async {
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
42
};
let shared = expensive.shared(); // Overhead is tiny vs. saving 100ms
// Memory: shared holds the result in Arc until all clones are dropped
// If clones are held for a long time, memory isn't freed
// Cloning is cheap: just bumps an Arc reference count
let s1 = shared.clone();
let s2 = shared.clone();
// Each clone is just an Arc clone
}
shared has minimal overhead compared to running expensive operations multiple times.
Synthesis
Quick reference:
use futures::future::FutureExt;
use tokio::task::JoinSet;
async fn quick_reference() {
// Basic usage: make a future shareable
let future = async { 42u32 };
let shared = future.shared();
// Clone for multiple consumers
let s1 = shared.clone();
let s2 = shared.clone();
// All await the same result, computation runs once
let (r1, r2) = tokio::join!(s1, s2);
assert_eq!(r1, r2);
// Requirements:
// - Output type must implement Clone
// - Future must be 'static
// Key behaviors:
// - Computation runs exactly once
// - All clones receive the same result
// - Errors are also shared
// - Dropping clones doesn't cancel the future
// Use cases:
// - Sharing expensive I/O across multiple consumers
// - Avoiding duplicate work in concurrent systems
// - Caching future results
// - Broadcasting result to multiple tasks
// Alternatives:
// - OnceCell for storage-based caching
// - broadcast channels for streaming values
// - Just running the future multiple times for cheap ops
}
Key insight: shared transforms a future from a single-consumer model to a multi-consumer model by moving the future and its eventual output into a shared, Arc-backed allocation. The first consumer to poll the shared future starts the underlying computation, and subsequent consumers either wait for completion or immediately receive the cached result. This is fundamentally different from constructing a separate future per consumer: shared ensures the computation runs once, whereas per-consumer futures would duplicate the work. The internal synchronization uses a state machine that tracks whether the future is pending, running, or complete, allowing clones to be created at any point. When the future completes, its result is stored for all clones to read, making result retrieval O(1) for completed futures. Use shared when you have an expensive operation that multiple concurrent tasks need (network requests, file reads, expensive calculations) and you want to guarantee the operation runs exactly once while making the result available to all consumers.
