What are the trade-offs between tower::ServiceExt::ready and ready_and for service readiness checking?

In current tower (0.4+), there is no real trade-off to make: ready and ready_and are the same single-service operation, and ready_and has been deprecated (since 0.4.6) in favor of ready. Both yield a mutable reference to the service once it can accept a request. The two names exist for historical reasons: ready_and was introduced so that the returned future would resolve to &mut Self (letting call be chained onto it), and ready was later renamed to that behavior. Neither method waits on multiple services—tower ships no multi-service readiness combinator. When an operation depends on several upstream services, the real choice is between awaiting ready on each service sequentially and joining the ready futures with a combinator such as futures::future::join; that is the trade-off the rest of this answer examines.

Service Readiness in Tower

use tower::{Service, ServiceExt};
use std::task::{Context, Poll};
 
// Tower services can be in two states:
// - Ready: can accept a new request via `call()`
// - Not ready: must wait before accepting requests
 
// Services implement `Service<Request>` trait
// and may not always be ready immediately
// (e.g., connection pools, rate limiters, circuit breakers)

Services may need time to become ready—establishing connections, acquiring permits, or recovering from backpressure.

Basic ready Usage

use tower::{Service, ServiceExt};
 
async fn call_when_ready<S, Request>(mut service: S, request: Request) -> Result<S::Response, S::Error>
where
    S: Service<Request>,
{
    // ready() returns a future that resolves to Ok(&mut S)
    // once the service can accept a request
    service.ready().await?;
    
    // Now safe to call the service
    service.call(request).await
}
 
// The service is borrowed during the ready check
// Since ready() yields &mut S, the two steps are often chained:
// service.ready().await?.call(request).await

ready waits for a single service to be ready, then allows calling it.

Ready Method Signature

use tower::ServiceExt;
use std::future::Future;
 
// Actual signature (tower 0.4):
// fn ready(&mut self) -> Ready<'_, Self, Request>
// where Self: Sized
 
// Ready<'a, S, Request> resolves to Result<&mut S, S::Error>
 
// Key characteristics:
// - Takes &mut self (borrows the service)
// - Returns a future that holds the borrow
// - Resolves to Ok(&mut Self) when ready
// - Service remains owned by the caller
// - ready_and (deprecated since 0.4.6) is an older name for this method

ready borrows the service and returns a future that resolves when ready.

Waiting on Multiple Services Concurrently

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn call_both_services<S1, S2, R1, R2>(
    mut service1: S1,
    mut service2: S2,
    req1: R1,
    req2: R2,
) -> Result<(S1::Response, S2::Response), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Drive both readiness futures concurrently;
    // join resolves once both have completed
    let (r1, r2) = join(service1.ready(), service2.ready()).await;
    let (s1, s2) = (r1?, r2?);
    
    // Both services have reported readiness; call them
    let resp1 = s1.call(req1).await?;
    let resp2 = s2.call(req2).await?;
    
    Ok((resp1, resp2))
}

tower has no built-in multi-service combinator (no ready_all); joining the individual ready() futures—here with futures::future::join—waits on several services concurrently.

Sequential Readiness with ready

use tower::{Service, ServiceExt};
 
async fn sequential_readiness<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
    req1: R1,
    req2: R2,
) -> Result<(S1::Response, S2::Response), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Using ready() - waits are sequential
    
    // Wait for the first service
    svc1.ready().await?;
    // Wait for the second service
    svc2.ready().await?;
    
    // Caveat: while svc2 is awaited, svc1 sits on whatever it
    // reserved to report readiness (a Buffer slot, a rate-limit
    // permit, a pooled connection), and the waits add up
    
    let resp1 = svc1.call(req1).await?;
    let resp2 = svc2.call(req2).await?;
    
    Ok((resp1, resp2))
}

Sequential ready calls add up the wait times, and each earlier service holds whatever it reserved to report readiness while the later ones are awaited.

Simultaneous Readiness by Joining Futures

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn simultaneous_readiness<'a, S1, S2, R1, R2>(
    svc1: &'a mut S1,
    svc2: &'a mut S2,
) -> Result<(&'a mut S1, &'a mut S2), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // join drives both readiness checks concurrently,
    // so neither wait extends the other
    
    let (r1, r2) = join(svc1.ready(), svc2.ready()).await;
    
    // Each Ok holds a mutable reference to a service that
    // has reported readiness and can accept one call
    
    Ok((r1?, r2?))
}

Joining the futures drives both readiness checks at once; the total wait is bounded by the slower service.

Handling Multiple Services

use futures::future::join3;
use tower::{Service, ServiceExt};
 
// Concurrent readiness for three services
async fn multi_service_ready<S1, S2, S3, R1, R2, R3>(
    mut svc1: S1,
    mut svc2: S2,
    mut svc3: S3,
) -> Result<(), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S3: Service<R3>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
    S3::Error: std::error::Error + 'static,
{
    // Wait for all three concurrently; total wait is the slowest one
    let (r1, r2, r3) = join3(svc1.ready(), svc2.ready(), svc3.ready()).await;
    let (_s1, _s2, _s3) = (r1?, r2?, r3?);
    
    Ok(())
}
 
// Alternative: sequential ready (slower, holds reservations longer)
async fn sequential_multi_ready<S1, S2, S3, R1, R2, R3>(
    mut svc1: S1,
    mut svc2: S2,
    mut svc3: S3,
) -> Result<(), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S3: Service<R3>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
    S3::Error: std::error::Error + 'static,
{
    // svc1 becomes ready (and holds its reservation from here on)
    svc1.ready().await?;
    // then svc2
    svc2.ready().await?;
    // then svc3; total wait is the sum of all three
    svc3.ready().await?;
    
    Ok(())
}

Sequential ready calls wait one service at a time; joining waits for all of them concurrently.

Holding Readiness Across Waits

use futures::future::join;
use tower::{Service, ServiceExt};
 
// Example: a connection pool reports readiness by reserving a
// connection; a cache service rate-limits its callers
 
async fn sequential_waits<S1, S2, R1, R2>(
    mut pool_svc: S1,  // Connection pool
    mut cache_svc: S2, // Cache service
    request: R1,
) -> Result<S1::Response, Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Sequential waiting
    pool_svc.ready().await?;
    
    // While we wait for the cache, the pool's reservation
    // (e.g. a checked-out connection) is held the whole time,
    // keeping it away from other tasks
    
    cache_svc.ready().await?;
    
    // Per tower's contract the pool must still accept this call,
    // but its reservation was held for the full cache wait
    let response = pool_svc.call(request).await?;
    
    Ok(response)
}
 
async fn concurrent_waits<S1, S2, R1, R2>(
    mut pool_svc: S1,
    mut cache_svc: S2,
    request: R1,
) -> Result<S1::Response, Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Concurrent waiting: both checks make progress at once,
    // so the reservation window is only as long as the slower wait
    
    let (pool, cache) = join(pool_svc.ready(), cache_svc.ready()).await;
    let (pool, _cache) = (pool?, cache?);
    
    let response = pool.call(request).await?;
    
    Ok(response)
}

Joining the readiness futures shortens how long earlier reservations are held and bounds the total wait by the slowest service.

When to Use ready

use tower::{Service, ServiceExt};
 
// Use ready() when:
// 1. You only need one service
// 2. Services are independent
// 3. No coordination needed
 
async fn single_service_ready<S, Request>(mut service: S, request: Request) 
    -> Result<S::Response, S::Error>
where
    S: Service<Request>,
{
    // Simple case: just one service
    service.ready().await?;
    service.call(request).await
}
 
async fn independent_services<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
    req1: R1,
    req2: R2,
) -> Result<(S1::Response, S2::Response), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Services are independent - can call separately
    svc1.ready().await?;
    let resp1 = svc1.call(req1).await?;
    
    // svc1's state doesn't affect svc2
    svc2.ready().await?;
    let resp2 = svc2.call(req2).await?;
    
    Ok((resp1, resp2))
}

Use ready on its own for single services, or when the calls are truly independent and sequential waiting costs nothing extra.

When to Wait Concurrently

use futures::future::join;
use tower::{Service, ServiceExt};
 
// Wait concurrently when:
// 1. Multiple services must all be ready before work starts
// 2. The services are related (e.g., primary + fallback)
// 3. The total wait should be bounded by the slowest service
 
async fn primary_with_fallback<P, F, Request>(
    mut primary: P,
    mut fallback: F,
    request: Request,
) -> Result<P::Response, Box<dyn std::error::Error>>
where
    Request: Clone,
    P: Service<Request, Error = Box<dyn std::error::Error>>,
    F: Service<Request, Response = P::Response, Error = Box<dyn std::error::Error>>,
{
    // Both must be ready before attempting the primary, so the
    // fallback can be tried immediately if the primary fails
    let (p, f) = join(primary.ready(), fallback.ready()).await;
    let (p, f) = (p?, f?);
    
    // Try the primary first
    match p.call(request.clone()).await {
        Ok(response) => Ok(response),
        Err(_e) => {
            // The fallback has already reported readiness
            f.call(request).await
        }
    }
}
 
async fn coordinated_calls<S1, S2, R>(
    mut svc1: S1,
    mut svc2: S2,
    request1: R,
    request2: R,
) -> Result<(S1::Response, S2::Response), Box<dyn std::error::Error>>
where
    S1: Service<R>,
    S2: Service<R>,
    S1::Error: std::error::Error + 'static,
    S2::Error: std::error::Error + 'static,
{
    // Neither call starts until both services report readiness
    let (r1, r2) = join(svc1.ready(), svc2.ready()).await;
    let (s1, s2) = (r1?, r2?);
    
    let resp1 = s1.call(request1).await?;
    let resp2 = s2.call(request2).await?;
    
    Ok((resp1, resp2))
}

Join readiness futures for coordinated multi-service operations.

Performance Characteristics

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn performance_comparison<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
) where
    S1: Service<R1>,
    S2: Service<R2>,
{
    // Sequential ready:
    // - Waits for svc1.ready(), then svc2.ready()
    // - Total time: time(svc1) + time(svc2)
    // - svc1's reservation is held while svc2 is awaited
    
    // Joined readiness:
    // - Polls both services concurrently
    // - Resolves once ALL are ready
    // - Total time: max(time(svc1), time(svc2))
    
    // Joining is faster whenever the waits overlap,
    // because the waiting happens concurrently, not back to back
    
    let _ = join(svc1.ready(), svc2.ready()).await;
}

Joined readiness futures wait concurrently, so total wait time is bounded by the slowest service.
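The sum-versus-max claim is plain arithmetic; a toy model (hypothetical wait times in milliseconds, not tower code) makes it concrete:

```rust
// Hypothetical per-service readiness waits, in milliseconds
fn sequential_wait(waits: &[u64]) -> u64 {
    // One after another: the waits add up
    waits.iter().sum()
}

fn concurrent_wait(waits: &[u64]) -> u64 {
    // All at once: the slowest service dominates
    waits.iter().copied().max().unwrap_or(0)
}

fn main() {
    let waits = [30, 50, 20];
    println!("sequential: {} ms", sequential_wait(&waits)); // 100
    println!("concurrent: {} ms", concurrent_wait(&waits)); // 50
}
```

The gap grows with the number of services: sequencing scales with the total, joining only with the worst case.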

Error Handling

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn error_handling<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
) -> Result<(), Box<dyn std::error::Error>>
where
    S1: Service<R1>,
    S2: Service<R2>,
    S1::Error: std::fmt::Debug,
    S2::Error: std::fmt::Debug,
{
    // ready() fails if the service errors while becoming ready
    match svc1.ready().await {
        Ok(_svc) => {
            // Service is ready
        }
        Err(e) => {
            // Service failed to become ready
            eprintln!("Service 1 failed: {:?}", e);
        }
    }
    
    // join yields one Result per service, so each
    // failure can be reported individually
    let (r1, r2) = join(svc1.ready(), svc2.ready()).await;
    if let Err(e) = r1 {
        eprintln!("Service 1 failed: {:?}", e);
    }
    if let Err(e) = r2 {
        eprintln!("Service 2 failed: {:?}", e);
    }
    
    Ok(())
}

Both approaches surface readiness errors; joining returns one result per service, so failures can be attributed individually.
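The per-service reporting above can be modeled with plain `Result`s standing in for the resolved `ready()` futures (stdlib only; `readiness_failures` is an illustrative helper, not a tower API):

```rust
// Collect a message for each stand-in readiness result that failed
fn readiness_failures(r1: Result<(), &str>, r2: Result<(), &str>) -> Vec<String> {
    let mut failures = Vec::new();
    if let Err(e) = r1 {
        failures.push(format!("service 1: {e}"));
    }
    if let Err(e) = r2 {
        failures.push(format!("service 2: {e}"));
    }
    failures
}

fn main() {
    // Only the second "service" fails to become ready
    let msgs = readiness_failures(Ok(()), Err("rate limited"));
    println!("{msgs:?}"); // ["service 2: rate limited"]
}
```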

Joining More Than Two Services

use futures::future::join3;
use tower::{Service, ServiceExt};
 
async fn three_way_readiness<S1, S2, S3, R1, R2, R3>(
    mut svc1: S1,
    mut svc2: S2,
    mut svc3: S3,
) where
    S1: Service<R1>,
    S2: Service<R2>,
    S3: Service<R3>,
{
    // futures provides join/join3/join4/join5 for fixed arities,
    // and join_all / try_join_all for homogeneous collections
    
    let (r1, r2, r3) = join3(
        svc1.ready(),
        svc2.ready(),
        svc3.ready(),
    ).await;
    
    // Each Ok variant holds &mut to a service that reported
    // readiness; each Err reports that service's failure
    let (_s1, _s2, _s3) = (r1.ok(), r2.ok(), r3.ok());
}

futures provides fixed-arity joins (join3, join4, join5) plus join_all and try_join_all for homogeneous collections of services.

Integration with Middleware

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn with_middleware<S, Request>(mut service: S, request: Request) 
    -> Result<S::Response, S::Error>
where
    S: Service<Request>,
{
    // Services wrapped in middleware (Buffer, RateLimit, ...) still
    // expose readiness through poll_ready; for a rate limiter,
    // ready() waits until the limiter permits another request
    
    service.ready().await?;
    service.call(request).await
}
 
async fn multiple_with_middleware<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
) where
    S1: Service<R1>,
    S2: Service<R2>,
{
    // Each service may sit behind a different middleware stack;
    // joining waits for both stacks to report readiness
    
    let _ = join(svc1.ready(), svc2.ready()).await;
}

Both ready and joined readiness checks work through middleware layers.

Implementing Custom Ready Patterns

use tower::{Service, ServiceExt};
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
 
// Custom service that demonstrates readiness
pub struct MyService {
    ready: bool,
}
 
impl Service<String> for MyService {
    type Response = String;
    type Error = ();
    type Future = Pin<Box<dyn Future<Output = Result<String, ()>>>>;
 
    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        if self.ready {
            Poll::Ready(Ok(()))
        } else {
            // Not ready - would need to register waker
            Poll::Pending
        }
    }
 
    fn call(&mut self, req: String) -> Self::Future {
        self.ready = false; // Mark as not ready after call
        Box::pin(async move { Ok(format!("Processed: {}", req)) })
    }
}
 
async fn custom_service_usage(mut svc: MyService) {
    // Must wait for readiness before calling
    svc.ready().await.unwrap();
    let _result = svc.call("hello".to_string()).await;
    // After the call, poll_ready returns Pending again until
    // something sets `ready` back to true (and wakes the task)
}

Custom services implement poll_ready; ServiceExt::ready wraps this.
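The poll_ready handshake can be exercised without tower or an async runtime by manually polling a similar hand-rolled state machine with a no-op waker (stdlib only; `OneShotCapacity` is a stand-in modeled on `MyService` above, not a tower type):

```rust
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: enough to build a Context for manual polling
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// A service-like state machine with one unit of capacity
struct OneShotCapacity {
    available: bool,
}

impl OneShotCapacity {
    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), ()>> {
        if self.available {
            Poll::Ready(Ok(()))
        } else {
            // A real implementation would store _cx.waker() here
            Poll::Pending
        }
    }

    fn call(&mut self, req: &str) -> String {
        assert!(self.available, "call() without a prior ready poll");
        self.available = false; // the call consumes the reservation
        format!("processed: {req}")
    }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut svc = OneShotCapacity { available: true };

    assert!(matches!(svc.poll_ready(&mut cx), Poll::Ready(Ok(()))));
    assert_eq!(svc.call("ping"), "processed: ping");
    // Capacity consumed: the service is no longer ready
    assert!(svc.poll_ready(&mut cx).is_pending());
}
```

The `available` flag plays the role of a reservation: once poll_ready hands it out, the next call consumes it, matching the contract tower's combinators rely on.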

Comparison Summary

fn comparison_table() {
    // | Aspect         | Sequential ready()         | Joined ready() futures     |
    // |----------------|----------------------------|----------------------------|
    // | Services       | One at a time              | All concurrently           |
    // | Wait time      | Sum of waits               | Max of waits               |
    // | Reservations   | Held across later waits    | Held only briefly          |
    // | Use case       | Single / independent       | Multi-service coordination |
    // | Error handling | Stops at first failure     | One result per service     |
}

Practical Pattern: Fan-Out

use tower::{Service, ServiceExt};
 
async fn fan_out<S, R>(
    services: &mut [S],
    request: R,
) -> Result<Vec<S::Response>, Box<dyn std::error::Error>>
where
    S: Service<R>,
    S::Error: std::error::Error + 'static,
    R: Clone,
{
    // Wait for every service to be ready before dispatching.
    // Sequential here; futures::future::try_join_all over the
    // ready() futures would drive the waits concurrently instead
    for service in services.iter_mut() {
        service.ready().await?;
    }
    
    // Now call all services with the same request
    let mut results = Vec::new();
    for service in services.iter_mut() {
        results.push(service.call(request.clone()).await?);
    }
    
    Ok(results)
}

Fan-out patterns benefit from ensuring all services are ready before dispatching.

Synthesis

Quick reference:

use futures::future::join;
use tower::{Service, ServiceExt};
 
async fn quick_reference<S1, S2, R1, R2>(
    mut svc1: S1,
    mut svc2: S2,
) where
    S1: Service<R1>,
    S2: Service<R2>,
{
    // ready: single-service readiness
    // - Borrows the service (&mut)
    // - Waits until the service can accept a request
    // - Resolves to Ok(&mut Service)
    // - ready_and is a deprecated alias (since tower 0.4.6)
    let _ = svc1.ready().await;
    
    // Joined ready() futures: multiple services
    // - Borrow all the services
    // - Wait concurrently until ALL have resolved
    // - Yield one Result per service
    let (_r1, _r2) = join(svc1.ready(), svc2.ready()).await;
    
    // Key trade-offs:
    // 1. Latency: joining waits max(times); sequencing waits sum(times)
    // 2. Reservations: sequencing holds earlier reservations longer
    // 3. Simplicity: sequencing is plain awaits; joining needs a combinator
}

Key insight: in current tower, ready versus ready_and is not a real choice—ready_and is a deprecated alias, left over from when the two differed in what their futures yielded. The substantive trade-off is between sequencing and joining readiness checks when several services are involved. Sequential ready calls are simple, and tower's documented contract says a service that has reported readiness must accept the next call, so sequencing is not unsafe; but it sums the wait times and holds each earlier service's reservation (a Buffer slot, a rate-limit permit, a pooled connection) while later services are awaited. Joining the ready futures with combinators such as futures::future::join or try_join_all drives all the checks concurrently, bounding the total wait by the slowest service and shrinking the window in which reservations are held. This matters for primary-fallback patterns, fan-out, and any operation that should not start until every upstream dependency can accept a request. Use a plain ready for single services or truly independent calls; join the readiness futures when one operation depends on several services at once.