Rust walkthroughs
How does tower::layer::Layer::layer enable middleware chaining for service composition?
tower::layer::Layer::layer provides a trait-based mechanism for wrapping services with middleware, enabling clean composition of multiple layers where each layer adds a cross-cutting concern such as logging, retry logic, rate limiting, or authentication. The layer method transforms a service S into a new service Layer::Service, allowing layers to be chained in a type-safe, composable way. The key insight is that layers are factories for middleware: they do not handle requests directly, but instead wrap inner services with additional behavior, creating a stack where outer layers delegate to inner layers and the innermost layer is the actual business logic.
use tower::Service;
use std::task::{Context, Poll};
use std::future::Future;
use http::{Request, Response};
// The Layer trait is defined as:
// pub trait Layer<S> {
// type Service;
// fn layer(&self, inner: S) -> Self::Service;
// }
// A Layer takes a service and returns a new service that wraps it
// This enables middleware to be added without changing the inner service's type
fn main() {
// The Layer trait abstracts over "service wrapping"
// layer(&self, inner: S) -> Self::Service
//
// Key properties:
// 1. Takes ownership of the inner service
// 2. Returns a new service type (Layer::Service)
// 3. Can be chained because the output is also a Service
// 4. Type transformation is compile-time known
println!("Layer<S> -> Service wraps S");
}
The Layer trait abstracts over the transformation of one service into another.
use tower::Service;
use tower::layer::Layer;
use std::task::{Context, Poll};
use std::pin::Pin;
use std::future::Future;
use http::{Request, Response};
// A middleware service that wraps an inner service
pub struct LoggingService<S> {
inner: S,
name: &'static str,
}
impl<S, ReqBody> Service<Request<ReqBody>> for LoggingService<S>
where
S: Service<Request<ReqBody>>,
S::Future: Send + 'static,
{
type Response = S::Response;
type Error = S::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx)
}
fn call(&mut self, req: Request<ReqBody>) -> Self::Future {
println!("[{}] Request received", self.name);
let fut = self.inner.call(req);
Box::pin(async move {
let response = fut.await?;
println!("[{}] Response sent", self.name);
Ok(response)
})
}
}
// A Layer that creates LoggingService instances
pub struct LoggingLayer {
name: &'static str,
}
impl<S> Layer<S> for LoggingLayer {
type Service = LoggingService<S>;
fn layer(&self, inner: S) -> Self::Service {
LoggingService {
inner,
name: self.name,
}
}
}
fn main() {
// The Layer implementation separates configuration from service creation
let layer = LoggingLayer { name: "api" };
// layer() will be called with the inner service
// and return a wrapped LoggingService
}
A Layer implementation creates middleware services that wrap inner services.
use tower::ServiceBuilder;
use tower::layer::Layer;
use std::time::Duration;
fn main() {
// Tower provides ServiceBuilder for layering multiple middleware
let _service = ServiceBuilder::new()
// With ServiceBuilder, layers added first end up outermost in the stack
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(30)))
.layer(tower::limit::concurrency::ConcurrencyLimitLayer::new(100))
// tower ships no ready-made retry policy; my_retry_policy stands in for a
// user-defined type implementing tower::retry::Policy
.layer(tower::retry::RetryLayer::new(my_retry_policy))
// Terminal service (the actual handler)
.service(MyHandler);
// This creates:
// Timeout<ConcurrencyLimit<Retry<MyHandler>>>
// Requests flow: Timeout -> ConcurrencyLimit -> Retry -> MyHandler
println!("Layers added first are outermost");
}
struct MyHandler;
impl<ReqBody> tower::Service<http::Request<ReqBody>> for MyHandler {
type Response = http::Response<String>;
type Error = String;
type Future = std::future::Ready<Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<Result<(), Self::Error>> {
std::task::Poll::Ready(Ok(()))
}
fn call(&mut self, _req: http::Request<ReqBody>) -> Self::Future {
std::future::ready(Ok(http::Response::new("Hello".to_string())))
}
}
ServiceBuilder::layer() chains middleware; layers added first end up outermost in the stack.
use tower::Service;
use std::fmt::Debug;
fn main() {
// Each layer transforms the service type
// Starting type: S
// After Layer1: Layer1::Service<S>
// After Layer2: Layer2::Service<Layer1::Service<S>>
// After Layer3: Layer3::Service<Layer2::Service<Layer1::Service<S>>>
// The type signature encodes the entire middleware stack
// This enables compile-time verification of the stack
// Example type transformation for
// ServiceBuilder::new().layer(timeout).layer(limit).layer(retry).service(MyHandler),
// where ServiceBuilder applies later-added layers closest to the handler:
// MyHandler
// -> Retry<MyHandler>
// -> ConcurrencyLimit<Retry<MyHandler>>
// -> Timeout<ConcurrencyLimit<Retry<MyHandler>>>
// Each layer adds its type wrapper
// The innermost service is always the handler
println!("Type transformation: S -> L1<S> -> L2<L1<S>> -> L3<L2<L1<S>>>");
}
Layers create nested types that encode the entire middleware stack at compile time.
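The nested-type idea can be made concrete with a minimal std-only sketch. The `Svc`, `Lyr`, `Tag`, and `TagLayer` names below are illustrative stand-ins for tower's Service and Layer traits, not tower APIs:

```rust
// Minimal stand-ins for tower's Service and Layer traits (std only).
trait Svc {
    fn call(&self, req: &str) -> String;
}

trait Lyr<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

struct Handler;
impl Svc for Handler {
    fn call(&self, req: &str) -> String {
        format!("handled:{}", req)
    }
}

// A middleware wrapper: records which layer saw the request
struct Tag<S> {
    inner: S,
    tag: &'static str,
}
impl<S: Svc> Svc for Tag<S> {
    fn call(&self, req: &str) -> String {
        format!("{}({})", self.tag, self.inner.call(req))
    }
}

struct TagLayer {
    tag: &'static str,
}
impl<S> Lyr<S> for TagLayer {
    type Service = Tag<S>;
    fn layer(&self, inner: S) -> Tag<S> {
        Tag { inner, tag: self.tag }
    }
}

fn compose() -> String {
    let timeout = TagLayer { tag: "timeout" };
    let retry = TagLayer { tag: "retry" };
    // retry.layer(timeout.layer(Handler)) has type Tag<Tag<Handler>>:
    // the middleware stack is fully encoded in the type
    let svc = retry.layer(timeout.layer(Handler));
    svc.call("req")
}

fn main() {
    println!("{}", compose()); // retry(timeout(handled:req))
}
```

Because each `layer()` call returns a new concrete type, the compiler checks the whole stack at compile time, just as tower does with its real Service and Layer traits.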
use tower::ServiceBuilder;
use tower::layer::Layer;
use std::time::Duration;
fn main() {
// Layer order matters: request flows through layers in order,
// response flows back in reverse order
// Example: RateLimit -> Timeout -> Retry -> Handler
let _stack1 = ServiceBuilder::new()
.layer(tower::limit::RateLimitLayer::new(10, Duration::from_secs(1)))
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(30)))
.layer(tower::retry::RetryLayer::new(my_retry_policy)) // my_retry_policy: user-defined tower::retry::Policy
.service(MyHandler);
// Request flow:
// 1. RateLimit checks rate (may reject)
// 2. Timeout starts timer
// 3. Retry decides if retry is available
// 4. Handler processes request
// Response flow (reverse):
// 4. Handler returns response
// 3. Retry may retry on error
// 2. Timeout checks if expired
// 1. RateLimit updates counters
// Different order gives different semantics:
let _stack2 = ServiceBuilder::new()
.layer(tower::retry::RetryLayer::new(my_retry_policy)) // my_retry_policy: user-defined tower::retry::Policy
.layer(tower::limit::RateLimitLayer::new(10, Duration::from_secs(1)))
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(30)))
.service(MyHandler);
// In stack2 (Retry outermost):
// - Each retry attempt passes through the rate limiter, so retries DO consume rate-limit quota
// - Retry sits outside RateLimit, so it sees rate-limit errors and can retry them
// - Timeout sits innermost, so it applies to each attempt individually
}
struct MyHandler;
impl<ReqBody> tower::Service<http::Request<ReqBody>> for MyHandler {
type Response = http::Response<String>;
type Error = String;
type Future = std::future::Ready<Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<Result<(), Self::Error>> {
std::task::Poll::Ready(Ok(()))
}
fn call(&mut self, _req: http::Request<ReqBody>) -> Self::Future {
std::future::ready(Ok(http::Response::new("Hello".to_string())))
}
}
Layer order determines the request/response flow and has semantic implications.
use tower::Service;
use tower::layer::Layer;
use std::task::{Context, Poll};
use std::pin::Pin;
use std::future::Future;
use http::{Request, Response};
// Middleware that adds a header to requests
pub struct AddHeaderService<S> {
inner: S,
header_name: &'static str,
header_value: &'static str,
}
impl<S, ReqBody> Service<Request<ReqBody>> for AddHeaderService<S>
where
S: Service<Request<ReqBody>>,
S::Future: Send + 'static,
{
type Response = S::Response;
type Error = S::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx)
}
fn call(&mut self, mut req: Request<ReqBody>) -> Self::Future {
req.headers_mut().insert(
self.header_name,
http::HeaderValue::from_static(self.header_value),
);
let fut = self.inner.call(req);
Box::pin(async move { fut.await })
}
}
// Layer that creates AddHeaderService instances
pub struct AddHeaderLayer {
name: &'static str,
value: &'static str,
}
impl<S> Layer<S> for AddHeaderLayer {
type Service = AddHeaderService<S>;
fn layer(&self, inner: S) -> Self::Service {
AddHeaderService {
inner,
header_name: self.name,
header_value: self.value,
}
}
}
fn main() {
// The Layer separates configuration from service wrapping
let add_auth = AddHeaderLayer {
name: "authorization",
value: "bearer token",
};
let add_trace = AddHeaderLayer {
name: "x-trace-id",
value: "12345",
};
// Chain layers
let _stack = tower::ServiceBuilder::new()
.layer(add_auth)
.layer(add_trace)
.service(MyHandler);
// Request will have both headers added
// Order: add_auth wraps add_trace wraps MyHandler (first-added is outermost)
}
struct MyHandler;
impl<ReqBody> tower::Service<http::Request<ReqBody>> for MyHandler {
type Response = http::Response<String>;
type Error = String;
type Future = std::future::Ready<Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<Result<(), Self::Error>> {
std::task::Poll::Ready(Ok(()))
}
fn call(&mut self, _req: http::Request<ReqBody>) -> Self::Future {
std::future::ready(Ok(http::Response::new("Hello".to_string())))
}
}
Custom layers wrap services with configurable behavior.
use tower::layer::Layer;
fn main() {
// Layers are factories: they produce services from configuration
// This separates:
// 1. Layer configuration (done once at startup)
// 2. Service creation (done per-request or per-connection)
// Configuration phase
let timeout_layer = tower::timeout::TimeoutLayer::new(std::time::Duration::from_secs(30));
let rate_limit_layer = tower::limit::RateLimitLayer::new(100, std::time::Duration::from_secs(1));
// The layers are cheap to create and clone
// They hold only configuration, not state
// Service creation phase (when handler is ready)
// let service = ServiceBuilder::new()
// .layer(timeout_layer)
// .layer(rate_limit_layer)
// .service(my_handler);
// Each call to layer() produces a new service wrapping the inner one
println!("Layers are factories: config held separately from service");
}
Layers separate configuration from service instantiation.
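The factory property can be sketched with a minimal std-only stand-in Layer trait (the `PrefixLayer` and `Prefixed` names are illustrative, not tower APIs): a layer holds only configuration, and each `layer()` call manufactures a fresh wrapped service.

```rust
// Minimal stand-in for tower's Layer trait (std only).
trait Lyr<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

// Configuration only: no per-request state
struct PrefixLayer {
    prefix: &'static str,
}

// The manufactured service: wraps an inner value with the configured prefix
struct Prefixed<S> {
    inner: S,
    prefix: &'static str,
}

impl<S> Lyr<S> for PrefixLayer {
    type Service = Prefixed<S>;
    fn layer(&self, inner: S) -> Prefixed<S> {
        Prefixed { inner, prefix: self.prefix }
    }
}

fn describe<S: std::fmt::Display>(svc: &Prefixed<S>) -> String {
    format!("{} wraps {}", svc.prefix, svc.inner)
}

fn main() {
    let layer = PrefixLayer { prefix: "log" }; // configured once at startup
    // The same layer value wraps two different inner services:
    let a = layer.layer("handler_a");
    let b = layer.layer("handler_b");
    println!("{}", describe(&a)); // log wraps handler_a
    println!("{}", describe(&b)); // log wraps handler_b
}
```

This is why `layer()` takes `&self`: one configured layer can be reused to wrap any number of services, which is exactly how tower's real layers behave inside ServiceBuilder.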
use tower::ServiceBuilder;
use std::time::Duration;
fn main() {
// The "tower" metaphor: services stacked vertically
// Request flows down, response flows up
// Imagine a request:
//
// +-----------------+
// |    RateLimit    |  <- Request enters here
// +-----------------+
// |     Timeout     |  <- Timer starts
// +-----------------+
// |      Retry      |  <- Retry policy checked
// +-----------------+
// |     Handler     |  <- Business logic
// +-----------------+
//
// Response flows back up the tower
// Each layer can:
// - Reject the request (rate limit)
// - Modify the request (add headers)
// - Modify the response (transform body)
// - Retry on failure (retry policy)
// - Log/trace (observability)
let _stack = ServiceBuilder::new()
.layer(tower::limit::RateLimitLayer::new(10, Duration::from_secs(1)))
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(5)))
.layer(tower::retry::RetryLayer::new(my_retry_policy)) // my_retry_policy: user-defined tower::retry::Policy
.service(MyHandler);
println!("Tower: request flows down, response flows up");
}
struct MyHandler;
impl<ReqBody> tower::Service<http::Request<ReqBody>> for MyHandler {
type Response = http::Response<String>;
type Error = String;
type Future = std::future::Ready<Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<Result<(), Self::Error>> {
std::task::Poll::Ready(Ok(()))
}
fn call(&mut self, _req: http::Request<ReqBody>) -> Self::Future {
std::future::ready(Ok(http::Response::new("OK".to_string())))
}
}
The tower architecture creates a vertical stack where requests flow down and responses flow up.
use tower::layer::Layer;
use tower::ServiceBuilder;
fn main() {
// Layers can be composed and reused
// Tower provides Layer trait for custom composition
// Define a reusable stack of layers
// Note the Stack nesting: ServiceBuilder pushes each new layer onto the
// front, so the last-added layer appears first in the type
fn common_layers() -> ServiceBuilder<tower::layer::util::Stack<
tower::limit::concurrency::ConcurrencyLimitLayer,
tower::layer::util::Stack<
tower::timeout::TimeoutLayer,
tower::layer::util::Identity,
>,
>> {
ServiceBuilder::new()
.layer(tower::timeout::TimeoutLayer::new(std::time::Duration::from_secs(30)))
.layer(tower::limit::concurrency::ConcurrencyLimitLayer::new(100))
}
// Can extend the stack
let _extended = common_layers()
.layer(tower::retry::RetryLayer::new(my_retry_policy)); // my_retry_policy: user-defined tower::retry::Policy
// Or use directly
// let service = common_layers().service(my_handler);
// This enables:
// 1. Shared middleware configuration across handlers
// 2. Easy testing with different layer combinations
// 3. Modular service construction
println!("Layer composition enables reusable middleware stacks");
}
Layers can be composed into reusable middleware stacks.
use tower::ServiceBuilder;
use tower_http::trace::TraceLayer;
use tower_http::compression::CompressionLayer;
use tower_http::limit::RequestBodyLimitLayer;
use std::time::Duration;
fn main() {
// Axum uses Tower layers for middleware
// Layers are applied via .layer() on Router
// Example middleware stack for an API:
let _middleware = ServiceBuilder::new()
// Request tracing
.layer(TraceLayer::new_for_http())
// Compression
.layer(CompressionLayer::new())
// Request body size limit
.layer(RequestBodyLimitLayer::new(1024 * 1024)) // 1MB
// Timeout
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(30)))
// Concurrency limit
.layer(tower::limit::concurrency::ConcurrencyLimitLayer::new(100));
// In Axum:
// let app = Router::new()
// .route("/api", get(handler))
// .layer(middleware);
// The middleware applies to all routes in the router
// Each layer wraps the entire router
println!("Axum integrates with Tower layers for HTTP middleware");
}
Web frameworks like Axum use Tower layers for HTTP middleware.
use tower::Service;
use tower::layer::Layer;
use std::task::{Context, Poll};
use std::pin::Pin;
use std::future::Future;
// Service that exposes inner service
pub struct InspectableService<S> {
inner: S,
name: &'static str,
}
impl<S, Request> Service<Request> for InspectableService<S>
where
S: Service<Request>,
S::Future: Send + 'static,
{
type Response = S::Response;
type Error = S::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
println!("[{}] poll_ready", self.name);
self.inner.poll_ready(cx)
}
fn call(&mut self, req: Request) -> Self::Future {
println!("[{}] call", self.name);
let fut = self.inner.call(req);
Box::pin(async move {
let resp = fut.await;
println!("[{}] response", self.name);
resp
})
}
}
impl<S> InspectableService<S> {
// Access inner service for inspection
pub fn inner(&self) -> &S {
&self.inner
}
pub fn inner_mut(&mut self) -> &mut S {
&mut self.inner
}
}
pub struct InspectLayer {
name: &'static str,
}
impl<S> Layer<S> for InspectLayer {
type Service = InspectableService<S>;
fn layer(&self, inner: S) -> Self::Service {
InspectableService {
inner,
name: self.name,
}
}
}
fn main() {
// Layers wrap inner services, which can be accessed
// This is useful for testing and introspection
println!("Layers preserve inner service access through wrapper methods");
}
Layers can expose access to inner services for introspection.
use tower::ServiceBuilder;
use tower::layer::Layer;
use std::time::Duration;
fn main() {
// Layers can be conditionally applied based on configuration
struct Config {
enable_timeout: bool,
enable_rate_limit: bool,
timeout_secs: u64,
rate_limit_per_sec: u64,
}
fn build_service(config: Config) -> ServiceBuilder<tower::layer::util::Stack<
tower::limit::RateLimitLayer,
tower::layer::util::Stack<
tower::timeout::TimeoutLayer,
tower::layer::util::Identity,
>,
>> {
ServiceBuilder::new()
.layer(tower::timeout::TimeoutLayer::new(Duration::from_secs(config.timeout_secs)))
.layer(tower::limit::RateLimitLayer::new(config.rate_limit_per_sec, Duration::from_secs(1)))
}
// More dynamic approach: tower::util::option_layer converts an
// Option<Layer> into Either<L, Identity>, which passes through when None
fn maybe_timeout(
enabled: bool,
duration: Duration,
) -> tower::util::Either<tower::timeout::TimeoutLayer, tower::layer::util::Identity> {
tower::util::option_layer(enabled.then(|| tower::timeout::TimeoutLayer::new(duration)))
}
// Configuration-driven middleware stack
let config = Config {
enable_timeout: true,
enable_rate_limit: true,
timeout_secs: 30,
rate_limit_per_sec: 100,
};
println!("Layers can be conditionally applied based on config");
}
Layers support conditional application based on configuration.
use tower::Service;
use tower::layer::Layer;
use std::task::{Context, Poll};
use std::pin::Pin;
use std::future::Future;
use http::{Request, Response};
// Layer that transforms errors
pub struct ErrorTransformService<S> {
inner: S,
}
impl<S, ReqBody> Service<Request<ReqBody>> for ErrorTransformService<S>
where
S: Service<Request<ReqBody>>,
S::Future: Send + 'static,
S::Error: std::fmt::Debug,
{
type Response = S::Response;
type Error = String; // Transformed error type
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx).map_err(|e| format!("Service not ready: {:?}", e))
}
fn call(&mut self, req: Request<ReqBody>) -> Self::Future {
let fut = self.inner.call(req);
Box::pin(async move {
fut.await.map_err(|e| format!("Request failed: {:?}", e))
})
}
}
pub struct ErrorTransformLayer;
impl<S> Layer<S> for ErrorTransformLayer {
type Service = ErrorTransformService<S>;
fn layer(&self, inner: S) -> Self::Service {
ErrorTransformService { inner }
}
}
fn main() {
// Layers can transform error types
// This allows standardizing errors across a stack
// Without transformation, each layer may have its own error type
// With transformation, all errors can be unified
println!("Layers can transform error types for error standardization");
}
Layers can transform error types, enabling error standardization across the stack.
Layer concepts:
| Concept | Description |
|---------|-------------|
| Layer<S> | Factory that wraps service S |
| layer(&self, inner: S) | Creates wrapped service |
| ServiceBuilder | Composes multiple layers |
| Type transformation | S -> L1<S> -> L2<L1<S>> |
Layer application order:
| Order (outermost first) | Semantics |
|-------------------------|-----------|
| Retry -> RateLimit -> Timeout | Retries consume rate limit, each attempt timed individually |
| RateLimit -> Retry -> Timeout | Retries not rate-limited, each attempt timed individually |
| Timeout -> Retry -> RateLimit | Timeout covers all retries combined, each attempt rate-limited |
Common layer types:
| Layer | Purpose |
|-------|---------|
| TimeoutLayer | Limit request duration |
| RateLimitLayer | Limit requests per time window |
| ConcurrencyLimitLayer | Limit concurrent requests |
| RetryLayer | Retry failed requests |
| TraceLayer | Log/trace requests |
| CompressionLayer | Compress responses |
Key properties:
| Property | Implication |
|----------|-------------|
| Zero-cost abstraction | Layers compile to efficient code |
| Type-safe composition | Compiler verifies stack types |
| Separation of concerns | Each layer handles one concern |
| Composable | Layers can be stacked arbitrarily |
Key insight: tower::layer::Layer::layer enables middleware chaining by implementing the factory pattern for service transformation. Each Layer is a configuration holder that produces a wrapping Service when layer() is called, transforming the service type in the process. This design enables composable middleware stacks where each layer adds a specific cross-cutting concern (timeouts, rate limiting, retry logic, tracing) without the inner service knowing about or handling these concerns. The ServiceBuilder provides a fluent API for composing layers; each .layer() call pushes onto the stack so that layers added first end up outermost, creating a nested type structure that encodes the entire middleware stack. The order of layer application matters: outer layers see requests first and responses last, allowing a layer like Retry to retry failed requests from inner layers, or RateLimit to enforce quotas before requests reach expensive operations. This architecture separates middleware configuration from service creation, enabling reusable layer stacks that can be applied to multiple handlers with different underlying implementations.
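The ordering rule above ("layers added first end up outermost") can be sketched with a std-only builder-style composition. The `Lyr`, `Tag`, `TagLayer`, and `Describe` names are illustrative stand-ins, not tower's types:

```rust
// Std-only sketch of ServiceBuilder-style ordering: the builder applies
// layers to the terminal service in reverse order of addition, so the
// first-added layer ends up outermost.
trait Lyr<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

struct Tag<S> {
    inner: S,
    tag: &'static str,
}
struct TagLayer {
    tag: &'static str,
}
impl<S> Lyr<S> for TagLayer {
    type Service = Tag<S>;
    fn layer(&self, inner: S) -> Tag<S> {
        Tag { inner, tag: self.tag }
    }
}

// Render the nested type structure as a string
trait Describe {
    fn describe(&self) -> String;
}
impl Describe for &'static str {
    fn describe(&self) -> String {
        self.to_string()
    }
}
impl<S: Describe> Describe for Tag<S> {
    fn describe(&self) -> String {
        format!("{}<{}>", self.tag, self.inner.describe())
    }
}

fn build() -> String {
    let rate_limit = TagLayer { tag: "RateLimit" };
    let timeout = TagLayer { tag: "Timeout" };
    // Equivalent of .layer(rate_limit).layer(timeout).service("Handler"):
    // the last-added layer wraps the handler first, so the first-added
    // layer ends up outermost
    let svc = rate_limit.layer(timeout.layer("Handler"));
    svc.describe()
}

fn main() {
    println!("{}", build()); // RateLimit<Timeout<Handler>>
}
```

Reading the resulting type name left to right gives the request path: the outermost (first-added) layer sees the request first and the response last.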