Rust walkthroughs
**hyper::Client: how does connection reuse work, and when are connections closed?**

hyper::Client maintains an internal connection pool that keeps established TCP connections open for reuse across multiple requests. When a request completes, the underlying connection returns to the pool if the response indicates HTTP keep-alive semantics (the default for HTTP/1.1). Subsequent requests to the same host reuse pooled connections, avoiding TCP handshake overhead. Connections close when the pool reaches capacity, when the server sends a Connection: close header, when the idle timeout expires, or when the Client is dropped. Understanding pool behavior helps optimize latency and resource usage in high-throughput applications.
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // First request: establishes a new connection
    let resp1 = client.get("http://example.com/api/data".parse()?).await?;
    println!("Response 1 status: {}", resp1.status());
    // Consuming the body lets the HTTP/1 connection return to the pool
    hyper::body::to_bytes(resp1.into_body()).await?;

    // Second request: reuses the pooled connection
    let resp2 = client.get("http://example.com/api/other".parse()?).await?;
    println!("Response 2 status: {}", resp2.status());

    // Both requests used the same underlying TCP connection
    // (assuming the server supports keep-alive)
    Ok(())
}
```

The client automatically pools connections for reuse.
```rust
use hyper::{Body, Client};
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Default client with pooling enabled
    let _default_client: Client<hyper::client::HttpConnector> = Client::new();

    // Custom client with pool configuration
    let _client = Client::builder()
        .pool_idle_timeout(Duration::from_secs(30)) // Close idle connections after 30s
        .pool_max_idle_per_host(10) // Keep at most 10 idle connections per host
        .build::<hyper::client::HttpConnector, Body>();

    // The pool is per-host: connections to example.com
    // won't be reused for other.com
}
```

The pool manages connections per host with configurable limits.
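To build intuition for the per-host limit, here is a deliberately simplified, std-only sketch of an idle pool keyed by host. `IdlePool` and its `u32` "connection ids" are illustrative inventions; hyper's real pool additionally tracks idle timeouts, checkout waiters, and connection health.

```rust
use std::collections::HashMap;

// Simplified model: each host gets its own list of idle connections,
// capped at max_idle_per_host.
struct IdlePool {
    max_idle_per_host: usize,
    idle: HashMap<String, Vec<u32>>, // host -> idle connection ids
}

impl IdlePool {
    // Returns true if the connection was pooled, false if it would be closed.
    fn put(&mut self, host: &str, conn: u32) -> bool {
        let entry = self.idle.entry(host.to_string()).or_default();
        if entry.len() < self.max_idle_per_host {
            entry.push(conn);
            true
        } else {
            false // per-host capacity reached: close instead of pooling
        }
    }

    // Reuse an idle connection for this host, if one is available.
    fn take(&mut self, host: &str) -> Option<u32> {
        self.idle.get_mut(host).and_then(|v| v.pop())
    }
}

fn main() {
    let mut pool = IdlePool { max_idle_per_host: 2, idle: HashMap::new() };
    assert!(pool.put("example.com", 1));
    assert!(pool.put("example.com", 2));
    assert!(!pool.put("example.com", 3)); // over the per-host limit
    assert!(pool.put("other.com", 4));    // separate pool for other.com
    assert_eq!(pool.take("example.com"), Some(2));
    println!("per-host pooling works as expected");
}
```

The key point the sketch captures: capacity is accounted per destination host, so a busy host cannot starve pooling for other hosts.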
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // HTTP/1.1 defaults to keep-alive:
    // the connection stays open after the response completes.
    let res1 = client.get("http://example.com/first".parse()?).await?;
    // After consuming the body, the connection returns to the pool
    let _body_bytes1 = hyper::body::to_bytes(res1.into_body()).await?;

    // Request 2 can reuse the pooled connection
    let _res2 = client.get("http://example.com/second".parse()?).await?;

    // If the server had sent a "Connection: close" header,
    // the connection would NOT have been pooled.
    Ok(())
}
```

HTTP/1.1 keep-alive keeps connections open for reuse by default.
```rust
use hyper::{Body, Client};
use std::time::Duration;

fn pool_configuration() {
    // Create a builder with custom pool settings
    let _client = Client::builder()
        // How long idle connections stay in the pool before closing
        .pool_idle_timeout(Duration::from_secs(90))
        // Max idle connections per host (not total)
        .pool_max_idle_per_host(5)
        .build::<hyper::client::HttpConnector, Body>();

    // Note: pool_max_idle_per_host(0) keeps no idle connections, which
    // effectively disables reuse and creates a new connection per request.
    // Useful in specific scenarios but generally not recommended.
}
```

Configure pool size and timeout to match your workload.
```rust
use hyper::Client;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .pool_idle_timeout(Duration::from_secs(10)) // Short timeout for the demo
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // Make a request; afterwards the connection sits in the pool
    let res = client.get("http://example.com/api".parse()?).await?;
    drop(res);

    // Wait longer than the idle timeout...
    tokio::time::sleep(Duration::from_secs(15)).await;

    // The pool has closed the idle connection;
    // the next request establishes a new one.
    let _res2 = client.get("http://example.com/api".parse()?).await?;
    Ok(())
}
```

Idle connections are closed after pool_idle_timeout expires.
```rust
use hyper::Client;

#[tokio::main]
async fn main() {
    let _client = Client::builder()
        .pool_max_idle_per_host(2) // Keep at most 2 idle connections per host
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // If we make 5 concurrent requests to example.com, only 2 connections
    // stay in the pool after they complete; the other 3 are closed.
    // This prevents unbounded connection accumulation.
}
```

pool_max_idle_per_host limits pool size to prevent resource exhaustion.
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // If the server responds with a "Connection: close" header,
    // hyper will NOT pool the connection. Example server response:
    //
    //   HTTP/1.1 200 OK
    //   Content-Type: text/plain
    //   Connection: close
    //
    //   [body]
    //
    // After such a response the connection is closed, and the next
    // request requires a new one. This is transparent to client code.
    let res = client.get("http://example.com/api".parse()?).await?;
    println!("Status: {}", res.status());
    // Connection handling happens automatically
    Ok(())
}
```

Servers can request connection close via the Connection: close header.
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    let res = client.get("http://example.com/api".parse()?).await?;
    drop(res);
    // The connection is now pooled

    // Dropping the client closes everything:
    // - all idle connections are closed
    // - in-flight connections close once their requests complete
    // - no connections remain in the pool
    drop(client);
    Ok(())
}
```

Dropping the Client closes all pooled connections.
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // Requests to different hosts use different connections
    let res1 = client.get("http://example.com/api".parse()?).await?;
    drop(res1); // Connection to example.com pooled

    let res2 = client.get("http://other.com/data".parse()?).await?;
    drop(res2); // Connection to other.com pooled separately

    // Reuses the pooled connection to example.com, NOT the one to
    // other.com: each host has its own pool, and pool_max_idle_per_host
    // applies per host, not in total.
    let _res3 = client.get("http://example.com/other".parse()?).await?;
    Ok(())
}
```

Connection pools are separate per destination host.
```rust
use hyper::Client;
use hyper_tls::HttpsConnector;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // HTTPS requires a TLS-capable connector such as HttpsConnector
    let https = HttpsConnector::new();
    let client: Client<_, hyper::body::Body> = Client::builder()
        .pool_idle_timeout(Duration::from_secs(60))
        .pool_max_idle_per_host(10)
        .build(https);

    // Connection reuse is even more valuable for HTTPS,
    // because TLS handshake overhead is significant.

    // First request: TCP connect + TLS handshake
    let res1 = client.get("https://example.com/api".parse()?).await?;
    drop(res1);

    // Second request: reuses the existing connection;
    // no TCP connect, no TLS handshake.
    let _res2 = client.get("https://example.com/other".parse()?).await?;
    Ok(())
}
```

HTTPS connection reuse avoids expensive TLS handshakes.
```rust
use hyper::Client;
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // Measure the first request (establishes a connection)
    let start = Instant::now();
    let res1 = client.get("http://example.com/api".parse()?).await?;
    let _ = hyper::body::to_bytes(res1.into_body()).await?;
    println!("First request: {:?}", start.elapsed());

    // Measure the second request (likely reuses the connection)
    let start = Instant::now();
    let res2 = client.get("http://example.com/api".parse()?).await?;
    let _ = hyper::body::to_bytes(res2.into_body()).await?;
    println!("Second request: {:?}", start.elapsed());

    // The second request is typically faster: no TCP handshake,
    // no TLS handshake (for HTTPS), connection already established.
    Ok(())
}
```

Connection reuse is observable through latency measurements.
```rust
use hyper::Client;

#[tokio::main]
async fn main() {
    // Keep no idle connections: every request effectively gets a fresh
    // connection, which is closed after the response completes.
    let _client = Client::builder()
        .pool_max_idle_per_host(0)
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // Use cases for disabling reuse:
    // - Servers that don't support keep-alive
    // - Testing/debugging connection establishment
    // - Very low-traffic clients where pooling isn't worth it
}
```

Setting pool_max_idle_per_host(0) effectively disables pooling.
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // Pooled connections can become stale:
    // 1. Connection established, request made, connection pooled
    // 2. Server times out the connection while it sits in the pool
    // 3. Client tries to reuse the stale connection
    // 4. The request fails with a connection error
    let result = client.get("http://example.com/api".parse()?).await;
    match result {
        Ok(res) => println!("Success: {}", res.status()),
        Err(e) => {
            // The connection might have been stale. hyper can retry
            // requests that died on a reused connection before being
            // written (the builder's retry_canceled_requests option,
            // enabled by default), but it can't retry every failure.
            // For idempotent methods (GET, HEAD), an application-level
            // retry is safe:
            // let result = client.get("http://example.com/api".parse()?).await?;
            eprintln!("Error: {}", e);
        }
    }
    Ok(())
}
```

Stale pooled connections cause errors; application-level retries may be needed.
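The application-level retry mentioned above can be factored into a tiny helper. This is a std-only sketch of the pattern, not hyper API: `with_retry` is a hypothetical name, and it retries exactly once, which is only appropriate for idempotent operations like GET.

```rust
// Retry an idempotent operation once if the first attempt fails,
// e.g. because a pooled connection had gone stale.
fn with_retry<T, E>(mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    match op() {
        Ok(v) => Ok(v),
        Err(_) => op(), // single retry; safe for idempotent requests
    }
}

fn main() {
    // Simulate a first attempt failing on a stale connection,
    // with the retry succeeding.
    let mut attempts = 0;
    let result: Result<u32, &str> = with_retry(|| {
        attempts += 1;
        if attempts == 1 { Err("stale connection") } else { Ok(200) }
    });
    println!("{:?} after {} attempts", result, attempts); // Ok(200) after 2 attempts
}
```

In a real async client you would wrap the `client.get(...).await` call the same way, ideally with backoff and a bounded retry count.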
```rust
use hyper::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<hyper::client::HttpConnector> = Client::new();

    // Make some requests
    let res1 = client.get("http://example.com/api".parse()?).await?;
    drop(res1);

    // hyper has no explicit "drain pool" method; dropping the client
    // closes all pooled connections. For a graceful shutdown:
    // 1. Stop making new requests
    // 2. Track in-flight requests and wait for them to complete
    // 3. Drop the client to close connections
    drop(client); // All pooled connections closed
    Ok(())
}
```

Graceful shutdown involves dropping the client after requests complete.
```rust
use hyper::Client;
use std::time::Duration;

fn pool_memory() {
    // Each pooled connection uses memory:
    // - TCP socket buffers
    // - TLS state (for HTTPS)
    // - Internal tracking structures
    // For high-throughput services talking to many destinations:
    let _client = Client::builder()
        // A lower idle timeout releases connections faster
        .pool_idle_timeout(Duration::from_secs(30))
        // Limit idle connections per host
        .pool_max_idle_per_host(5)
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // Memory usage is roughly:
    //   pool_max_idle_per_host * hosts * per-connection memory
    // For 100 hosts with 5 connections each at ~50 KB per connection,
    // that's ~25 MB. Tune based on your memory constraints.
}
```

Pooled connections consume memory; tune limits for your constraints.
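The estimate above is simple enough to check with a few lines of arithmetic. The ~50 KB per-connection figure is an assumption for illustration, not a measurement; profile your own workload for a real number.

```rust
// Back-of-envelope pool memory estimate, following the formula
//   pool_max_idle_per_host * hosts * per-connection memory
fn pool_memory_kb(max_idle_per_host: usize, hosts: usize, per_conn_kb: usize) -> usize {
    max_idle_per_host * hosts * per_conn_kb
}

fn main() {
    let total_kb = pool_memory_kb(5, 100, 50);
    println!("~{} KB (~{} MB)", total_kb, total_kb / 1000); // ~25000 KB (~25 MB)
}
```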
```rust
use hyper::Client;

#[tokio::main]
async fn main() {
    // HTTP/2 multiplexes multiple requests over a single connection,
    // so the pool matters less for a single host, though connections
    // are still pooled per destination.
    let _client: Client<hyper::client::HttpConnector> = Client::new();

    // Note: over plain HTTP, hyper won't upgrade to HTTP/2 on its own.
    // HTTP/2 is negotiated via ALPN on HTTPS connections (with a TLS
    // connector) or forced via the builder's http2_only option.
    // Connection reuse still helps by avoiding establishment overhead.
}
```

HTTP/2 reduces the need for connection pooling through multiplexing.
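For completeness, here is a configuration sketch for forcing HTTP/2, assuming hyper 0.14's builder API. With `http2_only(true)` the client speaks HTTP/2 with prior knowledge, so only use it against servers known to accept HTTP/2 on the connection in question.

```rust
use hyper::Client;

fn http2_client() {
    // Prior-knowledge HTTP/2: no ALPN negotiation takes place,
    // so the server must already speak HTTP/2 on this connection.
    let _client = Client::builder()
        .http2_only(true)
        .build::<hyper::client::HttpConnector, hyper::body::Body>();
}
```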
```rust
use hyper::Client;
use std::time::Duration;

fn recommendations() {
    // Default settings are reasonable for most use cases.

    // High-throughput client (many requests to few hosts):
    let _high_throughput = Client::builder()
        .pool_max_idle_per_host(50) // More connections for parallelism
        .pool_idle_timeout(Duration::from_secs(120)) // Longer idle window
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // Low-memory environment:
    let _low_memory = Client::builder()
        .pool_max_idle_per_host(2) // Minimize idle connections
        .pool_idle_timeout(Duration::from_secs(10)) // Quick cleanup
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // Many destinations (e.g. a web crawler):
    let _many_destinations = Client::builder()
        .pool_max_idle_per_host(1) // Minimal connections per host
        .pool_idle_timeout(Duration::from_secs(30)) // Moderate timeout
        .build::<hyper::client::HttpConnector, hyper::body::Body>();

    // HTTPS-heavy workload: defaults work well;
    // connection reuse is critical to amortize TLS handshakes.
}
```

Tune pool settings based on traffic patterns and constraints.
| Configuration | Effect |
|---------------|--------|
| pool_idle_timeout | How long idle connections stay in the pool |
| pool_max_idle_per_host | Max idle connections per destination host |
| pool_max_idle_per_host(0) | Keeps no idle connections, effectively disabling reuse |
| Trigger | Behavior |
|---------|----------|
| Request completion | Connection returns to pool (if keep-alive) |
| Idle timeout | Connection closed |
| Pool at max idle capacity | Excess idle connections closed |
| Connection: close header | Connection not pooled |
| Client dropped | All connections closed |
| Server closes connection | Connection closed (error if reuse is attempted) |
hyper::Client connection pooling provides automatic reuse that's transparent to application code:
How reuse works: After a request completes, the underlying TCP connection returns to a per-host pool. Subsequent requests to the same host check for an available pooled connection before establishing a new one. This avoids TCP handshake latency and, for HTTPS, expensive TLS handshakes.
When connections close: multiple triggers close connections:

- Idle connections older than pool_idle_timeout are closed
- When pool_max_idle_per_host is reached, excess idle connections close
- Connection: close headers signal connections that must not be pooled
- Dropping the Client closes all pooled connections

Key insight: the pool optimizes for latency (connection reuse) while bounding resources (limits and timeouts). The defaults work well for most cases, but tuning matters, for example raising pool_max_idle_per_host for highly parallel request loads.

The pooling behavior is automatic: you make requests, and hyper handles the connection lifecycle. Understanding it helps diagnose latency issues (stale connections causing errors), memory issues (too many idle connections), and throughput issues (insufficient connections for parallelism).