Rust walkthroughs
In reqwest, what are the performance implications of connection pooling and how can you configure it?

HTTP connection pooling dramatically affects the performance of clients making repeated requests to the same servers. reqwest provides connection pooling by default through its Client type, but understanding the configuration options helps you optimize for your specific workload.
Creating a reqwest::Client enables connection pooling automatically:

```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client includes a connection pool by default
    let client = Client::new();

    // These requests to the same host reuse connections
    let _resp1 = client.get("https://api.example.com/users").send().await?;
    let _resp2 = client.get("https://api.example.com/posts").send().await?;
    let _resp3 = client.get("https://api.example.com/comments").send().await?;

    // Only one TCP connection + TLS handshake was needed;
    // connections are kept alive and reused
    Ok(())
}
```

Each request reuses an existing connection if one is available, avoiding TCP connection establishment and TLS handshake overhead.
Without connection pooling, each request incurs significant overhead:

```rust
use reqwest::Client;
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://api.example.com/endpoint";

    // With connection pooling (default)
    let client = Client::new();
    let start = Instant::now();
    for _ in 0..10 {
        let _resp = client.get(url).send().await?;
    }
    println!("With pooling: {:?}", start.elapsed());
    // ~100-200ms for 10 requests (connection reused)

    // Without connection pooling
    let start = Instant::now();
    for _ in 0..10 {
        // New client for each request = new connection each time
        let client = Client::new();
        let _resp = client.get(url).send().await?;
    }
    println!("Without pooling: {:?}", start.elapsed());
    // ~1-3 seconds for 10 requests (10 connections)
    Ok(())
}
```

The overhead comes from the TCP three-way handshake, TLS negotiation, and TCP slow start on each new connection.
reqwest uses hyper's connection pool, which maintains idle connections per host:

```rust
// Conceptual structure (not reqwest's actual internals)
struct ConnectionPool {
    // Map from (scheme, host, port) -> idle connections
    idle_connections: HashMap<(Scheme, Host, Port), Vec<IdleConnection>>,
    // Configuration
    max_idle_per_host: usize,
    idle_timeout: Duration,
}
```

When a request is made, the pool is checked for an idle connection to the target (scheme, host, port). If one exists, it is reused; otherwise a new connection is established. When the response completes, the connection is returned to the pool if there is room under the per-host idle limit.
The pool_max_idle_per_host setting controls how many idle connections are kept per host:

```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        // Keep up to 10 idle connections per host
        .pool_max_idle_per_host(10)
        .build()?;

    // If you make 20 concurrent requests to the same host,
    // only 10 connections will be kept in the pool after completion
    Ok(())
}
```

The trade-off:
```rust
// Lower pool size: less memory, but more reconnections
let client = Client::builder()
    .pool_max_idle_per_host(2)
    .build()?;

// Higher pool size: more memory, better for bursty traffic
let client = Client::builder()
    .pool_max_idle_per_host(50)
    .build()?;
```

Idle connections are closed after a timeout to free resources:
```rust
use reqwest::Client;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        // Close idle connections after 30 seconds
        .pool_idle_timeout(Some(Duration::from_secs(30)))
        .build()?;

    // If no requests are made for 30 seconds,
    // pooled connections are closed
    Ok(())
}
```

Setting the timeout appropriately balances resource usage against reconnection costs:
```rust
// Short timeout: saves resources, more reconnections
let client = Client::builder()
    .pool_idle_timeout(Some(Duration::from_secs(5)))
    .build()?;

// Long timeout: keeps connections ready, uses more resources
let client = Client::builder()
    .pool_idle_timeout(Some(Duration::from_secs(300)))
    .build()?;

// None: connections kept indefinitely (until the server closes them)
let client = Client::builder()
    .pool_idle_timeout(None)
    .build()?;
```

For some workloads, connection pooling isn't desirable:
```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        // Disable connection pooling entirely
        .pool_max_idle_per_host(0)
        .build()?;

    // Every request creates a new connection.
    // Useful when connecting to many different hosts once.
    Ok(())
}
```

This is appropriate when:

- each host will only be contacted once, so a pooled connection would never be reused
- you want to minimize the memory held by idle sockets
Timeouts interact with the connection pool:

```rust
use reqwest::Client;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        // Time to establish a new connection
        .connect_timeout(Duration::from_secs(5))
        // Total time for the request (including pooled connection acquisition)
        .timeout(Duration::from_secs(30))
        // How long to keep idle connections in the pool
        .pool_idle_timeout(Some(Duration::from_secs(60)))
        .build()?;

    // Pooled connections skip connect_timeout
    // (already established)
    Ok(())
}
```

Pooled connections bypass connect_timeout since they're already established.
TLS handshake overhead makes pooling especially valuable for HTTPS:

```rust
use reqwest::Client;
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://api.example.com/endpoint";
    let client = Client::new();

    let start = Instant::now();
    // First request: TCP handshake + TLS handshake + request
    let _resp1 = client.get(url).send().await?;
    println!("First request: {:?}", start.elapsed());
    // ~50-150ms (new connection + TLS)

    let start = Instant::now();
    // Second request: just the request (reused connection)
    let _resp2 = client.get(url).send().await?;
    println!("Second request: {:?}", start.elapsed());
    // ~10-50ms (pooled connection, no handshake at all)
    Ok(())
}
```

Reusing a connection avoids the TLS handshake entirely, and even when a new connection must be opened, TLS session resumption can shorten the handshake.
The pool has limits that can cause connection churn:

```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .pool_max_idle_per_host(5)
        .build()?;

    // If you make 20 concurrent requests to the same host,
    // only ~5 connections will be pooled; the rest are created
    // on demand and discarded after use, causing connection churn:
    let handles: Vec<_> = (0..20)
        .map(|_| {
            let client = client.clone();
            tokio::spawn(async move {
                client.get("https://api.example.com/endpoint").send().await
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.await?;
    }

    // After completion: 5 connections in the pool, 15 discarded
    Ok(())
}
```

For high-concurrency workloads, tune the pool size:
```rust
use reqwest::Client;
use std::time::Duration;

fn client_for_high_concurrency() -> Result<Client, reqwest::Error> {
    Client::builder()
        // Match pool size to expected per-host concurrency
        .pool_max_idle_per_host(100)
        // Keep connections around for bursty traffic
        .pool_idle_timeout(Some(Duration::from_secs(120)))
        .build()
}
```

HTTP/2 multiplexes requests over a single connection, reducing pool needs:
```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // HTTP/2 is negotiated via ALPN for HTTPS when the server supports it
    let client = Client::builder().build()?;

    // With HTTP/2, many concurrent requests can share one connection;
    // the pool size matters less because multiplexing handles concurrency
    let handles: Vec<_> = (0..100)
        .map(|_| {
            let client = client.clone();
            tokio::spawn(async move {
                client.get("https://api.example.com/endpoint").send().await
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.await?;
    }

    // All 100 requests might use a single HTTP/2 connection,
    // putting no pressure on the pool
    Ok(())
}
```

HTTP/2 changes the calculus: pool size matters less because multiplexing allows many concurrent streams per connection.
You can observe hints of pool behavior through response headers:

```rust
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // First request
    let resp1 = client.get("https://httpbin.org/get").send().await?;
    println!("Connection: {:?}", resp1.headers().get("connection"));
    // Often "keep-alive", indicating the connection can be pooled

    // Second request reuses the connection
    let _resp2 = client.get("https://httpbin.org/get").send().await?;
    // This request is typically faster due to connection reuse
    Ok(())
}
```

Cloning a Client shares the connection pool:
```rust
use reqwest::Client;

struct ApiClient {
    http: Client, // a cloned Client shares the pool
    base_url: String,
}

impl ApiClient {
    fn new(base_url: &str) -> Self {
        ApiClient {
            http: Client::new(), // create once
            base_url: base_url.to_string(),
        }
    }

    fn clone_http(&self) -> Client {
        self.http.clone() // shares the underlying pool
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = ApiClient::new("https://api.example.com");

    // All of these share the same connection pool
    let client1 = api.clone_http();
    let client2 = api.clone_http();
    // Requests through client1 and client2 reuse connections
    Ok(())
}
```

Internally, Client is an Arc around the actual client state, so clones are cheap and share the pool.
For a high-throughput service talking to a few hosts:

```rust
use reqwest::Client;
use std::time::Duration;

fn high_throughput_client() -> Result<Client, reqwest::Error> {
    Client::builder()
        .pool_max_idle_per_host(50) // large idle pool per host
        .pool_idle_timeout(Some(Duration::from_secs(300))) // keep connections warm
        .connect_timeout(Duration::from_secs(10))
        .timeout(Duration::from_secs(60))
        .build()
}
```

For a memory-constrained environment:

```rust
use reqwest::Client;
use std::time::Duration;

fn memory_constrained_client() -> Result<Client, reqwest::Error> {
    Client::builder()
        .pool_max_idle_per_host(2) // small pool
        .pool_idle_timeout(Some(Duration::from_secs(30))) // aggressive cleanup
        .build()
}
```

For requests to many hosts that are rarely revisited:

```rust
use reqwest::Client;
use std::time::Duration;

fn many_hosts_client() -> Result<Client, reqwest::Error> {
    Client::builder()
        .pool_max_idle_per_host(1) // one idle connection per host
        .pool_idle_timeout(Some(Duration::from_secs(10))) // quick cleanup
        .build()
}
```

Connection pooling in reqwest provides significant performance benefits by reusing TCP connections and TLS sessions. The default settings work well for most use cases, but tuning is valuable for specific workloads:
Increase pool_max_idle_per_host when you have high concurrent traffic to few hosts. This avoids connection churn during traffic bursts.
Decrease pool_idle_timeout when memory is constrained or when connecting to many hosts that won't be revisited soon.
Use pool_max_idle_per_host(0) only when you don't want connection reuse, such as single requests to many different hosts.
HTTP/2 reduces pool importance because multiplexing allows many concurrent requests over a single connection.
The Client type's clone-on-Arc design means you can cheaply clone clients to share a single pool across your application, which is the recommended pattern for most use cases. Create one configured Client and share it throughout your codebase.