What are the trade-offs between reqwest::Client and blocking::Client for synchronous vs asynchronous HTTP requests?
The async reqwest::Client enables concurrent request handling without blocking threads, making it suitable for high-throughput applications and async runtimes. The blocking::Client provides a simpler synchronous API that blocks the current thread during each request, which is appropriate for simple scripts, CLI tools, or codebases without an async runtime. The choice trades complexity for scalability: async clients scale better under concurrent load but require async/await throughout your code, while blocking clients are straightforward but consume one thread per in-flight request.
Basic Usage Comparison
```rust
use reqwest::Client as AsyncClient;
use reqwest::blocking::Client as BlockingClient;

#[tokio::main]
async fn async_example() -> Result<(), reqwest::Error> {
    let client = AsyncClient::new();
    // Returns a Future that must be awaited
    let response = client.get("https://httpbin.org/get").send().await?;
    let text = response.text().await?;
    println!("Response: {}", text);
    Ok(())
}

fn blocking_example() -> Result<(), reqwest::Error> {
    let client = BlockingClient::new();
    // Blocks until complete, returns Result directly
    let response = client.get("https://httpbin.org/get").send()?;
    let text = response.text()?;
    println!("Response: {}", text);
    Ok(())
}
```
The async client returns a Future that must be `.await`ed; the blocking client returns a Result directly.
Concurrency Model
```rust
use reqwest::Client;

#[tokio::main]
async fn concurrent_requests() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let urls = vec![
        "https://httpbin.org/delay/1",
        "https://httpbin.org/delay/1",
        "https://httpbin.org/delay/1",
    ];
    // Async: all requests run concurrently on ONE thread
    let start = std::time::Instant::now();
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| client.get(url).send())
        .collect();
    let responses = futures::future::join_all(handles).await;
    let async_duration = start.elapsed();
    // Takes ~1 second total (all concurrent)
    println!("Async: {} responses in {:?}", responses.len(), async_duration);
    Ok(())
}

fn blocking_concurrent() -> Result<(), Box<dyn std::error::Error>> {
    use reqwest::blocking::Client;
    use std::thread;

    let client = Client::new();
    let urls = vec![
        "https://httpbin.org/delay/1",
        "https://httpbin.org/delay/1",
        "https://httpbin.org/delay/1",
    ];
    // Blocking: need threads for concurrency
    let start = std::time::Instant::now();
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| {
            let client = client.clone();
            thread::spawn(move || client.get(url).send())
        })
        .collect();
    for handle in handles {
        handle.join().unwrap()?;
    }
    let blocking_duration = start.elapsed();
    // Takes ~1 second with 3 threads, but uses 3 OS threads;
    // with only 1 thread it would take ~3 seconds
    println!("Blocking completed in {:?}", blocking_duration);
    Ok(())
}
```
Async achieves concurrency on a single thread; blocking requires one thread per concurrent request.
Thread Pool Exhaustion Risk
Imagine a server with 4 worker threads where each request handler makes a blocking HTTP call. If all 4 threads are waiting on HTTP responses, no threads are available for new requests, the server appears "stuck", and the thread pool is exhausted. This is the classic thread-per-request scaling problem; async avoids it by not consuming a thread while waiting.

The arithmetic with limited threads: given a pool of 2 threads and 10 HTTP requests, the blocking approach allows at most 2 concurrent requests, so the work completes in 5 sequential batches. The async approach handles all 10 concurrently on a single thread, in one batch whose duration is limited by the server, not by thread count.

Blocking clients can exhaust thread pools when many requests are in flight.
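The batch arithmetic above can be sketched with the standard library alone. This is a toy model under stated assumptions: each "request" is a `thread::sleep` standing in for ~50 ms of network latency, and a fixed pool of worker threads splits the jobs evenly; no real HTTP is involved.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Run `jobs` simulated requests on a pool of `pool_size` threads and
// return the wall-clock time. Each request is just a sleep.
fn simulated_batch(pool_size: usize, jobs: usize, latency: Duration) -> Duration {
    let start = Instant::now();
    let mut handles = Vec::new();
    for w in 0..pool_size {
        // Worker `w` handles jobs w, w + pool_size, w + 2 * pool_size, ...
        let my_jobs = (w..jobs).step_by(pool_size).count();
        handles.push(thread::spawn(move || {
            for _ in 0..my_jobs {
                thread::sleep(latency); // the blocking "HTTP call"
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    let latency = Duration::from_millis(50);
    // 10 jobs on 2 threads => 5 sequential batches => at least 5 * latency
    let small_pool = simulated_batch(2, 10, latency);
    // 10 jobs on 10 threads => a single batch => roughly 1 * latency
    let large_pool = simulated_batch(10, 10, latency);
    println!("pool of 2: {:?}, pool of 10: {:?}", small_pool, large_pool);
}
```

The 5-batch floor is exactly why a small worker pool caps throughput: wall-clock time grows with ceil(jobs / pool_size), not with the server's latency alone.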
When Blocking Is Appropriate
```rust
use reqwest::blocking::Client;

// CLI tools: simple, linear execution
fn cli_tool() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    // Get user input
    let url = std::env::args().nth(1).expect("URL required");
    // Make the request
    let response = client.get(&url).send()?;
    let body = response.text()?;
    // Print the output
    println!("{}", body);
    // No async complexity needed: one request, one thread, straightforward
    Ok(())
}

// Scripts: quick and dirty
fn simple_script() {
    let client = Client::new();
    // Sequential operations are fine here
    let _data1 = client.get("https://api.example.com/data1").send().unwrap();
    let _data2 = client.get("https://api.example.com/data2").send().unwrap();
    // Process results
    println!("Got both responses");
}

// Synchronous codebases: no async runtime available
fn legacy_codebase() {
    // If the codebase doesn't use async/await, adding reqwest::blocking
    // is simpler than introducing an async runtime throughout.
}
```
Blocking clients suit CLI tools, scripts, and synchronous codebases.
When Async Is Appropriate
```rust
use reqwest::Client;

// Web servers: handle many connections
#[tokio::main]
async fn web_server_example() {
    let _client = Client::new();
    // Each incoming request makes HTTP calls.
    // Async allows handling thousands of concurrent requests
    // with minimal threads; axum, actix-web, and warp are all async.
}

// Microservices: gateway pattern
#[tokio::main]
async fn api_gateway() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    // A gateway aggregates multiple backend services
    let user_url = "https://users.example.com/profile";
    let orders_url = "https://orders.example.com/history";
    let recommendations_url = "https://rec.example.com/items";
    // Fetch all concurrently
    let (user, orders, recs) = tokio::try_join!(
        client.get(user_url).send(),
        client.get(orders_url).send(),
        client.get(recommendations_url).send(),
    )?;
    // Combine the responses
    println!("{} {} {}", user.status(), orders.status(), recs.status());
    Ok(())
}

// High-throughput applications: streaming large amounts of data,
// concurrent processing pipelines, event-driven architectures
#[tokio::main]
async fn high_throughput() {
    let _client = Client::new();
}
```
Async is appropriate for web servers, microservices, and high-throughput applications.
Error Handling Comparison
```rust
use reqwest::Client as AsyncClient;
use reqwest::blocking::Client as BlockingClient;

#[tokio::main]
async fn async_error_handling() -> Result<(), Box<dyn std::error::Error>> {
    let client = AsyncClient::new();
    // Errors are propagated through the Future
    match client.get("https://invalid-url-12345.com").send().await {
        Ok(response) => {
            println!("Status: {}", response.status());
        }
        Err(e) => {
            // Could be a DNS error, connection refused, timeout, etc.
            if e.is_timeout() {
                eprintln!("Request timed out");
            } else if e.is_connect() {
                eprintln!("Connection failed");
            } else {
                eprintln!("Error: {}", e);
            }
        }
    }
    Ok(())
}

fn blocking_error_handling() -> Result<(), Box<dyn std::error::Error>> {
    let client = BlockingClient::new();
    // Same error types, synchronous context
    match client.get("https://invalid-url-12345.com").send() {
        Ok(response) => {
            println!("Status: {}", response.status());
        }
        Err(e) => {
            if e.is_timeout() {
                eprintln!("Request timed out");
            } else if e.is_connect() {
                eprintln!("Connection failed");
            } else {
                eprintln!("Error: {}", e);
            }
        }
    }
    Ok(())
}
```
Error handling is similar in both; async just requires `.await` on each fallible operation.
Configuration Options
```rust
use reqwest::Client as AsyncClient;
use reqwest::blocking::Client as BlockingClient;
use std::time::Duration;

fn configure_clients() -> Result<(), Box<dyn std::error::Error>> {
    // Both clients support nearly identical configuration
    let _async_client = AsyncClient::builder()
        .timeout(Duration::from_secs(30))
        .connect_timeout(Duration::from_secs(10))
        .user_agent("MyApp/1.0")
        .default_headers({
            let mut headers = reqwest::header::HeaderMap::new();
            headers.insert("X-Custom", "value".parse()?);
            headers
        })
        .pool_max_idle_per_host(10)
        .pool_idle_timeout(Duration::from_secs(60))
        .build()?;
    let _blocking_client = BlockingClient::builder()
        .timeout(Duration::from_secs(30))
        .connect_timeout(Duration::from_secs(10))
        .user_agent("MyApp/1.0")
        .default_headers({
            let mut headers = reqwest::header::HeaderMap::new();
            headers.insert("X-Custom", "value".parse()?);
            headers
        })
        .pool_max_idle_per_host(10)
        .pool_idle_timeout(Duration::from_secs(60))
        .build()?;
    Ok(())
}
```
Configuration options are nearly identical between the async and blocking clients.
Connection Pooling
```rust
use reqwest::Client;

#[tokio::main]
async fn connection_pooling() -> Result<(), Box<dyn std::error::Error>> {
    // Both clients maintain connection pools
    let client = Client::builder()
        .pool_max_idle_per_host(5) // keep up to 5 idle connections per host
        .pool_idle_timeout(std::time::Duration::from_secs(60))
        .build()?;
    // First request: establishes a connection
    let _resp1 = client.get("https://httpbin.org/get").send().await?;
    // Second request: reuses the connection (faster)
    let _resp2 = client.get("https://httpbin.org/get").send().await?;
    // Connection pooling works identically for the blocking client.
    // Reusing the client is important for performance:
    // don't create a new client for each request!
    Ok(())
}
```
Both clients support connection pooling; reuse a single client to benefit from it.
Streaming Response Bodies
```rust
use reqwest::Client;

#[tokio::main]
async fn async_streaming() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let mut response = client
        .get("https://httpbin.org/stream-bytes/10000")
        .send()
        .await?;
    // Stream the response body without loading it all into memory
    while let Some(chunk) = response.chunk().await? {
        println!("Received {} bytes", chunk.len());
        // Process each chunk immediately
    }
    Ok(())
}

fn blocking_streaming() -> Result<(), Box<dyn std::error::Error>> {
    use reqwest::blocking::Client;
    use std::io::Read;

    let client = Client::new();
    let mut response = client.get("https://httpbin.org/stream-bytes/10000").send()?;
    // Stream the response body via std::io::Read
    let mut buffer = [0u8; 1024];
    loop {
        let bytes_read = response.read(&mut buffer)?;
        if bytes_read == 0 {
            break;
        }
        println!("Received {} bytes", bytes_read);
    }
    Ok(())
}
```
Both clients support streaming; the async response yields chunks to await, while the blocking response implements std::io::Read.
Timeout Behavior
```rust
use reqwest::Client;
use std::time::Duration;

#[tokio::main]
async fn async_timeout() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .timeout(Duration::from_secs(5))
        .build()?;
    // The timeout is async: other tasks continue to run
    match client.get("https://httpbin.org/delay/10").send().await {
        Ok(_) => println!("Completed"),
        Err(e) if e.is_timeout() => println!("Timed out after 5s"),
        Err(e) => return Err(e.into()),
    }
    // While waiting, other async tasks make progress;
    // no thread is blocked
    Ok(())
}

fn blocking_timeout() -> Result<(), Box<dyn std::error::Error>> {
    use reqwest::blocking::Client;

    let client = Client::builder()
        .timeout(Duration::from_secs(5))
        .build()?;
    // The timeout blocks this thread for the full 5 seconds
    match client.get("https://httpbin.org/delay/10").send() {
        Ok(_) => println!("Completed"),
        Err(e) if e.is_timeout() => println!("Timed out after 5s"),
        Err(e) => return Err(e.into()),
    }
    // While waiting, this thread does nothing
    // and cannot process other work
    Ok(())
}
```
Timeouts work similarly, but async lets other tasks proceed during the wait.
Mixing Async and Blocking
```rust
// DON'T: use the blocking client in an async context
#[tokio::main]
async fn bad_example() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // This blocks the async runtime thread!
    // The runtime cannot make progress on other tasks.
    let _response = client.get("https://httpbin.org/delay/1").send()?;
    // This anti-pattern defeats the purpose of async.
    Ok(())
}

// DO: use the async client in an async context
#[tokio::main]
async fn good_example() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    // Proper async
    let _response = client.get("https://httpbin.org/delay/1").send().await?;
    Ok(())
}

// If you MUST make a blocking call in an async context, use spawn_blocking
#[tokio::main]
async fn if_you_must() -> Result<(), Box<dyn std::error::Error>> {
    let _response = tokio::task::spawn_blocking(|| {
        let client = reqwest::blocking::Client::new();
        client.get("https://httpbin.org/delay/1").send()
    })
    .await??;
    // spawn_blocking moves the call to a dedicated blocking thread pool,
    // so the async runtime can continue on its own threads
    Ok(())
}
```
Never use a blocking client in an async context; use spawn_blocking if absolutely necessary.
Client Cloning
```rust
use reqwest::Client;

#[tokio::main]
async fn client_cloning() {
    // Client is cheaply cloneable (it wraps an Arc internally)
    let client = Client::new();
    // Cloning is cheap: it shares the connection pool
    let client_clone = client.clone();
    // Use it in multiple tasks
    let handle1 = tokio::spawn(async move {
        client_clone.get("https://httpbin.org/get").send().await
    });
    let handle2 = tokio::spawn(async move {
        client.get("https://httpbin.org/get").send().await
    });
    // Both tasks share the same connection pool;
    // the same applies to the blocking client.
    let _ = handle1.await;
    let _ = handle2.await;
}
```
Both clients are cheaply cloneable; clones share the connection pool internally.
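The "cheap clone" behavior can be modeled with the standard library alone. This is an illustrative Arc sketch, not reqwest's actual internals: the Vec stands in for the shared pool of idle connections.

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // Client::clone behaves like Arc::clone: a cheap pointer copy,
    // with the shared state (here, a Vec standing in for the pool)
    // living behind the Arc.
    let pool = Arc::new(Mutex::new(Vec::<&str>::new()));
    let pool_clone = Arc::clone(&pool);

    // A "clone" returning a connection to the shared pool...
    pool_clone.lock().unwrap().push("conn-to-httpbin.org");

    // ...is visible through the original handle, because both
    // point at the same allocation.
    assert_eq!(pool.lock().unwrap().len(), 1);
    assert_eq!(Arc::strong_count(&pool), 2);
    println!("shared pool has {} idle connection(s)", pool.lock().unwrap().len());
}
```

This is why cloning a Client per task is fine, while building a fresh Client per request throws the pool away.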
Feature Flags
```toml
# Cargo.toml (pick one of the following)

# Async client only (the default)
reqwest = { version = "0.11", features = ["json"] }

# The blocking client requires an explicit feature; enabling it keeps the
# async client available too, which is useful for codebases in transition
reqwest = { version = "0.11", features = ["json", "blocking"] }
```
The `blocking` feature must be explicitly enabled; it is not included by default.
Performance Characteristics
Async client:
- Low memory per concurrent request (just a state machine)
- Can handle thousands of concurrent requests
- Context switching happens in userspace (fast)
- Some overhead from the Future machinery
- Better for I/O-bound workloads

Blocking client:
- Higher memory per concurrent request (a full thread stack)
- Concurrency limited by the number of threads
- OS-level context switching (slower)
- Simpler execution model
- No advantage for HTTP, which is I/O-bound rather than CPU-bound

In practice:
- Fewer than ~10 concurrent requests: both are fine
- More than ~100 concurrent requests: async wins
- More than ~1000 concurrent requests: async is essential

Async scales better under high concurrency; blocking is fine at low concurrency.
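The memory point can be made concrete with back-of-envelope arithmetic. Both constants are assumptions for illustration: 8 MiB matches a common default thread stack size on Linux (and is mostly virtual address space, not resident memory), and 4 KiB per async task is a rough guess, not a measurement.

```rust
// Approximate memory footprint for N in-flight requests, given a
// per-request cost in bytes. Purely illustrative arithmetic.
fn approx_memory_bytes(concurrent: u64, per_request: u64) -> u64 {
    concurrent * per_request
}

fn main() {
    let thread_stack: u64 = 8 * 1024 * 1024; // 8 MiB per thread (assumed default)
    let task_state: u64 = 4 * 1024;          // ~4 KiB per async task (assumed)
    let blocking = approx_memory_bytes(1000, thread_stack);
    let async_tasks = approx_memory_bytes(1000, task_state);
    println!(
        "1000 in-flight requests: blocking ~{} MiB, async ~{} MiB",
        blocking / (1024 * 1024),
        async_tasks / (1024 * 1024)
    );
}
```

Even if the real per-task cost is several times larger, the gap of three orders of magnitude is what makes thousands of concurrent requests practical for async and impractical for thread-per-request.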
Synthesis
Comparison table:
| Aspect | reqwest::Client (async) | blocking::Client |
|---|---|---|
| Return type | Future (must `.await`) | Result (direct) |
| Concurrency | Single thread, many requests | One thread per request |
| Runtime | Requires async runtime (tokio) | No runtime needed |
| Scalability | Excellent (thousands of requests) | Limited by thread count |
| Complexity | Higher (async/await throughout) | Lower (synchronous) |
| Use case | Web servers, microservices | CLI tools, scripts |
| Thread safety | Send + Sync | Send + Sync |
| Connection pooling | Yes | Yes |
When to use the async Client:
- Web servers handling many concurrent requests
- Microservices making multiple backend calls
- High-throughput applications
- Applications already using async/await
- Event-driven architectures
- Streaming large responses

When to use blocking::Client:
- CLI tools with sequential execution
- Simple scripts
- Codebases without an async runtime
- Low concurrency requirements (fewer than ~10 requests)
- Quick prototypes
- Integration tests in synchronous code

Key insight: The choice between the async and blocking reqwest clients is primarily about your application's concurrency model, not HTTP-specific concerns. Async clients shine when you need to handle many concurrent requests efficiently (web servers, API gateways, microservices), where blocking clients would exhaust thread pools. Blocking clients are appropriate for simple, sequential use cases (CLI tools, scripts, batch jobs), where the simplicity of synchronous code outweighs scalability concerns. The critical anti-pattern is using blocking::Client inside an async context: it blocks a runtime thread and prevents other tasks from making progress. If you must make blocking calls in async code, use tokio::task::spawn_blocking to move them off the async runtime's threads.
