Rust walkthroughs
How do reqwest::Client and reqwest::blocking::Client differ in terms of underlying implementation?

reqwest::Client and reqwest::blocking::Client provide the same HTTP client functionality through fundamentally different execution models. The async reqwest::Client is built on tokio and hyper, using non-blocking I/O that yields control during network operations, allowing a single thread to manage many concurrent requests. The blocking reqwest::blocking::Client wraps the same underlying hyper infrastructure but executes synchronously, blocking the current thread until each request completes. Internally, the blocking client spawns a single-threaded tokio runtime on a background thread to execute requests, then waits for completion: a design that keeps the API simple while reusing the async implementation.
use reqwest::Client;
use std::error::Error;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Async client - must be used in an async context
    let client = Client::new();
    // send() returns a Future that must be awaited
    let response = client
        .get("https://httpbin.org/get")
        .send()
        .await?;
    let body = response.text().await?;
    println!("Response: {}", body);
    Ok(())
}
The async client requires an async runtime (tokio) and uses .await for all operations.
use reqwest::blocking::Client;
use std::error::Error;
fn blocking_client_example() -> Result<(), Box<dyn Error>> {
    // Blocking client - works in synchronous code
    let client = Client::new();
    // Request blocks until complete
    let response = client
        .get("https://httpbin.org/get")
        .send()?;
    let body = response.text()?;
    println!("Response: {}", body);
    Ok(())
}
fn main() {
    blocking_client_example().unwrap();
}
The blocking client works in standard synchronous Rust without an async runtime.
use reqwest::Client;
// The async Client architecture:
//
// Client
// └── Arc<ClientRef>
//     ├── hyper::Client<HttpConnector, ReqwestBody>
//     ├── tokio runtime (provided by caller)
//     ├── Non-blocking TCP connections
//     ├── Async DNS resolution
//     └── Connection pooling
#[tokio::main]
async fn architecture_async() {
    let client = Client::new();
    // Internally:
    // 1. Uses hyper::Client for HTTP protocol
    // 2. Relies on tokio for async I/O
    // 3. Connection pooling happens automatically
    // 4. DNS resolution is async
    // Multiple concurrent requests share the client
    let handles: Vec<_> = (0..3)
        .map(|i| {
            let client = client.clone();
            tokio::spawn(async move {
                client
                    .get(format!("https://httpbin.org/get?id={}", i))
                    .send()
                    .await
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.await;
    }
}
The async client integrates directly with the caller's tokio runtime.
use reqwest::blocking::Client;
// The blocking Client architecture:
//
// blocking::Client
// └── Internal async Client
//     ├── Dedicated background thread
//     ├── Single-threaded tokio runtime on that thread
//     ├── Same hyper::Client underneath
//     └── Connection pooling (per Client)
fn architecture_blocking() {
    let client = Client::new();
    // Internally:
    // 1. Wraps the async Client implementation
    // 2. Spawns a background thread running a single-threaded tokio runtime
    // 3. Submits each request to that runtime, blocks until complete
    // 4. Connection pooling is scoped to that internal runtime
    let response = client.get("https://httpbin.org/get").send().unwrap();
    println!("Status: {}", response.status());
}
The blocking client internally uses a tokio runtime but hides it from the caller.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
// Async client: one connection pool per Client, shared by all clones
#[tokio::main]
async fn async_pooling() {
    let client = Client::builder()
        .pool_max_idle_per_host(10) // Keep up to 10 idle connections per host
        .pool_idle_timeout(std::time::Duration::from_secs(30))
        .build()
        .unwrap();
    // All requests through this client share the same pool
    for _ in 0..5 {
        client.get("https://httpbin.org/get").send().await.unwrap();
    }
    // Idle connections are reused for subsequent requests
}
// Blocking client: likewise one connection pool per Client
fn blocking_pooling() {
    let client = BlockingClient::builder()
        .pool_max_idle_per_host(10)
        .pool_idle_timeout(std::time::Duration::from_secs(30))
        .build()
        .unwrap();
    // Pool is managed by the internal runtime thread
    for _ in 0..5 {
        client.get("https://httpbin.org/get").send().unwrap();
    }
}
Both clients pool connections per Client instance; the blocking client's pool lives on its internal runtime thread.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
use std::thread;
// Async: true concurrency on a shared runtime
#[tokio::main]
async fn async_concurrency() {
    let client = Client::new();
    let start = std::time::Instant::now();
    // 100 concurrent requests multiplexed on the tokio runtime
    let handles: Vec<_> = (0..100)
        .map(|i| {
            let client = client.clone();
            tokio::spawn(async move {
                client
                    .get(format!("https://httpbin.org/delay/1?id={}", i))
                    .send()
                    .await
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.await;
    }
    println!("Async 100 requests: {:?}", start.elapsed());
    // Takes ~1 second total (requests overlap in flight)
}
// Blocking: requires threads for concurrency
fn blocking_concurrency() {
    let start = std::time::Instant::now();
    // Each thread builds its own client here; sharing one cloneable
    // client would avoid spawning 100 internal runtimes
    let handles: Vec<_> = (0..100)
        .map(|i| {
            thread::spawn(move || {
                BlockingClient::new()
                    .get(format!("https://httpbin.org/delay/1?id={}", i))
                    .send()
            })
        })
        .collect();
    for handle in handles {
        let _ = handle.join();
    }
    println!("Blocking 100 requests: {:?}", start.elapsed());
    // Takes ~1 second total (parallel via threads)
}
Async achieves concurrency without extra threads; blocking requires one thread per in-flight request.
// Async client requires a tokio runtime to make progress
fn async_runtime_required() {
    // Constructing the client is fine outside a runtime...
    let client = reqwest::Client::new();
    // ...but send() only returns a Future; without a runtime to
    // drive it, no request is ever executed.
    let _pending = client.get("https://example.com").send();
}
// Blocking client creates its own runtime
fn blocking_runtime_internal() {
    use reqwest::blocking::Client;
    // Works in plain main - no tokio attribute needed
    let client = Client::new();
    let response = client.get("https://httpbin.org/get").send().unwrap();
    println!("Status: {}", response.status());
}
// Async can also be driven with a manually created runtime
fn async_with_manual_runtime() {
    let rt = tokio::runtime::Runtime::new().unwrap();
    let client = reqwest::Client::new();
    rt.block_on(async {
        let response = client.get("https://httpbin.org/get").send().await.unwrap();
        println!("Status: {}", response.status());
    });
}
The blocking client manages its own runtime; the async client requires the caller to provide one.
use reqwest::blocking::Client;
// Each blocking Client owns a dedicated background runtime
// This has implications for usage patterns
fn blocking_per_client_behavior() {
    // Every Client constructed here spawns its own background
    // thread running a single-threaded tokio runtime
    let handle1 = std::thread::spawn(|| {
        let client = Client::new();
        // Runtime owned by this Client
        client.get("https://httpbin.org/get").send().unwrap()
    });
    let handle2 = std::thread::spawn(|| {
        let client = Client::new();
        // A separate runtime for this Client
        client.get("https://httpbin.org/get").send().unwrap()
    });
    handle1.join().unwrap();
    handle2.join().unwrap();
    // Note: connection pools are NOT shared between these Clients;
    // construct one Client and reuse it to share a pool
}
Each blocking Client gets its own runtime and connection pool; reuse a single Client to share connections.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
use std::time::Instant;
#[tokio::main]
async fn performance_async() {
    let client = Client::new();
    // Async: low overhead per request
    let start = Instant::now();
    for _ in 0..10 {
        client.get("https://httpbin.org/get").send().await.unwrap();
    }
    println!("Async sequential: {:?}", start.elapsed());
    // Async: excellent for concurrent requests
    // (join_all comes from the `futures` crate)
    let start = Instant::now();
    let futs: Vec<_> = (0..10)
        .map(|_| client.get("https://httpbin.org/get").send())
        .collect();
    futures::future::join_all(futs).await;
    println!("Async concurrent: {:?}", start.elapsed());
}
fn performance_blocking() {
    let client = BlockingClient::new();
    // Blocking: similar latency for sequential requests
    let start = Instant::now();
    for _ in 0..10 {
        client.get("https://httpbin.org/get").send().unwrap();
    }
    println!("Blocking sequential: {:?}", start.elapsed());
    // Blocking: thread overhead for concurrency
    let start = Instant::now();
    let handles: Vec<_> = (0..10)
        .map(|_| {
            std::thread::spawn(|| {
                BlockingClient::new()
                    .get("https://httpbin.org/get")
                    .send()
                    .unwrap()
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("Blocking concurrent: {:?}", start.elapsed());
}
Async excels at concurrent workloads; blocking pays thread overhead for parallelism.
# Cargo.toml
# Async client (default)
[dependencies]
reqwest = { version = "0.12", features = ["json"] }
# Blocking client requires explicit feature
[dependencies]
reqwest = { version = "0.12", features = ["blocking", "json"] }
# Both can be enabled
[dependencies]
reqwest = { version = "0.12", features = ["blocking", "json"] }
The blocking client is an opt-in feature that pulls in the background-runtime machinery; enabling it keeps the async client available as well.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
use std::time::Duration;
// Both clients support similar builder options
fn builder_options() {
    // Async client builder
    let _async_client = Client::builder()
        .timeout(Duration::from_secs(30))
        .connect_timeout(Duration::from_secs(10))
        .user_agent("my-app/1.0")
        .default_headers({
            let mut headers = reqwest::header::HeaderMap::new();
            headers.insert("X-Custom", "value".parse().unwrap());
            headers
        })
        .pool_max_idle_per_host(5)
        .build()
        .unwrap();
    // Blocking client builder (same options)
    let _blocking_client = BlockingClient::builder()
        .timeout(Duration::from_secs(30))
        .connect_timeout(Duration::from_secs(10))
        .user_agent("my-app/1.0")
        .default_headers({
            let mut headers = reqwest::header::HeaderMap::new();
            headers.insert("X-Custom", "value".parse().unwrap());
            headers
        })
        .pool_max_idle_per_host(5)
        .build()
        .unwrap();
}
Builder APIs are nearly identical; both clients support the same configuration options.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
use std::sync::Arc;
// Async client: cheap to clone, share freely
#[tokio::main]
async fn share_async_client() {
    let client = Client::new();
    // Clone is cheap - it just increments an Arc reference count
    let client1 = client.clone();
    let client2 = client.clone();
    // All clones share the same connection pool
    let handle1 = tokio::spawn(async move {
        client1.get("https://httpbin.org/get").send().await
    });
    let handle2 = tokio::spawn(async move {
        client2.get("https://httpbin.org/get").send().await
    });
    let _ = handle1.await;
    let _ = handle2.await;
}
// Blocking client: also cheap to share across threads
fn share_blocking_client() {
    let client = Arc::new(BlockingClient::new());
    let client1 = Arc::clone(&client);
    let client2 = Arc::clone(&client);
    let h1 = std::thread::spawn(move || {
        client1.get("https://httpbin.org/get").send().unwrap()
    });
    let h2 = std::thread::spawn(move || {
        client2.get("https://httpbin.org/get").send().unwrap()
    });
    h1.join().unwrap();
    h2.join().unwrap();
    // Note: all handles to this one Client share the same internal
    // runtime and connection pool; only separate Client instances
    // get separate runtimes and pools
}
Both clients can be shared across threads; sharing a single instance shares its connection pool.
use reqwest::Client;
use reqwest::blocking::Client as BlockingClient;
#[tokio::main]
async fn async_error_handling() {
    let client = Client::new();
    match client.get("https://nonexistent.invalid").send().await {
        Ok(response) => println!("Status: {}", response.status()),
        Err(e) => {
            // Inspect the error kind
            if e.is_timeout() {
                println!("Request timed out");
            } else if e.is_connect() {
                println!("Connection failed");
            } else {
                println!("Error: {}", e);
            }
        }
    }
}
fn blocking_error_handling() {
    let client = BlockingClient::new();
    match client.get("https://nonexistent.invalid").send() {
        Ok(response) => println!("Status: {}", response.status()),
        Err(e) => {
            // Same error types, just delivered synchronously
            if e.is_timeout() {
                println!("Request timed out");
            } else if e.is_connect() {
                println!("Connection failed");
            } else {
                println!("Error: {}", e);
            }
        }
    }
}
Error types are shared between both clients; only the delivery method differs.
// Use async client when:
// 1. Already using tokio/async in your application
// 2. Need to make many concurrent requests
// 3. Want efficient resource usage (fewer threads)
// 4. Integrating with other async code
// Use blocking client when:
// 1. Writing a CLI tool or simple script
// 2. Already synchronous codebase
// 3. Few requests, concurrency not needed
// 4. Simplicity is more important than performance
Choose based on your application's architecture and requirements.
| Aspect | reqwest::Client | reqwest::blocking::Client |
|--------|-------------------|----------------------------|
| Execution model | Async (non-blocking) | Sync (blocking) |
| Runtime | Caller provides tokio | Per-Client background tokio runtime |
| Concurrency | Native async concurrency | Requires threads |
| Connection pool | Per Client, shared across tasks | Per Client, on internal runtime |
| Memory usage | Lower (no thread per request) | Higher (thread overhead) |
| Code complexity | Requires async/await | Simple synchronous |
| Best for | High-concurrency services | Scripts, CLI tools |
reqwest::Client and reqwest::blocking::Client share the same underlying HTTP implementation through hyper, but differ fundamentally in execution model:
Async Client:
- Runs on the caller's tokio runtime with non-blocking I/O
- Scales to many concurrent requests without extra threads
Blocking Client:
- Wraps the async client, driven on a dedicated background thread's single-threaded runtime
- Provides a simple synchronous API at the cost of a runtime thread per Client
Key insight: The blocking client is essentially a wrapper around the async implementation, spawning an internal tokio runtime to execute requests. This design allows reqwest to maintain a single HTTP implementation while supporting both async and sync APIs. For applications already using tokio, the async client provides better resource utilization. For simple scripts or synchronous applications, the blocking client offers convenience without requiring async runtime setup. The performance difference is negligible for sequential requests but significant for high-concurrency scenarios.