Rust walkthroughs
In tracing, what's the difference between #[instrument] and manually creating spans with Span::current()?

Structured logging and distributed tracing require context to flow through your application, connecting disparate operations into coherent traces. The tracing crate provides two approaches for creating this context: the declarative #[instrument] attribute macro and the imperative Span::current() API. Understanding their differences helps you choose the right tool for each situation.
Spans represent units of work in your application. They carry contextual information that downstream subscribers can use for logging, metrics, and distributed tracing:
use tracing::{span, Level, info};
fn process_order(order_id: u64) {
let span = span!(Level::INFO, "process_order", order_id = %order_id);
let _enter = span.enter();
info!("Starting order processing");
// ... processing logic
info!("Order processed successfully");
}

Every event logged within this span automatically includes the order_id field, creating a consistent context without manual propagation.
The #[instrument] attribute automates span creation and entry:
use tracing::{info, instrument};
#[instrument]
fn process_order(order_id: u64) {
info!("Starting order processing");
// ... processing logic
info!("Order processed successfully");
}

This generates code equivalent to the manual span creation above. The macro captures function arguments as span fields, manages the span's lifetime, and ensures proper cleanup.
The macro transforms your function into instrumented code:
#[instrument]
fn process_order(order_id: u64) {
// Function body
}
// Expands to approximately:
fn process_order(order_id: u64) {
let span = tracing::span!(
tracing::Level::INFO,
"process_order",
order_id = tracing::field::debug(&order_id),
);
let __tracing_attr_guard = span.enter();
// Function body
}

The span is entered at function start and exits when the function returns, even if that return happens early due to ? or an explicit return.
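The exit-on-early-return behavior comes from ordinary RAII: the generated guard is dropped on every return path. This std-only sketch (SpanGuard and parse_id are illustrative stand-ins, not tracing APIs) shows a Drop guard firing on both the success path and the ? early return:

```rust
// A stand-in for the guard returned by `Span::enter`: Drop marks span exit.
struct SpanGuard(&'static str);

impl Drop for SpanGuard {
    fn drop(&mut self) {
        println!("exit span: {}", self.0);
    }
}

fn parse_id(s: &str) -> Result<i32, std::num::ParseIntError> {
    let _guard = SpanGuard("parse_id");
    let n: i32 = s.parse()?; // on Err, `?` returns early, but `_guard` is still dropped
    Ok(n)
}

fn main() {
    assert_eq!(parse_id("42"), Ok(42));
    assert!(parse_id("not-a-number").is_err()); // the guard's Drop ran on this path too
}
```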
The macro provides fine-grained control over which arguments become fields:
use tracing::instrument;
// Capture all arguments
#[instrument]
fn fetch_user(user_id: u64, tenant: String) -> User {
// Fields: user_id, tenant
}
// Skip sensitive data
#[instrument(skip(password))]
fn authenticate(username: String, password: String) -> Result<Token, Error> {
// Fields: username (password is skipped)
}
// Skip self for methods (the attribute goes on the method, not the impl block)
impl Database {
#[instrument(skip(self))]
fn query(&self, sql: &str) -> Result<Rows, Error> {
// Fields: sql (self is skipped)
}
}
// Custom field names and formatting
#[instrument(
fields(
user = %user_id,
request_id = %req.id,
)
)]
fn handle_request(user_id: u64, req: Request) {
// Custom fields with specific formatting
}

Span::current() returns the currently active span, allowing you to add fields or create child spans:
use tracing::{Span, span, Level, info};
fn process_order(order_id: u64) {
let span = span!(Level::INFO, "process_order", order_id = %order_id, customer_id = tracing::field::Empty);
let _enter = span.enter();
// Later, add contextual information; `customer_id` is declared Empty above
// because `record` only works for fields declared on the span
Span::current().record("customer_id", 12345_u64);
info!("Processing");
}

This approach gives explicit control over when spans are created, entered, and modified. Note that record silently ignores fields that weren't declared when the span was created; declare placeholders with tracing::field::Empty for values you'll fill in later.
The most significant difference lies in span lifecycle management:
use tracing::instrument;
#[instrument]
async fn fetch_data(url: &str) -> Result<Data, Error> {
// Span is active for the entire async operation
let response = http_get(url).await?;
let data = parse_response(response).await?;
Ok(data)
}

The macro handles async correctly: rather than holding an enter guard across the whole body, it instruments the returned future, so the span is entered on each poll and exited when the future yields, staying associated with the task across .await points.
use tracing::{span, Level, Span};
async fn fetch_data(url: &str) -> Result<Data, Error> {
// WRONG: the enter guard is held across .await points
let span = span!(Level::INFO, "fetch_data", url = %url);
let _enter = span.enter();
let response = http_get(url).await?; // span stays entered on this thread while the task is suspended
let data = parse_response(response).await?;
Ok(data)
}

The guard returned by enter() is tied to the current thread, and a named binding like _enter lives until the end of the function. Holding it across an .await is incorrect: while the task is suspended, the span remains entered on that thread and gets attributed to whatever else runs there, and if the task resumes on a different thread, the span is not entered there at all. This is a common async mistake.
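The guard's lifetime follows ordinary Drop timing: a named binding like _enter keeps the guard alive until the end of its enclosing scope. This std-only sketch (EnterGuard and the event log are illustrative, not tracing APIs) makes that timing visible:

```rust
use std::{cell::RefCell, rc::Rc};

type EventLog = Rc<RefCell<Vec<&'static str>>>;

// Illustrative stand-in for the guard returned by `Span::enter`.
struct EnterGuard(EventLog);

impl Drop for EnterGuard {
    fn drop(&mut self) {
        self.0.borrow_mut().push("exit");
    }
}

fn demo() -> Vec<&'static str> {
    let log: EventLog = Rc::new(RefCell::new(Vec::new()));
    {
        log.borrow_mut().push("enter");
        let _enter = EnterGuard(Rc::clone(&log)); // named binding: lives to end of block
        log.borrow_mut().push("work"); // still "inside the span"
    } // `_enter` dropped here -> "exit"
    log.borrow_mut().push("after");
    let events = log.borrow().clone();
    events
}

fn main() {
    assert_eq!(demo(), ["enter", "work", "exit", "after"]);
}
```

In synchronous code this scope-bound lifetime is exactly what you want; the async problem is that an .await suspends execution in the middle of that scope while the guard is still alive.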
For manual span management with async, use in_scope or instrument the future:
use tracing::{span, Level, Instrument};
async fn fetch_data(url: &str) -> Result<Data, Error> {
let span = span!(Level::INFO, "fetch_data", url = %url);
// Instrument the entire async block
async move {
let response = http_get(url).await?;
let data = parse_response(response).await?;
Ok(data)
}
.instrument(span)
.await
}

Or use in_scope for synchronous sections:
use tracing::{span, Level, info, Instrument};
async fn fetch_data(url: &str) -> Result<Data, Error> {
let span = span!(Level::INFO, "fetch_data", url = %url);
span.in_scope(|| {
info!("Starting fetch");
});
let response = http_get(url)
.instrument(span.clone())
.await?;
span.in_scope(|| {
info!("Parsing response");
});
let data = parse_response(response)
.instrument(span)
.await?;
Ok(data)
}

The macro excels for most function-level instrumentation:
use tracing::instrument;
#[instrument(skip(conn))]
async fn save_user(conn: &DbConnection, user: &User) -> Result<(), Error> {
conn.execute(
"INSERT INTO users (id, name) VALUES (?, ?)",
&[&user.id, &user.name],
).await?;
Ok(())
}
#[instrument(err)]
async fn process_request(req: Request) -> Result<Response, Error> {
let user = authenticate(&req.token).await?;
let data = fetch_data(&req.query).await?;
Ok(build_response(user, data))
}

The err attribute records returned errors: if the function returns Err, an ERROR-level event carrying the error is emitted within the span.
#[instrument(err)]
fn might_fail() -> Result<(), Error> {
// If this returns Err, an ERROR-level event with an `error` field
// (the error's Display output) is emitted in the span
Ok(())
}

Manual spans are required for scenarios the macro can't handle:
use tracing::{span, Level, Span};
use std::sync::Arc;
struct RequestContext {
span: Span,
// ... other fields
}
impl RequestContext {
fn new(request_id: String) -> Self {
let span = span!(Level::INFO, "request", %request_id);
RequestContext { span }
}
// Span persists across multiple method calls
}

use tracing::{span, Level, Span, info};
fn process_batch(items: &[Item]) {
let span = span!(Level::INFO, "batch_process", total = items.len(), current_index = tracing::field::Empty, item_id = tracing::field::Empty);
let _enter = span.enter();
for (idx, item) in items.iter().enumerate() {
// Update span fields dynamically; they must be declared on the span, here as Empty
Span::current().record("current_index", idx as u64);
Span::current().record("item_id", &item.id);
info!("Processing item");
}
}

use tracing::{span, Level, enabled};
fn debug_operation(data: &Data) {
if enabled!(Level::DEBUG) {
let span = span!(Level::DEBUG, "debug_operation", data = ?data);
let _enter = span.enter();
// Expensive debug-only operations
detailed_trace(data);
}
}

use tracing::{span, Level, info, Instrument};
async fn handle_websocket_connection(stream: TcpStream) {
let connection_span = span!(Level::INFO, "ws_connection",
remote_addr = %stream.peer_addr().unwrap()
);
// Main connection span persists for entire connection
async {
let mut ws = WebSocket::from_tcp(stream);
while let Some(msg) = ws.next().await {
// Each message gets its own child span
let msg_span = span!(Level::INFO, "ws_message",
kind = ?msg.kind
);
async {
handle_message(msg).await;
}
.instrument(msg_span)
.await;
}
}
.instrument(connection_span)
.await;
}

Real applications use both patterns:
use tracing::{instrument, span, Level, Span, info, Instrument};
#[instrument(skip(db), fields(customer_id = tracing::field::Empty, payment_id = tracing::field::Empty, error = tracing::field::Empty))]
async fn process_order(db: &Database, order: Order) -> Result<Receipt, Error> {
let order_span = Span::current();
// Add dynamic context as processing progresses; these fields are declared Empty above so record can fill them in
order_span.record("customer_id", &order.customer_id);
validate_order(&order)?;
// Create child span for sub-operation
let inventory_span = span!(parent: &order_span, Level::INFO, "check_inventory");
let available = check_inventory(db, &order.items)
.instrument(inventory_span)
.await?;
if !available {
order_span.record("error", "insufficient_inventory");
return Err(Error::OutOfStock);
}
let payment_span = span!(parent: &order_span, Level::INFO, "process_payment");
let payment = process_payment(db, &order.payment_info)
.instrument(payment_span)
.await?;
order_span.record("payment_id", &payment.id);
Ok(Receipt::new(order, payment))
}

The #[instrument] macro has minimal overhead, but a few optimizations are worth knowing:
use tracing::instrument;
// Skip expensive-to-format arguments
#[instrument(skip(large_data))]
fn process(large_data: &Vec<u8>) {
// large_data won't be formatted
}
// Use debug level for high-frequency operations
#[instrument(level = "debug")]
fn hot_path(id: u64) {
// Only creates span if debug logging is enabled
}
// Declare fields as Empty for deferred capture
#[instrument(
fields(
user_id = tracing::field::Empty, // Don't capture immediately
)
)]
fn deferred_capture(user_id: u64) {
// Record later when needed
tracing::Span::current().record("user_id", user_id);
}

Both approaches create structured data, but differ in how fields appear:
use tracing::{instrument, span, Level, info};
#[instrument(fields(user_id = %user_id, action = "login"))]
fn login(user_id: u64) {
// Fields are part of the span, not the message
info!("User logged in");
// Log output: user_id=123 action=login "User logged in"
}
fn login_manual(user_id: u64) {
let span = span!(Level::INFO, "login", user_id = %user_id, action = "login");
let _enter = span.enter();
info!("User logged in");
// Same structured output
}

#[instrument] provides a declarative, ergonomic way to create function-scoped spans with automatic argument capture and correct async handling. It's the right choice for most function-level instrumentation where the span lifetime matches the function's execution.
Span::current() and manual span management are necessary when spans need to cross function boundaries, persist across async operations in complex ways, require dynamic field updates, or need conditional creation based on runtime state.
The two approaches complement each other: use #[instrument] for the common case of function-scoped tracing, and reach for manual spans when you need the additional control they provide. In production applications, you'll typically use both: #[instrument] on most functions, with manual spans for complex workflows that need precise context management across async boundaries or long-lived operations.