The Rust Expression Guide

Filtering and Transforming with filter_map

Last Updated: 2026-04-04

What this page is about

filter_map is one of the highest-value iterator methods in everyday Rust because it captures a very common pattern cleanly: for each input item, either produce one output item or produce nothing.

That makes it a natural replacement for many manual loops with conditional push, and for many two-step iterator pipelines where code first filters and then transforms.

This page explains the mental model behind filter_map, shows where it improves clarity, and also shows when plain filter(...).map(...) is clearer. The goal is not to treat filter_map as automatically superior. It is to use it when its shape matches the work being done.

The core mental model

A good way to think about filter_map is this: each input item gets one chance to become an output item.

  • return Some(output) if the item should produce a value
  • return None if the item should produce nothing

That is why the right mental model is not "filter, then map" in the abstract. It is "produce zero or one output item from each input item."

A tiny example:

fn parse_numbers(values: &[&str]) -> Vec<u32> {
    values.iter().filter_map(|s| s.parse::<u32>().ok()).collect()
}

Each string either yields one parsed number or yields nothing. That is exactly the shape filter_map is designed for.
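
A quick way to see the rule in action is to assert on a couple of illustrative inputs (the sample data here is made up):

```rust
fn parse_numbers(values: &[&str]) -> Vec<u32> {
    values.iter().filter_map(|s| s.parse::<u32>().ok()).collect()
}

fn main() {
    // "x" fails to parse and simply disappears from the output.
    assert_eq!(parse_numbers(&["1", "x", "3"]), vec![1, 3]);
    // An all-invalid input produces an empty Vec rather than an error.
    assert_eq!(parse_numbers(&["a", "b"]), Vec::<u32>::new());
}
```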

Why `filter_map` matters

Without filter_map, this pattern often appears as a loop with conditional push.

fn parse_numbers(values: &[&str]) -> Vec<u32> {
    let mut out = Vec::new();
 
    for value in values {
        if let Ok(n) = value.parse::<u32>() {
            out.push(n);
        }
    }
 
    out
}

This is fine, but the real idea is simple: from each input, maybe produce a number.

With filter_map:

fn parse_numbers(values: &[&str]) -> Vec<u32> {
    values.iter().filter_map(|s| s.parse::<u32>().ok()).collect()
}

The improvement is not only shorter code. It is that the structure now matches the task directly.

A first comparison: loop versus `filter_map`

Consider a function that extracts non-empty trimmed usernames.

Loop form:

fn usernames(values: &[&str]) -> Vec<String> {
    let mut out = Vec::new();
 
    for value in values {
        let trimmed = value.trim();
        if !trimmed.is_empty() {
            out.push(trimmed.to_string());
        }
    }
 
    out
}

filter_map form:

fn usernames(values: &[&str]) -> Vec<String> {
    values
        .iter()
        .filter_map(|value| {
            let trimmed = value.trim();
            if trimmed.is_empty() {
                None
            } else {
                Some(trimmed.to_string())
            }
        })
        .collect()
}

The second version makes the per-item rule explicit: blank items produce nothing, non-blank items produce one normalized output.

This is the core use case for filter_map.

How `filter_map` relates to `Option`

filter_map works especially well when the closure naturally produces an Option.

That includes situations such as:

  • parsing where invalid input should be skipped
  • lookups where missing entries should be ignored
  • conditional extraction where only some variants produce output
  • normalization where blank or invalid content should disappear

For example:

fn first_words<'a>(lines: &[&'a str]) -> Vec<&'a str> {
    lines
        .iter()
        .filter_map(|line| line.split_whitespace().next())
        .collect()
}

Each line either has a first word or does not. filter_map fits that shape exactly.
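
The lookup case from the list above can be sketched like this; the resolve_names function and its sample data are hypothetical, but the shape is the standard HashMap::get pattern:

```rust
use std::collections::HashMap;

// Resolve a list of ids to names, silently dropping ids with no entry.
fn resolve_names(ids: &[u32], names: &HashMap<u32, String>) -> Vec<String> {
    ids.iter()
        .filter_map(|id| names.get(id).cloned())
        .collect()
}

fn main() {
    let mut names = HashMap::new();
    names.insert(1, "alice".to_string());
    names.insert(3, "carol".to_string());
    // Id 2 has no entry, so it produces nothing.
    assert_eq!(resolve_names(&[1, 2, 3], &names), vec!["alice", "carol"]);
}
```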

A very common real use: parsing while skipping failures

One of the most common real-world uses of filter_map is turning parseable items into values while ignoring the rest.

fn ports(values: &[&str]) -> Vec<u16> {
    values
        .iter()
        .filter_map(|value| value.trim().parse::<u16>().ok())
        .collect()
}

This is concise, but more importantly, it is honest about the behavior:

  • each input is tried once
  • successful parses produce one output item
  • failed parses produce nothing

That is a better fit than map because not every input produces an output.
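
For contrast, here is a sketch of the same parsing written with map alone. Every input must then produce an output, so failures survive as None values the caller still has to deal with:

```rust
// With map alone, every input yields exactly one output, so a failed
// parse has to be carried along as None instead of disappearing.
fn ports_with_map(values: &[&str]) -> Vec<Option<u16>> {
    values
        .iter()
        .map(|value| value.trim().parse::<u16>().ok())
        .collect()
}

fn main() {
    // The failed parse of "x" remains in the output as None.
    assert_eq!(ports_with_map(&["80", "x"]), vec![Some(80), None]);
}
```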

Replacing `filter(...).map(...)` when the transform decides inclusion

A key judgment point is whether the filtering condition depends on the transformed result.

If the transformation itself determines whether the item survives, filter_map is often the clearest shape.

For example:

fn even_numbers(values: &[&str]) -> Vec<u32> {
    values
        .iter()
        .filter_map(|value| {
            let n = value.trim().parse::<u32>().ok()?;
            if n % 2 == 0 {
                Some(n)
            } else {
                None
            }
        })
        .collect()
}

This is more natural than trying to separate filtering and mapping into two stages, because the parse result and the inclusion decision are tied together.

When `filter(...).map(...)` is clearer

filter_map is not automatically better. If the filtering condition is conceptually separate from the transformation, then filter(...).map(...) is often clearer.

For example:

fn long_names(values: &[&str]) -> Vec<String> {
    values
        .iter()
        .map(|s| s.trim())
        .filter(|s| s.len() >= 5)
        .map(|s| s.to_uppercase())
        .collect()
}

This is readable because the pipeline has clear stages:

  • normalize
  • keep only long items
  • transform the survivors

A filter_map version could be written, but it would blur those distinct steps into one closure. That would usually be worse.

A practical rule is this: if filtering and transformation are conceptually one decision, filter_map fits well. If they are distinct stages, keep them distinct.
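
For a concrete sense of that trade-off, here is a sketch of long_names collapsed into a single filter_map closure. It behaves the same, but the three stages are no longer visible at a glance:

```rust
// Normalization, filtering, and transformation all live in one closure.
fn long_names(values: &[&str]) -> Vec<String> {
    values
        .iter()
        .filter_map(|s| {
            let s = s.trim();
            if s.len() >= 5 {
                Some(s.to_uppercase())
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    assert_eq!(long_names(&["  alice ", "bob"]), vec!["ALICE"]);
}
```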

Comparing three shapes directly

It helps to compare three different ways to express the same kind of logic.

Manual loop:

fn ids_loop(values: &[&str]) -> Vec<u32> {
    let mut out = Vec::new();
    for value in values {
        if let Some(id) = value.strip_prefix("id:") {
            if let Ok(id) = id.trim().parse::<u32>() {
                out.push(id);
            }
        }
    }
    out
}

Two-stage filter_map pipeline:

fn ids_split(values: &[&str]) -> Vec<u32> {
    values
        .iter()
        .filter_map(|value| value.strip_prefix("id:"))
        .filter_map(|id| id.trim().parse::<u32>().ok())
        .collect()
}

Single filter_map:

fn ids_filter_map(values: &[&str]) -> Vec<u32> {
    values
        .iter()
        .filter_map(|value| {
            let id = value.strip_prefix("id:")?;
            id.trim().parse::<u32>().ok()
        })
        .collect()
}

The best version depends on what you want the reader to see. The last version is compact and coherent because the whole rule is "from each input, maybe extract one id." The middle version is also good if you want to emphasize the two stages separately.

Using `?` inside `filter_map` closures

One reason filter_map is pleasant to use is that closures returning Option<_> often pair nicely with ?.

fn valid_ids(values: &[&str]) -> Vec<u32> {
    values
        .iter()
        .filter_map(|value| {
            let body = value.strip_prefix("id:")?;
            let id = body.trim().parse::<u32>().ok()?;
            Some(id)
        })
        .collect()
}

This reads naturally:

  • if the prefix is missing, produce nothing
  • if parsing fails, produce nothing
  • otherwise produce the parsed id

That is often cleaner than deeply nested if let or match inside a manual loop.
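
For comparison, here is a sketch of the same rule as a manual loop; each failure point becomes another level of nesting:

```rust
fn valid_ids_loop(values: &[&str]) -> Vec<u32> {
    let mut out = Vec::new();
    for value in values {
        // One level of nesting per failure point.
        if let Some(body) = value.strip_prefix("id:") {
            if let Ok(id) = body.trim().parse::<u32>() {
                out.push(id);
            }
        }
    }
    out
}

fn main() {
    assert_eq!(valid_ids_loop(&["id: 7", "name: x", "id:9"]), vec![7, 9]);
}
```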

Meaningful examples

Example 1: keep only parseable retry counts.

fn retry_counts(values: &[&str]) -> Vec<u8> {
    values
        .iter()
        .filter_map(|value| value.trim().parse::<u8>().ok())
        .collect()
}

Example 2: extract first words from non-empty lines.

fn first_words<'a>(lines: &[&'a str]) -> Vec<&'a str> {
    lines
        .iter()
        .filter_map(|line| line.split_whitespace().next())
        .collect()
}

Example 3: keep only non-empty normalized usernames.

fn usernames(values: &[&str]) -> Vec<String> {
    values
        .iter()
        .filter_map(|value| {
            let trimmed = value.trim();
            if trimmed.is_empty() {
                None
            } else {
                Some(trimmed.to_lowercase())
            }
        })
        .collect()
}

Example 4: extract numeric suffixes from tagged strings.

fn tagged_ids(values: &[&str]) -> Vec<u32> {
    values
        .iter()
        .filter_map(|value| {
            let suffix = value.strip_prefix("item-")?;
            suffix.parse::<u32>().ok()
        })
        .collect()
}

A practical parsing example

Suppose you are processing a simple configuration-like input where each line may or may not contain a valid port assignment.

fn assigned_ports(lines: &[&str]) -> Vec<u16> {
    lines
        .iter()
        .filter_map(|line| {
            let (_, value) = line.split_once('=')?;
            value.trim().parse::<u16>().ok()
        })
        .collect()
}

This works well because the rule for each line is unified:

  • if there is no = sign, skip the line
  • if the value is not a valid port, skip the line
  • otherwise produce the parsed port

That is exactly a zero-or-one-output-item rule.
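
A small check makes the per-line rule concrete (the sample lines are illustrative):

```rust
fn assigned_ports(lines: &[&str]) -> Vec<u16> {
    lines
        .iter()
        .filter_map(|line| {
            let (_, value) = line.split_once('=')?;
            value.trim().parse::<u16>().ok()
        })
        .collect()
}

fn main() {
    let lines = ["port=8080", "just a comment", "port=not-a-port", "backup=9090"];
    // Lines without '=' and lines with unparseable values are skipped.
    assert_eq!(assigned_ports(&lines), vec![8080, 9090]);
}
```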

A request-processing example

Request-processing code often has lists of optional or noisy fields that need to become a clean collection.

fn normalized_tags(tags: &[Option<String>]) -> Vec<String> {
    tags
        .iter()
        .filter_map(|tag| {
            let tag = tag.as_deref()?.trim();
            if tag.is_empty() {
                None
            } else {
                Some(tag.to_lowercase())
            }
        })
        .collect()
}

This is a realistic use of filter_map because each raw tag may produce one normalized tag or none at all. There is no need to first build an intermediate stream of blank strings and then filter them out separately unless that intermediate stage is conceptually important.

When a manual loop is still better

filter_map is powerful, but it is not mandatory. A manual loop is often better when:

  • the closure becomes long enough to hide the rule
  • the code needs side effects such as logging or metrics for skipped items
  • several outputs may be produced from one input item
  • the operation is easier to explain step by step

For example, if you need to log every rejected input, a loop may be clearer:

fn parse_with_logging(values: &[&str]) -> Vec<u32> {
    let mut out = Vec::new();
 
    for value in values {
        match value.trim().parse::<u32>() {
            Ok(n) => out.push(n),
            Err(_) => eprintln!("skipping invalid value: {value}"),
        }
    }
 
    out
}

filter_map is best when the closure stays small and the rule remains easy to see.

When `filter(...).map(...)` says more

Sometimes a two-stage pipeline communicates intent better than a single filter_map closure.

fn uppercase_keywords(values: &[&str]) -> Vec<String> {
    values
        .iter()
        .map(|s| s.trim())
        .filter(|s| s.starts_with("key:"))
        .map(|s| s.to_uppercase())
        .collect()
}

This is easy to read because the stages are conceptually separate:

  • normalize whitespace
  • keep keyword lines
  • transform to uppercase

A filter_map version would collapse those stages into one closure, but that would not help the reader. This is one of the most important judgment calls around filter_map.

A small CLI example

Here is a small command-line example where filter_map turns raw arguments into parsed levels.

use std::env;
 
fn main() {
    let levels: Vec<u8> = env::args()
        .skip(1)
        .filter_map(|arg| arg.trim().parse::<u8>().ok())
        .collect();
 
    println!("parsed levels: {levels:?}");
}

You can try it with:

cargo run -- 1 2 nope 4    # parsed levels: [1, 2, 4]
cargo run -- 3 x 7 9       # parsed levels: [3, 7, 9]
cargo run                  # parsed levels: []

This example is simple, but it shows the essence of the method clearly: each argument either becomes one parsed value or disappears.

A small project file for experimentation

You can experiment with the examples in a small project like this:

[package]
name = "filtering-and-transforming-with-filter-map"
version = "0.1.0"
edition = "2024"

Then place one example at a time in src/main.rs and run:

cargo run
cargo fmt
cargo clippy
cargo test

cargo fmt helps reveal the shape of iterator closures. cargo clippy is useful for spotting manual loops and filter(...).map(...) chains that may be expressing a filter_map-style pattern (its manual_filter_map lint covers the latter case).

Common mistakes

There are a few recurring mistakes around filter_map.

First, treating it as automatically superior to filter(...).map(...) even when the two stages are conceptually separate.

Second, packing too much work into the closure so the reader can no longer see the zero-or-one-output rule clearly.

Third, using it when one input can yield many outputs. In those cases, flat_map or a different structure is a better fit.

Fourth, using it when skipped items need explicit handling such as logging or metrics, where a loop may be clearer.

Fifth, forgetting the core mental model and reading it as a vague optimization rather than as a clear data-shaping operation.
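
The third mistake is worth a sketch. When one input can fan out into several outputs, flat_map expresses that directly (the all_words function here is illustrative):

```rust
// filter_map: zero or one output per input.
// flat_map:   zero or more outputs per input.
fn all_words<'a>(lines: &[&'a str]) -> Vec<&'a str> {
    lines
        .iter()
        .flat_map(|line| line.split_whitespace())
        .collect()
}

fn main() {
    // "a b" yields two outputs, "" yields none, "c" yields one.
    assert_eq!(all_words(&["a b", "", "c"]), vec!["a", "b", "c"]);
}
```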

Refactoring patterns to watch for

When reviewing code, these are strong signals that filter_map may help:

  1. a loop pushes into a Vec only inside one conditional branch
  2. the code parses or extracts values and silently skips failures or absences
  3. each input item naturally yields either one output or nothing
  4. filtering depends on the result of the attempted transformation
  5. several nested if let or match checks inside a loop are only there to decide whether one value should be emitted

Typical before-and-after examples look like this:

fn before(values: &[&str]) -> Vec<u32> {
    let mut out = Vec::new();
    for value in values {
        if let Ok(n) = value.parse::<u32>() {
            out.push(n);
        }
    }
    out
}
 
fn after(values: &[&str]) -> Vec<u32> {
    values.iter().filter_map(|value| value.parse::<u32>().ok()).collect()
}

And with extraction:

fn before<'a>(values: &[&'a str]) -> Vec<&'a str> {
    let mut out = Vec::new();
    for value in values {
        if let Some(rest) = value.strip_prefix("id:") {
            out.push(rest);
        }
    }
    out
}
 
fn after<'a>(values: &[&'a str]) -> Vec<&'a str> {
    values.iter().filter_map(|value| value.strip_prefix("id:")).collect()
}

Key takeaways

filter_map is one of the most useful iterator methods in real Rust code because it expresses a common rule directly: each input item may produce one output item, or none.

The main ideas from this page are:

  • think of filter_map as "produce zero or one output item from each input"
  • it is especially useful for parsing, extraction, optional lookups, and normalization
  • it often replaces manual loops with conditional push
  • it is often better than filter(...).map(...) when the inclusion decision depends on the transformation itself
  • filter(...).map(...) is often clearer when filtering and transformation are conceptually separate stages
  • a manual loop is still better when the closure becomes too dense or when skipped items need side effects

Good iterator code is not about using the fanciest method. It is about choosing the method whose shape matches the work being done.