
LangGraph Map-Reduce: Parallel Execution with Send API

Learn how to run LangGraph branches in parallel using the Send API and map-reduce pattern to process multiple items at once and merge results cleanly.

Written by Selva Prabhakaran | 21 min read

Fan out work to parallel branches, process each item at the same time, and merge the results — all with the Send API.

Say you have 50 documents that each need an LLM summary. If you run them one by one, it takes 10 minutes. Run them side by side? About twelve seconds. That gap is what turns a demo into a real product.

LangGraph’s Send API makes this kind of parallel work simple. In this post, I will show you how it works — with code you can copy and run in your own projects.

Let me paint the big picture before we touch any code. Your graph starts at one node. That node looks at the current state — maybe a list of docs, topics, or search queries — and picks how many branches to spin up.

It does not know the count when you build the graph. It finds out at runtime. Each branch runs the same node but gets a different slice of the state. When every branch is done, their outputs merge back through a reducer.

That is map-reduce in LangGraph: fan out on the fly, run in parallel, fan back in on its own. The “map” step hands out work. The “reduce” step gathers results. The Send object ties them together.

What Is the LangGraph Map-Reduce Pattern?

The idea is simple: take a big task, break it into small pieces, run the pieces at the same time, and then join the answers. In coding terms, map sends a task to each item. reduce gathers all the answers into one.
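In plain Python, those two steps look like this — a toy example with numbers, no LangGraph involved:

```python
import functools

items = [1, 2, 3, 4]

# Map: apply the same work to every item independently
squared = list(map(lambda x: x * x, items))

# Reduce: fold all the partial results into one answer
total = functools.reduce(lambda a, b: a + b, squared)

print(squared)  # [1, 4, 9, 16]
print(total)    # 30
```

The Send API plays the role of `map` here, and the reducer on your state field plays the role of `reduce`.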

Why does LangGraph need a dedicated tool for this? The core problem is that the number of items is not fixed. Your graph might produce five topics in one run and twelve in the next. Hard-coded edges have no way to adapt to that.

text
Static edges (compile time):      Dynamic Send (runtime):

  Node A ──→ Node B                  Node A ──→ Send() ──→ Node B #1
  Node A ──→ Node C                                   ──→ Node B #2
                                                      ──→ Node B #3
  (fixed at build time)              (count decided at runtime)

That is where Send steps in. Inside a routing function, you build a list of Send objects — one for each item. Each object says which node to target and what state to hand it. LangGraph launches them all at once in a single superstep, and only moves forward after every branch reports back.

KEY INSIGHT: Send creates graph edges while your code runs, not when you compile. Normal conditional edges choose among routes you set up in advance. Send invents fresh routes — as many as the data demands — right then and there.

Prerequisites

  • Python version: 3.10+
  • Required libraries: langgraph (0.4+), langchain-openai (0.3+), langchain-core (0.3+)
  • Install: pip install langgraph langchain-openai langchain-core
  • API key: An OpenAI API key set as OPENAI_API_KEY. See OpenAI’s docs to create one.
  • Time to complete: ~30 minutes
  • Prior knowledge: LangGraph basics (nodes, edges, state, conditional edges) from earlier posts in this series.

Below is the import block. Notice Send from langgraph.types — that is the star of this post. We also grab operator.add, which will serve as the reducer that stitches parallel outputs together.

python
import operator
from typing import Annotated, TypedDict

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

Why Does Parallel Execution Matter in LangGraph?

Imagine you need to sum up 20 research papers. The LLM takes about 3 seconds per paper.

Do them in a line and you wait a full minute. Fire all 20 at once, though, and you are done in roughly 3 seconds plus a small overhead. Same logic, same prompts — but 20 times faster.

The gains go past speed:

  • Throughput: You process more items in the same window of time.
  • Responsiveness: Your app feels snappy instead of sluggish.
  • Quota efficiency: API rate limits often allow many concurrent calls, so you put your quota to work instead of leaving it idle.

You might wonder: why not just use asyncio.gather() and call it a day? Because every branch writes to shared state. If two branches touch the same field without a plan, you get race bugs and lost data.

LangGraph solves this with its reducer system. Reducers define how parallel writes merge. I will show you exactly how they work in the next section.

How Does the Send API Work — The Core Idea?

The place where you create Send objects is a routing function — the same kind you use for conditional edges. The twist is that instead of returning a single node name, you return a whole list of Send objects. Each object carries two pieces of info: which node to run, and what data to give it.

python
# Send signature: Send(node_name: str, state: dict)
# Each Send fires one parallel branch:
Send("process_item", {"item": "document_1", "content": "..."})
Send("process_item", {"item": "document_2", "content": "..."})

LangGraph groups all those Send objects into a superstep. Think of a superstep as a “go” signal for every branch at once. Nothing moves forward until the slowest branch wraps up.

One thing that makes this really flexible: each branch can receive state that looks nothing like the main graph state. You might send just an ID and a chunk of text. Or a whole new set of keys. The branch works with whatever you packed into its Send object.

TIP: Create a dedicated TypedDict for your branch nodes. It does not need to mirror the main graph schema. Keeping branch state small and focused makes the code much easier to read and debug.

How Do You Build Your First LangGraph Map-Reduce Graph?

Time to write real code. Let me walk you through a small but complete graph.

The plan: give the graph a topic, have it brainstorm three subtopics, write a joke about each subtopic in parallel, and then let the LLM choose the funniest one. I picked jokes because (a) the three LLM calls are fully independent, which is ideal for parallel work, and (b) the fan-out/fan-in shape is easy to follow.

Two state classes set the stage. OverallState covers the full pipeline. JokeState is the tiny packet each branch receives — nothing more than a single subject string.

python
class JokeState(TypedDict):
    subject: str


class OverallState(TypedDict):
    topic: str
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]
    best_joke: str

Pay attention to Annotated[list[str], operator.add] on jokes. That tag is the reducer — it tells LangGraph how to combine writes from several branches. Each branch hands back {"jokes": ["some joke"]}, and the reducer chains those tiny lists into one long list. Drop the tag and the parallel writes collide: recent LangGraph versions raise an InvalidUpdateError, while older ones silently keep only one branch's joke. Either way, work is lost.

Quick check — guess the output: Three branches return {"jokes": ["joke_A"]}, {"jokes": ["joke_B"]}, and {"jokes": ["joke_C"]}. What does state["jokes"] look like after all three finish? Answer: ["joke_A", "joke_B", "joke_C"]. The operator.add reducer joined them.
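You can verify that merge with plain Python — the same reducer applied outside LangGraph (a toy sketch, not LangGraph internals):

```python
import operator
from functools import reduce

# Writes from three parallel branches, in completion order
branch_writes = [["joke_A"], ["joke_B"], ["joke_C"]]

# operator.add on lists is concatenation; folding it over the
# writes reproduces what the reducer does to the jokes field
merged = reduce(operator.add, branch_writes, [])
print(merged)  # ['joke_A', 'joke_B', 'joke_C']
```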

How Do You Build the Map Phase — Fanning Out?

Fanning out takes two pieces working together. First, a node that asks the LLM for subtopics. Second, a routing function that turns those subtopics into Send objects.

Let me show you the node first. It prompts the LLM for three subtopics and chops the comma-based reply into a Python list.

python
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)


def generate_subjects(state: OverallState) -> dict:
    """Generate subtopics related to the main topic."""
    prompt = (
        f"Generate exactly 3 short subtopics related to '{state['topic']}'. "
        f"Return them as a comma-separated list, nothing else."
    )
    response = model.invoke([HumanMessage(content=prompt)])
    subjects = [s.strip() for s in response.content.split(",")]
    return {"subjects": subjects}

Now the routing function — the true map step. It loops over state["subjects"] and builds one Send per entry. Each Send targets the "generate_joke" node and carries a JokeState with that single subject.

python
def map_to_jokes(state: OverallState) -> list[Send]:
    """Create one parallel branch per subject."""
    return [
        Send("generate_joke", {"subject": subject})
        for subject in state["subjects"]
    ]

Three subjects? Three branches launch side by side. Ten subjects? Ten branches. The data itself sets the count — nothing is hard-coded.

KEY INSIGHT: That routing function IS your map step. It controls three things: which items go parallel, how to split the data, and what each branch sees. Master this function and you master the pattern.

How Do You Build the Reduce Phase — Pulling Results Together?

Every branch fires the same generate_joke function. The function takes a JokeState — not the full OverallState — and hands back one joke wrapped in a list.

python
def generate_joke(state: JokeState) -> dict:
    """Generate a joke about the given subject."""
    prompt = f"Write a short, funny one-liner joke about {state['subject']}."
    response = model.invoke([HumanMessage(content=prompt)])
    return {"jokes": [response.content]}

Watch the return value closely. The joke sits inside a one-item list: [response.content]. Why? Because operator.add works on lists. Hand back a raw string and the reducer either raises a TypeError (you cannot add a list and a string) or, string to string, concatenates text end to end instead of appending items — a failure either way.
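You can see the failure mode with plain Python — a toy sketch of the reducer, outside LangGraph:

```python
import operator
from functools import reduce

# CORRECT: each branch wraps its joke in a one-item list
merged = reduce(operator.add, [["joke one"], ["joke two"]], [])
print(merged)  # ['joke one', 'joke two']

# WRONG: a bare string meets the accumulating list — operator.add raises
try:
    reduce(operator.add, [["joke one"], "joke two"], [])
    failed = False
except TypeError:
    failed = True
print("TypeError raised:", failed)  # TypeError raised: True
```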

Notice too that the function signature says JokeState. A branch has tunnel vision: it sees only the subject it was given. Other branches might as well not exist.

After every branch reports back, the reducer has stitched their lists into one. The reduce node can now scan all the jokes and pick a winner.

python
def pick_best_joke(state: OverallState) -> dict:
    """Select the best joke from all generated jokes."""
    jokes_text = "\n".join(
        f"{i+1}. {joke}" for i, joke in enumerate(state["jokes"])
    )
    prompt = (
        f"Here are some jokes:\n{jokes_text}\n\n"
        f"Pick the funniest one. Return ONLY the joke text, nothing else."
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"best_joke": response.content}

By the time this node runs, state["jokes"] holds every joke from every branch, already merged. The reducer did the heavy lifting in the background.

How Do You Wire the Graph Together?

Time to connect the dots. The line that matters most is add_conditional_edges — it hooks the routing function to the graph, which triggers the parallel fan-out. On the other side, add_edge from generate_joke to pick_best_joke is the fan-in point. LangGraph will not call pick_best_joke until every last branch is done.

python
builder = StateGraph(OverallState)

builder.add_node("generate_subjects", generate_subjects)
builder.add_node("generate_joke", generate_joke)
builder.add_node("pick_best_joke", pick_best_joke)

builder.add_edge(START, "generate_subjects")
builder.add_conditional_edges("generate_subjects", map_to_jokes)
builder.add_edge("generate_joke", "pick_best_joke")
builder.add_edge("pick_best_joke", END)

graph = builder.compile()

Run it and check the results:

python
result = graph.invoke({"topic": "animals"})
print("Topic:", result["topic"])
print("Subjects:", result["subjects"])
print("\nAll jokes:")
for i, joke in enumerate(result["jokes"], 1):
    print(f"  {i}. {joke}")
print(f"\nBest joke: {result['best_joke']}")

You will see three subjects drawn from “animals,” three jokes (one per subject, all made in parallel), and the LLM’s pick for the best joke. Your exact subjects and jokes will change each run.

How Do You Use Send() with Different Target Nodes?

So far, every Send has gone to the same node. But what if items need different handling?

Say you have a mix of short notes and long reports. A short note can go straight to a summary node. A long report should first be split into chunks, then each chunk gets its own summary. Your routing function inspects the length and picks the right destination. Both target nodes still fire within the same superstep — LangGraph treats them as equals.

python
def route_by_length(state: OverallState) -> list[Send]:
    """Route items to different nodes based on their length."""
    sends = []
    for item in state["items"]:
        if len(item) < 500:
            sends.append(Send("quick_summary", {"text": item}))
        else:
            sends.append(Send("chunk_and_summarize", {"text": item}))
    return sends

I use this pattern often in research agents. One user query might need Google, another should hit a SQL database, and a third belongs in a vector store. The routing function reads each query and funnels it to the matching tool node — all in one superstep.

WARNING: Every node targeted by Send in the same routing function must write to state fields that have reducers. If two branches both write to a plain str field, one wipes out the other. Always use Annotated[list, operator.add] for fields that get writes from parallel branches.

Real-World Example — Parallel Document Summaries

Let me build something closer to a real product. This graph accepts a batch of documents, produces an individual summary for each one in parallel, and then drafts a single overview from all the summaries.

We follow the same two-schema approach. DocState is the small packet each branch receives — just an ID and some text. SummaryState covers the full pipeline, including the merged output.

python
class DocState(TypedDict):
    doc_id: str
    content: str


class SummaryState(TypedDict):
    documents: list[dict]
    summaries: Annotated[list[str], operator.add]
    executive_summary: str

Each branch runs summarize_doc. The function takes a DocState, prompts the LLM for a 2-3 sentence recap, and wraps the answer in a list. I add the doc ID as a prefix so you can trace which summary belongs to which source.

python
def summarize_doc(state: DocState) -> dict:
    """Summarize a single document."""
    prompt = (
        f"Summarize this document in 2-3 sentences:\n\n"
        f"Document {state['doc_id']}:\n{state['content']}"
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"summaries": [f"[{state['doc_id']}] {response.content}"]}

The fan-out function below loops over state["documents"] and builds one Send per entry.

python
def fan_out_docs(state: SummaryState) -> list[Send]:
    """Create one summarization branch per document."""
    return [
        Send("summarize_doc", {
            "doc_id": doc["id"],
            "content": doc["content"],
        })
        for doc in state["documents"]
    ]

After every branch finishes, the reducer has joined all the short summaries into one list. The final node reads that list and asks the LLM for a combined overview.

python
def write_executive_summary(state: SummaryState) -> dict:
    """Combine all summaries into an executive summary."""
    all_summaries = "\n\n".join(state["summaries"])
    prompt = (
        f"Based on these document summaries, write a brief "
        f"executive summary (3-4 sentences):\n\n{all_summaries}"
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"executive_summary": response.content}

The wiring mirrors what we did for jokes. Only the functions inside the nodes changed.

python
summary_builder = StateGraph(SummaryState)

summary_builder.add_node("summarize_doc", summarize_doc)
summary_builder.add_node("write_executive_summary", write_executive_summary)

summary_builder.add_conditional_edges(START, fan_out_docs)
summary_builder.add_edge("summarize_doc", "write_executive_summary")
summary_builder.add_edge("write_executive_summary", END)

summary_graph = summary_builder.compile()
python
docs = [
    {"id": "doc-1", "content": "LangGraph is a framework for building stateful AI agents using graph-based orchestration."},
    {"id": "doc-2", "content": "The Send API enables parallel execution by creating runtime-determined branches in a graph."},
    {"id": "doc-3", "content": "Reducers in LangGraph safely merge state updates from concurrent node executions."},
]

result = summary_graph.invoke({"documents": docs})
print("Individual summaries:")
for s in result["summaries"]:
    print(f"  {s}")
print(f"\nExecutive summary:\n{result['executive_summary']}")

All three summaries happen at once. The overview waits until every individual summary lands. Scale this to 50 documents and nothing in the graph changes — Send just spawns more branches automatically.

How Do You Mix Map-Reduce with Conditional Edges?

You can blend map-reduce with standard conditional edges to build richer workflows.

A pattern I find useful: after the reduce step, run a quality gate. If the results look good, move on. If not, loop back and let the graph try again.

python
def quality_check(state: OverallState) -> str:
    """Route based on result quality."""
    if len(state["jokes"]) >= 3:
        return "accept"
    return "retry"

Wire it as a conditional edge after the reduce node:

python
builder.add_conditional_edges(
    "pick_best_joke",
    quality_check,
    {"accept": END, "retry": "generate_subjects"},
)

Now you have a feedback loop. The graph brainstorms subjects, fans out jokes, picks a winner, runs the gate, and circles back if the results fall short. Send powers the parallel leg. Conditional edges power the loop logic. They mix well because LangGraph treats Send as just another way to route.

TIP: Always set a max loop count when you pair map-reduce with loops. Without one, a bad quality check will loop forever. Pass {"recursion_limit": 10} in the config: graph.invoke(input, config={"recursion_limit": 10}).

When Should You NOT Use Map-Reduce?

Not every batch job needs Send. Here are cases where it does more harm than good:

  • Fixed branch count known at build time. If you always work on exactly 3 items, use normal parallel edges. Send shines when the count is not known ahead of time.
  • Branches that depend on each other. Each Send branch runs alone. If branch 2 needs output from branch 1, you need them in order — not map-reduce.
  • CPU-heavy tasks. LangGraph runs on asyncio, which helps with I/O tasks (API calls, database queries). CPU-heavy jobs will not gain much unless you push them to a process pool.

NOTE: For fewer than 3 items, the cost of Send routing may be more than the time you save. Use step-by-step runs for tiny batches. Save map-reduce for dynamic or large item counts.

What About Speed Limits and Memory?

Parallel runs have real limits. Here is what counts most in practice.

API rate limits are the biggest wall. If you fan out 200 LLM calls and your limit is 60 per minute, you will get 429 errors. Break your Send calls into batches, or add retry logic with growing wait times inside your node functions.
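A minimal retry wrapper you could use inside a node function — a hedged sketch where `flaky_call` stands in for whatever hits the rate limit (e.g. lambda: model.invoke(messages)):

```python
import random
import time

def invoke_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus a little jitter
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)

# Demo with a fake call that fails twice, then succeeds
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = invoke_with_backoff(flaky_call, base_delay=0.01)
print(result)  # ok
```

In a real node you would also catch only the rate-limit exception class your client library raises, not every Exception.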

Memory use grows in step with branch count. Each branch keeps its own copy of state while it runs. Keep the Send payload small — send IDs and let each branch fetch its own data.

Superstep "all or nothing" means if one branch in a superstep fails, ALL branches lose their state updates. With checkpoints on, LangGraph retries just the ones that failed. Without checkpoints, the whole batch runs again.

Common Mistakes and How to Fix Them

Mistake 1: No reducer on fields that get parallel writes

This is the most common map-reduce bug. You define a state field as list[str] instead of Annotated[list[str], operator.add]. Depending on your LangGraph version, the parallel writes then either raise an InvalidUpdateError at runtime or leave only one branch's result standing.

python
# WRONG: No reducer — last write wins
class BadState(TypedDict):
    results: list[str]
python
# CORRECT: Concatenates results from all branches
class GoodState(TypedDict):
    results: Annotated[list[str], operator.add]

Mistake 2: Sending back a plain value instead of a list

Each branch must return a list for operator.add to work. Return a bare string and the reducer tries to add a list and a string, which fails with a TypeError — or, string to string, concatenates text instead of appending items. Either way, not what you wanted.

python
# WRONG: String, not list!
def bad_node(state):
    return {"results": "some result"}
python
# CORRECT: Single-item list
def good_node(state):
    return {"results": ["some result"]}

Mistake 3: Counting on branch order

Branches finish in whatever order the I/O wraps up. Do not assume state["results"][0] matches the first Send you made. Add an ID to each branch's output if order matters.

python
# WRONG: Assumes first result = first Send
first_result = state["results"][0]
python
# CORRECT: Label each result with its source
def labeled_node(state: DocState) -> dict:
    result = process(state["content"])
    return {"results": [f"{state['doc_id']}|{result}"]}

Mistake 4: Bloated Send payloads

python
# WRONG: Ships entire document collection to every branch
Send("process", {"all_docs": all_documents, "index": i})
python
# CORRECT: Send only what this branch needs
Send("process", {"content": all_documents[i]["text"], "doc_id": i})

Each Send payload gets copied. Big payloads times many branches means heavy memory use.

Quick check: You define results: list[str] (no reducer) and fan out to 5 branches. Each returns {"results": ["item"]}. How many items end up in state["results"]? Answer: not 5. Recent LangGraph versions raise an InvalidUpdateError on the concurrent writes; older ones keep a single item. You need Annotated[list[str], operator.add] to collect all 5.

Practice Exercise

Build a map-reduce graph that takes a list of city names, looks up a mock weather report for each city in parallel, then writes a travel tip based on all the reports.

Click to see the solution
python
import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

class CityState(TypedDict):
    city: str

class TravelState(TypedDict):
    cities: list[str]
    forecasts: Annotated[list[str], operator.add]
    recommendation: str

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

MOCK_WEATHER = {
    "tokyo": "Sunny, 75F",
    "london": "Rainy, 55F",
    "paris": "Cloudy, 62F",
    "sydney": "Clear, 82F",
}

def get_forecast(state: CityState) -> dict:
    city = state["city"]
    weather = MOCK_WEATHER.get(city.lower(), "Unknown")
    return {"forecasts": [f"{city}: {weather}"]}

def fan_out_cities(state: TravelState) -> list[Send]:
    return [Send("get_forecast", {"city": c}) for c in state["cities"]]

def recommend(state: TravelState) -> dict:
    forecasts = "\n".join(state["forecasts"])
    prompt = (
        f"Based on these weather forecasts:\n{forecasts}\n\n"
        f"Which city is best for a vacation? Explain in 2 sentences."
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"recommendation": response.content}

builder = StateGraph(TravelState)
builder.add_node("get_forecast", get_forecast)
builder.add_node("recommend", recommend)
builder.add_conditional_edges(START, fan_out_cities)
builder.add_edge("get_forecast", "recommend")
builder.add_edge("recommend", END)

travel_graph = builder.compile()

result = travel_graph.invoke({
    "cities": ["Tokyo", "London", "Paris", "Sydney"]
})
print("Forecasts:")
for f in result["forecasts"]:
    print(f"  {f}")
print(f"\nRecommendation: {result['recommendation']}")

Complete Code

Click to expand the full script (copy-paste and run)
python
# Complete code from: Map-Reduce and Parallel Execution in LangGraph
# Requires: pip install langgraph langchain-openai langchain-core
# Python 3.10+

import operator
from typing import Annotated, TypedDict

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

# --- Model setup ---
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# --- State schemas ---
class JokeState(TypedDict):
    subject: str

class OverallState(TypedDict):
    topic: str
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]
    best_joke: str

# --- Node functions ---
def generate_subjects(state: OverallState) -> dict:
    """Generate subtopics related to the main topic."""
    prompt = (
        f"Generate exactly 3 short subtopics related to '{state['topic']}'. "
        f"Return them as a comma-separated list, nothing else."
    )
    response = model.invoke([HumanMessage(content=prompt)])
    subjects = [s.strip() for s in response.content.split(",")]
    return {"subjects": subjects}

def generate_joke(state: JokeState) -> dict:
    """Generate a joke about the given subject."""
    prompt = f"Write a short, funny one-liner joke about {state['subject']}."
    response = model.invoke([HumanMessage(content=prompt)])
    return {"jokes": [response.content]}

def pick_best_joke(state: OverallState) -> dict:
    """Select the best joke from all generated jokes."""
    jokes_text = "\n".join(
        f"{i+1}. {joke}" for i, joke in enumerate(state["jokes"])
    )
    prompt = (
        f"Here are some jokes:\n{jokes_text}\n\n"
        f"Pick the funniest one. Return ONLY the joke text, nothing else."
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"best_joke": response.content}

# --- Routing (map step) ---
def map_to_jokes(state: OverallState) -> list[Send]:
    """Create one parallel branch per subject."""
    return [
        Send("generate_joke", {"subject": subject})
        for subject in state["subjects"]
    ]

# --- Build and run the graph ---
builder = StateGraph(OverallState)

builder.add_node("generate_subjects", generate_subjects)
builder.add_node("generate_joke", generate_joke)
builder.add_node("pick_best_joke", pick_best_joke)

builder.add_edge(START, "generate_subjects")
builder.add_conditional_edges("generate_subjects", map_to_jokes)
builder.add_edge("generate_joke", "pick_best_joke")
builder.add_edge("pick_best_joke", END)

graph = builder.compile()

result = graph.invoke({"topic": "animals"})
print("Topic:", result["topic"])
print("Subjects:", result["subjects"])
print("\nAll jokes:")
for i, joke in enumerate(result["jokes"], 1):
    print(f"  {i}. {joke}")
print(f"\nBest joke: {result['best_joke']}")

# --- Document Summarization Example ---
class DocState(TypedDict):
    doc_id: str
    content: str

class SummaryState(TypedDict):
    documents: list[dict]
    summaries: Annotated[list[str], operator.add]
    executive_summary: str

def summarize_doc(state: DocState) -> dict:
    """Summarize a single document."""
    prompt = (
        f"Summarize this document in 2-3 sentences:\n\n"
        f"Document {state['doc_id']}:\n{state['content']}"
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"summaries": [f"[{state['doc_id']}] {response.content}"]}

def fan_out_docs(state: SummaryState) -> list[Send]:
    """Create one summarization branch per document."""
    return [
        Send("summarize_doc", {
            "doc_id": doc["id"],
            "content": doc["content"],
        })
        for doc in state["documents"]
    ]

def write_executive_summary(state: SummaryState) -> dict:
    """Combine all summaries into an executive summary."""
    all_summaries = "\n\n".join(state["summaries"])
    prompt = (
        f"Based on these document summaries, write a brief "
        f"executive summary (3-4 sentences):\n\n{all_summaries}"
    )
    response = model.invoke([HumanMessage(content=prompt)])
    return {"executive_summary": response.content}

summary_builder = StateGraph(SummaryState)
summary_builder.add_node("summarize_doc", summarize_doc)
summary_builder.add_node("write_executive_summary", write_executive_summary)
summary_builder.add_conditional_edges(START, fan_out_docs)
summary_builder.add_edge("summarize_doc", "write_executive_summary")
summary_builder.add_edge("write_executive_summary", END)

summary_graph = summary_builder.compile()

docs = [
    {"id": "doc-1", "content": "LangGraph is a framework for building stateful AI agents using graph-based orchestration."},
    {"id": "doc-2", "content": "The Send API enables parallel execution by creating runtime-determined branches in a graph."},
    {"id": "doc-3", "content": "Reducers in LangGraph safely merge state updates from concurrent node executions."},
]

result = summary_graph.invoke({"documents": docs})
print("\n--- Document Summarization ---")
print("Individual summaries:")
for s in result["summaries"]:
    print(f"  {s}")
print(f"\nExecutive summary:\n{result['executive_summary']}")

print("\nScript completed successfully.")

Frequently Asked Questions

Does Send() use threads behind the scenes?

It depends on how you run the graph. With invoke() and regular (sync) node functions, LangGraph runs concurrent branches on a thread pool; with ainvoke() and async nodes, it schedules each branch as a coroutine on an asyncio event loop. Either way, LLM API calls are I/O-bound, so you get near-perfect parallel speedups. If your nodes do heavy math or data crunching, offload that work to a process pool from within the node.

How do I limit how many branches fire at once?

LangGraph honors the standard max_concurrency setting in the run config — for example, graph.invoke(input, config={"max_concurrency": 5}) caps how many branch tasks run at once. For coarser control, have the routing function emit only the first N items. After the reduce step, check whether more items wait in the queue. If so, loop back and send the next batch.
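The batching logic can be sketched with a slicing helper — hypothetical code; in the graph, each item in the slice would become one Send from the routing function:

```python
def next_batch(items, done_count, batch_size=10):
    """Return the next slice of items to fan out, or [] when done."""
    return items[done_count:done_count + batch_size]

items = [f"doc-{i}" for i in range(25)]
print(len(next_batch(items, 0)))    # 10
print(len(next_batch(items, 10)))   # 10
print(len(next_batch(items, 20)))   # 5
print(next_batch(items, 25))        # [] -> nothing left, route to END
```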

What happens when a single branch crashes?

The entire superstep rolls back. Even branches that finished lose their state updates. If you have checkpoints turned on, LangGraph replays only the failed branch. Without checkpoints, every branch starts over from zero.

Can Send() target a subgraph?

Absolutely. Point a Send at a node that wraps a compiled subgraph. Each branch receives its own fresh instance. This shines when a branch needs several internal steps — for example, chunk a long text, summarize each chunk, then stitch the pieces together.

Is it possible to nest map-reduce — a fan-out inside another fan-out?

Yes, through subgraphs. Put the inner fan-out inside a subgraph, and have the outer Send point at that subgraph. The result is a tree-shaped run. I would keep nesting to two levels at most — beyond that, tracking state becomes a headache.

How should I handle errors in one branch without killing the rest?

Guard the core work with try/except. When something goes wrong, return a marker like {"results": ["ERROR: doc-5 failed"]} instead of raising the error. The superstep still completes, and your reduce node can decide whether to skip, retry, or report the bad items. For hands-free retries, enable checkpoints via MemorySaver or a database backend.
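A guarded branch node might look like this — `summarize` is a stand-in for the real LLM call:

```python
def summarize(text):
    # Stand-in for the real LLM call; fails on empty input
    if not text:
        raise ValueError("empty document")
    return text[:30]

def safe_summarize(state: dict) -> dict:
    """Branch node that converts failures into error markers."""
    try:
        summary = summarize(state["content"])
        return {"summaries": [f"[{state['doc_id']}] {summary}"]}
    except Exception as exc:
        # The superstep still completes; the reduce node can skip these
        return {"summaries": [f"ERROR: {state['doc_id']} failed ({exc})"]}

ok = safe_summarize({"doc_id": "doc-1", "content": "LangGraph rocks"})
bad = safe_summarize({"doc_id": "doc-2", "content": ""})
print(ok)   # {'summaries': ['[doc-1] LangGraph rocks']}
print(bad)  # {'summaries': ['ERROR: doc-2 failed (empty document)']}
```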

Summary

You now have the full toolkit for running dynamic work in parallel with LangGraph. Let me recap the moving parts:

  • Send(node_name, state) — spawns a branch at runtime with its own data payload.
  • Annotated[list, operator.add] — the reducer that safely stitches together writes from many branches.
  • Routing functions — return a list of Send objects to launch the fan-out.
  • Supersteps — the execution group that holds all branches; the graph waits until every one completes.

Use this pattern for document summaries, batch labeling, parallel tool calls, multi-source research — anywhere the item count is not known in advance. The graph skeleton stays fixed; you only swap the logic inside each node.

In the next post, I will cover dynamic breakpoints and time travel — techniques that let you pause, inspect, and replay your LangGraph agents mid-run.
