
LangGraph State Management: TypedDict & Reducers

Master LangGraph state management with TypedDict schemas, reducer functions, and add_messages. Learn how nodes share data, merge updates, and track message history.

Written by Selva Prabhakaran | 21 min read

LangGraph state is the shared data layer that lets every node read, update, and merge information as your graph runs — and getting it right is key to building reliable agents.

You built a LangGraph graph with a few nodes. Each one does its job. But when Node B runs, it has no clue what Node A found. Even worse — Node B might wipe out Node A’s data, and your whole chat history goes missing.

This happens when you skip telling LangGraph how to handle your state. That “how” makes all the difference.

In a previous post on graph concepts, we looked at nodes, edges, and state at a high level. Now, let’s dig deep into state — the part that holds your graph together.

What Is State in LangGraph?

Think of state as a shared whiteboard in a meeting room. Each person (node) walks in, reads what’s on the board, does their work, and writes their findings back.

But here’s the key part — nodes don’t change state on their own. They hand back updates, and LangGraph folds those updates into the current state.

python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class MyState(TypedDict):
    user_input: str
    response: str
    step_count: int

def greet(state: MyState) -> dict:
    name = state["user_input"]
    return {"response": f"Hello, {name}!", "step_count": 1}

graph = StateGraph(MyState)
graph.add_node("greet", greet)
graph.add_edge(START, "greet")
graph.add_edge("greet", END)

app = graph.compile()
result = app.invoke({"user_input": "Alice", "response": "", "step_count": 0})
print(result)
Output:
{'user_input': 'Alice', 'response': 'Hello, Alice!', 'step_count': 1}

Notice what happened: greet received all three fields, but it only returned response and step_count. LangGraph folded just those two updates into the existing state. Since the node left user_input alone, it kept its original value.

Key Insight: Nodes don’t touch state directly — they hand back a dict of changes, and LangGraph takes care of merging. This design makes debugging easy because you can always trace what each node contributed.

Why TypedDict for State Schemas?

You might wonder why TypedDict and not a plain dict or a Pydantic model. The short answer: it gives you the best balance of safety and speed.

A plain dict offers no type safety at all. A Pydantic model brings runtime checks you often don’t need. TypedDict sits in the middle — your IDE and type checker can catch errors, but there’s no extra cost at runtime.

python
from typing import TypedDict
from langgraph.graph import StateGraph

class AgentState(TypedDict):
    messages: list
    current_tool: str
    iteration: int

# LangGraph validates keys against this schema
graph = StateGraph(AgentState)
print(f"Graph created with state keys: {list(AgentState.__annotations__.keys())}")
Output:
Graph created with state keys: ['messages', 'current_tool', 'iteration']

Once you hand AgentState to StateGraph, LangGraph knows which fields are allowed. Think of your schema as a shared agreement between every node in the graph.

LangGraph also works with Pydantic BaseModel and Python dataclass. Here’s a side-by-side look:

python
from typing import TypedDict
from pydantic import BaseModel
from dataclasses import dataclass

# Option 1: TypedDict (most common -- no runtime overhead)
class StateA(TypedDict):
    value: str

# Option 2: Pydantic (adds runtime validation)
class StateB(BaseModel):
    value: str

# Option 3: Dataclass (mutable by default)
@dataclass
class StateC:
    value: str

print("All three work as state schemas in LangGraph")
Output:
All three work as state schemas in LangGraph

My advice: stick with TypedDict unless you truly need Pydantic’s runtime checks. It’s what the official docs and most tutorials use.
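The trade-off is worth seeing once. TypedDict gives you zero runtime protection: a value of the wrong type passes through silently. A minimal sketch of that behavior:

```python
from typing import TypedDict

class LooseState(TypedDict):
    value: str

# A static type checker flags this line, but Python happily runs it:
loose: LooseState = {"value": 123}  # wrong type, no runtime error

print(type(loose["value"]).__name__)  # int
```

If silently accepting a wrong type like this would be a serious bug in your graph, that's the signal to reach for Pydantic instead.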

The Default Behavior: Overwrite

Here’s the thing that trips up most beginners. If you don’t set a reducer, LangGraph just overwrites the old value with the new one.

Picture this: Node A returns {"count": 5}, then Node B returns {"count": 10}. After both run, the state has count = 10. What Node A wrote is gone. Here’s a quick demo:

python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class CounterState(TypedDict):
    count: int
    label: str

def node_a(state: CounterState) -> dict:
    return {"count": 5, "label": "from A"}

def node_b(state: CounterState) -> dict:
    return {"count": 10}  # Only updates count, not label

graph = StateGraph(CounterState)
graph.add_node("a", node_a)
graph.add_node("b", node_b)
graph.add_edge(START, "a")
graph.add_edge("a", "b")
graph.add_edge("b", END)

app = graph.compile()
result = app.invoke({"count": 0, "label": ""})
print(f"count: {result['count']}")
print(f"label: {result['label']}")
Output:
count: 10
label: from A

See the result: count jumped to 10 because Node B wrote over it. But label still says "from A" — LangGraph only touches keys that appear in what the node returned. Leave a key out, and it stays as-is.

Warning: If two nodes run side by side and both send back the same key without a reducer, LangGraph throws an `InvalidUpdateError`. Overwrite only works when nodes run one after another. For parallel nodes, you need a reducer — which we’ll cover next.

What Are Reducers and Why Do They Matter?

A reducer tells LangGraph how to blend the old value with a new value, rather than just replacing it. The idea comes from functional programming — a reducer takes two inputs (the current value and the new update) and returns the merged result.
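The name comes from the same idea as Python's own functools.reduce: fold a sequence of updates into one accumulated value, two at a time. A quick stdlib analogy:

```python
import operator
from functools import reduce

# Each node's return value for a list-typed key, in execution order
updates = [["apple"], ["banana"], ["carrot"]]

# reduce folds them pairwise -- the same shape as a reducer merging
# (current_value, new_update) each time a node returns data
final = reduce(operator.add, updates, [])
print(final)  # ['apple', 'banana', 'carrot']
```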

Here’s the syntax. You wrap your type with Annotated and attach a reducer function:

python
from typing import Annotated, TypedDict
import operator

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # Concatenate lists
    count: int                                # Overwrite (no reducer)

print("messages uses operator.add reducer")
print("count uses default overwrite")
Output:
messages uses operator.add reducer
count uses default overwrite

That Annotated[list, operator.add] tells LangGraph: “When a node sends a new list for messages, join it to the end of what’s already there.”

Let me show you how this changes things in a real graph:

python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
import operator

class ListState(TypedDict):
    items: Annotated[list, operator.add]

def add_fruits(state: ListState) -> dict:
    return {"items": ["apple", "banana"]}

def add_veggies(state: ListState) -> dict:
    return {"items": ["carrot", "spinach"]}

graph = StateGraph(ListState)
graph.add_node("fruits", add_fruits)
graph.add_node("veggies", add_veggies)
graph.add_edge(START, "fruits")
graph.add_edge("fruits", "veggies")
graph.add_edge("veggies", END)

app = graph.compile()
result = app.invoke({"items": []})
print(result)
Output:
{'items': ['apple', 'banana', 'carrot', 'spinach']}

Without the reducer, only ["carrot", "spinach"] would be left. With operator.add, both lists get joined. That’s the whole point — reducers let you control how LangGraph state management works across nodes.
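Reducers are also what make parallel branches safe. Here's a pure-Python sketch of the merge logic (illustrative only, not LangGraph's actual internals): with a reducer there's a well-defined answer when two updates hit the same key; without one, the engine has no way to pick a winner.

```python
import operator

def apply_updates(state: dict, updates: list, reducers: dict) -> dict:
    """Fold each node's update into state, using the key's reducer if one exists."""
    for upd in updates:
        for key, new_value in upd.items():
            if key in reducers:
                state[key] = reducers[key](state[key], new_value)
            else:
                state[key] = new_value  # default: overwrite (order-dependent!)
    return state

reducers = {"results": operator.add}

# Two parallel nodes each return an update for the same key
update_a = {"results": ["from A"]}
update_b = {"results": ["from B"]}

merged = apply_updates({"results": ["seed"]}, [update_a, update_b], reducers)
print(merged)  # {'results': ['seed', 'from A', 'from B']}
```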


yaml
type: exercise
id: reducer-basics
title: "Predict the State After Two Nodes"
difficulty: intermediate
exerciseType: predict-output
instructions: |
  Given this state schema and two nodes, predict the final state after both nodes run sequentially.

  ```python
  from typing import Annotated, TypedDict
  import operator

  class QuizState(TypedDict):
      scores: Annotated[list[int], operator.add]
      player: str
      total: int

  def round_one(state):
      return {"scores": [10], "player": "Alice", "total": 10}

  def round_two(state):
      return {"scores": [20], "total": 30}
  ```

  What is the final state after both nodes run with initial state `{"scores": [], "player": "", "total": 0}`?
starterCode: |
  # What will the final state be?
  # Fill in the expected values:
  expected_scores = ___
  expected_player = ___
  expected_total = ___
  print(f"scores: {expected_scores}")
  print(f"player: {expected_player}")
  print(f"total: {expected_total}")
testCases:
  - id: test-scores
    input: "print([10, 20])"
    expectedOutput: "[10, 20]"
    description: "scores should concatenate via operator.add"
  - id: test-player
    input: "print('Alice')"
    expectedOutput: "Alice"
    description: "player stays 'Alice' because round_two doesn't return it"
  - id: test-total
    input: "print(30)"
    expectedOutput: "30"
    description: "total gets overwritten to 30 (no reducer)"
hints:
  - "scores has a reducer (operator.add), so both lists get concatenated."
  - "player is not returned by round_two, so it keeps the value from round_one. total has no reducer, so round_two's value overwrites round_one's."
solution: |
  expected_scores = [10, 20]
  expected_player = "Alice"
  expected_total = 30
  print(f"scores: {expected_scores}")
  print(f"player: {expected_player}")
  print(f"total: {expected_total}")
solutionExplanation: |
  `scores` uses `operator.add`, so `[10] + [20] = [10, 20]`. `player` stays `"Alice"` because `round_two` doesn't return a `player` key. `total` has no reducer, so it gets overwritten from 10 to 30.
xpReward: 10

The add_messages Reducer — Your Go-To for Chat Workflows

If you’re building a chatbot or agent, you need a message list that grows as the chat goes on. LangGraph ships with a built-in reducer called add_messages for exactly this.

Why not just use operator.add? Because add_messages does something extra. It checks message IDs. If you send a message with the same ID as one already in the list, it swaps in the new one instead of appending a duplicate.

python
from typing import Annotated, TypedDict
from langchain_core.messages import HumanMessage, AIMessage, AnyMessage
from langgraph.graph.message import add_messages

class ChatState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Simulating what happens across nodes
existing = [HumanMessage(content="Hello", id="msg-1")]
update = [AIMessage(content="Hi there!", id="msg-2")]

result = add_messages(existing, update)
print(f"Message count: {len(result)}")
for msg in result:
    print(f"  {msg.__class__.__name__}: {msg.content}")
Output:
Message count: 2
  HumanMessage: Hello
  AIMessage: Hi there!

This ID-based matching becomes very handy when you need to correct a response. Look at what happens when we push a message whose ID matches one already stored:

python
from langchain_core.messages import HumanMessage, AIMessage
from langgraph.graph.message import add_messages

existing = [
    HumanMessage(content="What's 2+2?", id="msg-1"),
    AIMessage(content="It's 5", id="msg-2"),
]

# Same ID as the wrong answer -- triggers replacement
correction = [AIMessage(content="It's 4", id="msg-2")]
result = add_messages(existing, correction)

for msg in result:
    print(f"  {msg.__class__.__name__} (id={msg.id}): {msg.content}")
Output:
  HumanMessage (id=msg-1): What's 2+2?
  AIMessage (id=msg-2): It's 4

The bad answer got replaced in place. No duplicate. Had you used operator.add, both the wrong and right answers would sit in the list side by side.
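You can see the mechanics even without langchain installed. This toy merge mimics add_messages' ID matching (a simplified sketch, not the real implementation):

```python
def merge_by_id(current: list, new: list) -> list:
    """Replace entries whose id matches; append the rest. Toy version of add_messages."""
    by_id = {m["id"]: m for m in current}
    for m in new:
        by_id[m["id"]] = m  # same id -> replace in place (keeps original position)
    return list(by_id.values())

existing = [
    {"id": "msg-1", "content": "What's 2+2?"},
    {"id": "msg-2", "content": "It's 5"},
]
correction = [{"id": "msg-2", "content": "It's 4"}]

print(merge_by_id(existing, correction))  # 2 entries, msg-2 replaced
print(len(existing + correction))         # plain concat: 3 entries, duplicate kept
```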

Tip: You can remove messages outright by sending `RemoveMessage(id="msg-2")`. This helps when you need to trim chat history to stay within token limits. Import it from `langchain_core.messages`.

MessagesState — The Shortcut You’ll Reach For

Writing messages: Annotated[list[AnyMessage], add_messages] in every state class gets old fast. LangGraph gives you MessagesState — a ready-made class with that field already set up.

python
from langgraph.graph import MessagesState

# MessagesState already has:
#   messages: Annotated[list[AnyMessage], add_messages]

# Extend it with your own fields
class MyAgentState(MessagesState):
    current_tool: str
    iteration: int

print(f"Inherited keys: {list(MessagesState.__annotations__.keys())}")
Output:
Inherited keys: ['messages']

That’s all you need. Just inherit from MessagesState, add your own fields, and you’re good to go. Here’s a full working graph:

python
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_core.messages import HumanMessage, AIMessage

class AgentState(MessagesState):
    tool_called: bool

def chatbot(state: AgentState) -> dict:
    last_msg = state["messages"][-1]
    reply = f"You said: {last_msg.content}"
    return {
        "messages": [AIMessage(content=reply)],
        "tool_called": False,
    }

graph = StateGraph(AgentState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)

app = graph.compile()
result = app.invoke({
    "messages": [HumanMessage(content="How does state work?")],
    "tool_called": False,
})

for msg in result["messages"]:
    print(f"{msg.__class__.__name__}: {msg.content}")
print(f"Tool called: {result['tool_called']}")
Output:
HumanMessage: How does state work?
AIMessage: You said: How does state work?
Tool called: False

I reach for MessagesState in any graph that deals with chat. It cuts the boilerplate and makes your intent clear right away.

Writing Custom Reducer Functions

Now and then, operator.add and add_messages won’t fit your needs. Maybe you want to keep only the last N items, merge dicts, or add your own logic.

A custom reducer is simply a function with this shape: (current_value, new_value) -> merged_value. Here’s one that acts like a sliding window, keeping only the 3 most recent entries:

python
from typing import Annotated, TypedDict

def keep_last_3(current: list, new: list) -> list:
    """Append new items but keep only the last 3."""
    combined = current + new
    return combined[-3:]

class BoundedState(TypedDict):
    recent_actions: Annotated[list, keep_last_3]

# Simulating sequential updates
step_0 = ["search", "read"]
step_1 = keep_last_3(step_0, ["write"])
print(f"After update 1: {step_1}")

step_2 = keep_last_3(step_1, ["deploy"])
print(f"After update 2: {step_2}")
Output:
After update 1: ['search', 'read', 'write']
After update 2: ['read', 'write', 'deploy']

The window slides forward. Older entries fall off. This keeps memory in check for agents that run over many steps.

Here’s one more real-world case — a dict merger that keeps keys from earlier nodes:

python
from typing import Annotated, TypedDict

def merge_dicts(current: dict, new: dict) -> dict:
    """Shallow merge: new values overwrite existing keys."""
    return {**current, **new}

class MetadataState(TypedDict):
    metadata: Annotated[dict, merge_dicts]

existing = {"source": "api", "confidence": 0.8}
update = {"timestamp": "2026-03-10", "confidence": 0.95}

result = merge_dicts(existing, update)
print(result)
Output:
{'source': 'api', 'confidence': 0.95, 'timestamp': '2026-03-10'}

Notice that source from Node A survived. The confidence field took Node B’s fresher value. And timestamp is a new key entirely. Without this merger, plain overwrite would have thrown out the whole dict and started over.

Key Insight: Keep your reducer pure — don’t make API calls, don’t write to files, and don’t use random values inside it. It should only take the current value plus the incoming update and return the combined output. LangGraph fires the reducer each time a node returns data for that field.
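Purity matters because the reducer may run again during replay or recovery, and a pure reducer reproduces the exact same state every time. A contrived contrast (illustrative only):

```python
import random

def pure_add(current: list, new: list) -> list:
    return current + new  # same inputs always give the same output

def impure_add(current: list, new: list) -> list:
    # BAD: hidden randomness means a replay can produce different state
    return current + new + [random.randint(0, 1000)]

print(pure_add([1], [2]) == pure_add([1], [2]))  # True -- replay-safe
print(impure_add([1], [2]))  # [1, 2, <random int>] -- varies between runs
```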

Debugging State: Seeing What Each Node Did

When your graph gives you odd results, you need to see what each node contributed. LangGraph's stream method shows you each node's update, step by step.

python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
import operator

class DebugState(TypedDict):
    log: Annotated[list[str], operator.add]
    value: int

def step_one(state: DebugState) -> dict:
    return {"log": ["step_one ran"], "value": 10}

def step_two(state: DebugState) -> dict:
    doubled = state["value"] * 2
    return {"log": [f"step_two doubled to {doubled}"], "value": doubled}

graph = StateGraph(DebugState)
graph.add_node("step_one", step_one)
graph.add_node("step_two", step_two)
graph.add_edge(START, "step_one")
graph.add_edge("step_one", "step_two")
graph.add_edge("step_two", END)

app = graph.compile()

for event in app.stream({"log": [], "value": 0}):
    print(event)
    print("---")
Output:
{'step_one': {'log': ['step_one ran'], 'value': 10}}
---
{'step_two': {'log': ['step_two doubled to 20'], 'value': 20}}
---

Every event spells out the node that ran and what it changed. Here, step_one wrote value as 10, and step_two turned it into 20. When output goes wrong, this trace is the first place to check.

Tip: For serious debugging, hook your graph up to LangSmith. It records full state at every node, draws the path it took, and shows timing data. Much more useful than print statements for graphs with branching logic.

Common Mistakes with LangGraph State

Here are four mistakes I run into again and again. Each one leads to quiet bugs that are tough to track down.

Mistake 1: Changing State in Place

This is the top bug. You reach into state and edit it right there.

python
# WRONG -- mutating state directly
def bad_node(state):
    state["messages"].append("new message")  # Direct mutation!
    return state

# CORRECT -- return only the updates
def good_node(state):
    return {"messages": ["new message"]}  # Let the reducer handle it

Editing state in place skips the reducer. It might pass simple tests, but it’ll cause sneaky bugs with checkpoints, parallel nodes, and state replay.
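Here's why mutation bites: checkpointing-style features hold references to earlier state, and an in-place edit silently rewrites that history. A pure-Python illustration of the aliasing problem (not LangGraph's actual checkpointer):

```python
state = {"messages": ["hello"]}

# Something like a checkpointer keeps a shallow snapshot of state
snapshot = dict(state)

# WRONG: mutating the list in place...
state["messages"].append("new message")

# ...also corrupts the snapshot, because both dicts share the same list
print(snapshot["messages"])  # ['hello', 'new message'] -- history rewritten!

# CORRECT: building a fresh update leaves old references untouched
state2 = {"messages": ["hello"]}
snapshot2 = dict(state2)
state2 = {**state2, "messages": state2["messages"] + ["new message"]}
print(snapshot2["messages"])  # ['hello'] -- snapshot intact
```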

Mistake 2: No Reducer for Parallel Nodes

Two nodes running at the same time both write to the same key? You must add a reducer. Without one, LangGraph can’t sort out the clash.

python
from typing import Annotated, TypedDict
import operator

# WRONG -- will crash with InvalidUpdateError in parallel
class BadState(TypedDict):
    results: list

# CORRECT -- reducer handles parallel writes
class GoodState(TypedDict):
    results: Annotated[list, operator.add]

Mistake 3: Sending Back the Whole State

Nodes should return only the keys they changed. Sending everything back causes needless writes, and on fields with reducers the old values get merged in again, duplicating your data.

python
# WRONG -- returning the full state
def bad_node(state):
    state_copy = dict(state)
    state_copy["status"] = "done"
    return state_copy  # Every key gets sent back; reducer fields merge in duplicates

# CORRECT -- return only what changed
def good_node(state):
    return {"status": "done"}

Mistake 4: Mixing Up operator.add Across Types

operator.add behaves differently for each type. Make sure it does what you expect.

python
from typing import Annotated, TypedDict
import operator

class ConfusingState(TypedDict):
    count: Annotated[int, operator.add]    # 5 + 3 = 8 (numeric)
    items: Annotated[list, operator.add]   # [1] + [2] = [1, 2] (concat)
    label: Annotated[str, operator.add]    # "hi" + "!" = "hi!" (concat)

print("int: adds numerically")
print("list: concatenates")
print("str: concatenates characters")
Output:
int: adds numerically
list: concatenates
str: concatenates characters

That count field with operator.add piles up integers — which may be what you want for a counter, or a bug if you meant to overwrite. Be clear about your intent.

Warning: Always pass the starting state when you call `app.invoke()`. LangGraph won’t fill in fields on its own — you’ll hit a `KeyError` if a node tries to read a field you left out.

yaml
type: exercise
id: custom-reducer-exercise
title: "Build a Custom Deduplicating Reducer"
difficulty: intermediate
exerciseType: write
instructions: |
  Write a custom reducer function called `add_unique` that concatenates two lists but removes duplicates, keeping the order of first appearance.

  For example: `add_unique(["a", "b"], ["b", "c"])` should return `["a", "b", "c"]`.

  Then define a `TagState` TypedDict that uses this reducer for a `tags` field.
starterCode: |
  from typing import Annotated, TypedDict

  def add_unique(current: list, new: list) -> list:
      """Concatenate lists, removing duplicates (keep first occurrence)."""
      # Your code here
      pass

  class TagState(TypedDict):
      tags: ___  # Use the add_unique reducer

  # Test it
  result = add_unique(["python", "ai"], ["ai", "langgraph", "python"])
  print(result)
testCases:
  - id: test-basic
    input: |
      result = add_unique(["a", "b"], ["b", "c"])
      print(result)
    expectedOutput: "['a', 'b', 'c']"
    description: "Should deduplicate while preserving order"
  - id: test-empty
    input: |
      result = add_unique([], ["x", "y"])
      print(result)
    expectedOutput: "['x', 'y']"
    description: "Should handle empty current list"
  - id: test-all-dupes
    input: |
      result = add_unique(["a", "b"], ["a", "b"])
      print(result)
    expectedOutput: "['a', 'b']"
    description: "All duplicates should collapse"
hints:
  - "You can iterate through the combined list and use a set to track what you've already seen."
  - "Here's the pattern: `seen = set(); result = []; for item in current + new: if item not in seen: seen.add(item); result.append(item)`"
solution: |
  from typing import Annotated, TypedDict

  def add_unique(current: list, new: list) -> list:
      seen = set()
      result = []
      for item in current + new:
          if item not in seen:
              seen.add(item)
              result.append(item)
      return result

  class TagState(TypedDict):
      tags: Annotated[list[str], add_unique]

  result = add_unique(["python", "ai"], ["ai", "langgraph", "python"])
  print(result)
solutionExplanation: |
  The `add_unique` reducer iterates through the combined list, using a set to track seen items. Only the first occurrence of each item makes it into the result. The `TagState` uses `Annotated[list[str], add_unique]` to wire this reducer to the `tags` field.
xpReward: 15

Bringing It All Together: A Hands-On Agent State

Let’s pull everything into a real-world state schema for a research agent. We’ll use MessagesState for chat, operator.add to collect sources, and plain overwrite for fields where only the latest value counts.

python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_core.messages import HumanMessage, AIMessage
import operator

class ResearchAgentState(MessagesState):
    """State for a research agent that searches and summarizes."""
    sources: Annotated[list[str], operator.add]
    current_query: str
    iteration: int

def search(state: ResearchAgentState) -> dict:
    query = state["current_query"]
    return {
        "messages": [AIMessage(content=f"Searching for: {query}")],
        "sources": [f"https://example.com/result?q={query}"],
        "iteration": state["iteration"] + 1,
    }

def summarize(state: ResearchAgentState) -> dict:
    num_sources = len(state["sources"])
    return {
        "messages": [AIMessage(
            content=f"Found {num_sources} source(s). Summary complete."
        )],
        "current_query": "",
    }

graph = StateGraph(ResearchAgentState)
graph.add_node("search", search)
graph.add_node("summarize", summarize)
graph.add_edge(START, "search")
graph.add_edge("search", "summarize")
graph.add_edge("summarize", END)

app = graph.compile()
result = app.invoke({
    "messages": [HumanMessage(content="Explain LangGraph state")],
    "sources": [],
    "current_query": "LangGraph state management",
    "iteration": 0,
})

print(f"Messages: {len(result['messages'])}")
for msg in result["messages"]:
    print(f"  {msg.__class__.__name__}: {msg.content}")
print(f"Sources: {result['sources']}")
print(f"Iterations: {result['iteration']}")
Output:
Messages: 3
  HumanMessage: Explain LangGraph state
  AIMessage: Searching for: LangGraph state management
  AIMessage: Found 1 source(s). Summary complete.
Sources: ['https://example.com/result?q=LangGraph state management']
Iterations: 1

Every field follows the merge rule that matches what it stores. Chat messages grow through add_messages. Source URLs collect through operator.add. Meanwhile, current_query and iteration just overwrite since you only care about the newest value.

When Does Your State Schema Choice Matter Less?

Not every graph calls for a fancy state schema. For simple chains with 2–3 nodes and no parallel work, a basic TypedDict with no reducers does the job. Don’t over-build the state for a graph that won’t use it.

Reducers start to matter when you have:

  • Parallel nodes writing to the same key
  • Message history that must grow, not get replaced
  • Multi-step flows where each node adds to a growing list
  • Cyclic graphs where a node runs many times and each run should add to the pile, not wipe it clean

If your graph is straight-line with no shared lists, plain overwrite is simpler and easier to follow.

Summary

Let’s recap the three core ideas behind LangGraph state management. One: state is defined by a typed schema — most often TypedDict — that every node agrees on. Two: reducers decide how updates get combined. No reducer means overwrite. operator.add joins lists. add_messages grows the chat while handling duplicate IDs. Custom functions let you do anything else. Three: nodes only return the fields they changed — never a copy of the whole state.

For most projects, start with MessagesState, toss in your custom fields, and use Annotated[list, operator.add] for any field that should grow over time. That handles 90% of what you’ll need.

Practice Exercise

Build a full state schema and three-node graph for a document pipeline. The extract node pulls key phrases, classify picks a category, and enrich adds metadata. Set it up so key phrases pile up, the category overwrites, and the processing log uses a custom reducer that keeps only the last 5 entries.

Solution
python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, MessagesState, START, END
import operator

def keep_last_5(current: list, new: list) -> list:
    combined = current + new
    return combined[-5:]

class DocState(MessagesState):
    key_phrases: Annotated[list[str], operator.add]
    category: str
    processing_log: Annotated[list[str], keep_last_5]
    source_text: str

def extract(state: DocState) -> dict:
    return {
        "key_phrases": ["machine learning", "state management"],
        "processing_log": ["extracted key phrases"],
    }

def classify(state: DocState) -> dict:
    return {
        "category": "technical",
        "processing_log": ["classified as technical"],
    }

def enrich(state: DocState) -> dict:
    return {
        "key_phrases": ["LangGraph"],
        "processing_log": ["enriched with metadata"],
    }

graph = StateGraph(DocState)
graph.add_node("extract", extract)
graph.add_node("classify", classify)
graph.add_node("enrich", enrich)
graph.add_edge(START, "extract")
graph.add_edge("extract", "classify")
graph.add_edge("classify", "enrich")
graph.add_edge("enrich", END)

app = graph.compile()
result = app.invoke({
    "messages": [],
    "key_phrases": [],
    "category": "",
    "processing_log": [],
    "source_text": "LangGraph state management tutorial",
})
print(f"Phrases: {result['key_phrases']}")
print(f"Category: {result['category']}")
print(f"Log: {result['processing_log']}")

The `key_phrases` uses `operator.add` to collect from `extract` and `enrich`. The `category` overwrites since only the latest label matters. The `processing_log` uses `keep_last_5` to stop it from growing without bound.

FAQ

Can I use Pydantic instead of TypedDict for state?

Absolutely. LangGraph accepts BaseModel and dataclasses in addition to TypedDict. Pydantic gives you runtime type checks — so if a node sends bad data, you see an error at once. The downside is a slight speed cost.

What happens if I don’t set a starting value for a state field?

You’ll get a KeyError when a node tries to read it. Always pass every field when you call invoke() or stream().
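One way to guard against this (a hypothetical helper pattern, not a LangGraph API) is to merge the caller's input over a dict of schema defaults before invoking the graph. The field names below are just example state fields:

```python
# Example defaults for a hypothetical state schema
DEFAULTS = {"messages": [], "sources": [], "current_query": "", "iteration": 0}

def with_defaults(user_input: dict) -> dict:
    """Fill in any missing state fields before invoking the graph."""
    return {**DEFAULTS, **user_input}

initial = with_defaults({"current_query": "LangGraph state"})
print(initial)
# {'messages': [], 'sources': [], 'current_query': 'LangGraph state', 'iteration': 0}
```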

Can reducers see the full state?

No. A reducer only gets the current value and the new value for its own field. If you need cross-field logic, put it in a node function.

How do I remove items from a list that uses operator.add?

You can’t — operator.add only grows the list. Write a custom reducer that supports removal, or use add_messages with RemoveMessage if you’re working with messages.
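For non-message lists, one approach is a custom reducer with its own removal convention. Here's a sketch using a hypothetical scheme where updates wrapped as ("remove", item) delete instead of append:

```python
def add_or_remove(current: list, new: list) -> list:
    """Append plain items; ('remove', item) tuples delete matching entries."""
    result = list(current)
    for entry in new:
        if isinstance(entry, tuple) and entry[0] == "remove":
            result = [x for x in result if x != entry[1]]
        else:
            result.append(entry)
    return result

tags = add_or_remove(["a", "b", "c"], ["d", ("remove", "b")])
print(tags)  # ['a', 'c', 'd']
```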

Does state carry over between graph runs?

Not by default. Each call to invoke() starts fresh. To save state across runs, you’ll need a checkpointer — we’ll cover that in a later post on persistence.
