LangGraph ReAct Agent: Build from Scratch

Learn to build a ReAct agent from scratch in LangGraph with this hands-on guide — wire the think-act-observe loop, add tools, and debug agent cycles.

Written by Selva Prabhakaran | 27 min read

Wire up the Reason + Act loop one piece at a time — an agent that thinks before it acts, then learns from what it sees.

You ask your LLM something that needs live data. It tries to guess from training and gets it wrong. What if the model could stop, run a search tool, read the output, and then reply? That is what a ReAct agent does. By the end of this guide, you will build one from the ground up in LangGraph and know how every part works.

Let me walk you through the big picture before we touch any code.

A user types a question. It lands in the agent node — a thin wrapper around your LLM. The model reads the query, sees it needs more data, and fires off a tool call. That call travels to a tool node, which runs the right function and hands back a result. The result loops back to the agent node as a fresh fact.

The model then checks: do I know enough now? If not, it goes around again. If yes, it replies and the graph wraps up.

That is the full cycle: think, act, observe, repeat. One routing edge between the agent and tool nodes forms this loop. We will build each part — state, agent node, tool node, routing logic, and the final graph — one at a time.

What Does ReAct Mean and Why Does It Matter?

ReAct stands for Reasoning + Acting. The idea came from a 2022 paper by Yao et al., which showed that LLMs do much better when they take turns between thinking and doing.

Here is the core idea in plain terms: rather than reply in a single shot, the model asks itself what it still needs, calls a tool to grab it, reads the output, and keeps going until it can give a solid answer.

A plain LLM call works like this:

text
User question → LLM → Answer (possibly wrong)

A ReAct agent works like this:

text
User question → Think → Act (call tool) → Observe → Think → Act → ... → Final answer

Because the model gets many tries to gather facts, ReAct agents beat one-shot prompts on real-time lookups, multi-step research, and fact-heavy questions.

Key Insight: A ReAct agent does not map out the whole task in advance. It takes one step, looks at what came back, and picks the next move from there. This makes it nimble — it can recover from a bad tool result or change its plan halfway through.

What You Need Before Starting

  • Python: 3.10 or newer
  • Packages: langgraph 0.4+, langchain-openai 0.3+, langchain-core 0.3+
  • Install command: pip install langgraph langchain-openai langchain-core
  • API key: Set OPENAI_API_KEY in your shell. See OpenAI’s docs if you need one.
  • Time needed: About 35 minutes
  • Background: You should know basic LangGraph ideas (nodes, edges, state) from earlier posts in this series.

Our first code block pulls in every import we need. We grab the LLM wrapper, message types, the @tool tag, and the graph helpers from LangGraph.

python
import os
import json
from typing import Annotated, Literal

from langchain_openai import ChatOpenAI
from langchain_core.messages import (
    HumanMessage,
    AIMessage,
    ToolMessage,
    SystemMessage,
)
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode, create_react_agent

How Do You Set Up the State?

Every LangGraph graph needs a state object. For a ReAct agent, state is just a list of messages — each one is a user query, an AI reply, or a tool result.

LangGraph ships a class called MessagesState that has one field: messages. It uses a special reducer (add_messages) that tacks new messages onto the list instead of wiping it out. That is just what a ReAct agent needs — the chat grows with each think-act-observe pass.

python
# MessagesState is equivalent to writing:
# class AgentState(TypedDict):
#     messages: Annotated[list, add_messages]
#
# We use MessagesState directly — no custom state needed.

Why not write your own state class? For a basic agent, the built-in one does the job. It tracks user inputs, AI thinking, tool calls, and tool outputs. You would only roll your own if you wanted extras like a step counter or a cost tracker.

Quick check: What happens if you use a plain assignment (messages = new_messages) instead of the reducer? Every update would wipe out the whole list. The reducer appends instead. Without it, the agent would forget its chat history after each step.
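
The difference is easy to see in plain Python. This sketch is illustrative only; `assign_reducer` and `append_reducer` are made-up names, not LangGraph internals:

```python
# Illustrative sketch (not LangGraph's actual code): how an append-style
# reducer differs from plain assignment when merging state updates.

def assign_reducer(current: list, update: list) -> list:
    # Plain assignment: the update replaces the whole list.
    return update

def append_reducer(current: list, update: list) -> list:
    # add_messages-style behavior: new messages are appended.
    return current + update

state = ["human: What's the weather?"]
state = assign_reducer(state, ["ai: [tool call]"])
print(state)  # history lost: ['ai: [tool call]']

state = ["human: What's the weather?"]
state = append_reducer(state, ["ai: [tool call]"])
state = append_reducer(state, ["tool: 80°F, Sunny"])
print(state)  # full history preserved
```

With the append behavior, every think-act-observe pass adds to the record instead of erasing it.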

How Do You Create Tools the Agent Can Call?

The agent needs tools to reach the outside world. We will make two: a weather lookup and a math helper. In a real app these would hit live APIs. Here we use dummy data so the code runs with zero outside calls.

The @tool tag from LangChain turns any Python function into something the LLM can call on its own. The function’s docstring serves as the label the model reads when it picks which tool to use.

python
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    weather_data = {
        "new york": "72°F, Partly Cloudy",
        "london": "58°F, Rainy",
        "tokyo": "80°F, Sunny",
        "paris": "65°F, Overcast",
    }
    city_lower = city.lower()
    if city_lower in weather_data:
        return f"Weather in {city}: {weather_data[city_lower]}"
    return f"Weather data not available for {city}"

This weather tool checks a small dict of fake data. If the city is listed, it returns the conditions. If not, it reports that no data is available. The model can deal with both cases.

python
@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression. Use Python syntax."""
    try:
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error evaluating '{expression}': {e}"


tools = [get_weather, calculator]

Warning: The `eval()` call here is just for the demo. In a real app, swap it for a safe math parser like `numexpr` or `asteval`. Running `eval()` on untrusted input can execute arbitrary Python code.
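
If you want to drop `eval()` without pulling in a dependency, here is one possible sketch using only the standard `ast` and `operator` modules. `safe_eval` is a hypothetical helper, not part of LangChain; it accepts numeric literals and basic arithmetic and rejects everything else:

```python
# Illustrative sketch: a safer calculator built on the ast module.
# Only numeric literals and basic arithmetic are allowed; names, calls,
# and attribute access all raise ValueError.
import ast
import operator

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_eval("72 - 58"))  # 14
```

You could drop this in as the body of the `calculator` tool; anything the whitelist rejects comes back to the model as an error string it can react to.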

How Do You Wire Up the Agent Node?

The agent node is the brain of the graph — this is where the LLM does its thinking. It takes the current state, passes the full message history to the model, and gets back a reply. If the model wants more data, that reply holds tool_calls: clear orders with a tool name and its inputs.

We hook tools into the model with bind_tools(). This tells the model which tools are on the table — their names, what they do, and what inputs they accept — so it can craft valid tool calls.

python
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
model_with_tools = model.bind_tools(tools)


def agent_node(state: MessagesState) -> dict:
    """Call the LLM with the current message history."""
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

Just four key lines of code. The model scans the full chat history and writes its next reply. The magic is in what comes back: either plain text (the model is done) or a reply with tool_calls (it still needs facts).

You can also shape the agent’s tone with a system prompt. Slot a SystemMessage into the agent node right before the LLM call.

python
SYSTEM_PROMPT = (
    "You are a concise research assistant. "
    "Use tools when you need facts. "
    "Answer in 2-3 sentences maximum."
)


def agent_node_with_prompt(state: MessagesState) -> dict:
    """Agent node with a system prompt prepended."""
    messages = [SystemMessage(content=SYSTEM_PROMPT)] + state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

We do not save the SystemMessage in state. We bolt it on fresh before each LLM call. That keeps the state lean — only user messages, AI replies, and tool results pile up.

Tip: Your system prompt shapes the agent’s style across every loop pass. Add rules like “never call the same tool twice with the same inputs” to keep it from going in circles.

How Does the Tool Node Work?

When the model asks to use a tool, something has to run that tool and pipe the output back. That job falls to the tool node.

LangGraph offers ToolNode — a plug-and-play node built for this. It reads tool_calls from the most recent AI message, runs the right function, and wraps the output in a ToolMessage.

python
tool_node = ToolNode(tools)

One line. Pass in the same tools list and you are done. When the graph hits this node, it grabs every pending tool call from the last AI message, runs each one, and adds the results to the chat.

Curious what happens under the hood? Here is a bare-bones view of the same logic:

python
def manual_tool_node(state: MessagesState) -> dict:
    """Execute tool calls from the last AI message."""
    last_message = state["messages"][-1]
    tool_results = []
    tool_map = {t.name: t for t in tools}

    for call in last_message.tool_calls:
        tool_fn = tool_map[call["name"]]
        result = tool_fn.invoke(call["args"])
        tool_results.append(
            ToolMessage(content=str(result), tool_call_id=call["id"])
        )
    return {"messages": tool_results}

Every ToolMessage carries a tool_call_id that ties it to the request it came from. The LLM reads this ID so it can match each result to the right call.
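
The pairing can be illustrated with plain dicts. Everything below is made up to mirror the shapes used in this guide; no LangChain objects are involved:

```python
# Illustrative sketch: why each result carries the id of the call that
# produced it. A consumer can pair results with requests even when the
# results arrive out of order.
tool_calls = [
    {"id": "call_1", "name": "get_weather", "args": {"city": "Tokyo"}},
    {"id": "call_2", "name": "get_weather", "args": {"city": "London"}},
]
results = [
    {"tool_call_id": "call_2", "content": "Weather in London: 58°F, Rainy"},
    {"tool_call_id": "call_1", "content": "Weather in Tokyo: 80°F, Sunny"},
]

# Index results by the id they answer, then walk the original calls.
by_id = {r["tool_call_id"]: r["content"] for r in results}
for call in tool_calls:
    print(f"{call['args']['city']}: {by_id[call['id']]}")
```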

How Does the Routing Edge Form the Loop?

This is the heart of what makes a ReAct agent tick. After the agent node runs, we need to answer one question: did the model ask for a tool, or is it done?

If the last message holds tool_calls, we go to the tool node. If not, we go to END.

python
def should_continue(state: MessagesState) -> Literal["tools", "end"]:
    """Decide whether to call tools or finish."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return "end"

This tiny check builds the whole loop. Think, call a tool, read what comes back, think again — then either call one more tool or wrap up. The loop spins as long as the model keeps sending tool calls.

Key Insight: The routing edge IS the ReAct loop. Strip it out and you get a single LLM call. Put it in and you get an agent that reasons across many steps. The whole design hangs on this one small function.
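
You can watch this one check drive the whole loop by simulating it with a stubbed model in plain Python. Nothing here is LangGraph; `stub_model` and `stub_tool` are throwaway stand-ins that ask for a tool twice and then answer:

```python
# Illustrative simulation: the same "any tool_calls left?" check that
# should_continue performs is the only thing keeping the loop alive.
def stub_model(messages: list) -> dict:
    tool_turns = sum(1 for m in messages if m["type"] == "tool")
    if tool_turns < 2:
        return {"type": "ai", "content": "", "tool_calls": [{"name": "search"}]}
    return {"type": "ai", "content": "final answer", "tool_calls": []}

def stub_tool(call: dict) -> dict:
    return {"type": "tool", "content": f"result of {call['name']}"}

messages = [{"type": "human", "content": "question"}]
while True:
    reply = stub_model(messages)
    messages.append(reply)
    if not reply["tool_calls"]:          # the routing edge's check
        break
    for call in reply["tool_calls"]:     # the tool node's job
        messages.append(stub_tool(call))

print(len(messages))  # 6: human, ai, tool, ai, tool, ai
```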

How Do You Assemble and Compile the Graph?

We now have all four building blocks: state, agent node, tool node, and routing logic. Time to plug them into a StateGraph.

The graph mirrors the ReAct loop. START sends the first message to the agent. The routing edge picks what happens next. If tools are needed, the tool node runs and feeds results back to the agent. Otherwise the graph exits.

python
workflow = StateGraph(MessagesState)

# Add nodes
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

# Add edges
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", "end": END},
)
workflow.add_edge("tools", "agent")

# Compile
react_agent = workflow.compile()

Three things to spot. First, START links to "agent" — every chat kicks off at the LLM. Second, add_conditional_edges maps each return value from our routing function to a node name. Third, "tools" always points back to "agent" — that is the cycle. Once a tool finishes, the agent reads the output and picks its next move.

Exercise 1: Assemble a ReAct Graph

Complete the graph assembly code below. Add the missing edges so the ReAct loop works: START → agent, agent → conditional routing, tools → agent.

python
workflow = StateGraph(MessagesState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

# Add edges — fill in the three missing lines
# 1. Connect START to the agent node
# 2. Add conditional edges from agent (use should_continue)
# 3. Connect tools back to agent

react_agent = workflow.compile()

Hint: The three edge methods are add_edge(START, "agent"), add_conditional_edges("agent", should_continue, {...}), and add_edge("tools", "agent").

Solution
python
workflow = StateGraph(MessagesState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", "end": END},
)
workflow.add_edge("tools", "agent")

react_agent = workflow.compile()

The three edges create the ReAct loop: START → agent begins every conversation at the LLM node. The conditional edge checks for tool_calls and routes accordingly. The tools → agent edge completes the cycle so results flow back to the LLM.

Let’s Test It — Running the Agent

Time to try it out. We will ask a question that calls for a tool — the weather in Tokyo.

We pass a HumanMessage into the graph. The agent should spot that it needs get_weather, fire the call, read the output, and write a clear answer.

python
result = react_agent.invoke(
    {"messages": [HumanMessage(content="What's the weather in Tokyo?")]}
)

for msg in result["messages"]:
    print(f"{msg.type}: {msg.content[:80] if msg.content else '[tool call]'}")

You will see four messages in order:

text
human: What's the weather in Tokyo?
ai: [tool call]
tool: Weather in Tokyo: 80°F, Sunny
ai: The weather in Tokyo is currently 80°F and sunny!

Follow the trail. The human message goes in. The AI’s first reply is a tool call — no text, just an action. The tool sends back “80°F, Sunny.” Then the AI writes its final answer from that data.

What about a harder question? Let’s try one that needs several tool calls.

python
result = react_agent.invoke(
    {"messages": [HumanMessage(
        content="What's the weather in New York and London? "
                "Also, what's 72 minus 58?"
    )]}
)

for msg in result["messages"]:
    if msg.content:
        print(f"{msg.type}: {msg.content[:100]}")

The agent calls get_weather for both cities and calculator for the subtraction. It might batch the calls or run them one by one. Either way, it gathers all the facts before giving you the final answer. The gap should come out to 14°F.

How Can You Trace Each Step the Agent Takes?

When your agent returns odd results, you need to peek at what happened inside. The stream method shows each node’s output in real time.

python
inputs = {"messages": [HumanMessage(content="What's the weather in Paris?")]}

for step in react_agent.stream(inputs, stream_mode="updates"):
    for node_name, node_output in step.items():
        print(f"\n--- {node_name} ---")
        for msg in node_output["messages"]:
            if hasattr(msg, "tool_calls") and msg.tool_calls:
                for tc in msg.tool_calls:
                    print(f"  Tool call: {tc['name']}({tc['args']})")
            if msg.content:
                print(f"  Content: {msg.content}")

The trace reveals three steps. First, the agent node fires a get_weather call for Paris. Second, the tools node runs that call and sends back “65°F, Overcast.” Third, the agent node reads the result and crafts the final answer. If the agent loops in a strange way, this trace points right to the problem.

Tip: For tracing in live apps, give LangSmith a try. Set `LANGCHAIN_TRACING_V2=true` and `LANGCHAIN_API_KEY` to log every node run with timing data. You can replay any session and inspect state at each point.

What Does the create_react_agent() Shortcut Do?

Every part we just hand-built — agent node, tool node, routing edge, graph wiring — is a pattern people repeat all the time. LangGraph wraps it into a single call: create_react_agent().

Hand it a model and a list of tools, and it hands back a compiled graph that behaves the same as our custom build. You can also pass a prompt to shape the agent’s tone.

python
prebuilt_agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    prompt="You are a helpful assistant. Be concise.",
)

result = prebuilt_agent.invoke(
    {"messages": [HumanMessage(content="What's 15 * 23?")]}
)
print(result["messages"][-1].content)

The agent calls the math tool with 15 * 23 and gives back 345. Three lines to spin up a working ReAct agent (import, create, invoke). The built-in version packs the same routing, tool running, and message handling.

Exercise 2: Build a ReAct Agent with create_react_agent

Use create_react_agent to build an agent with the get_weather and calculator tools. Add a system prompt telling it to always mention the data source. Then invoke it asking about the weather in London.

python
from langgraph.prebuilt import create_react_agent

# Create a ReAct agent with both tools and a system prompt
agent = create_react_agent(
    # Fill in: model, tools, and prompt
)

# Invoke with a London weather question
result = agent.invoke(
    {"messages": [HumanMessage(content="What is the weather in London?")]}
)
print(result["messages"][-1].content)

Hint: The model parameter accepts a string like "openai:gpt-4o-mini". The tools parameter takes a list. The prompt parameter takes a string.

Solution
python
agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather, calculator],
    prompt="You are a helpful assistant. Always mention your data source.",
)

result = agent.invoke(
    {"messages": [HumanMessage(content="What is the weather in London?")]}
)
print(result["messages"][-1].content)

create_react_agent handles all the graph wiring internally. The prompt parameter sets a system message. The agent calls get_weather for London, gets "58°F, Rainy", and formulates a response that mentions the source.

When Should You Build by Hand vs. Use the Shortcut?

Why go through all that work when a one-liner exists? Because the manual route gives you power the shortcut cannot.

| Feature | Manual Build | create_react_agent() |
| --- | --- | --- |
| Custom state fields | Add what you want | MessagesState, plus optional schema |
| Node-level logic | Total control at every step | Hooks via pre_model_hook / post_model_hook |
| Routing logic | Any custom rule you write | Hardwired: tool_calls go to tools, else end |
| Lines of code | ~30 | ~3 |
| Best for | Custom agents, live systems | Fast tests, stock use cases |

Grab create_react_agent() when a stock ReAct agent is all you need. Build by hand when you want custom routing, extra state fields, or one-off node logic.

Here is a concrete case: a manual build with a step counter that caps runaway loops.

python
class AgentStateWithCounter(MessagesState):
    tool_call_count: int


def should_continue_with_limit(
    state: AgentStateWithCounter,
) -> Literal["tools", "end"]:
    """Stop after 5 tool calls to prevent runaway loops."""
    last_message = state["messages"][-1]
    if last_message.tool_calls and state.get("tool_call_count", 0) < 5:
        return "tools"
    return "end"

This kind of safety net — counting calls, capping them — calls for a custom state. The built-in helper cannot do it on its own.
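
One caveat the snippet glosses over: nothing yet increments tool_call_count. Here is a sketch with plain dicts (not a running graph) of a wrapper node that bumps the counter alongside the tool results; `counting_tool_node` is a made-up name for illustration:

```python
# Illustrative sketch: a tool-node wrapper that returns an incremented
# tool_call_count so the routing check has something real to compare
# against. State is modeled as a plain dict here.
def counting_tool_node(state: dict) -> dict:
    calls = state["messages"][-1].get("tool_calls", [])
    results = [f"result of {c['name']}" for c in calls]
    return {
        "messages": results,
        # Reducer-free field: the returned value overwrites the old count.
        "tool_call_count": state.get("tool_call_count", 0) + len(calls),
    }

state = {
    "messages": [{"tool_calls": [{"name": "get_weather"}, {"name": "calculator"}]}],
    "tool_call_count": 3,
}
print(counting_tool_node(state)["tool_call_count"])  # 5
```

In a real graph you would register a node like this in place of the plain ToolNode so the count stays accurate across loop passes.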

Note: LangGraph’s `create_react_agent` moves fast. Version 0.4+ brought `pre_model_hook`, `post_model_hook`, and `response_format`. Check the official docs for the latest API.

What Goes Wrong with Agent Loops (and How to Fix It)?

ReAct agents can act up. Here are the three most common bugs and their fixes.

The agent spins in circles

The model fires the same tool over and over with the same inputs, getting the same output each time.

Why: The system prompt does not tell the model to stop once it has what it needs. Or the tool sends back vague output the model cannot parse.

Fix: Add clear anti-loop rules to your system prompt.

python
system_prompt = """You are a helpful assistant with access to tools.
Rules:
- Call a tool ONLY if you need information you don't have.
- NEVER call the same tool with the same arguments twice.
- Once you have enough information, respond directly.
"""

The agent ignores its tools

You ask something that needs a tool, but the model answers from memory.

Why: The model’s training data already holds the answer (or it thinks so). It sees no point in calling a tool.

Fix: Be direct: “Always use get_weather for weather queries. Do not guess.”

A tool error kills the whole run

A tool throws an error and the graph crashes.

Why: APIs can fail — network drops, rate limits, bad inputs. With no error handling, one failure takes the graph down.

Fix: Wrap tool code in try/except and return an error string. The model reads the string and adjusts.

python
@tool
def safe_weather(city: str) -> str:
    """Get weather for a city, with error handling."""
    try:
        if not city.strip():
            raise ValueError("City name cannot be empty")
        weather_data = {"new york": "72°F, Partly Cloudy"}
        city_lower = city.lower()
        if city_lower in weather_data:
            return f"Weather in {city}: {weather_data[city_lower]}"
        return f"No weather data for '{city}'"
    except Exception as e:
        return f"Error looking up weather: {e}"

Warning: LangGraph caps each run at 25 steps by default. After that it throws `GraphRecursionError`. You can raise the cap with `config={"recursion_limit": 50}`. But if your agent hits 50+ steps, the real issue is likely your prompt or tool design — not the cap.

Hands-On: A Research Helper That Uses Many Tools

Let’s build something you could use in the real world: a research helper that finds facts and does math. This shows how a ReAct agent tackles multi-step questions where each step depends on the one before it.

We will add a mock search tool next to our math tool.

python
@tool
def search(query: str) -> str:
    """Search for information about a topic."""
    knowledge = {
        "python popularity": (
            "Python is #1 on the TIOBE index as of 2025, "
            "with a rating of approximately 23%."
        ),
        "javascript popularity": (
            "JavaScript is #6 on the TIOBE index as of 2025, "
            "with a rating of approximately 3.5%."
        ),
        "earth population": (
            "Earth's population is approximately 8.1 billion."
        ),
    }
    query_lower = query.lower()
    for key, value in knowledge.items():
        if key in query_lower:
            return value
    return f"No results found for: {query}"

Our research agent brings both tools to the table — search for facts, calculator for numbers. We pose a query that needs two lookups and one math step.

python
research_tools = [search, calculator]

research_agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=research_tools,
    prompt=(
        "You are a research assistant. Use search for facts "
        "and calculator for math. Cite what you find."
    ),
)

result = research_agent.invoke(
    {"messages": [HumanMessage(
        content="How much more popular is Python than JavaScript "
                "according to TIOBE? Give me the ratio."
    )]}
)
print(result["messages"][-1].content)

The agent splits this up on its own. It looks up Python’s TIOBE score (23%), grabs JavaScript’s score (3.5%), divides 23 by 3.5 to get roughly 6.57, and reports that Python is about 6.6 times more popular. Each step feeds the next — the same way a human would work through it.

Exercise 3: Build a Multi-Tool Research Agent

Create a research agent using create_react_agent with the search and calculator tools. Ask it: "What percentage of Earth's population lives in the USA if the USA has 335 million people?" The agent should search for Earth's population, then calculate 335000000 / 8100000000 * 100.

python
# Create a research agent
research_agent = create_react_agent(
    # Fill in model, tools, and prompt
)

# Ask the population question
result = research_agent.invoke(
    {"messages": [HumanMessage(
        content="What percentage of Earth population lives in the USA "
                "if the USA has 335 million people?"
    )]}
)
print(result["messages"][-1].content)

Hint: Use search and calculator as your tools list. The agent needs to search for "earth population" first.

Solution
python
research_agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=[search, calculator],
    prompt="Use search for facts and calculator for math.",
)

result = research_agent.invoke(
    {"messages": [HumanMessage(
        content="What percentage of Earth population lives in the USA "
                "if the USA has 335 million people?"
    )]}
)
print(result["messages"][-1].content)

The agent searches for Earth's population (8.1 billion), then calculates 335000000 / 8100000000 * 100 ≈ 4.14%. It combines both tool results into a coherent answer.

Where Does ReAct Shine — and Where Does It Stumble?

ReAct is the most widely used agent design, but it is not the right tool for every job.

ReAct shines when you need:

  • Fact lookups — the answer lives inside a tool call
  • Multi-step thinking — the problem needs data from several sources
  • Open-ended tasks — you cannot know the step count in advance
  • Error recovery — the agent sees a failed call and tries a new path

ReAct stumbles when you have:

  • Fixed pipelines — if you know the exact steps (say, extract, clean, load), use a linear graph. ReAct adds needless LLM calls and makes the output less stable.
  • High-stakes choices — the model might skip a key step. Add a human review gate for any action with big impact.
  • Tight budgets — each loop pass costs one LLM call. A 5-pass loop costs 5x a single call. If most answers are simple, ReAct wastes money.
  • Long sessions — the message list grows each pass. After 10+ loops, you risk blowing through token limits.
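
A common mitigation for the long-session problem is trimming history before each model call. A minimal sketch; `trim_history` is a made-up helper, and a production version must keep ToolMessages paired with the AIMessage that requested them, or the provider can reject the request:

```python
# Illustrative sketch: keep the first message (the original user intent)
# plus the last N messages, dropping the middle of a long conversation.
def trim_history(messages: list, keep_last: int = 6) -> list:
    if len(messages) <= keep_last + 1:
        return messages
    return [messages[0]] + messages[-keep_last:]

history = [f"msg-{i}" for i in range(20)]
trimmed = trim_history(history)
print(trimmed[0], trimmed[1], len(trimmed))  # msg-0 msg-14 7
```

You would apply this inside the agent node, right before invoking the model, so the stored state keeps the full record while the model sees a bounded window.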

Key Insight: Pick ReAct when you cannot predict the step count ahead of time. If you can draw the workflow as a fixed chart, skip ReAct. Its power is being flexible; its weak spot is being hard to predict.

Watch Out for These Common Coding Mistakes

Mistake 1: Calling the model without binding tools

python
def agent_node(state):
    response = model.invoke(state["messages"])  # No tools bound
    return {"messages": [response]}

Why it breaks: The model has no clue that tools exist. It answers from memory every time. No error pops up — you just get wrong answers with zero warning.

python
model_with_tools = model.bind_tools(tools)

def agent_node(state):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}

Mistake 2: Leaving out the tools-to-agent edge

python
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent", should_continue, {"tools": "tools", "end": END}
)
# Missing: workflow.add_edge("tools", "agent")

Why it breaks: After the tool runs, there is no path back to the agent. The graph throws a runtime error. The ReAct loop demands the full cycle: agent to tools to agent.

python
workflow.add_edge("tools", "agent")

Mistake 3: Returning the wrong type from a node

python
def agent_node(state):
    response = model_with_tools.invoke(state["messages"])
    return response  # Returns AIMessage, not a dict

Why it breaks: Nodes must return a dict that fits the state schema. The messages key paired with add_messages expects a list. A bare message object throws a type error.

python
def agent_node(state):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}

Tip: Quick debug trick: add a print line inside the agent node. Show the message count and whether tool calls are present. This lets you watch the loop without needing streaming or LangSmith.

Complete Code

Click to expand the full script (copy-paste and run)
python
# Complete code from: Build a ReAct Agent from Scratch with LangGraph
# Requires: pip install langgraph langchain-openai langchain-core
# Python 3.10+
# Set OPENAI_API_KEY environment variable before running

import os
import json
from typing import Annotated, Literal

from langchain_openai import ChatOpenAI
from langchain_core.messages import (
    HumanMessage,
    AIMessage,
    ToolMessage,
    SystemMessage,
)
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode, create_react_agent

# --- Tools ---

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    weather_data = {
        "new york": "72°F, Partly Cloudy",
        "london": "58°F, Rainy",
        "tokyo": "80°F, Sunny",
        "paris": "65°F, Overcast",
    }
    city_lower = city.lower()
    if city_lower in weather_data:
        return f"Weather in {city}: {weather_data[city_lower]}"
    return f"Weather data not available for {city}"


@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression. Use Python syntax."""
    try:
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error evaluating '{expression}': {e}"


tools = [get_weather, calculator]

# --- Model ---

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
model_with_tools = model.bind_tools(tools)

# --- Nodes ---

def agent_node(state: MessagesState) -> dict:
    """Call the LLM with the current message history."""
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}


tool_node = ToolNode(tools)

# --- Routing ---

def should_continue(state: MessagesState) -> Literal["tools", "end"]:
    """Decide whether to call tools or finish."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return "end"

# --- Graph Assembly ---

workflow = StateGraph(MessagesState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", "end": END},
)
workflow.add_edge("tools", "agent")

react_agent = workflow.compile()

# --- Test ---

print("=== Single Tool Call ===")
result = react_agent.invoke(
    {"messages": [HumanMessage(content="What's the weather in Tokyo?")]}
)
for msg in result["messages"]:
    if msg.content:
        print(f"  {msg.type}: {msg.content}")

print("\n=== Multi-Step Query ===")
result = react_agent.invoke(
    {"messages": [HumanMessage(
        content="What's the weather in New York and London? "
                "Also, what's 72 minus 58?"
    )]}
)
for msg in result["messages"]:
    if msg.content:
        print(f"  {msg.type}: {msg.content}")

print("\n=== Prebuilt Agent ===")
prebuilt = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    prompt="You are a helpful assistant. Be concise.",
)
result = prebuilt.invoke(
    {"messages": [HumanMessage(content="What's 15 * 23?")]}
)
print(f"  Answer: {result['messages'][-1].content}")

print("\nScript completed successfully.")

What Did You Learn?

You built a ReAct agent from the ground up and saw how every part clicks together. The pattern is less complex than it sounds: an LLM node that can ask for tools, a tool node that runs them, and one routing edge that forms the loop.

The main takeaways:

  • ReAct = Reason + Act. The agent goes back and forth between thinking and doing, picking up facts along the way.
  • One routing edge makes the loop. It checks for tool_calls — if present, loop back; if not, exit.
  • MessagesState holds the chat. The add_messages reducer tacks on each message, keeping a full record.
  • create_react_agent() bundles it all. Reach for it when speed matters. Build by hand when you need custom behavior.
  • Stream to debug. Watch each node’s output to follow the agent’s choices in real time.

Practice exercise: Add a lookup_population(country: str) tool that returns population data. Ask a question that needs both this tool and the calculator (e.g., “What is the total population of India and China?”).

Solution
python
@tool
def lookup_population(country: str) -> str:
    """Look up the approximate population of a country."""
    populations = {
        "india": "1.44 billion",
        "china": "1.43 billion",
        "usa": "335 million",
        "brazil": "216 million",
    }
    country_lower = country.lower()
    if country_lower in populations:
        return f"Population of {country}: {populations[country_lower]}"
    return f"Population data not available for {country}"

extended_agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather, calculator, lookup_population],
    prompt="Use tools for facts. Use calculator for math.",
)

result = extended_agent.invoke(
    {"messages": [HumanMessage(
        content="What's the combined population of India and China?"
    )]}
)
print(result["messages"][-1].content)

The agent typically calls `lookup_population` once per country, then uses `calculator` to add 1.44 and 1.43, answering roughly 2.87 billion.

Frequently Asked Questions

Can a ReAct agent fire off many tools at once?

Yes. Models like GPT-4o can pack several tool_calls into one reply. LangGraph’s ToolNode runs them all and sends back every result in one batch. The agent sees all of them on the next pass. You do not need to change any code — it just works.

How do I cap the number of loops to save money?

Pass recursion_limit in the config: agent.invoke(inputs, config={"recursion_limit": 10}). This sets a hard ceiling on total node runs. For tighter control, track call counts in a custom state field and check that count inside your routing function.
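A hedged sketch of the custom-state approach: the `tool_loops` field and the cap value below are illustrative, not part of the article's code. The routing function refuses to loop once the budget is spent, even if the model still wants a tool:

```python
from types import SimpleNamespace

MAX_TOOL_LOOPS = 5  # illustrative budget

def should_continue(state: dict) -> str:
    """Route to tools only while under the loop budget."""
    if state.get("tool_loops", 0) >= MAX_TOOL_LOOPS:
        return "end"  # budget spent: force a final answer
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return "end"

# Quick check with stand-in messages (real code would use AIMessage).
wants_tools = SimpleNamespace(tool_calls=[{"name": "get_weather"}])
print(should_continue({"messages": [wants_tools], "tool_loops": 2}))  # tools
print(should_continue({"messages": [wants_tools], "tool_loops": 5}))  # end
```

In the real graph, the agent node would also return an incremented `tool_loops` value in its update so the counter advances on every cycle.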

Can I swap in Anthropic, Google, or open-source models?

Yes. Replace ChatOpenAI with ChatAnthropic, ChatGoogleGenerativeAI, ChatOllama, or any LangChain chat model. Everything else stays the same. The only must-have is tool calling support in the model.

How is LangGraph’s shortcut different from the one in LangChain?

They are two separate functions in two separate packages. The LangChain version (inside langchain.agents) builds a legacy AgentExecutor. The LangGraph version (inside langgraph.prebuilt) builds a StateGraph with streaming, state saving, and human-in-the-loop features. Choose LangGraph’s for any new work.

How do I give the agent long-term memory across chats?

Attach a checkpointer at compile time: workflow.compile(checkpointer=MemorySaver()). Then send a thread_id in the config: agent.invoke(inputs, config={"configurable": {"thread_id": "user-123"}}). The agent keeps every message in that thread. Our state-saving guide covers this topic in depth.

References

  1. Yao, S. et al. — “ReAct: Synergizing Reasoning and Acting in Language Models” (2022). arXiv:2210.03629
  2. LangGraph documentation — How to create a ReAct agent from scratch. Link
  3. LangGraph API Reference — create_react_agent. Link
  4. LangGraph documentation — StateGraph and MessagesState. Link
  5. LangChain documentation — Tool calling with chat models. Link
  6. LangGraph prebuilt module — ToolNode reference. Link
  7. IBM — What is a ReAct Agent? Link
  8. Wei, J. et al. — “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” (2022). arXiv:2201.11903

Article tested with: langgraph 0.4.x, langchain-openai 0.3.x, langchain-core 0.3.x, Python 3.11. Last reviewed: March 2026.
