
What Is LangGraph and Why Does It Exist?

Written by Selva Prabhakaran | 12 min read

You built a chatbot with LangChain. It pulls documents, calls an LLM, and sends back answers. Works great.

Then your boss says: “Make it check its own answers. If the answer looks wrong, retry with different tools.” Now your clean chain needs loops, branches, and runtime choices. LangChain wasn’t made for that.

That’s the exact gap LangGraph fills.

What Is LangGraph?

LangGraph is a Python library for building stateful, multi-step AI agent workflows as graphs. The same team behind LangChain built it, but it tackles a different problem.

LangChain gives you building blocks — model wrappers, prompt templates, retrievers, tool hooks. LangGraph gives you a way to wire those blocks into workflows that loop, branch, and decide on the fly.

Here’s the core idea. You lay out your workflow as a directed graph. Each node is a function that does one thing — call an LLM, check a result, run a tool. Each edge links nodes and controls the flow. A shared state object moves through the graph, getting updated at every step.

python
# Not runnable -- conceptual overview
# A LangGraph workflow follows this pattern:

# 1. Define state (what data flows through the graph)
# 2. Create nodes (functions that process state)
# 3. Connect nodes with edges (define the flow)
# 4. Compile and run

# graph = StateGraph(State)
# graph.add_node("retrieve", retrieve_docs)
# graph.add_node("generate", generate_answer)
# graph.add_node("check", check_quality)
# graph.add_edge("retrieve", "generate")
# graph.add_conditional_edges("check", decide_next)
# app = graph.compile()

That sketch captures the full mental model. You won’t memorize every API detail today — that’s what later posts are for. What matters now is why this design exists.

Key Insight: LangGraph uses graphs because real agent behavior isn’t a straight line. Agents loop back, retry, branch on results, and make choices. These patterns don’t fit a chain that only moves forward.

Why Chains Aren’t Enough

LangChain follows a simple pattern by design. Step A feeds into step B, then into step C. That’s a chain. It works well for clear-cut tasks.

Here’s a typical LangChain flow:

User Question → Retrieve Docs → Build Prompt → Call LLM → Return Answer

Every step runs once, in order. No going back. No branching. This is a DAG — a directed acyclic graph. “Acyclic” means no loops.

But real agents need loops. Think about these cases:

  • Self-correction: The agent checks its answer. If it’s wrong, it retries a different way.
  • Multi-tool routing: The agent picks one of five tools based on the question, then checks if the result was helpful.
  • Human-in-the-loop: The agent pauses for a person to approve, then moves on or rolls back.
  • Iterative refinement: A coding agent writes code, runs it, reads the error, and loops until tests pass.

None of these fit a chain. They all need cycles — going back to an earlier step based on what just happened.
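To see why a cycle is the natural shape here, here is the self-correction case written as plain Python. The function names (`retrieve`, `generate`, `looks_wrong`) are hypothetical stand-ins for real retrieval and LLM calls:

```python
# Plain-Python sketch of a self-correction cycle.
# retrieve, generate, and looks_wrong are hypothetical stand-ins.

def retrieve(query):
    return ["doc about " + query]

def generate(query, docs, attempt):
    # Pretend the first attempt produces a bad answer.
    return "" if attempt == 0 else "answer to " + query

def looks_wrong(answer):
    return answer == ""

def answer_with_retries(query, max_retries=3):
    for attempt in range(max_retries):
        docs = retrieve(query)
        answer = generate(query, docs, attempt)
        if not looks_wrong(answer):
            return answer          # good answer: exit the loop
    return "Sorry, no good answer found."

print(answer_with_retries("why graphs?"))  # succeeds on the second attempt
```

The `for` loop is the cycle a forward-only chain can’t express. LangGraph gives you the same shape as a conditional edge pointing back to an earlier node.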

Warning: Don’t confuse “chains” with “simple.” LangChain’s LCEL can build rich pipelines. But those pipelines still move forward only. They can’t loop back based on runtime output. That’s the wall LangGraph breaks through.

LangGraph vs LangChain: The Real Difference

This is the first question people ask. The answer is simpler than most posts make it. The two tools work together, not against each other.

| Aspect | LangChain | LangGraph |
| --- | --- | --- |
| Core idea | Chain (sequence) | Graph (nodes + edges) |
| Flow | Forward only (DAG) | Loops allowed (cycles, retries) |
| State | Passed through the chain | Shared state object, explicit |
| Best for | RAG, chatbots, simple tool use | Agents with choices, loops, reasoning |
| Learning curve | Lower | Higher, but more control |
| Relation | Base library | Built on top of LangChain |

The takeaway: LangGraph doesn’t replace LangChain. It sits on top of it. You still use LangChain’s model wrappers, prompt templates, and tool hooks. LangGraph adds a stronger way to direct the flow.

Think of it this way. LangChain gives you the Lego bricks. LangGraph gives you the instructions for deciding which brick to place next — including when to take one off and try again.

Tip: Start with LangChain, move to LangGraph when you need to. If your workflow is “retrieve, prompt, respond” — stay with LangChain. The moment you need loops, branching, or rich state — that’s when LangGraph earns its spot.

Think about it: If you build a plain RAG pipeline (retrieve → prompt → respond) with LangGraph instead of LangChain, does it work? Yes — but you’ve added weight for no gain. LangGraph shines only when your workflow needs cycles or conditional routing.

Graph-Based Design: The Core Concept

Why graphs? Why not just use if-else blocks in Python?

You could write agents with plain Python. For simple cases, you should. But graph-based design gives you three things raw Python doesn’t.

1. Visibility. A graph is easy to inspect. You can draw it, trace which nodes ran, and debug where things broke. Nested if-else blocks lose that clarity fast.

2. State saving. LangGraph can save the state at any point and resume later. Your agent can pause mid-task, wait for human input, and pick up right where it left off — even after a server restart.

3. Streaming. Each node is a separate unit, so LangGraph streams results node by node. You see partial output as it happens.
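The streaming idea can be sketched with a plain-Python generator — this is the concept, not LangGraph’s actual streaming API:

```python
# Generator sketch: a graph run as a stream of per-node updates.

def run_graph(nodes, state):
    for name, fn in nodes:
        update = fn(state)             # run one node
        state = {**state, **update}    # merge its output into state
        yield name, update             # emit the partial result

nodes = [
    ("retrieve", lambda s: {"documents": ["doc1"]}),
    ("generate", lambda s: {"answer": "42"}),
]

for name, update in run_graph(nodes, {"question": "q"}):
    print(name, update)  # output arrives node by node, not all at once
```

Because the run is a sequence of discrete node executions, partial results fall out for free — there is no single opaque call to wait on.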

Here’s what this looks like in practice. Picture a research agent that searches, checks sources, and writes a summary:

                    ┌──────────┐
                    │  START   │
                    └────┬─────┘
                         │
                    ┌────▼─────┐
                    │  Search  │
                    └────┬─────┘
                         │
                    ┌────▼─────┐
               ┌────│ Evaluate │────┐
               │    └──────────┘    │
          enough                not enough
          sources               sources
               │                    │
          ┌────▼─────┐         ┌────▼─────┐
          │ Summarize│         │ Refine   │
          └────┬─────┘         │ Query    │──→ (back to Search)
               │               └──────────┘
          ┌────▼─────┐
          │   END    │
          └──────────┘

The “Evaluate → Refine Query → Search” loop is something a chain can’t do. The graph handles it with ease.

Note: This post covers ideas only. The next post walks you through setting up LangGraph and building your first working graph from scratch. Today is about the why before the how.

The Core Parts: StateGraph, Nodes, and Edges

Three pieces make up every LangGraph workflow. Getting them now will make the coding posts click much faster.

StateGraph is the container. You make one by spelling out what data your workflow tracks. This state is a Python TypedDict or Pydantic model. Every node reads from it and writes to it.

python
# Not runnable -- illustrative only
from typing import TypedDict

class AgentState(TypedDict):
    question: str
    documents: list
    answer: str
    retry_count: int

The state is the single source of truth. Every node sees the same state object. When a node returns new values, LangGraph merges them into what’s already there.
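Conceptually, that merge behaves like a dictionary update. LangGraph’s real merge is richer — fields can declare custom reducers — but this sketch captures the default behavior:

```python
# Sketch of the default merge: node output overwrites matching state keys.
state = {"question": "What is LangGraph?", "documents": [], "answer": "", "retry_count": 0}

# A node returns only the fields it changed...
node_output = {"documents": ["doc1", "doc2"]}

# ...and the graph merges them into the existing state.
state = {**state, **node_output}

print(state["documents"])  # ['doc1', 'doc2']
print(state["question"])   # untouched fields survive the merge
```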

Nodes are just functions. Each one takes the current state, does some work, and returns the parts it changed. A node might call an LLM, query a database, or check a flag.

python
# Not runnable -- illustrative only
def retrieve_documents(state):
    docs = search_vector_store(state["question"])
    return {"documents": docs}

def generate_answer(state):
    answer = call_llm(state["question"], state["documents"])
    return {"answer": answer}

Notice how each function returns only the fields it touched. It doesn’t need to send back the whole state — just the updates.

Edges define how nodes connect. Three types exist:

  • Normal edges: Always go from node A to node B.
  • Conditional edges: Run a function that returns which node to visit next. This is where branching and looping happen.
  • START and END: Special markers for entry and exit points.

Here’s how you wire nodes together. First, a plain forward-only graph:

python
# Not runnable -- illustrative only
from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve_documents)
graph.add_node("generate", generate_answer)

graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)

That’s the same as a chain. The real power kicks in when you add conditional edges:

python
# Not runnable -- illustrative only
def should_retry(state):
    if state["answer"] == "" and state["retry_count"] < 3:
        return "retrieve"    # Loop back
    return END               # Done

graph.add_conditional_edges("generate", should_retry)

That one add_conditional_edges call turns a straight chain into a self-fixing agent. The function looks at the state and picks where to go next.
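To build intuition for what the compiled graph does with that routing function, here is a hand-rolled simulation in plain Python. It is a sketch of the engine’s behavior, not LangGraph internals, and the node functions are simplified stubs:

```python
END = "__end__"

def retrieve(state):
    return {"retry_count": state["retry_count"] + 1}

def generate(state):
    # Stub: succeed only after two retrievals.
    ok = state["retry_count"] >= 2
    return {"answer": "final answer" if ok else ""}

def should_retry(state):
    if state["answer"] == "" and state["retry_count"] < 3:
        return "retrieve"   # loop back
    return END              # done

nodes = {"retrieve": retrieve, "generate": generate}
state = {"question": "q", "answer": "", "retry_count": 0}

current = "retrieve"
while current != END:
    state = {**state, **nodes[current](state)}   # run node, merge output
    # Normal edge after retrieve; conditional edge after generate.
    current = "generate" if current == "retrieve" else should_retry(state)

print(state)  # answer filled in after two trips through the loop
```

The `while` loop plays the role of LangGraph’s runtime: run the current node, merge its output, then ask the edges where to go next.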

Key Insight: Conditional edges are what make LangGraph special. They let you write small functions that check the state and route the flow on the fly. This is the piece that enables loops, retries, and branching.

Real Use Cases for Graph-Based Agents

Where does LangGraph shine in practice? These five patterns show up most often in real systems.

1. Self-fixing RAG. The agent grabs documents, writes an answer, then checks if the answer holds up against the source. If not, it re-fetches with a better query. The loop runs until quality passes or retries run out.

2. Multi-agent teams. A manager node hands tasks to expert agents — a researcher, a coder, a reviewer. Each expert runs as its own subgraph. The manager routes work and gathers results.

3. Code writing with tests. The agent writes code, runs it in a sandbox, reads errors, fixes the code, and loops until all tests pass. This is the pattern behind coding assistant agents.

4. Human-in-the-loop workflows. The agent drafts a response and waits for a person to review it. LangGraph’s checkpoint system saves the full state during the wait. After approval, the workflow picks up right where it paused.

5. Smart tool use. The agent picks a tool based on the question. After the tool responds, the agent checks if the result was useful. If not, it tries a different tool.

Each pattern needs at least one loop or branch. That’s why graphs work where chains don’t.
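Pattern 5 can be sketched in a few lines of plain Python. The tools here (`calculator`, `web_search`) are hypothetical stubs; the point is the try-check-fall-back loop:

```python
# Hypothetical stubs for pattern 5: pick a tool, check the result, fall back.

def calculator(question):
    return "8" if "3 + 5" in question else ""

def web_search(question):
    return "search results for: " + question

TOOLS = [calculator, web_search]  # priority order

def answer(question):
    for tool in TOOLS:
        result = tool(question)
        if result:               # the "was this useful?" check
            return result
    return "no tool could help"

print(answer("What is 3 + 5?"))  # calculator handles it
print(answer("Latest AI news"))  # falls through to web_search
```

In a real graph, the "was this useful?" check becomes a conditional edge that routes back to tool selection.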

When NOT to Use LangGraph

Let me be clear — LangGraph isn’t always the right pick. Reaching for it too early is the top mistake I see beginners make.

Simple Q&A or chatbots. If your flow is “user asks, model answers” — skip LangGraph. A direct API call or LangChain is lighter and faster.

Basic RAG pipelines. Retrieve, prompt, respond. That’s a straight line. LangChain handles it fine. Adding LangGraph just adds weight with no upside.

Early tests. When you’re just seeing if an idea works, keep it simple. Write plain Python. Once you find yourself building your own retry loops and state tracking, then look at LangGraph.

Lean setups. LangGraph pulls in LangChain as a dependency. If you only need basic LLM calls, the OpenAI SDK or Anthropic SDK on its own may be all you need.

Warning: Don’t use LangGraph just because it’s newer. Some folks treat it as a blanket upgrade. It’s a tool for stateful, branching, looping workflows. Using it for a simple chain adds work for zero payoff.

Common Mistakes When Starting with LangGraph

Mistake 1: Skipping LangChain basics

LangGraph is built on LangChain. If you don’t know prompt templates, chat models, and tool calling, you’ll get stuck. The graph part is new — but the nodes themselves use LangChain pieces.

Mistake 2: Making every step a node

If two tasks always run one after the other with no branching, keep them in one function. Splitting them into two nodes adds clutter and makes the graph harder to follow.

Mistake 3: No retry limits on loops

Conditional edges that loop back can spin forever without a counter. Always add retry_count or max_iterations to your state.

python
# Not runnable -- illustrative only

# BAD: infinite loop risk
def should_retry(state):
    if not state["answer"]:
        return "retrieve"
    return END

# GOOD: bounded loop
def should_retry(state):
    if not state["answer"] and state["retry_count"] < 3:
        return "retrieve"
    return END

The fix is easy: one extra field in your state, one extra check in your routing function.

Quick Check: Test What You Learned

Before moving on, try to answer these without scrolling back:

  • What kind of flow does LangChain support? What does LangGraph add?
  • Name the three main parts of a LangGraph workflow.
  • Give two cases where LangChain is a better choice than LangGraph.
Check your answers

1. LangChain supports forward-only (DAG) flow. LangGraph adds cycles — looping back to prior nodes based on state.
2. StateGraph (container), Nodes (functions that process state), Edges (links that define flow, with conditional edges for branching).
3. Simple RAG pipelines and basic chatbots. Any workflow that runs forward with no loops or branching is simpler with LangChain.

Practice Exercises

Exercise 1: Pick the Right Tool

You have three scenarios. For each, decide: does it need LangGraph, or is LangChain enough?

  1. A chatbot that translates user text to French.
  2. An agent that writes SQL, runs it, checks for errors, and retries up to 3 times.
  3. A doc summarizer that takes a PDF and returns a summary.

Starter code:

python
# For each scenario, decide: 'langgraph' or 'langchain'?

scenario_1 = ___  # fill in
scenario_2 = ___  # fill in
scenario_3 = ___  # fill in

print(scenario_1)
print(scenario_2)
print(scenario_3)

Hints:
– Ask yourself: does the workflow need to loop back? If yes, that’s LangGraph.
– Scenario 1 is input → output (no loop). Scenario 2 has a retry loop. Scenario 3 is input → output.

Solution:
python
scenario_1 = 'langchain'
scenario_2 = 'langgraph'
scenario_3 = 'langchain'

print(scenario_1)
print(scenario_2)
print(scenario_3)

Scenarios 1 and 3 are straight lines: input goes in, output comes out, no loops needed. LangChain handles these well. Scenario 2 needs a loop — the agent must check for SQL errors and retry — which is what LangGraph’s conditional edges are for.

Exercise 2: Design a Graph State

Design the state for a customer support agent that:
1. Sorts the question (billing, technical, general)
2. Routes to the right handler
3. Writes a response
4. Checks quality — if the response misses the mark, retries (max 2 times)

Define the state as a list of key names and print them sorted.

Starter code:

python
# Define the state keys for a customer support agent graph.
# Think about what data needs to flow between nodes.

state_keys = [
    ___,  # fill in the required state keys
]

state_keys.sort()
for key in state_keys:
    print(key)

Hints:
– You need to track: what the user asked, the category, what answer was given, whether it passed the check, and how many retries happened.
– The five keys: 'question', 'category', 'response', 'resolved', 'retry_count'

Solution:
python
state_keys = [
    'question',
    'category',
    'response',
    'retry_count',
    'resolved',
]

state_keys.sort()
for key in state_keys:
    print(key)

Each key maps to a node’s job: `'question'` is the input, `'category'` comes from the classify node, `'response'` from the generate node, `'resolved'` from the quality check, and `'retry_count'` stops the loop from running forever.

Summary

LangGraph exists because real AI agents don’t follow straight lines. They loop, branch, retry, and make choices. LangChain gives you the building blocks. LangGraph gives you a way to wire them into workflows that handle real-world messiness.

Three parts are all you need: StateGraph holds the state, Nodes process it, and Edges — especially conditional ones — control the flow.

Don’t rush to use LangGraph for everything. If your workflow runs forward with no branching, stick with LangChain. But the moment your agent needs loops, choices, or saved state — LangGraph is the right tool.

In the next post, we install LangGraph and build our first working graph from scratch.

Frequently Asked Questions

Is LangGraph a replacement for LangChain?

No. LangGraph sits on top of LangChain. You still use LangChain’s models, tools, and retrievers. LangGraph adds the routing layer for workflows that need loops and branching.

python
# LangGraph uses LangChain under the hood
# pip install langgraph  # This installs langchain-core automatically

How does LangGraph compare to AutoGen or CrewAI?

AutoGen and CrewAI focus on multi-agent chat — agents talking to each other. LangGraph is lower-level. You define the exact graph layout, which gives full control over the flow. The tradeoff: more setup work for more control over what happens.

Does LangGraph work with models besides OpenAI?

Yes. LangGraph doesn’t care which model you use. It works with any model LangChain supports — OpenAI, Anthropic, Google, and open-source models through Ollama or vLLM. The graph layer sits above the model layer.

Can LangGraph handle workflows that run for hours or days?

Yes. LangGraph’s checkpoint system saves the full state at each node. A workflow can pause (say, waiting for human review), and resume later — even on another server. This is one of its best features for production use.
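The save-and-resume idea can be pictured with a toy file-based checkpoint in plain Python. LangGraph’s real checkpointers (in-memory, SQLite, Postgres) are far more capable, but the mechanism is the same: persist the state plus where to resume.

```python
import json
import os
import tempfile

# Toy checkpoint: persist the state plus the node to resume at.
def save_checkpoint(path, state, next_node):
    with open(path, "w") as f:
        json.dump({"state": state, "next_node": next_node}, f)

def load_checkpoint(path):
    with open(path) as f:
        data = json.load(f)
    return data["state"], data["next_node"]

path = os.path.join(tempfile.gettempdir(), "agent_checkpoint.json")

# Pause mid-workflow, e.g. while waiting for human review...
save_checkpoint(path, {"question": "q", "draft": "pending review"}, "finalize")

# ...then resume later, possibly in a different process.
state, next_node = load_checkpoint(path)
print(next_node, state["draft"])
```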
