
LangGraph Conditional Edges — Dynamic Routing Guide

Written by Selva Prabhakaran | 23 min read

You have a LangGraph workflow. Every request goes through the same nodes in the same order. Sort a ticket? Same path. Summarize a contract? Same path. That works — until your manager asks, “Can we send urgent ones to a human and auto-reply to the rest?” Now your graph needs to think. That’s what conditional edges are for.

What Are Conditional Edges (And Why Static Edges Aren’t Enough)?

Picture two nodes. One labels a support ticket. The other writes a reply. Every ticket visits both, every time. That’s a static edge — a fixed link that never changes.

Before You Start

  • Python: 3.10 or newer

  • Packages: langgraph 0.4+, langchain-openai 0.3+

  • Setup: pip install langgraph langchain-openai

  • Background: Know the basics of LangGraph nodes, edges, and state (Posts 5–7)

  • Time: 20–25 minutes

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class TicketState(TypedDict):
    ticket: str
    category: str
    response: str

def classify(state: TicketState) -> dict:
    return {"category": "billing"}

def draft_reply(state: TicketState) -> dict:
    return {"response": f"Handling your {state['category']} issue."}

graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("draft_reply", draft_reply)

graph.add_edge(START, "classify")
graph.add_edge("classify", "draft_reply")  # static: always goes here
graph.add_edge("draft_reply", END)

app = graph.compile()
result = app.invoke({"ticket": "I was charged twice", "category": "", "response": ""})
print(result["response"])
Output
Handling your billing issue.

That’s fine when every ticket gets the same treatment. But what if billing tickets need an auto-reply and complaints need a human? A static edge can’t branch. You’d need a new graph for each path.

A conditional edge swaps that rigid wire for a decision point. Instead of “always go to node X,” it says “call this function and follow wherever it points.”

Key Insight: Think of a conditional edge as an if-else that sits between two nodes. It reads the state, picks a target, and sends the graph that way. The logic lives on the edge, not inside a node.
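Stripped of LangGraph, the decision point boils down to a few lines of plain Python. This sketch (all names are illustrative, not LangGraph internals) shows the mechanic: call the router, then run whichever node it names.

```python
# Conceptual sketch of what a conditional edge does -- not LangGraph's
# actual implementation. All names here are hypothetical.

def route(state: dict) -> str:
    # The routing function: read state, return the next node's name.
    return "auto_reply" if state["category"] == "billing" else "human_review"

def auto_reply(state: dict) -> dict:
    return {"response": "auto"}

def human_review(state: dict) -> dict:
    return {"response": "human"}

nodes = {"auto_reply": auto_reply, "human_review": human_review}

def run_conditional_edge(state: dict) -> dict:
    # What the graph does at the decision point: call the router,
    # then execute whichever node it named.
    target = route(state)
    return nodes[target](state)

print(run_conditional_edge({"category": "billing"}))  # {'response': 'auto'}
```

Everything LangGraph adds on top (state merging, parallelism, checkpointing) sits around this core dispatch.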

How Does add_conditional_edges() Work?

You pass it three things:

  • Source node — where the edge starts

  • Routing function — reads state, returns a node name as a string

  • Path map (optional) — maps return values to real node names

Here’s the simplest form. The router checks category and picks a target.

python
def route_ticket(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto_reply"
    else:
        return "human_review"

graph.add_conditional_edges("classify", route_ticket)

After classify runs, LangGraph calls route_ticket with the current state. The string it returns — "auto_reply" or "human_review" — tells the graph which node to visit next.

The Path Map — Cleaner Return Values

Your router might return short labels like "auto" or "human" that don’t match your node names. A path map acts as a lookup table between those labels and the real targets.

python
def route_ticket(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto"
    elif state["category"] == "complaint":
        return "human"
    else:
        return "end"

graph.add_conditional_edges(
    "classify",
    route_ticket,
    {
        "auto": "auto_reply",
        "human": "human_review",
        "end": END,
    }
)

The path map keeps routing logic and node names apart. Rename a node? Just update the map. The router stays the same.

Tip: If your router returns a value that isn’t in the path map, you get a ValueError. Make sure every string the router can return has a key in the map.
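You can catch that mismatch before the graph ever runs. Here is a small pre-flight check (a hypothetical helper, not part of LangGraph's API) that compares the labels a router can return against the path map's keys.

```python
# Hypothetical build-time check: diff the labels a router can return
# against the keys of the path map before wiring the edge.

def check_path_map(possible_returns: set[str], path_map: dict) -> set[str]:
    """Return any router labels that the path map doesn't cover."""
    return possible_returns - set(path_map)

path_map = {"auto": "auto_reply", "human": "human_review"}

# "end" is missing from the map -- this surfaces it before runtime.
missing = check_path_map({"auto", "human", "end"}, path_map)
print(missing)  # {'end'}
```

Running a check like this in a unit test keeps the ValueError out of production entirely.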

How Do You Write Your First Routing Function?

The best way to learn is to build. Let’s make a ticket router that labels tickets and sends them down one of three paths.

Four nodes do the work. classify reads the ticket and picks a type. auto_reply handles billing. human_review flags things that need a person. escalate deals with emergencies. One conditional edge after classify steers each ticket.

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class TicketState(TypedDict):
    ticket: str
    category: str
    priority: str
    response: str

def classify(state: TicketState) -> dict:
    ticket = state["ticket"].lower()
    if "urgent" in ticket or "down" in ticket:
        return {"category": "outage", "priority": "high"}
    elif "charge" in ticket or "bill" in ticket:
        return {"category": "billing", "priority": "low"}
    else:
        return {"category": "general", "priority": "medium"}

def auto_reply(state: TicketState) -> dict:
    return {"response": f"Auto-reply: We're looking into your {state['category']} issue."}

def human_review(state: TicketState) -> dict:
    return {"response": f"Flagged for human review: {state['ticket']}"}

def escalate(state: TicketState) -> dict:
    return {"response": f"URGENT escalation: {state['ticket']}"}

The routing function peeks at category and priority, then returns the name of the next node.

python
def route_after_classify(state: TicketState) -> str:
    if state["priority"] == "high":
        return "escalate"
    elif state["category"] == "billing":
        return "auto_reply"
    else:
        return "human_review"

Now wire it up. Add the four nodes, link START to classify, and use a conditional edge to fan out to the three handlers.

python
graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("auto_reply", auto_reply)
graph.add_node("human_review", human_review)
graph.add_node("escalate", escalate)

graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route_after_classify)
graph.add_edge("auto_reply", END)
graph.add_edge("human_review", END)
graph.add_edge("escalate", END)

app = graph.compile()

Let’s throw three different tickets at the graph and confirm that each one takes its own route.

python
tickets = [
    "I was charged twice on my bill",
    "My dashboard is down — urgent!",
    "How do I reset my password?",
]

for ticket in tickets:
    result = app.invoke({
        "ticket": ticket, "category": "", "priority": "", "response": "",
    })
    print(f"Ticket: {ticket}")
    print(f"  -> {result['response']}\n")
Output
Ticket: I was charged twice on my bill
  -> Auto-reply: We're looking into your billing issue.

Ticket: My dashboard is down — urgent!
  -> URGENT escalation: My dashboard is down — urgent!

Ticket: How do I reset my password?
  -> Flagged for human review: How do I reset my password?

Billing? Auto-reply. Server down? Escalated. Password reset? Sent to a human. Three completely different outcomes from one graph, powered by a single routing function.

How Do You Branch Based on LLM Output?

Real apps rarely rely on keyword matching. They ask an LLM to decide, then route based on the answer. The flow is the same: a node calls the model, writes the result to state, and the router reads it.

Here’s how it looks. The detect_intent node sends a query to an LLM and asks for a one-word label. The router reads that label to pick the next step.

python
# Requires: pip install langchain-openai
# Requires: OPENAI_API_KEY environment variable set
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class State(TypedDict):
    query: str
    intent: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def detect_intent(state: State) -> dict:
    prompt = f"""Classify this user query into exactly one category.
Categories: question, complaint, feedback
Query: {state['query']}
Reply with just the category name."""
    response = llm.invoke(prompt)
    return {"intent": response.content.strip().lower()}

The router checks the intent value the LLM set.

python
def route_by_intent(state: State) -> str:
    intent = state["intent"]
    if intent == "question":
        return "answer_question"
    elif intent == "complaint":
        return "handle_complaint"
    else:
        return "log_feedback"

Warning: LLMs are messy with format. Ask for “question” and you might get “Question”, “it’s a question”, or “ question\n”. Always clean the response with .strip().lower() and add a catch-all branch.

Here’s a safer version using a lookup dict with a default.

python
def route_by_intent_safe(state: State) -> str:
    intent = state["intent"]
    routing_map = {
        "question": "answer_question",
        "complaint": "handle_complaint",
        "feedback": "log_feedback",
    }
    return routing_map.get(intent, "log_feedback")  # default fallback

Skip that default, and one odd model reply crashes your whole graph.
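The cleanup itself is worth pulling into a tiny helper. This sketch (normalize_intent and its default are illustrative names, not LangGraph API) strips, lowercases, and falls back when the label isn't one you expect.

```python
# Hypothetical cleanup helper for raw LLM labels. The name and the
# default value are illustrative, not part of any library.

ALLOWED = {"question", "complaint", "feedback"}

def normalize_intent(raw: str, default: str = "feedback") -> str:
    # Strip whitespace, lowercase, and drop a trailing period,
    # then fall back to the default for anything unrecognized.
    label = raw.strip().lower().rstrip(".")
    return label if label in ALLOWED else default

print(normalize_intent(" Question\n"))      # question
print(normalize_intent("it's a question"))  # feedback (fallback)
```

Call it inside detect_intent so the router only ever sees clean, known labels.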

How Do You Branch Based on State Values?

You don’t always need an LLM. Sometimes the data is already in state — a counter, a flag, a score. The router can check any of those fields.

Here’s a retry pattern. The process node tries some work and sets a success flag. The router checks the result: try again, finish up, or quit after too many tries.

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class RetryState(TypedDict):
    input_data: str
    result: str
    retry_count: int
    success: bool

def process(state: RetryState) -> dict:
    if state["retry_count"] < 2:
        return {"success": False, "retry_count": state["retry_count"] + 1}
    return {"success": True, "result": "Processed successfully"}

def route_retry(state: RetryState) -> str:
    if state["success"]:
        return "done"
    elif state["retry_count"] >= 3:
        return "give_up"
    else:
        return "process"  # loop back

Notice the router can return "process" — the same node it just came from. That’s a loop. LangGraph handles loops fine, but set a recursion limit so it can’t spin forever. We’ll cover loops in Post 11.

python
def done(state: RetryState) -> dict:
    return {"result": "Success!"}

def give_up(state: RetryState) -> dict:
    return {"result": "Failed after max retries."}

graph = StateGraph(RetryState)
graph.add_node("process", process)
graph.add_node("done", done)
graph.add_node("give_up", give_up)

graph.add_edge(START, "process")
graph.add_conditional_edges("process", route_retry)
graph.add_edge("done", END)
graph.add_edge("give_up", END)

app = graph.compile()
result = app.invoke({
    "input_data": "test", "result": "", "retry_count": 0, "success": False,
})
print(result["result"])
Output
Success!

Two fails (count 0 and 1), then success on the third try. The router saw success=True and went to done.
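To see exactly why the loop terminates, here is the same retry logic retraced in plain Python, with a step cap standing in for LangGraph's recursion limit. This is a conceptual sketch, not how LangGraph actually executes graphs.

```python
# Plain-Python simulation of the retry loop above. `limit` plays the
# role of LangGraph's recursion limit. Illustrative only.

def run_with_limit(state: dict, limit: int = 10) -> dict:
    node = "process"
    for _ in range(limit):
        if node == "process":
            # the process node's logic
            if state["retry_count"] < 2:
                state.update(success=False, retry_count=state["retry_count"] + 1)
            else:
                state.update(success=True)
            # route_retry's logic
            if state["success"]:
                node = "done"
            elif state["retry_count"] >= 3:
                node = "give_up"
            # otherwise: loop back to "process"
        elif node == "done":
            return {**state, "result": "Success!"}
        else:  # give_up
            return {**state, "result": "Failed after max retries."}
    raise RuntimeError("step limit reached; the loop never exited")

print(run_with_limit({"retry_count": 0, "success": False, "result": ""})["result"])
# Success!
```

Two trips back through "process", then the success flag flips and the router exits to "done". The step cap is the safety net for when it doesn't.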

Key Insight: A good router does one thing: read state and return a string. No API calls, no writes, no side effects. Nodes do the work. Routers just pick the path.

How Does Parallel Branching Work (Multiple Conditional Paths)?

Every router so far has picked one node. But what if two nodes should run at the same time? Return a list of names, and LangGraph runs them all in parallel.

Example: a document review pipeline. After intake, a grammar checker and a fact checker run side by side. Their outputs merge, and the document gets published.

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict
from operator import add
from typing import Annotated

class ReviewState(TypedDict):
    document: str
    grammar_ok: bool
    facts_ok: bool
    reviews: Annotated[list[str], add]

def intake(state: ReviewState) -> dict:
    return {"document": state["document"]}

def grammar_check(state: ReviewState) -> dict:
    return {"reviews": ["Grammar: looks good"], "grammar_ok": True}

def fact_check(state: ReviewState) -> dict:
    return {"reviews": ["Facts: verified"], "facts_ok": True}

def publish(state: ReviewState) -> dict:
    return {"reviews": ["Published!"]}

The router returns a two-item list. Both checkers launch at once.

python
def route_to_reviewers(state: ReviewState) -> list[str]:
    return ["grammar_check", "fact_check"]

graph = StateGraph(ReviewState)
graph.add_node("intake", intake)
graph.add_node("grammar_check", grammar_check)
graph.add_node("fact_check", fact_check)
graph.add_node("publish", publish)

graph.add_edge(START, "intake")
graph.add_conditional_edges("intake", route_to_reviewers)
graph.add_edge("grammar_check", "publish")
graph.add_edge("fact_check", "publish")
graph.add_edge("publish", END)

app = graph.compile()
result = app.invoke({
    "document": "LangGraph is great.",
    "grammar_ok": False, "facts_ok": False, "reviews": [],
})
print(result["reviews"])
Output
['Grammar: looks good', 'Facts: verified', 'Published!']

Both checks ran at once, and both results landed in reviews thanks to the add reducer. Without Annotated[list[str], add], LangGraph would raise an InvalidUpdateError, because two nodes can’t write the same key in one step unless a reducer tells the graph how to merge the values.
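The reducer itself is nothing exotic: it is operator.add applied between the existing value and each incoming update. A plain-Python sketch of the merge:

```python
# What the `add` reducer does conceptually: combine the existing value
# with each incoming update instead of replacing it. Sketch only.
from operator import add

reviews: list[str] = []

# Two parallel nodes each return an update for the same key.
update_from_grammar = ["Grammar: looks good"]
update_from_facts = ["Facts: verified"]

for update in (update_from_grammar, update_from_facts):
    reviews = add(reviews, update)  # list + list -> concatenation

print(reviews)  # ['Grammar: looks good', 'Facts: verified']
```

Any associative function works as a reducer, which is why list concatenation handles parallel writers so cleanly: the order of arrival doesn't lose data.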

How Should You Handle Fallback and Default Edges?

Every router needs a catch-all. Models surprise you. Users surprise you. If the LLM returns a label you didn’t plan for, the graph crashes — unless there’s a default path. This is the #1 source of runtime errors in LangGraph.

python
# BAD: no fallback — crashes on unexpected values
def route_fragile(state: State) -> str:
    if state["intent"] == "question":
        return "answer_question"
    elif state["intent"] == "complaint":
        return "handle_complaint"
    # intent is "feedback" or empty? Returns None -> crash
python
# GOOD: always has a default
def route_safe(state: State) -> str:
    if state["intent"] == "question":
        return "answer_question"
    elif state["intent"] == "complaint":
        return "handle_complaint"
    else:
        return "fallback_handler"

Rule of thumb: end every router with else. Someone will add a new LLM label and forget to update the router. That silent None return turns into a 2 AM outage.

How Do You Build a Decision-Making Workflow End-to-End?

Let’s put it all together. We’ll build a customer service agent that routes in two stages. First by topic (billing, tech, or general). Then by tone. Angry customers get escalated no matter what the issue is.

The state holds five fields: the query, its category, a sentiment tag, the response, and an escalation flag.

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class ServiceState(TypedDict):
    query: str
    category: str
    sentiment: str
    response: str
    escalated: bool

def classify_query(state: ServiceState) -> dict:
    query = state["query"].lower()
    if "refund" in query or "money" in query:
        category = "billing"
    elif "broken" in query or "error" in query or "bug" in query:
        category = "technical"
    else:
        category = "general"

    sentiment = "negative" if any(
        w in query for w in ["angry", "terrible", "worst"]
    ) else "neutral"
    return {"category": category, "sentiment": sentiment}

Each category has its own handler. An extra escalation node handles angry customers.

python
def handle_billing(state: ServiceState) -> dict:
    return {"response": "Billing concern flagged. Expect resolution within 24 hours."}

def handle_technical(state: ServiceState) -> dict:
    return {"response": "Let me troubleshoot that. Have you tried restarting the application?"}

def handle_general(state: ServiceState) -> dict:
    return {"response": "Thanks for reaching out! Connecting you with the right team."}

def escalate_to_human(state: ServiceState) -> dict:
    return {"response": "Escalating to a human agent due to negative sentiment.", "escalated": True}

def finalize(state: ServiceState) -> dict:
    return {}  # no-op: response already set by handler

Two routers power the two stages. The first reads the category. The second checks the tone after each handler.

python
def route_by_category(state: ServiceState) -> str:
    category = state["category"]
    if category == "billing":
        return "handle_billing"
    elif category == "technical":
        return "handle_technical"
    else:
        return "handle_general"

def route_by_sentiment(state: ServiceState) -> str:
    if state["sentiment"] == "negative":
        return "escalate_to_human"
    return "finalize"

Wire the two stages together. Category routing first, then sentiment routing after each handler.

python
graph = StateGraph(ServiceState)
graph.add_node("classify_query", classify_query)
graph.add_node("handle_billing", handle_billing)
graph.add_node("handle_technical", handle_technical)
graph.add_node("handle_general", handle_general)
graph.add_node("escalate_to_human", escalate_to_human)
graph.add_node("finalize", finalize)

graph.add_edge(START, "classify_query")
graph.add_conditional_edges("classify_query", route_by_category)
graph.add_conditional_edges("handle_billing", route_by_sentiment)
graph.add_conditional_edges("handle_technical", route_by_sentiment)
graph.add_conditional_edges("handle_general", route_by_sentiment)
graph.add_edge("escalate_to_human", END)
graph.add_edge("finalize", END)

app = graph.compile()

Test with three queries — calm billing, angry tech, and neutral general.

python
queries = [
    "I need a refund for my subscription",
    "The app is broken and I'm angry about it",
    "What are your business hours?",
]

for query in queries:
    result = app.invoke({
        "query": query, "category": "", "sentiment": "",
        "response": "", "escalated": False,
    })
    print(f"Query: {query}")
    print(f"  Category: {result['category']} | Sentiment: {result['sentiment']}")
    print(f"  Response: {result['response']}")
    print(f"  Escalated: {result['escalated']}\n")
Output
Query: I need a refund for my subscription
  Category: billing | Sentiment: neutral
  Response: Billing concern flagged. Expect resolution within 24 hours.
  Escalated: False

Query: The app is broken and I'm angry about it
  Category: technical | Sentiment: negative
  Response: Escalating to a human agent due to negative sentiment.
  Escalated: True

Query: What are your business hours?
  Category: general | Sentiment: neutral
  Response: Thanks for reaching out! Connecting you with the right team.
  Escalated: False

The angry tech query got escalated. The billing request got a standard reply. The general question went to the right team. Two routing stages working in sync.

How Do You Debug Conditional Routing?

The graph took the wrong path. How do you find out why? Three tricks I use all the time.

Trick 1: Print inside the router. See the exact state at decision time.

python
def route_debug(state: ServiceState) -> str:
    print(f"DEBUG: category={state['category']}, sentiment={state['sentiment']}")
    if state["sentiment"] == "negative":
        return "escalate_to_human"
    return "finalize"

Trick 2: Print the graph layout. Call get_graph() to see all nodes and edges.

python
print(app.get_graph().draw_ascii())

This shows every node, edge, and conditional edge. Nodes with no incoming edges stand out fast.

Trick 3: Stream step by step. Swap invoke() for stream() to watch nodes fire one at a time.

python
for event in app.stream({
    "query": "This is terrible, I want a refund",
    "category": "", "sentiment": "", "response": "", "escalated": False,
}):
    print(event)

Each event shows which node ran and what changed. If the wrong node fires, you know the router sent it there.

Tip: For bigger graphs, use get_graph().draw_mermaid() to make a Mermaid diagram. Paste it into any Mermaid-aware Markdown viewer to see the full flow.

What Are the Most Common Mistakes with Conditional Edges?

Mistake 1: Router Modifies State

python
# WRONG
def bad_router(state: TicketState) -> str:
    state["category"] = "billing"  # side effect!
    return "auto_reply"
python
# RIGHT
def good_router(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto_reply"
    return "human_review"

Routers get a read-only view of state. Changes inside a router won’t be tracked. Only nodes can update state.

Mistake 2: Returning a Name That Was Never Registered

python
def route_missing(state: TicketState) -> str:
    return "support"  # "support" was never registered with add_node()

This fails because "support" was never added with add_node(). Make sure every string your router can return matches a real node or END.

Mistake 3: No Default Path

The #1 crash cause. Close every if/elif with else. Give every dict.get() a default.

Mistake 4: Assuming the LLM Returns Clean Strings

python
# FRAGILE
def route_fragile(state: State) -> str:
    if state["intent"] == "question":
        return "answer"
    elif state["intent"] == "complaint":
        return "escalate"
    # LLM returned "Question" (capitalized) — no match!

Always .strip().lower() before comparing. Better yet, use structured output (JSON mode) so the model gives you clean data from the start.

How Do You Route Right from START?

You can hook a conditional edge onto START itself. This means the first node to run depends on what the user passes in — no fixed starting point.

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class InputState(TypedDict):
    input_type: str
    data: str
    result: str

def process_text(state: InputState) -> dict:
    return {"result": f"Processed text: {state['data'][:20]}..."}

def process_number(state: InputState) -> dict:
    return {"result": f"Processed number: {state['data']}"}

def route_start(state: InputState) -> str:
    if state["input_type"] == "text":
        return "process_text"
    return "process_number"

graph = StateGraph(InputState)
graph.add_node("process_text", process_text)
graph.add_node("process_number", process_number)

graph.add_conditional_edges(START, route_start)
graph.add_edge("process_text", END)
graph.add_edge("process_number", END)

app = graph.compile()

result = app.invoke({"input_type": "text", "data": "Hello world from LangGraph", "result": ""})
print(result["result"])
Output
Processed text: Hello world from Lan...

Great for when different inputs need totally different pipelines.

When Should You NOT Use Conditional Edges?

They’re not always the right tool. Here are three cases where something else fits better.

Simple if-else in one node. If the “decision” just changes a variable and the same node handles all cases, don’t split into separate nodes. One node with an if-else is simpler.
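For instance, a single reply node with an internal if-else covers all three cases without any routing. A hypothetical sketch, not tied to any particular graph:

```python
# One node, internal branching -- simpler than three nodes plus a
# router when every branch ends in the same place. Illustrative names.

def reply(state: dict) -> dict:
    category = state.get("category", "general")
    if category == "billing":
        text = "We're reviewing the charge."
    elif category == "outage":
        text = "We're on it. Status updates to follow."
    else:
        text = "Thanks, we'll get back to you."
    return {"response": text}

print(reply({"category": "billing"})["response"])
```

Reach for conditional edges only when the branches lead to genuinely different downstream paths.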

The node should pick the next step. LangGraph’s Command object lets a node return a state update and a routing choice in one shot. Cleaner than writing to state just so a router can read it back.

You need to fan out over a list. Want to run the same node N times with different inputs? Use Send objects. They were built for map-reduce patterns.

Quick Check: Predict the Output

What does this routing function return when state["score"] is 75?

python
def route_by_score(state: dict) -> str:
    score = state["score"]
    if score >= 90:
        return "excellent"
    elif score >= 70:
        return "good"
    elif score >= 50:
        return "average"
    else:
        return "needs_improvement"

Answer

The answer is "good". Since 75 passes the >= 70 check, the second branch fires. The graph proceeds to the node named "good".

Quick Check: Spot the Bug

What’s wrong with this graph setup?

python
graph.add_node("classify", classify)
graph.add_node("handle_a", handle_a)
graph.add_node("handle_b", handle_b)

def router(state):
    if state["type"] == "a":
        return "handle_a"
    if state["type"] == "b":
        return "handle_b"

graph.add_conditional_edges("classify", router)

Answer

No else clause. If state["type"] isn’t "a" or "b", the function returns None. LangGraph can’t route to None — it crashes. Fix: add else: return END or a fallback node.

Error Troubleshooting

ValueError: Expected node name or END, got None

The router gave back None because no condition matched and there’s no else. Add a default return value.

python
# Fix: add else clause
def route_fixed(state):
    if state["type"] == "a":
        return "handle_a"
    else:
        return "fallback"  # catches everything

ValueError: Node ‘xyz’ not found in graph

Your router returned a name that was never added with add_node(). Check for typos — "suport" vs "support" is a classic. Make sure every return value maps to a real node or END.
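One way to catch this early is to run the router over a few sample states and diff its outputs against the registered node names. A hypothetical pre-flight check:

```python
# Hypothetical pre-flight check: exercise the router with sample states
# and confirm every returned name is actually registered (or END).

REGISTERED = {"classify", "auto_reply", "human_review", "__end__"}

def router(state: dict) -> str:
    # Deliberate typo to demonstrate the check.
    return "suport" if state["category"] == "support" else "auto_reply"

samples = [{"category": "support"}, {"category": "billing"}]
bad = {router(s) for s in samples} - REGISTERED
print(bad)  # {'suport'} -- the typo shows up before the graph ever runs
```

Fold this into your test suite and the "Node not found" error never reaches runtime.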

GraphRecursionError: Recursion limit reached

A conditional edge is sending the graph in circles and the loop won’t stop. The exit condition in the router likely never fires. Fix the logic, or if the loop is on purpose, raise the limit in the run config: app.invoke(state, {"recursion_limit": 50}).

Practice Exercise

Build a content moderation graph with conditional routing.

Requirements:
– State with fields: text (str), classification (str), response (str)
– A classify_content node that checks for banned words (“spam”, “scam”) and sets classification to “blocked”, warning words (“free”, “click”) set “warning”, everything else sets “safe”
– Three handler nodes (handle_safe, handle_warning, handle_blocked) that set appropriate response messages
– A routing function that reads the classification and routes correctly
– Test with three inputs that hit all three paths

Solution

python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class ModerationState(TypedDict):
    text: str
    classification: str
    response: str

def classify_content(state: ModerationState) -> dict:
    text = state["text"].lower()
    if any(word in text for word in ["spam", "scam"]):
        return {"classification": "blocked"}
    elif any(word in text for word in ["free", "click"]):
        return {"classification": "warning"}
    return {"classification": "safe"}

def handle_safe(state: ModerationState) -> dict:
    return {"response": "Content approved."}

def handle_warning(state: ModerationState) -> dict:
    return {"response": "Content flagged for manual review."}

def handle_blocked(state: ModerationState) -> dict:
    return {"response": "Content blocked — violates policy."}

def route_moderation(state: ModerationState) -> str:
    classification = state["classification"]
    if classification == "blocked":
        return "handle_blocked"
    elif classification == "warning":
        return "handle_warning"
    else:
        return "handle_safe"

graph = StateGraph(ModerationState)
graph.add_node("classify_content", classify_content)
graph.add_node("handle_safe", handle_safe)
graph.add_node("handle_warning", handle_warning)
graph.add_node("handle_blocked", handle_blocked)

graph.add_edge(START, "classify_content")
graph.add_conditional_edges("classify_content", route_moderation)
graph.add_edge("handle_safe", END)
graph.add_edge("handle_warning", END)
graph.add_edge("handle_blocked", END)

app = graph.compile()

for text in ["Great article!", "Click here for free stuff", "This is a scam"]:
    result = app.invoke({"text": text, "classification": "", "response": ""})
    print(f"{text} -> {result['response']}")
Output
Great article! -> Content approved.
Click here for free stuff -> Content flagged for manual review.
This is a scam -> Content blocked — violates policy.

Same pattern as before: a node writes a label to state, and the router reads it to pick the path. Three inputs, three paths, three right answers.

Summary

Conditional edges turn LangGraph from a fixed pipeline into a graph that decides on the fly. Quick recap:

  • Static vs. conditional: Static edges are fixed. Conditional edges let the graph pick its path from state.

  • The API: add_conditional_edges() takes a source node, a router, and an optional path map.

  • Router rules: Read state, return a string. No side effects. No state changes.

  • LLM routing: Same pattern. Always clean the output and add a fallback.

  • Parallel paths: Return a list to run multiple nodes at once.

  • Defaults are a must: Every router needs an else clause.

  • Debugging: draw_ascii() for layout, print for state, stream() for step-by-step traces.

Up next: tool calling in LangGraph — where your agent starts to search the web, query databases, and take real actions.

FAQ

Can a routing function return END to stop the graph?

Yes. Return END from langgraph.graph and the graph stops right there. This is how you build “exit early” logic.

python
from langgraph.graph import END

def route_or_stop(state: dict) -> str:
    if not state["input"]:
        return END
    return "process"

Can I have multiple conditional edges from the same node?

No. Each node gets at most one conditional edge. If your choice depends on several state fields, put all the logic in one routing function with nested if/elif.

What’s the difference between add_conditional_edges() and Command?

add_conditional_edges() sets routing at build time. Command lets a node pick its next stop at runtime by returning a Command with a goto field. For simple branching, conditional edges are easier. We’ll cover Command later.

How do I unit test routing functions?

They’re plain Python functions. Pass a dict, check the return value. No graph needed.

python
def test_route_by_category():
    assert route_by_category({"category": "billing", "sentiment": ""}) == "handle_billing"
    assert route_by_category({"category": "technical", "sentiment": ""}) == "handle_technical"
    assert route_by_category({"category": "other", "sentiment": ""}) == "handle_general"

