LangGraph Conditional Edges: Dynamic Routing Guide
Build LangGraph workflows that pick their own path at runtime using conditional edges, routing functions, and LLM-driven branching.
Picture this: your LangGraph workflow handles every request the exact same way. Sort a ticket? One path. Sum up a legal doc? Same path again. That’s fine — until someone asks: “Can it bump urgent tickets to a person and auto-reply to the rest?” Your graph now needs to make choices. Conditional edges give it that power.
What Are Conditional Edges (And Why Do Static Edges Fall Short)?
Imagine a two-node graph. The first node labels a ticket. The second writes a reply. All tickets take this road, no matter what. That’s a static edge — a hard-wired link.
Prerequisites
- Python: 3.10 or newer
- Packages: langgraph (0.4+), langchain-openai (0.3+)
- Install:
```bash
pip install langgraph langchain-openai
```
- Background: LangGraph basics — nodes, edges, state (Posts 5-7 of this series)
- Time: 20-25 minutes
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class TicketState(TypedDict):
    ticket: str
    category: str
    response: str

def classify(state: TicketState) -> dict:
    return {"category": "billing"}

def draft_reply(state: TicketState) -> dict:
    return {"response": f"Handling your {state['category']} issue."}

graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("draft_reply", draft_reply)
graph.add_edge(START, "classify")
graph.add_edge("classify", "draft_reply")  # static: always goes here
graph.add_edge("draft_reply", END)

app = graph.compile()
result = app.invoke({"ticket": "I was charged twice", "category": "", "response": ""})
print(result["response"])
```

```
Handling your billing issue.
```
This works fine for a one-track flow. But what if billing tickets need auto-replies, while complaints need a human eye? Static edges can’t fork. You’d have to build a brand-new graph per path.
A conditional edge trades that locked link for a switch. Rather than “always go to node X,” it says: “run this function, then follow where it leads.”
Key Insight: > A conditional edge is simply a function. It reads state and hands back the name of the next node to run. Think of it as an if-else living on the wire between nodes — not inside them.
How Does add_conditional_edges() Work?
You call this method with three things:
- Source node — the node the edge leaves from
- Routing function — it reads state and hands back a node name (a string)
- Path map (you can skip this) — links return values to actual node names
Here’s the most basic form. The router looks at category in state and hands back a name.
```python
def route_ticket(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto_reply"
    else:
        return "human_review"

graph.add_conditional_edges("classify", route_ticket)
```
After classify finishes, LangGraph feeds the current state to route_ticket. If it hands back "auto_reply", that node fires. If it hands back "human_review", the graph heads there.
The Path Map — Cleaner Return Values
Sometimes your router returns labels that don’t line up with node names. The path map bridges them.
```python
def route_ticket(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto"
    elif state["category"] == "complaint":
        return "human"
    else:
        return "end"

graph.add_conditional_edges(
    "classify",
    route_ticket,
    {
        "auto": "auto_reply",
        "human": "human_review",
        "end": END,
    },
)
```
I prefer path maps because they split the routing logic from node naming. Rename a node, and the router stays the same.
Tip: > If your router hands back a value not in the path map, LangGraph raises a ValueError. Make sure you handle every case.
How Do You Write Your First Routing Function?
Let’s learn hands-on. We’ll build a ticket router that sorts support requests and sends them down three lanes.
Four nodes make up this graph: classify tags the ticket type, auto_reply fields billing issues, human_review flags general asks, and escalate takes on urgent cases. A conditional edge after classify steers each ticket.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class TicketState(TypedDict):
    ticket: str
    category: str
    priority: str
    response: str

def classify(state: TicketState) -> dict:
    ticket = state["ticket"].lower()
    if "urgent" in ticket or "down" in ticket:
        return {"category": "outage", "priority": "high"}
    elif "charge" in ticket or "bill" in ticket:
        return {"category": "billing", "priority": "low"}
    else:
        return {"category": "general", "priority": "medium"}

def auto_reply(state: TicketState) -> dict:
    return {"response": f"Auto-reply: We're looking into your {state['category']} issue."}

def human_review(state: TicketState) -> dict:
    return {"response": f"Flagged for human review: {state['ticket']}"}

def escalate(state: TicketState) -> dict:
    return {"response": f"URGENT escalation: {state['ticket']}"}
```
The router reads category and priority, then picks the right lane.
```python
def route_after_classify(state: TicketState) -> str:
    if state["priority"] == "high":
        return "escalate"
    elif state["category"] == "billing":
        return "auto_reply"
    else:
        return "human_review"
```
Time to wire it all up. The conditional edge goes between classify and the three target nodes.
```python
graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("auto_reply", auto_reply)
graph.add_node("human_review", human_review)
graph.add_node("escalate", escalate)

graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route_after_classify)
graph.add_edge("auto_reply", END)
graph.add_edge("human_review", END)
graph.add_edge("escalate", END)

app = graph.compile()
```
Let’s try three tickets. Each one should land on a different node.
```python
tickets = [
    "I was charged twice on my bill",
    "My dashboard is down — urgent!",
    "How do I reset my password?",
]

for ticket in tickets:
    result = app.invoke({
        "ticket": ticket, "category": "", "priority": "", "response": "",
    })
    print(f"Ticket: {ticket}")
    print(f"  -> {result['response']}\n")
```

```
Ticket: I was charged twice on my bill
  -> Auto-reply: We're looking into your billing issue.

Ticket: My dashboard is down — urgent!
  -> URGENT escalation: My dashboard is down — urgent!

Ticket: How do I reset my password?
  -> Flagged for human review: How do I reset my password?
```
Billing got an auto-reply. The urgent one shot up to the top. The password ask went to a person. One graph, three paths — and each ticket finds the right one.
How Do You Route Based on What the LLM Says?
In real apps, you don’t match strings by hand. You route based on what the model tells you. The flow is the same: a node calls the LLM, stores what it says in state, and the router checks it.
Here’s the shape. detect_intent asks the LLM to tag a query. The router then reads that tag from state.
```python
# Requires: pip install langchain-openai
# Requires: OPENAI_API_KEY environment variable set
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class State(TypedDict):
    query: str
    intent: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def detect_intent(state: State) -> dict:
    prompt = f"""Classify this user query into exactly one category.
Categories: question, complaint, feedback
Query: {state['query']}
Reply with just the category name."""
    response = llm.invoke(prompt)
    return {"intent": response.content.strip().lower()}
```
The router pulls the intent the LLM set.
```python
def route_by_intent(state: State) -> str:
    intent = state["intent"]
    if intent == "question":
        return "answer_question"
    elif intent == "complaint":
        return "handle_complaint"
    else:
        return "log_feedback"
```
Warning: > Models don’t always answer in the exact format you want. Ask for “question” and you might get “Question” or “it’s a question.” Always clean the text (.strip().lower()) and keep a fallback lane in your router.
A safer version using dict.get() with a default:
```python
def route_by_intent_safe(state: State) -> str:
    intent = state["intent"]
    routing_map = {
        "question": "answer_question",
        "complaint": "handle_complaint",
        "feedback": "log_feedback",
    }
    return routing_map.get(intent, "log_feedback")  # default fallback
Skip that default, and one odd LLM reply tanks your whole graph.
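You can go one step further than lowercasing. As a sketch, one option is to scan the raw reply for a known label before falling back. The `normalize_intent` helper below is a hypothetical utility, not part of LangGraph or LangChain:

```python
def normalize_intent(raw: str) -> str:
    """Map a messy LLM reply onto a known label, else 'unknown'."""
    cleaned = raw.strip().lower()
    # Substring match tolerates replies like "It's a Question."
    for label in ("question", "complaint", "feedback"):
        if label in cleaned:
            return label
    return "unknown"

print(normalize_intent("  It's a Question. "))  # question
print(normalize_intent("no idea"))              # unknown
```

Pair this with a routing map that has an explicit lane for `"unknown"`, and no model reply can crash the graph.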
How Do You Route on State Values Alone?
An LLM isn’t always needed. You can route on counters, flags, scores, or any data sitting in state.
Take this retry loop. The process node does some work and flips a success flag. The router checks that flag and either loops back, moves on, or gives up once tries run out.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class RetryState(TypedDict):
    input_data: str
    result: str
    retry_count: int
    success: bool

def process(state: RetryState) -> dict:
    if state["retry_count"] < 2:
        return {"success": False, "retry_count": state["retry_count"] + 1}
    return {"success": True, "result": "Processed successfully"}

def route_retry(state: RetryState) -> str:
    if state["success"]:
        return "done"
    elif state["retry_count"] >= 3:
        return "give_up"
    else:
        return "process"  # loop back
```
Notice the router hands back "process" — the node we started from. That creates a loop. LangGraph can handle loops, but set a cap so it can’t spin forever. Post 11 digs into cycles and loop limits.
```python
def done(state: RetryState) -> dict:
    return {"result": "Success!"}

def give_up(state: RetryState) -> dict:
    return {"result": "Failed after max retries."}

graph = StateGraph(RetryState)
graph.add_node("process", process)
graph.add_node("done", done)
graph.add_node("give_up", give_up)

graph.add_edge(START, "process")
graph.add_conditional_edges("process", route_retry)
graph.add_edge("done", END)
graph.add_edge("give_up", END)

app = graph.compile()
result = app.invoke({
    "input_data": "test", "result": "", "retry_count": 0, "success": False,
})
print(result["result"])
```

```
Success!
```
The graph failed on its first two attempts, succeeded on the third, and the router then sent it to done.
Key Insight: > Keep routers pure. They read state and hand back a string. No state changes, no API calls, no side effects. Nodes do the work. Routers just pick what fires next.
How Do You Send State to Many Nodes at Once?
Need to fan out? Hand back a list of node names from the router. LangGraph fires all of them side by side in the next step.
Here’s a doc review flow. After intake, a grammar pass and a fact pass both run at the same time. Their outputs merge, and the doc gets shipped.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict
from operator import add
from typing import Annotated

class ReviewState(TypedDict):
    document: str
    grammar_ok: bool
    facts_ok: bool
    reviews: Annotated[list[str], add]

def intake(state: ReviewState) -> dict:
    return {"document": state["document"]}

def grammar_check(state: ReviewState) -> dict:
    return {"reviews": ["Grammar: looks good"], "grammar_ok": True}

def fact_check(state: ReviewState) -> dict:
    return {"reviews": ["Facts: verified"], "facts_ok": True}

def publish(state: ReviewState) -> dict:
    return {"reviews": ["Published!"]}
```
The router hands back a list. Both checks kick off in parallel.
```python
def route_to_reviewers(state: ReviewState) -> list[str]:
    return ["grammar_check", "fact_check"]

graph = StateGraph(ReviewState)
graph.add_node("intake", intake)
graph.add_node("grammar_check", grammar_check)
graph.add_node("fact_check", fact_check)
graph.add_node("publish", publish)

graph.add_edge(START, "intake")
graph.add_conditional_edges("intake", route_to_reviewers)
graph.add_edge("grammar_check", "publish")
graph.add_edge("fact_check", "publish")
graph.add_edge("publish", END)

app = graph.compile()
result = app.invoke({
    "document": "LangGraph is great.",
    "grammar_ok": False, "facts_ok": False, "reviews": [],
})
print(result["reviews"])
```

```
['Grammar: looks good', 'Facts: verified', 'Published!']
```
Both checks ran at the same time. Their results joined in the reviews list thanks to the add reducer. Drop the Annotated[list[str], add], and one node’s output would erase the other’s.
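Conceptually, the reducer is just `operator.add` applied to the accumulated list and each node's partial update, so parallel writes append rather than overwrite. A quick sketch of that merge, outside any graph:

```python
from operator import add

# What Annotated[list[str], add] asks LangGraph to do:
# merge each node's partial update into the channel via the reducer.
accumulated = ["Grammar: looks good"]
update = ["Facts: verified"]
merged = add(accumulated, update)  # plain list concatenation
print(merged)

# Without a reducer, the default is last-write-wins:
overwritten = update  # the second node's write replaces the first
print(overwritten)
```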
Why Do You Need Fallback Edges?
Every router needs a safe landing. What if the LLM sends back something you didn’t plan for? A missing default is the number one cause of runtime blowups.
```python
# BAD: no fallback — crashes on unexpected values
def route_fragile(state: State) -> str:
    if state["intent"] == "question":
        return "answer_question"
    elif state["intent"] == "complaint":
        return "handle_complaint"
    # intent is "feedback" or empty? Returns None -> crash
```

```python
# GOOD: always has a default
def route_safe(state: State) -> str:
    if state["intent"] == "question":
        return "answer_question"
    elif state["intent"] == "complaint":
        return "handle_complaint"
    else:
        return "fallback_handler"
```
I make it a rule: every router ends with else. Add a new LLM tag but skip the router update, and you’ve got a live bug hiding in plain sight.
How Do You Build a Two-Stage Decision Flow?
Let’s put it all together. We’ll make a support agent that routes twice: first by topic, then by mood. Angry users get bumped up — no matter what they asked about.
The state holds the query, its topic tag, a mood tag, the reply text, and a flag for whether it got bumped.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class ServiceState(TypedDict):
    query: str
    category: str
    sentiment: str
    response: str
    escalated: bool

def classify_query(state: ServiceState) -> dict:
    query = state["query"].lower()
    if "refund" in query or "money" in query:
        category = "billing"
    elif "broken" in query or "error" in query or "bug" in query:
        category = "technical"
    else:
        category = "general"
    sentiment = "negative" if any(
        w in query for w in ["angry", "terrible", "worst"]
    ) else "neutral"
    return {"category": category, "sentiment": sentiment}
```
One handler per topic, plus a bump-up node for angry users.
```python
def handle_billing(state: ServiceState) -> dict:
    return {"response": "Billing concern flagged. Expect resolution within 24 hours."}

def handle_technical(state: ServiceState) -> dict:
    return {"response": "Let me troubleshoot that. Have you tried restarting the application?"}

def handle_general(state: ServiceState) -> dict:
    return {"response": "Thanks for reaching out! Connecting you with the right team."}

def escalate_to_human(state: ServiceState) -> dict:
    return {"response": "Escalating to a human agent due to negative sentiment.", "escalated": True}

def finalize(state: ServiceState) -> dict:
    return {}  # no-op: response already set by handler
```
Two routers. The first picks by topic. The second (wired to each handler) checks mood.
```python
def route_by_category(state: ServiceState) -> str:
    category = state["category"]
    if category == "billing":
        return "handle_billing"
    elif category == "technical":
        return "handle_technical"
    else:
        return "handle_general"

def route_by_sentiment(state: ServiceState) -> str:
    if state["sentiment"] == "negative":
        return "escalate_to_human"
    return "finalize"
```
Wire both layers. Topic first, mood second after each handler.
```python
graph = StateGraph(ServiceState)
graph.add_node("classify_query", classify_query)
graph.add_node("handle_billing", handle_billing)
graph.add_node("handle_technical", handle_technical)
graph.add_node("handle_general", handle_general)
graph.add_node("escalate_to_human", escalate_to_human)
graph.add_node("finalize", finalize)

graph.add_edge(START, "classify_query")
graph.add_conditional_edges("classify_query", route_by_category)
graph.add_conditional_edges("handle_billing", route_by_sentiment)
graph.add_conditional_edges("handle_technical", route_by_sentiment)
graph.add_conditional_edges("handle_general", route_by_sentiment)
graph.add_edge("escalate_to_human", END)
graph.add_edge("finalize", END)

app = graph.compile()
```
Try three test cases.
```python
queries = [
    "I need a refund for my subscription",
    "The app is broken and I'm angry about it",
    "What are your business hours?",
]

for query in queries:
    result = app.invoke({
        "query": query, "category": "", "sentiment": "",
        "response": "", "escalated": False,
    })
    print(f"Query: {query}")
    print(f"  Category: {result['category']} | Sentiment: {result['sentiment']}")
    print(f"  Response: {result['response']}")
    print(f"  Escalated: {result['escalated']}\n")
```

```
Query: I need a refund for my subscription
  Category: billing | Sentiment: neutral
  Response: Billing concern flagged. Expect resolution within 24 hours.
  Escalated: False

Query: The app is broken and I'm angry about it
  Category: technical | Sentiment: negative
  Response: Escalating to a human agent due to negative sentiment.
  Escalated: True

Query: What are your business hours?
  Category: general | Sentiment: neutral
  Response: Thanks for reaching out! Connecting you with the right team.
  Escalated: False
```
The angry tech query got bumped up. Billing was auto-handled. The general ask went to the right crew. Two routing layers, one graph — working in concert.
How Do You Track Down Routing Bugs?
The graph took the wrong fork. How do you find out why? I lean on three go-to tricks.
Trick 1: Log inside the router. Drop a print call so you can see the state your router gets.
```python
def route_debug(state: ServiceState) -> str:
    print(f"DEBUG: category={state['category']}, sentiment={state['sentiment']}")
    if state["sentiment"] == "negative":
        return "escalate_to_human"
    return "finalize"
```
Trick 2: Print the graph shape. Call get_graph() to show all nodes and edges.
```python
print(app.get_graph().draw_ascii())
```
This draws every node, link, and conditional edge. Nodes with no incoming edges stand out immediately.
Trick 3: Watch the run live. Swap invoke() for stream() and see each node fire in order.
```python
for event in app.stream({
    "query": "This is terrible, I want a refund",
    "category": "", "sentiment": "", "response": "", "escalated": False,
}):
    print(event)
```
Each event tells you which node ran and what state it touched. When the wrong node fires, the router’s return value is the culprit.
Tip: > For graphs with lots of forks, try get_graph().draw_mermaid() to get a diagram you can paste into any Markdown viewer.
What Mistakes Trip People Up Most?
Mistake 1: Router Tweaks State
```python
# WRONG
def bad_router(state: TicketState) -> str:
    state["category"] = "billing"  # side effect!
    return "auto_reply"
```

```python
# RIGHT
def good_router(state: TicketState) -> str:
    if state["category"] == "billing":
        return "auto_reply"
    return "human_review"
```
Routers see a read-only copy of state. Edits inside a router don’t get saved by LangGraph. Put all state work in nodes.
Mistake 2: Pointing to a Node That Doesn’t Exist
```python
def route_missing(state: TicketState) -> str:
    return "support"  # "support" was never registered with add_node()
```
LangGraph blows up if the name doesn’t match a known node. Check for typos and verify every return value maps to a node or END.
Mistake 3: Leaving Out the Default Path
The top runtime crash. Each if/elif chain must end with else. Each dict.get() must have a default.
Mistake 4: Counting on Exact LLM Spelling
```python
# FRAGILE
def route_fragile(state: State) -> str:
    if state["intent"] == "question":
        return "answer"
    elif state["intent"] == "complaint":
        return "escalate"
    # LLM returned "Question" (capitalized) — no match!
```
Always run .strip().lower() on model output before routing. Better yet, use JSON mode to lock down the shape.
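As a sketch of that "lock down the shape" idea: ask the model to reply with a JSON object like `{"intent": "..."}` and parse it defensively. The `parse_intent` helper below is illustrative, not a LangChain API; it assumes the three intent labels used earlier:

```python
import json

VALID_INTENTS = {"question", "complaint", "feedback"}

def parse_intent(raw: str) -> str:
    """Parse an LLM reply expected to look like {"intent": "question"}."""
    try:
        intent = json.loads(raw).get("intent", "").strip().lower()
    except (json.JSONDecodeError, AttributeError):
        return "feedback"  # fallback lane on malformed output
    return intent if intent in VALID_INTENTS else "feedback"

print(parse_intent('{"intent": "Complaint"}'))  # complaint
print(parse_intent("not json at all"))          # feedback
```

Even with JSON mode enabled, keep the fallback: a truncated or off-schema reply then routes somewhere sane instead of crashing the graph.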
Can You Route Right from START?
Yes! You can attach a conditional edge to START. This lets the graph choose which node kicks off based on the first input.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class InputState(TypedDict):
    input_type: str
    data: str
    result: str

def process_text(state: InputState) -> dict:
    return {"result": f"Processed text: {state['data'][:20]}..."}

def process_number(state: InputState) -> dict:
    return {"result": f"Processed number: {state['data']}"}

def route_start(state: InputState) -> str:
    if state["input_type"] == "text":
        return "process_text"
    return "process_number"

graph = StateGraph(InputState)
graph.add_node("process_text", process_text)
graph.add_node("process_number", process_number)
graph.add_conditional_edges(START, route_start)
graph.add_edge("process_text", END)
graph.add_edge("process_number", END)

app = graph.compile()
result = app.invoke({"input_type": "text", "data": "Hello world from LangGraph", "result": ""})
print(result["result"])
```

```
Processed text: Hello world from Lan...
```
Handy when each input type calls for a totally different flow.
When Should You Skip Conditional Edges?
They’re not always the best pick. Here’s when to reach for something else.
Plain if-else in one node. If the “routing” just flips a value and the same node covers all cases, keep it in one node. Splitting into many nodes plus edges adds mess for no gain.
Let the node decide where to go. LangGraph’s Command object lets a node send back both a state update and a “go here next” hint in one shot. This skips the clunky dance of writing to state just so the router can read it.
Fan-out over a list. If you want to run the same node N times with different inputs (say, one per item in a list), use Send objects. They’re tailor-made for map-reduce work.
Quick Check: Predict the Output
What does this router hand back when state["score"] is 75?
```python
def route_by_score(state: dict) -> str:
    score = state["score"]
    if score >= 90:
        return "excellent"
    elif score >= 70:
        return "good"
    elif score >= 50:
        return "average"
    else:
        return "needs_improvement"
```
Quick Check: Spot the Bug
What’s off with this setup?
```python
graph.add_node("classify", classify)
graph.add_node("handle_a", handle_a)
graph.add_node("handle_b", handle_b)

def router(state):
    if state["type"] == "a":
        return "handle_a"
    if state["type"] == "b":
        return "handle_b"

graph.add_conditional_edges("classify", router)
```
Error Fixes
ValueError: Expected node name or END, got None
Your router handed back None. No branch matched and there’s no else. Add a default return.
```python
# Fix: add else clause
def route_fixed(state):
    if state["type"] == "a":
        return "handle_a"
    else:
        return "fallback"  # catches everything
```
ValueError: Node 'xyz' not found in graph
Your router named a node that was never added. Look for typos. Make sure each value your router can hand back has a matching node.
GraphRecursionError: Recursion limit reached
A conditional edge made a cycle, and it didn’t break in time. Fix the exit check in your router, or raise the cap at invoke time: app.invoke(state, config={"recursion_limit": 50}).
Practice Exercise
Build a content filter graph with conditional routing.
What you need:
– State with fields: text (str), classification (str), response (str)
– A classify_content node that flags banned words (“spam”, “scam”) as “blocked”, warning words (“free”, “click”) as “warning”, and the rest as “safe”
– Three handler nodes (handle_safe, handle_warning, handle_blocked) with fitting replies
– A router that reads the tag and picks the right lane
– Three test inputs, one per lane
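If you want a nudge to get started, here is one possible shape for the classifier node. This is a sketch under the exercise's assumptions (keyword lists, field names), not the only valid answer; the routing and wiring are left to you:

```python
def classify_content(state: dict) -> dict:
    """Tag text as blocked, warning, or safe using keyword lists."""
    text = state["text"].lower()
    if any(w in text for w in ("spam", "scam")):
        return {"classification": "blocked"}  # banned words win
    if any(w in text for w in ("free", "click")):
        return {"classification": "warning"}
    return {"classification": "safe"}

print(classify_content({"text": "Click here for a FREE prize"}))  # warning
print(classify_content({"text": "This is a scam"}))               # blocked
print(classify_content({"text": "Hello, quick question"}))        # safe
```

Note the ordering: banned words are checked before warning words, so text containing both still lands in the blocked lane.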
Summary
Conditional edges turn LangGraph from a fixed pipe into a choice-making engine. Here’s what you picked up:
- Static edges lock nodes in a set order. Conditional edges swap them for runtime picks.
- add_conditional_edges() takes a source node, a routing function, and an optional path map.
- Routing functions are pure logic — they read state and hand back a string. Never change state inside one.
- LLM-based routing works the same way. Always clean model text and add a fallback.
- Parallel branching sends state to many nodes by having the router hand back a list.
- Fallback paths are a must. Every router needs an else.
- Debugging uses draw_ascii(), print calls in routers, and stream() mode.
Next up: tool calling in LangGraph — giving your agent the power to search the web, query data stores, and take real actions.
FAQ
Can a routing function hand back END to stop the graph?
Yes. Return the END value from langgraph.graph and the run halts right there. This is how you build “exit early” logic.
```python
from langgraph.graph import END

def route_or_stop(state: dict) -> str:
    if not state["input"]:
        return END
    return "process"
```
Can I wire many conditional edges out of one node?
No. A node gets at most one outgoing conditional edge. If you need to check many fields, fold them into a single routing function with nested if/elif checks.
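For instance, a single router can check priority first and category second, all in one function. A sketch with illustrative node names (they'd need to match your registered nodes):

```python
def route_combined(state: dict) -> str:
    """One router covering two fields: priority outranks category."""
    if state.get("priority") == "high":
        return "escalate"
    if state.get("category") == "billing":
        return "auto_reply"
    return "human_review"  # default lane

print(route_combined({"priority": "high", "category": "billing"}))  # escalate
print(route_combined({"priority": "low", "category": "billing"}))   # auto_reply
```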
How is add_conditional_edges() different from Command?
add_conditional_edges() locks in routing when you build the graph. Command (a newer feature) lets nodes choose their next stop at runtime by sending back a Command with a goto field. For everyday routing, conditional edges are the simpler choice. We’ll explore Command in a later post.
How do I test routing functions on their own?
They’re normal Python functions. Feed them a dict — no graph build needed.
```python
def test_route_by_category():
    assert route_by_category({"category": "billing", "sentiment": ""}) == "handle_billing"
    assert route_by_category({"category": "technical", "sentiment": ""}) == "handle_technical"
    assert route_by_category({"category": "other", "sentiment": ""}) == "handle_general"
```