LangGraph Graph Concepts — Nodes, Edges, and State
Your first LangGraph script ran. Great. But can you change it? Try adding a new branch, reshaping the state, or dropping in an extra step — and suddenly you’re lost. The code worked, but you never built a mental model of why it worked.
That changes here. We’ll take apart the three pieces that power every LangGraph program, then put them back together in graphs that get more involved as we go. By the end, you’ll modify graphs without guessing.
What You Need
- Python version: 3.10+
- Required libraries: langgraph (0.3+)
- Install: pip install langgraph
- Previous article: LangGraph Installation, Setup, and Your First Graph — you should be able to create and run a basic graph on your own
- Time to complete: ~25 minutes
Nodes: Where the Work Happens
Think about an assembly line. One station stamps a piece of metal. The next one drills a hole. A third one paints the surface. Each station does exactly one job, then passes the piece to the next.
A LangGraph node is one of those stations. It’s a plain Python function — nothing special about it. LangGraph hands the function the current state, the function does its thing, and it hands back whatever changed.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class SimpleState(TypedDict):
message: str
def greet(state: SimpleState) -> dict:
return {"message": "Hello from the node!"}
graph = StateGraph(SimpleState)
graph.add_node("greet", greet)
graph.add_edge(START, "greet")
graph.add_edge("greet", END)
app = graph.compile()
result = app.invoke({"message": ""})
print(result)
{'message': 'Hello from the node!'}
We gave the graph an empty message. The greet function replaced it with “Hello from the node!” and LangGraph took care of everything else — calling the function, feeding it the state, and collecting the result.
What should you take away? A node is a function. It receives state, returns a dict of changes. You register it with add_node("name", function). No decorators, no base classes — just a function.
Key Insight: If you can write a Python function, you can write a LangGraph node. There’s no magic. State goes in, updates come out.
Nodes Can Do Anything
Call an API. Query a database. Run a math formula. Send an email. The only contract is: accept the state dict, return a dict of fields you want to update.
Here’s a second node chained after the first one:
def transform_message(state: SimpleState) -> dict:
original = state["message"]
return {"message": original.upper() + " (transformed)"}
graph2 = StateGraph(SimpleState)
graph2.add_node("greet", greet)
graph2.add_node("transform", transform_message)
graph2.add_edge(START, "greet")
graph2.add_edge("greet", "transform")
graph2.add_edge("transform", END)
app2 = graph2.compile()
result2 = app2.invoke({"message": ""})
print(result2)
{'message': 'HELLO FROM THE NODE! (transformed)'}
greet wrote the message. transform read that message, uppercased it, and tacked on a suffix. One node’s output became the next node’s input — connected through the shared state.
Edges: The Wiring Between Nodes
Nodes on their own are just loose functions. Nothing calls them, nothing orders them. Edges are what turn a bag of functions into a pipeline.
An edge is a one-way link: “when this node finishes, run that one next.” The line graph.add_edge("greet", "transform") told LangGraph to run transform right after greet.
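Conceptually, a chain of fixed edges is just a lookup table that says which function runs next. Here is a plain-Python sketch of that idea — an illustration only, not LangGraph's actual internals:

```python
# Sketch: fixed edges as a "what runs next" lookup table.
# The node functions mirror greet and transform_message from above.
nodes = {
    "greet": lambda s: {"message": "Hello from the node!"},
    "transform": lambda s: {"message": s["message"].upper() + " (transformed)"},
}
edges = {"START": "greet", "greet": "transform", "transform": "END"}

state = {"message": ""}
current = edges["START"]
while current != "END":
    state = {**state, **nodes[current](state)}  # run node, merge its update
    current = edges[current]                    # follow the edge to the next node
print(state)  # {'message': 'HELLO FROM THE NODE! (transformed)'}
```

The real engine does much more (validation, streaming, checkpointing), but the mental model — follow edges, run functions, merge updates — is the same.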
The Two Bookends: START and END
Every graph has a beginning and an ending. LangGraph marks these with two built-in constants you import from langgraph.graph.
START is the entry point. You connect it to your first node. When you call app.invoke(), LangGraph traces the edge from START to figure out which node to kick off.
END is the exit. Once a node connects to END, the graph stops and hands back the final state.
from langgraph.graph import START, END
# The minimum graph structure:
# START -> your_node -> END
Warning: If you forget the START edge, compile() fails immediately — LangGraph can't find an entrypoint and raises an error. If you forget the END edge, LangGraph also refuses: it finds a dead-end node and flags it at compile time. Both mistakes surface before anything runs.
Stringing Together a Multi-Step Pipeline
More nodes? More edges. Each edge adds one link in the chain. Let’s build a three-step pipeline where each step transforms a text field and records its name in a log:
class PipelineState(TypedDict):
text: str
steps_completed: list[str]
def step_one(state: PipelineState) -> dict:
return {
"text": "raw data",
"steps_completed": ["step_one"]
}
def step_two(state: PipelineState) -> dict:
updated = state["text"] + " -> cleaned"
steps = state["steps_completed"] + ["step_two"]
return {"text": updated, "steps_completed": steps}
def step_three(state: PipelineState) -> dict:
updated = state["text"] + " -> analyzed"
steps = state["steps_completed"] + ["step_three"]
return {"text": updated, "steps_completed": steps}
Now wire and run:
pipeline = StateGraph(PipelineState)
pipeline.add_node("step_one", step_one)
pipeline.add_node("step_two", step_two)
pipeline.add_node("step_three", step_three)
pipeline.add_edge(START, "step_one")
pipeline.add_edge("step_one", "step_two")
pipeline.add_edge("step_two", "step_three")
pipeline.add_edge("step_three", END)
app3 = pipeline.compile()
result3 = app3.invoke({"text": "", "steps_completed": []})
print(f"Final text: {result3['text']}")
print(f"Steps: {result3['steps_completed']}")
Final text: raw data -> cleaned -> analyzed
Steps: ['step_one', 'step_two', 'step_three']
Data entered at step one, picked up changes at each stop, and came out the other end fully transformed. That’s the edge system doing its job.
State: The Thread That Ties Everything Together
A question that hits every newcomer: “How does the second node know what the first one did?” The answer is the state object.
State is a shared container — a typed Python dict — that every node can read and write. When node A sets a value, node B sees it. When node B changes a value, node C gets the updated version. State is what makes nodes aware of each other.
Declaring State with TypedDict
You spell out the shape of your state using Python’s TypedDict:
from typing import TypedDict
class ChatState(TypedDict):
user_input: str
response: str
turn_count: int
Three fields, each with a type. Every node in the graph has access to all three. Why TypedDict instead of a regular dict? Your editor can auto-complete field names and flag typos, and LangGraph uses the schema's keys to validate the update dicts your nodes return.
Tip: Keep your state lean at the start. Add fields only when you need them. A 20-field state designed up front usually means you’re guessing at needs you don’t have yet.
Watching State Move Step by Step
Most tutorials gloss over this part. Let’s slow down and trace exactly what happens inside a running graph. Once you see this, everything about LangGraph clicks.
We’ll build a graph that takes a number through three transforms: add 5, double, subtract 3. Each node logs its work so we can follow the trail.
class CounterState(TypedDict):
value: int
history: list[str]
def add_five(state: CounterState) -> dict:
new_value = state["value"] + 5
return {
"value": new_value,
"history": state["history"] + [f"add_five: {state['value']} -> {new_value}"]
}
def double_it(state: CounterState) -> dict:
new_value = state["value"] * 2
return {
"value": new_value,
"history": state["history"] + [f"double_it: {state['value']} -> {new_value}"]
}
def subtract_three(state: CounterState) -> dict:
new_value = state["value"] - 3
return {
"value": new_value,
"history": state["history"] + [f"subtract_three: {state['value']} -> {new_value}"]
}
Feed it the number 10 and watch:
counter_graph = StateGraph(CounterState)
counter_graph.add_node("add_five", add_five)
counter_graph.add_node("double_it", double_it)
counter_graph.add_node("subtract_three", subtract_three)
counter_graph.add_edge(START, "add_five")
counter_graph.add_edge("add_five", "double_it")
counter_graph.add_edge("double_it", "subtract_three")
counter_graph.add_edge("subtract_three", END)
counter_app = counter_graph.compile()
result4 = counter_app.invoke({"value": 10, "history": []})
print(f"Final value: {result4['value']}")
print("\nExecution trace:")
for entry in result4["history"]:
print(f" {entry}")
Final value: 27
Execution trace:
add_five: 10 -> 15
double_it: 15 -> 30
subtract_three: 30 -> 27
Here’s the full breakdown:
| Step | Node | Sees | Does | Produces |
|---|---|---|---|---|
| 1 | add_five | 10 | 10 + 5 | 15 |
| 2 | double_it | 15 | 15 * 2 | 30 |
| 3 | subtract_three | 30 | 30 - 3 | 27 |
The value hopped from node to node, changing at each stop. That’s state flow in action.
Key Insight: LangGraph doesn’t copy state between nodes. It merges updates into the same object. Return {"value": 15} and only the value field changes. Everything else stays put.
You Only Need to Return What Changed
This trips people up early on. A node doesn’t have to send back every field — just the ones it touched. LangGraph leaves the rest alone.
class ProfileState(TypedDict):
name: str
email: str
verified: bool
def set_name(state: ProfileState) -> dict:
return {"name": "Alice"} # Only updates 'name'
def set_email(state: ProfileState) -> dict:
return {"email": "alice@example.com"} # Only updates 'email'
def verify(state: ProfileState) -> dict:
return {"verified": True} # Only updates 'verified'
profile_graph = StateGraph(ProfileState)
profile_graph.add_node("set_name", set_name)
profile_graph.add_node("set_email", set_email)
profile_graph.add_node("verify", verify)
profile_graph.add_edge(START, "set_name")
profile_graph.add_edge("set_name", "set_email")
profile_graph.add_edge("set_email", "verify")
profile_graph.add_edge("verify", END)
profile_app = profile_graph.compile()
result5 = profile_app.invoke({"name": "", "email": "", "verified": False})
print(result5)
{'name': 'Alice', 'email': 'alice@example.com', 'verified': True}
Three nodes, each touching one field. LangGraph stitched all three updates into a single final state. Clean and simple.
Conditional Edges: Making Your Graph Think
So far, every graph has followed a fixed track. Node A, then B, then C — no surprises. But real workflows need to make choices. Should the order be approved or rejected? Should the agent retry or give up?
That’s what conditional edges are for. Instead of a hard-wired “go to node B,” you write a small function that inspects the state and decides where to go. The graph branches at runtime.
A Content Router in Action
Let’s build a system that classifies incoming text and routes it to the right handler. Urgent messages get escalated. Questions go to Q&A. Everything else gets filed.
First, the classifier:
class ContentState(TypedDict):
text: str
category: str
result: str
def classify(state: ContentState) -> dict:
text = state["text"].lower()
if "urgent" in text or "emergency" in text:
return {"category": "urgent"}
elif "question" in text or "?" in text:
return {"category": "question"}
else:
return {"category": "general"}
Next, the routing function and the three handlers:
def route_content(state: ContentState) -> str:
if state["category"] == "urgent":
return "handle_urgent"
elif state["category"] == "question":
return "handle_question"
else:
return "handle_general"
def handle_urgent(state: ContentState) -> dict:
return {"result": f"URGENT: Escalated '{state['text']}'"}
def handle_question(state: ContentState) -> dict:
return {"result": f"Q&A: Processing question '{state['text']}'"}
def handle_general(state: ContentState) -> dict:
return {"result": f"GENERAL: Filed '{state['text']}'"}
Now the key part — add_conditional_edges. It replaces a normal edge with a branching point. You give it the source node, the routing function, and a map that translates return values to node names:
content_graph = StateGraph(ContentState)
content_graph.add_node("classify", classify)
content_graph.add_node("handle_urgent", handle_urgent)
content_graph.add_node("handle_question", handle_question)
content_graph.add_node("handle_general", handle_general)
content_graph.add_edge(START, "classify")
content_graph.add_conditional_edges(
"classify",
route_content,
{
"handle_urgent": "handle_urgent",
"handle_question": "handle_question",
"handle_general": "handle_general",
}
)
content_graph.add_edge("handle_urgent", END)
content_graph.add_edge("handle_question", END)
content_graph.add_edge("handle_general", END)
content_app = content_graph.compile()
Run three different inputs through the same graph:
test_inputs = [
{"text": "Emergency! Server is down!", "category": "", "result": ""},
{"text": "What is LangGraph?", "category": "", "result": ""},
{"text": "Weekly status update", "category": "", "result": ""},
]
for inp in test_inputs:
output = content_app.invoke(inp)
print(f"Input: '{inp['text']}'")
print(f" Category: {output['category']}, Result: {output['result']}")
Input: 'Emergency! Server is down!'
Category: urgent, Result: URGENT: Escalated 'Emergency! Server is down!'
Input: 'What is LangGraph?'
Category: question, Result: Q&A: Processing question 'What is LangGraph?'
Input: 'Weekly status update'
Category: general, Result: GENERAL: Filed 'Weekly status update'
One graph, three outcomes. The routing function looked at state["category"] and sent each input down a different path.
Note: The routing function must return a string that matches either a key in the mapping dict or a registered node name. Return anything else and LangGraph raises a ValueError.
What Goes Into add_conditional_edges
Three pieces:
content_graph.add_conditional_edges(
"classify", # 1. Which node to branch FROM
route_content, # 2. Function that picks WHERE to go
{ # 3. Map from return values to node names
"handle_urgent": "handle_urgent",
"handle_question": "handle_question",
"handle_general": "handle_general",
}
)
The map is optional. If your routing function already returns real node names, leave it out:
# Works the same — route_content returns node names directly
content_graph.add_conditional_edges("classify", route_content)
I still prefer the map. It documents every possible path in one spot, which helps when you come back to the code a month later.
Visualizing the Graph
Never trust that your wiring is correct — verify it. LangGraph can render any compiled graph as a Mermaid diagram:
mermaid_code = content_app.get_graph().draw_mermaid()
print(mermaid_code)
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
__start__([<p>__start__</p>])
classify(classify)
handle_urgent(handle_urgent)
handle_question(handle_question)
handle_general(handle_general)
__end__([<p>__end__</p>])
__start__ --> classify;
classify -.-> handle_urgent;
classify -.-> handle_question;
classify -.-> handle_general;
handle_urgent --> __end__;
handle_question --> __end__;
handle_general --> __end__;
Solid arrows are fixed edges. Dashed arrows are conditional ones. Paste the output into mermaid.live, a GitHub markdown block, or VS Code to see the diagram.
Tip: Draw your graph after every change. Catching a broken wire in a diagram is much faster than chasing a strange bug at runtime.
If You’ve Drawn a Flowchart, You Already Know This
LangGraph borrows directly from flowchart thinking:
| Flowchart | LangGraph | Code |
|---|---|---|
| Rectangle (process) | Node | add_node("name", func) |
| Arrow | Edge | add_edge("a", "b") |
| Diamond (decision) | Conditional edge | add_conditional_edges(...) |
| Start oval | START | add_edge(START, "first") |
| End oval | END | add_edge("last", END) |
| Data flowing on arrows | State | class MyState(TypedDict) |
The only difference? A flowchart hangs on a wall. A LangGraph graph runs as code. Every rectangle becomes a function, every arrow becomes an edge, and every diamond becomes a conditional branch that executes at runtime.
Full Example: An Order Processing Pipeline
Time to combine everything — nodes, edges, conditional edges, state — into a pipeline you could adapt for real work. This one validates an order, checks inventory, calculates pricing with a bulk discount, and then approves or rejects based on the results.
The state carries everything the pipeline needs to know about an order:
class OrderState(TypedDict):
order_id: str
item: str
quantity: int
price_per_unit: float
total: float
in_stock: bool
is_valid: bool
status: str
log: list[str]
Validation catches bad data early:
def validate_order(state: OrderState) -> dict:
errors = []
if state["quantity"] <= 0:
errors.append("Quantity must be positive")
if not state["item"]:
errors.append("Item name required")
is_valid = len(errors) == 0
msg = "Validated OK" if is_valid else f"Validation failed: {errors}"
return {
"is_valid": is_valid,
"log": state["log"] + [f"validate: {msg}"]
}
Inventory lookup uses a dict here, but in production this would be a database call:
INVENTORY = {"widget": 100, "gadget": 50, "doohickey": 0}
def check_inventory(state: OrderState) -> dict:
stock = INVENTORY.get(state["item"].lower(), 0)
in_stock = stock >= state["quantity"]
return {
"in_stock": in_stock,
"log": state["log"] + [
f"inventory: {state['item']} has {stock} units, need {state['quantity']}"
]
}
Pricing applies a 10% discount when you order 10 or more:
def calculate_price(state: OrderState) -> dict:
total = state["quantity"] * state["price_per_unit"]
if state["quantity"] >= 10:
total *= 0.9 # 10% bulk discount
return {
"total": total,
"log": state["log"] + [f"pricing: total=${total:.2f}"]
}
Approval and rejection stamp the final status:
def approve_order(state: OrderState) -> dict:
return {
"status": "approved",
"log": state["log"] + [f"APPROVED: Order {state['order_id']}"]
}
def reject_order(state: OrderState) -> dict:
return {
"status": "rejected",
"log": state["log"] + [f"REJECTED: Order {state['order_id']}"]
}
The router inspects the results and makes the call:
def should_approve(state: OrderState) -> str:
if state["is_valid"] and state["in_stock"]:
return "approve"
return "reject"
Wire it up — three nodes in sequence, then a conditional fork:
order_graph = StateGraph(OrderState)
order_graph.add_node("validate", validate_order)
order_graph.add_node("check_inventory", check_inventory)
order_graph.add_node("calculate_price", calculate_price)
order_graph.add_node("approve", approve_order)
order_graph.add_node("reject", reject_order)
order_graph.add_edge(START, "validate")
order_graph.add_edge("validate", "check_inventory")
order_graph.add_edge("check_inventory", "calculate_price")
order_graph.add_conditional_edges(
"calculate_price",
should_approve,
{"approve": "approve", "reject": "reject"}
)
order_graph.add_edge("approve", END)
order_graph.add_edge("reject", END)
order_app = order_graph.compile()
A good order — 5 widgets at $29.99:
good_order = {
"order_id": "ORD-001", "item": "widget",
"quantity": 5, "price_per_unit": 29.99,
"total": 0.0, "in_stock": False,
"is_valid": False, "status": "", "log": []
}
result_good = order_app.invoke(good_order)
print(f"Status: {result_good['status']}")
print(f"Total: ${result_good['total']:.2f}")
for entry in result_good["log"]:
print(f" {entry}")
Status: approved
Total: $149.95
validate: Validated OK
inventory: widget has 100 units, need 5
pricing: total=$149.95
APPROVED: Order ORD-001
An out-of-stock item:
bad_order = {
"order_id": "ORD-002", "item": "doohickey",
"quantity": 5, "price_per_unit": 9.99,
"total": 0.0, "in_stock": False,
"is_valid": False, "status": "", "log": []
}
result_bad = order_app.invoke(bad_order)
print(f"Status: {result_bad['status']}")
for entry in result_bad["log"]:
print(f" {entry}")
Status: rejected
validate: Validated OK
inventory: doohickey has 0 units, need 5
pricing: total=$49.95
REJECTED: Order ORD-002
Same graph, opposite outcomes. The router checked the state and picked the right branch each time.
Three Bugs That Waste the Most Time
Bug 1: Orphan Nodes
You added a node but never connected it to anything.
# WRONG — 'orphan' is wired to nothing
graph = StateGraph(SimpleState)
graph.add_node("greet", greet)
graph.add_node("orphan", transform_message)  # No edges!
graph.add_edge(START, "greet")
graph.add_edge("greet", END)
Recent LangGraph versions catch this at compile time — an unconnected node is a dead end, and compile() raises an error. If your version doesn't, the node silently never runs. Either way, call get_graph() after every build to confirm every node has incoming and outgoing edges.
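If you want a belt-and-suspenders check of your own, you can scan the wiring before compiling. This is a plain-Python sketch over node and edge lists you maintain yourself — find_orphans is a hypothetical helper, not a LangGraph API:

```python
def find_orphans(node_names, edges):
    # A node is an orphan if no edge points into or out of it.
    connected = {name for edge in edges for name in edge}
    return [n for n in node_names if n not in connected]

nodes = ["greet", "orphan"]
edges = [("START", "greet"), ("greet", "END")]
print(find_orphans(nodes, edges))  # ['orphan']
```

Run it against the same names you pass to add_node and add_edge, and an empty result means every node is at least touched by the wiring.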
Bug 2: Typos in Return Keys
Your state has count, but you returned counter. LangGraph raises InvalidUpdateError.
class MyState(TypedDict):
count: int
def bad_node(state: MyState) -> dict:
return {"counter": 10} # Typo! 'counter' not in MyState
Always double-check that return dict keys line up with your TypedDict fields.
Bug 3: Routing to a Phantom Node
The router returns "process_data" but the node is named "process". ValueError.
def bad_router(state):
return "process_data" # But you named the node "process"!
Triple-check that routing function return values match the exact strings in add_node.
Warning: These three account for the majority of LangGraph debugging sessions. Before running any graph: (1) visualize it, (2) match return keys to state fields, (3) match routing strings to node names.
Quick Check — Predict the Output
Look at this graph. What’s the final value of x?
class QuizState(TypedDict):
x: int
def add_ten(state: QuizState) -> dict:
return {"x": state["x"] + 10}
def halve(state: QuizState) -> dict:
return {"x": state["x"] // 2}
quiz_graph = StateGraph(QuizState)
quiz_graph.add_node("add_ten", add_ten)
quiz_graph.add_node("halve", halve)
quiz_graph.add_edge(START, "add_ten")
quiz_graph.add_edge("add_ten", "halve")
quiz_graph.add_edge("halve", END)
quiz_app = quiz_graph.compile()
# What does quiz_app.invoke({"x": 6}) return?
Work it out: 6 goes in. add_ten makes it 16. halve cuts it to 8. Answer: {"x": 8}.
result_quiz = quiz_app.invoke({"x": 6})
print(result_quiz)
{'x': 8}
If you got that right, state flow has clicked for you. If not, re-read the counter example above — trace each step on paper.
Practice Exercises
Exercise 1: Temperature Classifier
Build a graph that labels a temperature reading as “cold” (below 15), “moderate” (15-30), or “hot” (above 30), then routes to a handler that sets an action string.
Starter code:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class TempState(TypedDict):
temperature: int
label: str
action: str
# TODO: Write the classify node
def classify(state: TempState) -> dict:
pass
# TODO: Write handler nodes
def handle_cold(state: TempState) -> dict:
pass
def handle_moderate(state: TempState) -> dict:
pass
def handle_hot(state: TempState) -> dict:
pass
# TODO: Write routing function and build graph
Test it with:
# Should print: cold / Turn on heater
result = app.invoke({"temperature": 5, "label": "", "action": ""})
print(result["label"], "/", result["action"])
# Should print: hot / Turn on AC
result = app.invoke({"temperature": 35, "label": "", "action": ""})
print(result["label"], "/", result["action"])
Hints:
1. classify checks the temperature against 15 and 30 and returns the right label.
2. The router reads state["label"] and returns a handler name. Wire it with add_conditional_edges.
Click to see the solution
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class TempState(TypedDict):
temperature: int
label: str
action: str
def classify(state: TempState) -> dict:
temp = state["temperature"]
if temp < 15:
return {"label": "cold"}
elif temp <= 30:
return {"label": "moderate"}
else:
return {"label": "hot"}
def handle_cold(state: TempState) -> dict:
return {"action": "Turn on heater"}
def handle_moderate(state: TempState) -> dict:
return {"action": "Maintain current settings"}
def handle_hot(state: TempState) -> dict:
return {"action": "Turn on AC"}
def route_temp(state: TempState) -> str:
return {
"cold": "handle_cold",
"moderate": "handle_moderate",
"hot": "handle_hot"
}[state["label"]]
graph = StateGraph(TempState)
graph.add_node("classify", classify)
graph.add_node("handle_cold", handle_cold)
graph.add_node("handle_moderate", handle_moderate)
graph.add_node("handle_hot", handle_hot)
graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route_temp)
graph.add_edge("handle_cold", END)
graph.add_edge("handle_moderate", END)
graph.add_edge("handle_hot", END)
app = graph.compile()
for temp in [5, 22, 35]:
r = app.invoke({"temperature": temp, "label": "", "action": ""})
print(f"Temp: {temp} -> {r['label']} -> {r['action']}")
Temp: 5 -> cold -> Turn on heater
Temp: 22 -> moderate -> Maintain current settings
Temp: 35 -> hot -> Turn on AC
`classify` buckets the reading. The router maps the label to a handler. Each handler sets the action. All three connect to `END`.
Exercise 2: Text Processing Pipeline
Build a three-node graph: (1) clean strips whitespace and lowercases, (2) count_words tallies the words, (3) summarize produces a summary string. Track which nodes ran in a steps list.
Starter code:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class TextState(TypedDict):
text: str
word_count: int
summary: str
steps: list[str]
# TODO: Write clean, count_words, summarize nodes
# TODO: Build the graph
Expected: Input " Hello World From Python " produces text="hello world from python", word_count=4, summary="Processed: 4 words", steps=['clean', 'count_words', 'summarize'].
Hints:
1. state["text"].strip().lower() in clean. Return the cleaned text and the updated steps list.
2. len(state["text"].split()) for word count. Append each node’s name with state["steps"] + ["node_name"].
Click to see the solution
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class TextState(TypedDict):
text: str
word_count: int
summary: str
steps: list[str]
def clean(state: TextState) -> dict:
return {
"text": state["text"].strip().lower(),
"steps": state["steps"] + ["clean"]
}
def count_words(state: TextState) -> dict:
return {
"word_count": len(state["text"].split()),
"steps": state["steps"] + ["count_words"]
}
def summarize(state: TextState) -> dict:
return {
"summary": f"Processed: {state['word_count']} words",
"steps": state["steps"] + ["summarize"]
}
graph = StateGraph(TextState)
graph.add_node("clean", clean)
graph.add_node("count_words", count_words)
graph.add_node("summarize", summarize)
graph.add_edge(START, "clean")
graph.add_edge("clean", "count_words")
graph.add_edge("count_words", "summarize")
graph.add_edge("summarize", END)
app = graph.compile()
result = app.invoke({
"text": " Hello World From Python ",
"word_count": 0, "summary": "", "steps": []
})
print(result)
{'text': 'hello world from python', 'word_count': 4, 'summary': 'Processed: 4 words', 'steps': ['clean', 'count_words', 'summarize']}
Each node owns one job and logs its name. The linear wiring guarantees the right order.
Summary
Every LangGraph program is built from three pieces:
Nodes — Python functions that receive state and return updates. Register them with add_node. They can do anything: call APIs, query databases, run LLMs, transform data.
Edges — One-way links between nodes. add_edge creates fixed paths. add_conditional_edges creates branches that choose the next node at runtime based on the state.
State — A TypedDict that acts as shared memory for the whole graph. Nodes read from it, write to it, and only need to return the fields they changed. LangGraph handles the merge.
START and END mark the entry and exit. Every graph needs at least one complete path between them.
Practice exercise: Build a quiz grader. The state holds score (int), grade (str), and feedback (str). A calculate_grade node assigns a letter grade (90+ = "A", 80+ = "B", 70+ = "C", below 70 = "F"). Conditional edges route to a feedback node per grade. Test with 95, 85, 72, and 55.
Click to see the solution
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class QuizState(TypedDict):
score: int
grade: str
feedback: str
def calculate_grade(state: QuizState) -> dict:
s = state["score"]
if s >= 90: return {"grade": "A"}
elif s >= 80: return {"grade": "B"}
elif s >= 70: return {"grade": "C"}
else: return {"grade": "F"}
def fb_a(state: QuizState) -> dict:
return {"feedback": f"Excellent! Score: {state['score']}"}
def fb_b(state: QuizState) -> dict:
return {"feedback": f"Good work! Score: {state['score']}"}
def fb_c(state: QuizState) -> dict:
return {"feedback": f"Passing. Score: {state['score']}"}
def fb_f(state: QuizState) -> dict:
return {"feedback": f"Needs improvement. Score: {state['score']}"}
def route_grade(state: QuizState) -> str:
return {"A": "fb_a", "B": "fb_b", "C": "fb_c", "F": "fb_f"}[state["grade"]]
g = StateGraph(QuizState)
g.add_node("calculate_grade", calculate_grade)
for name, func in [("fb_a", fb_a), ("fb_b", fb_b), ("fb_c", fb_c), ("fb_f", fb_f)]:
g.add_node(name, func)
g.add_edge(name, END)
g.add_edge(START, "calculate_grade")
g.add_conditional_edges("calculate_grade", route_grade)
app = g.compile()
for score in [95, 85, 72, 55]:
r = app.invoke({"score": score, "grade": "", "feedback": ""})
print(f"Score: {score} -> {r['grade']} -> {r['feedback']}")
Score: 95 -> A -> Excellent! Score: 95
Score: 85 -> B -> Good work! Score: 85
Score: 72 -> C -> Passing. Score: 72
Score: 55 -> F -> Needs improvement. Score: 55
Next up: state management in depth — reducers, message history, and complex state that grows across many nodes.
FAQ
Can a node have two outgoing normal edges?
Yes, in current LangGraph versions — two normal edges out of the same node create parallel branches, and both targets run in the same step (fan-out). The catch: if the parallel branches write the same state key without a reducer, LangGraph raises InvalidUpdateError at runtime. For an either/or decision, use add_conditional_edges with a routing function instead.
# Fan-out: node_a AND node_b both run after my_node
graph.add_edge("my_node", "node_a")
graph.add_edge("my_node", "node_b")
# Either/or branching: use a conditional edge
graph.add_conditional_edges("my_node", routing_function)
Can I use Pydantic instead of TypedDict?
Yes. LangGraph accepts both. Pydantic adds runtime checks on every update — return a bad type and you get an error right away. For small graphs, TypedDict is enough. For production code with strict contracts, Pydantic is a good choice.
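A minimal sketch of the Pydantic variant, assuming pydantic is installed — the state shape mirrors the ChatState TypedDict from earlier:

```python
from pydantic import BaseModel, ValidationError

class ChatState(BaseModel):
    user_input: str
    response: str = ""
    turn_count: int = 0

# Pydantic validates on construction; a wrong type fails immediately.
state = ChatState(user_input="hi")
print(state.turn_count)  # 0

try:
    ChatState(user_input="hi", turn_count="not a number")
except ValidationError:
    print("caught bad type")
```

You pass the class to StateGraph the same way you would a TypedDict: StateGraph(ChatState).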
What if two nodes write to the same field?
The last writer wins. If node A sets count to 5 and node B later sets it to 10, the final value is 10. To accumulate instead of overwrite, you need reducer functions — covered in the next article.
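The overwrite behavior mirrors ordinary dict merging, which you can see in plain Python:

```python
# Each "node update" merges into the state in execution order,
# so a later write to the same key replaces the earlier one.
state = {"count": 0, "name": "Alice"}

update_a = {"count": 5}    # node A's return
update_b = {"count": 10}   # node B's return, runs later

state = {**state, **update_a}
state = {**state, **update_b}
print(state)  # {'count': 10, 'name': 'Alice'}
```

Note that 'name' survives untouched — only the contested key is overwritten.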
Is there a limit on how many nodes a graph can hold?
No hard limit. Graphs with 30+ nodes work fine. But beyond 10-15, consider splitting into subgraphs for easier testing and debugging. We cover subgraphs later in this series.
Is the execution order always the same?
For linear chains, yes — always A then B then C. For conditional edges, the path depends on what’s in the state, but the routing function is pure: same state in, same decision out. LangGraph adds no randomness.
References
- LangGraph documentation — Graph API overview
- LangGraph documentation — Concepts: Nodes
- LangGraph documentation — Concepts: Edges
- LangGraph documentation — Concepts: State
- LangGraph documentation — Quickstart
- LangGraph Cheatsheet — Core Concepts
- Python documentation — TypedDict
- LangGraph GitHub repository
Last reviewed: March 2026 | LangGraph version: 0.3+