LangGraph Subgraphs: Compose Reusable Workflows
Learn how to build modular LangGraph apps with subgraphs you can develop, test, and reuse across projects — with full code examples and patterns.
Subgraphs let you break a big LangGraph app into small, testable pieces you can plug into any parent graph — like functions inside a program.
Picture this. Your LangGraph project began with three nodes and two edges. Six months later, it has grown to twenty nodes. Touch the summary step and the classifier falls over. Patch the classifier and the output drifts.
If that rings a bell, you do not need more caution. You need subgraphs. Think of a subgraph as a mini graph you slot inside a bigger one — the way you split a long function into neat helper functions. Each piece runs on its own, yet they all fit together in one parent graph.
Below, I will show you what subgraphs are, how to hook them up, and when they make sense. We start with a tiny example and work our way up to a real, three-stage document pipeline.
What Exactly Is a Subgraph?
At its core, a subgraph is a compiled StateGraph that acts as one node inside a larger graph. From the parent’s point of view, it is just a single step. Inside, though, it can run as many nodes, edges, and state rules as it likes.
Prerequisites
- Python: 3.10 or newer
- Packages: langgraph 0.4+, langchain-core 0.3+
- Install command: `pip install langgraph langchain-core`
- Background: LangGraph basics — nodes, edges, state, conditional routing (covered in Posts 5-9)
- Estimated time: 25-30 minutes
A helpful way to think about it: calling a subgraph is like calling a function from another function. The caller does not care what happens behind the scenes. It passes state in and gets updated state back.
Here is the smallest subgraph I can show you. It has two nodes — one to check the input and one to format it. We write those node functions, connect them in a StateGraph, and compile the result.
```python
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class SharedState(TypedDict):
    raw_text: str
    cleaned_text: str
    is_valid: bool

# --- Build the subgraph ---
def validate(state: SharedState) -> dict:
    text = state["raw_text"].strip()
    return {"is_valid": len(text) > 0, "cleaned_text": text}

def format_text(state: SharedState) -> dict:
    return {"cleaned_text": state["cleaned_text"].upper()}

sub_builder = StateGraph(SharedState)
sub_builder.add_node("validate", validate)
sub_builder.add_node("format_text", format_text)
sub_builder.add_edge(START, "validate")
sub_builder.add_edge("validate", "format_text")
sub_builder.add_edge("format_text", END)
sub_graph = sub_builder.compile()
```
So far, nothing special. We made a StateGraph, gave it two nodes, linked them, and compiled. What we get back — sub_graph — is a runnable object. Try it on its own:
```python
result = sub_graph.invoke({
    "raw_text": " hello world ",
    "cleaned_text": "",
    "is_valid": False
})
print(result)
```

Output:

```
{'raw_text': ' hello world ', 'cleaned_text': 'HELLO WORLD', 'is_valid': True}
```
It trimmed the spaces, flagged the text as valid, and upper-cased it. By itself, this is just a regular graph. The magic begins once we place it inside a parent.
How Do You Plug a Subgraph Into a Parent Graph?
You might wonder: why not just call the subgraph yourself? Because letting the parent graph do it gives you free run-order control, error recovery, and checkpoint support. You keep your code modular without giving up any of the runner’s perks.
The recipe is simple. Call add_node() on the parent and hand it the compiled subgraph — no wrapper function required. In the example below, the parent owns two regular nodes (intake and store), with the subgraph sitting between them.
```python
def intake(state: SharedState) -> dict:
    return {"raw_text": state["raw_text"]}

def store(state: SharedState) -> dict:
    print(f"Storing: {state['cleaned_text']}")
    return state

parent_builder = StateGraph(SharedState)
parent_builder.add_node("intake", intake)
parent_builder.add_node("processor", sub_graph)  # compiled subgraph as a node
parent_builder.add_node("store", store)
parent_builder.add_edge(START, "intake")
parent_builder.add_edge("intake", "processor")
parent_builder.add_edge("processor", "store")
parent_builder.add_edge("store", END)
parent_app = parent_builder.compile()

result = parent_app.invoke({
    "raw_text": " data science rocks ",
    "cleaned_text": "",
    "is_valid": False
})
print(result["cleaned_text"])
```

Output:

```
Storing: DATA SCIENCE ROCKS
DATA SCIENCE ROCKS
```
A single line — add_node("processor", sub_graph) — drops the entire subgraph into the parent. From the parent’s angle, it is just another node. Data travels from intake, passes through the subgraph’s inner steps, and comes out the other side to store.
Key Insight: A compiled subgraph behaves like any other node. Add it with `add_node()`, wire it with `add_edge()`, and the parent never knows that several steps run inside.
What Is Shared State and How Does It Work?
The snippet above works right away because both the parent and the subgraph rely on the exact same SharedState class. Every key — raw_text, cleaned_text, is_valid — is visible to both sides.
When the two schemas match, LangGraph feeds the full state into the subgraph for you. Any key the subgraph edits flows straight back to the parent. You do not have to write any mapping code.
```python
# Same schema on both sides — keys travel freely
print(f"raw_text: {result['raw_text']}")
print(f"cleaned_text: {result['cleaned_text']}")
print(f"is_valid: {result['is_valid']}")
```

This prints:

```
raw_text:  data science rocks 
cleaned_text: DATA SCIENCE ROCKS
is_valid: True
```
Keys the subgraph changed (cleaned_text, is_valid) appear in the parent’s final output. Keys it never touched (raw_text) remain as they were.
Tip: Begin with shared state. Even if the subgraph only reads a few of the parent’s keys, a shared schema still works fine — the subgraph simply ignores keys it does not use. Switch to isolated state only when you truly need keys the parent must not see.
My rule of thumb: always start shared. Reach for isolation only when a concrete need forces you.
What If the Parent and Child Have Different Schemas?
Now and then, a subgraph tracks internal details the parent should not know about — maybe a loop counter or a scratch variable. When the key names do not line up, you cannot mount the subgraph directly.
The fix is a thin wrapper function. It sits between the parent and the subgraph, converts parent state into subgraph state, calls the subgraph, and converts the result back.
Below is a subgraph with its own SummaryState. It uses input_text, summary, and word_count — none of which appear in the parent.
```python
# Subgraph with a DIFFERENT state schema
class SummaryState(TypedDict):
    input_text: str
    summary: str
    word_count: int

def extract_key_points(state: SummaryState) -> dict:
    words = state["input_text"].split()
    short = " ".join(words[:5]) + "..."
    return {"summary": short, "word_count": len(words)}

summary_builder = StateGraph(SummaryState)
summary_builder.add_node("extract", extract_key_points)
summary_builder.add_edge(START, "extract")
summary_builder.add_edge("extract", END)
summary_graph = summary_builder.compile()
```
The parent works with raw_text, cleaned_text, and final_summary. Zero overlap. The bridge is a wrapper named run_summarizer. It maps parent keys to subgraph keys, runs the subgraph, and maps the output back.
```python
class ParentState(TypedDict):
    raw_text: str
    cleaned_text: str
    final_summary: str

def run_summarizer(state: ParentState) -> dict:
    # Map parent state -> subgraph state
    sub_input = {
        "input_text": state["cleaned_text"],
        "summary": "",
        "word_count": 0
    }
    # Invoke subgraph
    sub_result = summary_graph.invoke(sub_input)
    # Map subgraph result -> parent state
    return {"final_summary": sub_result["summary"]}

parent2 = StateGraph(ParentState)
parent2.add_node("summarizer", run_summarizer)
parent2.add_edge(START, "summarizer")
parent2.add_edge("summarizer", END)
app2 = parent2.compile()

result2 = app2.invoke({
    "raw_text": "the quick brown fox jumps over the lazy dog today",
    "cleaned_text": "the quick brown fox jumps over the lazy dog today",
    "final_summary": ""
})
print(result2["final_summary"])
```

Output:

```
the quick brown fox jumps...
```
The wrapper does three jobs: it feeds cleaned_text in as input_text, triggers the subgraph, and pulls summary back as final_summary. The word_count key never reaches the parent — it stays locked inside the subgraph.
Key Insight: When schemas differ, the wrapper acts as a translator. It maps parent state into child state on the way in and maps the child’s output back on the way out. Private keys stay private.
Can You Put a Subgraph Inside Another Subgraph?
Absolutely — and it comes up more often than you would expect. A parent calls a child, and the child itself holds a grandchild. Every level compiles on its own, so each one is easy to test in isolation.
Let me walk you through a three-level stack. The grandchild splits text into tokens. The child scrubs the text clean and then delegates the splitting to the grandchild. The parent kicks the whole chain off.
```python
# Level 3: Grandchild — tokenizes text
class TokenState(TypedDict):
    text: str
    tokens: list[str]

def tokenize(state: TokenState) -> dict:
    return {"tokens": state["text"].lower().split()}

grandchild = StateGraph(TokenState)
grandchild.add_node("tokenize", tokenize)
grandchild.add_edge(START, "tokenize")
grandchild.add_edge("tokenize", END)
grandchild_graph = grandchild.compile()
```
The child uses re.sub() to strip everything that is not a letter or space, then passes the clean text to the grandchild via a wrapper. Because CleanState and TokenState carry different keys, the wrapper handles the translation.
```python
# Level 2: Child — cleans text, then delegates to grandchild
import re

class CleanState(TypedDict):
    raw: str
    cleaned: str
    token_list: list[str]

def clean(state: CleanState) -> dict:
    cleaned = re.sub(r'[^a-zA-Z\s]', '', state["raw"])
    return {"cleaned": cleaned.strip()}

def run_tokenizer(state: CleanState) -> dict:
    sub_input = {"text": state["cleaned"], "tokens": []}
    sub_result = grandchild_graph.invoke(sub_input)
    return {"token_list": sub_result["tokens"]}

child = StateGraph(CleanState)
child.add_node("clean", clean)
child.add_node("tokenize", run_tokenizer)
child.add_edge(START, "clean")
child.add_edge("clean", "tokenize")
child.add_edge("tokenize", END)
child_graph = child.compile()
```
Finally, the parent feeds raw input to the child through yet another wrapper. The flow is always the same — map in, invoke, map out.
```python
# Level 1: Parent — sends raw input to child pipeline
class PipelineState(TypedDict):
    user_input: str
    processed_tokens: list[str]

def run_child(state: PipelineState) -> dict:
    sub_input = {"raw": state["user_input"], "cleaned": "", "token_list": []}
    sub_result = child_graph.invoke(sub_input)
    return {"processed_tokens": sub_result["token_list"]}

parent3 = StateGraph(PipelineState)
parent3.add_node("process", run_child)
parent3.add_edge(START, "process")
parent3.add_edge("process", END)
app3 = parent3.compile()

result3 = app3.invoke({"user_input": "Hello, World! 123", "processed_tokens": []})
print(result3["processed_tokens"])
```

Output:

```
['hello', 'world']
```
That is three levels at work. The comma, bang, and digits from "Hello, World! 123" were scrubbed by the child’s clean node, then tokenized by the grandchild.
Warning: Stick to three levels or fewer. Every extra layer makes tracing bugs harder. If you find yourself at four levels, merge two neighbouring layers into one. Deep nesting is a warning sign, not a feature.
Why Is Independent Testing the Biggest Win?
This is, without a doubt, the main reason to use subgraphs in practice. Since each one compiles to a standalone runnable, you can test it alone with fixed inputs and verify the outputs. No parent graph setup, no mocking.
```python
# Test grandchild in isolation
test_result = grandchild_graph.invoke({
    "text": "LangGraph makes AI workflows modular",
    "tokens": []
})
print(test_result["tokens"])

# Test child in isolation
test_child = child_graph.invoke({
    "raw": "Hello!! World??",
    "cleaned": "",
    "token_list": []
})
print(test_child["token_list"])
```
The grandchild returns lowercase tokens. The child removes punctuation first:
```
['langgraph', 'makes', 'ai', 'workflows', 'modular']
['hello', 'world']
```
Plain asserts or pytest work great. Every subgraph is just a function that takes a dict and returns a dict — about as simple as testing gets.
```python
# Simple assertion-based tests
assert grandchild_graph.invoke({"text": "A B C", "tokens": []})["tokens"] == ["a", "b", "c"]
assert child_graph.invoke({"raw": "!!!", "cleaned": "", "token_list": []})["token_list"] == []
print("All tests passed!")
```

Output:

```
All tests passed!
```
Tip: Test every subgraph before you snap them together. If the parent gives the wrong output later, you can cross off any subgraph whose tests still pass. The bug almost always lives in the wrapper — the glue between the two schemas.
Exercise 1: Build and Test a Scoring Subgraph
Build a subgraph that takes a text field and returns a score — the word count divided by 10, capped at 1.0. Test it with at least two inputs.
```python
# Starter code — fill in the TODOs
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class ScorerState(TypedDict):
    text: str
    score: float

def score_text(state: ScorerState) -> dict:
    # TODO: count words in state["text"]
    # TODO: return score = min(word_count / 10, 1.0)
    pass

scorer_builder = StateGraph(ScorerState)
# TODO: add node, edges, compile
# scorer_graph = ...

# Test 1: 5-word input should score 0.5
# Test 2: 15-word input should score 1.0 (capped)
```
Real-World Example — A Multi-Stage Document Pipeline
Time for a real-ish scenario. Here is a pipeline I would actually ship — a document processor with three stages: classify the doc, extract key data, and run a quality check. Each stage lives in its own subgraph.
The classification subgraph reads the text and assigns a category. For this demo we use keyword matching. In production, you would swap it with an LLM call — and nothing else in the pipeline would need to change. That swap-ability is the whole reason subgraphs exist.
```python
# Stage 1: Classification subgraph
class ClassifyState(TypedDict):
    doc_text: str
    category: str
    confidence: float

def classify_doc(state: ClassifyState) -> dict:
    text = state["doc_text"].lower()
    if "invoice" in text or "payment" in text:
        return {"category": "financial", "confidence": 0.9}
    elif "contract" in text or "agreement" in text:
        return {"category": "legal", "confidence": 0.85}
    else:
        return {"category": "general", "confidence": 0.5}

classify_builder = StateGraph(ClassifyState)
classify_builder.add_node("classify", classify_doc)
classify_builder.add_edge(START, "classify")
classify_builder.add_edge("classify", END)
classify_graph = classify_builder.compile()
```
The extraction subgraph grabs key fields depending on what the classifier decided. Notice that category comes from the first stage — the parent forwards it.
```python
# Stage 2: Extraction subgraph
class ExtractState(TypedDict):
    doc_text: str
    category: str
    extracted_data: dict

def extract_fields(state: ExtractState) -> dict:
    text = state["doc_text"]
    data = {"source_length": len(text)}
    if state["category"] == "financial":
        data["doc_type"] = "invoice"
        data["has_amount"] = "$" in text or "amount" in text.lower()
    elif state["category"] == "legal":
        data["doc_type"] = "contract"
        data["has_dates"] = any(w.isdigit() for w in text.split())
    else:
        data["doc_type"] = "general"
    return {"extracted_data": data}

extract_builder = StateGraph(ExtractState)
extract_builder.add_node("extract", extract_fields)
extract_builder.add_edge(START, "extract")
extract_builder.add_edge("extract", END)
extract_graph = extract_builder.compile()
```
Note: Real-world vs. tutorial code: A production extractor would likely call an LLM with a structured output schema or a document-parsing library. We keep it simple here so the focus stays on the subgraph wiring.
The quality-check subgraph inspects what the extractor found. Too few fields? It flags the document.
```python
# Stage 3: Quality check subgraph
class QualityState(TypedDict):
    extracted_data: dict
    quality_score: float
    passed_qa: bool

def check_quality(state: QualityState) -> dict:
    data = state["extracted_data"]
    field_count = len(data)
    score = min(field_count / 3, 1.0)
    return {"quality_score": round(score, 2), "passed_qa": score >= 0.6}

qa_builder = StateGraph(QualityState)
qa_builder.add_node("check", check_quality)
qa_builder.add_edge(START, "check")
qa_builder.add_edge("check", END)
qa_graph = qa_builder.compile()
```
The parent graph stitches the three stages into one flow. A wrapper for each stage handles the key mapping. Follow the data: document enters classify, doc_category enters extract, and doc_data enters quality check.
```python
class DocPipelineState(TypedDict):
    document: str
    doc_category: str
    doc_confidence: float
    doc_data: dict
    qa_score: float
    qa_passed: bool

def run_classifier(state: DocPipelineState) -> dict:
    result = classify_graph.invoke({
        "doc_text": state["document"],
        "category": "",
        "confidence": 0.0
    })
    return {
        "doc_category": result["category"],
        "doc_confidence": result["confidence"]
    }

def run_extractor(state: DocPipelineState) -> dict:
    result = extract_graph.invoke({
        "doc_text": state["document"],
        "category": state["doc_category"],
        "extracted_data": {}
    })
    return {"doc_data": result["extracted_data"]}

def run_qa(state: DocPipelineState) -> dict:
    result = qa_graph.invoke({
        "extracted_data": state["doc_data"],
        "quality_score": 0.0,
        "passed_qa": False
    })
    return {"qa_score": result["quality_score"], "qa_passed": result["passed_qa"]}
```
Now we wire the wrappers into the parent. The pipeline flows through classify, then extract, then quality check, one after the other.
```python
pipeline = StateGraph(DocPipelineState)
pipeline.add_node("classify", run_classifier)
pipeline.add_node("extract", run_extractor)
pipeline.add_node("qa_check", run_qa)
pipeline.add_edge(START, "classify")
pipeline.add_edge("classify", "extract")
pipeline.add_edge("extract", "qa_check")
pipeline.add_edge("qa_check", END)
doc_app = pipeline.compile()
```
Let us push a financial document through and check every field:
```python
doc_result = doc_app.invoke({
    "document": "Invoice #1234: Payment of $5000 due by 2026-03-15",
    "doc_category": "",
    "doc_confidence": 0.0,
    "doc_data": {},
    "qa_score": 0.0,
    "qa_passed": False
})
print(f"Category: {doc_result['doc_category']}")
print(f"Confidence: {doc_result['doc_confidence']}")
print(f"Extracted: {doc_result['doc_data']}")
print(f"QA Score: {doc_result['qa_score']}")
print(f"QA Passed: {doc_result['qa_passed']}")
```

Output:

```
Category: financial
Confidence: 0.9
Extracted: {'source_length': 50, 'doc_type': 'invoice', 'has_amount': True}
QA Score: 1.0
QA Passed: True
```
The classifier picked “financial” with 0.9 confidence. The extractor pulled out three fields. The QA step scored a perfect 1.0 because three fields clear the bar.
Every subgraph handles one job. Want an LLM-powered classifier? Swap it in and leave extract and QA alone. Need a fourth stage for archiving? Add one more subgraph node and the others stay untouched.
Exercise 2: Write a State-Mapping Wrapper
You already have scorer_graph from Exercise 1. Create a parent graph whose ReviewState holds document and review_score. Write a wrapper that maps document to text and pulls score back as review_score.
```python
# Starter code — fill in the wrapper function
class ReviewState(TypedDict):
    document: str
    review_score: float

def run_scorer(state: ReviewState) -> dict:
    # TODO: build sub_input dict mapping document -> text
    # TODO: invoke scorer_graph
    # TODO: return dict mapping score -> review_score
    pass

# TODO: build parent graph, compile, and test
```
How Do Checkpoints and Streaming Behave with Subgraphs?
In production you will want two things: checkpoints (so you can pause and resume) and streaming (so you can watch progress). How they work depends on the way you mount the subgraph.
Checkpoints work differently for direct mounts versus wrapper calls. If you mount with add_node("name", compiled_subgraph), the subgraph shares the parent’s checkpointer under its own namespace — no collisions. If you call the subgraph inside a wrapper function, it runs in its own world and will not share checkpoints unless you hand it a checkpointer yourself.
```python
from langgraph.checkpoint.memory import MemorySaver

# Direct mount: subgraph shares parent's checkpointer
checkpointer = MemorySaver()
parent_app = parent_builder.compile(checkpointer=checkpointer)
```
Streaming needs one extra flag. By default you only see events from the parent’s own nodes. Pass subgraphs=True to also see what happens inside the subgraph.
```python
# Stream with subgraph visibility
for event in parent_app.stream(
    {"raw_text": " test input ", "cleaned_text": "", "is_valid": False},
    subgraphs=True
):
    print(event)
```
Tip: Turn on `subgraphs=True` while you develop and debug. In production you will usually want only parent events to keep logs tidy.
When Should You Reach for Subgraphs vs. Keep a Flat Graph?
Not every project needs subgraphs. Here is the mental model I use.
Reach for subgraphs when:
- Your graph has 8 or more nodes that group into clear stages (classify, extract, validate)
- The same logic shows up in several workflows — one summarizer shared by three pipelines
- Different people on your team own different stages
- You need to unit-test stages one at a time
Stay with one graph when:
- You have fewer than 6 nodes total
- Every node reads or writes nearly every key in the state
- There is no plan to reuse parts in other projects
- The cost of writing wrappers is more than the gain
| Factor | Flat Graph | Subgraphs |
|---|---|---|
| Node count | Under 6 | 8+ in clear groups |
| State overlap | All nodes share state | Some stages need private keys |
| Reuse | One-off workflow | Shared across pipelines |
| Team size | Solo | Multiple owners |
| Testing | End-to-end tests are enough | Per-stage unit tests needed |
Key Insight: Subgraphs manage complexity the same way functions do in regular code. If you would never write a 200-line function, do not build a 20-node flat graph either.
Common Mistakes and How to Fix Them
Mistake 1: Handing a builder to the parent instead of a compiled graph
Always call .compile() before mounting a subgraph. Passing the builder object will raise an error.
Wrong:
```python
sub_builder = StateGraph(SharedState)
sub_builder.add_node("step", some_func)
# ...
parent.add_node("sub", sub_builder)  # Error! Not compiled
```
Right:
```python
sub_graph = sub_builder.compile()  # compile first
parent.add_node("sub", sub_graph)  # pass the compiled graph
```
Mistake 2: Mounting directly when key names do not line up
If you use add_node("name", compiled_subgraph), the parent and child must use matching state keys. Mismatched names cause missing-key crashes or silent data loss.
Wrong:
```python
class ParentS(TypedDict):
    user_input: str

class ChildS(TypedDict):
    query: str  # different key name!

child_graph = StateGraph(ChildS)  # ...compile()
parent.add_node("child", child_graph)  # keys don't match!
```
Right — add a wrapper to bridge the gap:
```python
def bridge(state: ParentS) -> dict:
    result = child_graph.invoke({"query": state["user_input"]})
    return {"user_input": result["query"]}

parent.add_node("child", bridge)
```
Mistake 3: Leaving out keys when you call the subgraph
A wrapper must supply every key the subgraph expects — even keys the subgraph will overwrite on its own.
Wrong:
```python
def wrapper(state):
    # Missing "summary" and "word_count" keys!
    return sub_graph.invoke({"input_text": state["text"]})
```
Right:
```python
def wrapper(state):
    return sub_graph.invoke({
        "input_text": state["text"],
        "summary": "",   # provide all keys
        "word_count": 0  # even with default values
    })
```
Warning: TypedDict does not enforce keys at runtime. Python will not alert you about a missing key. The error shows up later when the subgraph’s node tries to access it. Always fill in every key in your `invoke()` call.
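One way to make the all-keys rule hard to forget is a small helper that starts from a dict of defaults and overlays the mapped parent keys. This is my own convenience pattern, plain Python, not a LangGraph API:

```python
def make_sub_input(state: dict, mapping: dict, defaults: dict) -> dict:
    """Build a complete subgraph input.

    mapping:  {subgraph_key: parent_key} pairs to copy across
    defaults: every key the subgraph's schema declares, with a safe default
    """
    sub_input = dict(defaults)                  # all keys present up front
    for sub_key, parent_key in mapping.items():
        sub_input[sub_key] = state[parent_key]  # overlay the mapped values
    return sub_input

# Usage, mirroring the summarizer wrapper's key mapping:
sub_input = make_sub_input(
    {"cleaned_text": "some text"},
    mapping={"input_text": "cleaned_text"},
    defaults={"input_text": "", "summary": "", "word_count": 0},
)
# → {'input_text': 'some text', 'summary': '', 'word_count': 0}
```

Because the defaults dict lists the subgraph's full schema in one place, a forgotten key shows up as a gap in `defaults` during review rather than as a runtime `KeyError` deep inside the subgraph.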
Summary
Subgraphs turn sprawling LangGraph workflows into neat, manageable chunks. Here is what we covered:
- A subgraph is a compiled `StateGraph` added to a parent with `add_node()`
- Shared state: matching schemas let you mount directly — one line of code
- Isolated state: mismatched schemas need a wrapper to map keys back and forth
- Nesting is fine up to three levels; go deeper and debugging gets painful
- Solo testing is the number-one practical benefit — each subgraph stands on its own
- Checkpoints share automatically with direct mounts; wrappers need manual setup
- Use subgraphs when your graph hits 8+ nodes with clear groupings
Practice exercise: Build a two-stage pipeline. A “preprocessor” subgraph lowercases and trims raw_text. A “counter” subgraph counts words and characters. Give each its own schema and connect them through a parent using wrappers.
Up next: human-in-the-loop patterns — how to pause a graph mid-run, collect a human decision, and resume where you left off.
Frequently Asked Questions
Can a subgraph use conditional edges?
Yes. A subgraph is a full StateGraph, so it supports conditional edges, cycles, and every other LangGraph feature. The parent has no visibility into those internal routes.
```python
# Inside a subgraph builder
sub_builder.add_conditional_edges(
    "classify",
    route_fn,
    {"urgent": "escalate", "normal": "auto_reply"}
)
```
Do subgraphs share the parent’s checkpointer?
That depends on the mount style. Direct mounting with add_node("name", compiled_subgraph) shares the parent’s checkpointer under a separate namespace — no clashes. Calling the subgraph inside a wrapper gives it no shared checkpointer unless you pass one in yourself.
Can the same compiled subgraph appear in several parent graphs?
Yes — that is the whole idea. Compile once, mount in as many parents as you need.
```python
parent_a.add_node("validator", validation_subgraph)
parent_b.add_node("validator", validation_subgraph)  # same object, different parent
```
Each parent runs its own copy. No state bleeds between them.
Does the extra layer slow things down?
Barely. The overhead is one function call per subgraph run — microseconds compared to the milliseconds your nodes spend on LLM calls or data work. In my experience, subgraph overhead has never been the bottleneck.
How do you stop checkpoint collisions when you run many subgraphs?
Each stateful subgraph needs its own storage space. For direct mounts, LangGraph assigns a unique namespace based on the node name you pass to add_node(). For wrapper-based subgraphs, you would pass separate checkpointer instances or configure namespaces by hand.