LangGraph Human-in-the-Loop: Add Approval Steps

Written by Selva Prabhakaran | 27 min read

Add human-in-the-loop approval to your LangGraph agents so they pause for review, accept edits, and resume safely.

Picture this. Your agent is about to blast an email to every name on your client list. Or it’s one step away from running a DELETE query on your live database. Do you really want it flying solo?

I didn’t think so. That’s why you need a way to make it stop, show you what it plans to do, and wait for your green light. That’s what human-in-the-loop (HITL) does in LangGraph. You mark a spot in your graph where it should pause. The agent freezes, hands you the details, and picks up right where it stopped once you say “go.”

By the end of this post, you’ll know how to wire up approval gates, collect edits mid-flow, and review tool calls before they fire. Let’s get into it.

Why Does Human-in-the-Loop Matter for AI Agents?

An agent with no guard rails is a gamble. LLMs make things up. They read prompts wrong. They act fast on bad guesses.

In a chat app, a wrong reply is just annoying. But when your agent fires off emails, moves money, or edits live data, a wrong move can hurt. HITL gives you a safety net. You let the agent handle the easy stuff on its own, but you step in for the big calls.

Here are some real cases where you’d want a pause:

  • Tool call review: The agent wants to hit an API. You check the payload first.
  • Draft check: The agent wrote a reply. You read it before it goes out.
  • Data guard: The agent built a SQL query. You make sure it won’t nuke your tables.
  • Spend cap: The agent wants to kick off a costly job. You okay the bill first.

Key Insight: HITL isn’t about not trusting your agent. It’s about earning trust step by step. Start by gating every action. As things go well, open the gates one by one. That’s how real-world agents grow up.

Prerequisites

  • Python version: 3.10+
  • Required libraries: langgraph (0.4+), langchain-openai (0.3+), langchain-core (0.3+)
  • Install: pip install langgraph langchain-openai langchain-core python-dotenv
  • API key: OpenAI API key stored in a .env file as OPENAI_API_KEY
  • Prior knowledge: Posts 1-12 of this series, especially Tool Calling in LangGraph and State Management
  • Time to complete: 25-30 minutes

How Does LangGraph Pause a Running Graph?

Two parts make this work: a checkpointer and an interrupt.

The checkpointer saves your graph’s state each time a node finishes. Think of it like a save slot in a game. If the graph stops, you load that slot and keep going from that exact spot.

The interrupt tells LangGraph where to stop. When the code hits an interrupt, it saves the state, halts the graph, and gives control back to you. You grab the user’s input, then start the graph again with that input.

Here’s the basic setup. We make a checkpointer with MemorySaver. It keeps state in RAM. Great for learning and quick tests. For a live app, you’d swap in PostgresSaver — same API, but the data sticks around.

python
import os
from dotenv import load_dotenv
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

load_dotenv()

# Create checkpointer -- saves state between pauses
checkpointer = MemorySaver()

# Define state
class State(TypedDict):
    message: str
    approved: bool

print("Checkpointer ready -- MemorySaver stores state in memory")
python
Checkpointer ready -- MemorySaver stores state in memory
Tip: MemorySaver vs production checkpointers: `MemorySaver` loses all state when your process restarts. In production, use `langgraph-checkpoint-postgres` or `langgraph-checkpoint-sqlite`. The code stays identical — you just swap the checkpointer object.

How Do You Write Your First Interrupt?

Let’s build a graph that drafts a note, pauses for a thumbs-up or thumbs-down, then sends or drops it.

The interrupt() call takes any value that can turn into JSON. That value gets sent back to you, so you can show the person what needs a sign-off. When the graph starts again, the value you pass in becomes what interrupt() hands back.

Three node functions drive the flow. draft_message writes the text. human_review calls interrupt() to freeze and catch the choice. publish acts on that choice.

python
def draft_message(state: State) -> State:
    """Simulate drafting a message."""
    draft = f"DRAFT: {state['message']}"
    print(f"Drafted message: {draft}")
    return {"message": draft, "approved": False}

def human_review(state: State) -> State:
    """Pause for human approval."""
    decision = interrupt({
        "draft": state["message"],
        "question": "Approve this message? (yes/no)"
    })
    approved = decision.lower() == "yes"
    print(f"Human decision: {decision} -> approved={approved}")
    return {"approved": approved}

def publish(state: State) -> State:
    """Publish the approved message."""
    if state["approved"]:
        print(f"PUBLISHED: {state['message']}")
    else:
        print("Message rejected -- not published")
    return state

print("Node functions defined")
python
Node functions defined

Now hook up the graph and compile it with your checkpointer. The checkpointer=checkpointer part is a must when you use interrupts. Without it, LangGraph has no place to stash state, so it can’t pick up later.

python
builder = StateGraph(State)
builder.add_node("draft", draft_message)
builder.add_node("review", human_review)
builder.add_node("publish", publish)

builder.add_edge(START, "draft")
builder.add_edge("draft", "review")
builder.add_edge("review", "publish")
builder.add_edge("publish", END)

graph = builder.compile(checkpointer=checkpointer)

print("Graph compiled with checkpointer")
python
Graph compiled with checkpointer

How Does the Two-Phase Run Pattern Work?

Every HITL flow has two halves. Phase 1: run until the graph freezes. Phase 2: feed in the person’s answer and let it finish. The thread_id in the config links those two halves.

python
# Phase 1: Run until interrupt
config = {"configurable": {"thread_id": "thread-1"}}

result = graph.invoke(
    {"message": "Hello customers!", "approved": False},
    config=config
)

print(f"\nGraph paused. Result keys: {list(result.keys())}")
python
Drafted message: DRAFT: Hello customers!

Graph paused. Result keys: ['message', 'approved']

The graph ran draft, then hit interrupt() inside human_review and froze. State is saved. The draft is ready for a look.

What did the interrupt hand back? Let’s peek at the snapshot.

python
snapshot = graph.get_state(config)
print(f"Next node to run: {snapshot.next}")
print(f"Interrupt payload: {snapshot.tasks[0].interrupts[0].value}")
python
Next node to run: ('review',)
Interrupt payload: {'draft': 'DRAFT: Hello customers!', 'question': 'Approve this message? (yes/no)'}

The snapshot tells you exactly where the graph stopped and what data it surfaced. In a real app, you’d show this in a UI. Now pass the answer back.

python
# Phase 2: Resume with human input
result = graph.invoke(
    Command(resume="yes"),
    config=config  # Same thread_id!
)

print(f"\nFinal state: approved={result['approved']}")
python
Human decision: yes -> approved=True
PUBLISHED: DRAFT: Hello customers!

Final state: approved=True

Command(resume="yes") sends the reply back to interrupt(). That call returns "yes", the node keeps going, and the rest of the graph wraps up.

Warning: Always use the same `thread_id` when you resume. A new `thread_id` kicks off a fresh run. Your paused graph just sits there, waiting for a reply that never comes.

Think about it: What if you ran graph.invoke(Command(resume="yes"), {"configurable": {"thread_id": "thread-DIFFERENT"}}) instead? LangGraph would look for a thread that doesn’t exist and throw an error. The paused graph on “thread-1” stays frozen.

How Do You Handle Rejection and Route by Decision?

Saying “yes” is only half the story. You also need a clean path for “no.” A branch after the review node steers the flow based on what the person chose.

python
checkpointer_v2 = MemorySaver()

def conditional_after_review(state: State) -> str:
    """Route based on approval status."""
    if state["approved"]:
        return "publish"
    return "discard"

def discard(state: State) -> State:
    """Handle rejected messages."""
    print(f"DISCARDED: {state['message']}")
    return state

builder2 = StateGraph(State)
builder2.add_node("draft", draft_message)
builder2.add_node("review", human_review)
builder2.add_node("publish", publish)
builder2.add_node("discard", discard)

builder2.add_edge(START, "draft")
builder2.add_edge("draft", "review")
builder2.add_conditional_edges("review", conditional_after_review)
builder2.add_edge("publish", END)
builder2.add_edge("discard", END)

graph_v2 = builder2.compile(checkpointer=checkpointer_v2)

print("Graph v2 compiled -- with approve/reject routing")
python
Graph v2 compiled -- with approve/reject routing
python
# Run and reject
config_v2 = {"configurable": {"thread_id": "thread-reject"}}

graph_v2.invoke(
    {"message": "Buy now!! Limited offer!!!", "approved": False},
    config=config_v2
)

result = graph_v2.invoke(
    Command(resume="no"),
    config=config_v2
)

print(f"\nFinal: approved={result['approved']}")
python
Drafted message: DRAFT: Buy now!! Limited offer!!!
Human decision: no -> approved=False
DISCARDED: DRAFT: Buy now!! Limited offer!!!

Final: approved=False

The spammy draft got caught. The branch sent it to discard rather than publish. This yes/no routing is the core pattern that most HITL flows build on.


Exercise 1: Build an Approval Gate

Build a graph with three nodes: generate, review, and execute. The generate node creates a task description from the input. The review node uses interrupt() to pause for approval. If approved, execute runs the task. If rejected, route to a cancel node.

Hint 1

Define a state with `task` (str) and `status` (str) fields. The review node should return the human’s decision in the status field.

Hint 2

Use a conditional edge after `review` that checks `state["status"]`. Route to “execute” if status is “approved” and “cancel” if status is “rejected”.

Solution
python
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

class TaskState(TypedDict):
    task: str
    status: str

def generate(state: TaskState) -> TaskState:
    return {"task": f"Task: {state['task']}", "status": "pending"}

def review(state: TaskState) -> TaskState:
    decision = interrupt({"task": state["task"], "question": "Approve? (yes/no)"})
    status = "approved" if decision.lower() == "yes" else "rejected"
    return {"status": status}

def execute(state: TaskState) -> TaskState:
    print(f"Executed: {state['task']}")
    return {"status": "completed"}

def cancel(state: TaskState) -> TaskState:
    print(f"Cancelled: {state['task']}")
    return {"status": "cancelled"}

def route(state: TaskState) -> str:
    return "execute" if state["status"] == "approved" else "cancel"

cp = MemorySaver()
b = StateGraph(TaskState)
b.add_node("generate", generate)
b.add_node("review", review)
b.add_node("execute", execute)
b.add_node("cancel", cancel)
b.add_edge(START, "generate")
b.add_edge("generate", "review")
b.add_conditional_edges("review", route)
b.add_edge("execute", END)
b.add_edge("cancel", END)
graph = b.compile(checkpointer=cp)

cfg = {"configurable": {"thread_id": "ex1"}}
graph.invoke({"task": "Deploy v2.0", "status": ""}, cfg)
result = graph.invoke(Command(resume="yes"), cfg)
print(result["status"])  # completed

The key learning here is wiring `interrupt()` with a conditional edge. The interrupt collects the decision, and the conditional edge acts on it. Two separate concerns, cleanly separated.


How Do You Let People Edit State Mid-Flow?

A plain “yes” or “no” isn’t always enough. Maybe the person spots a typo. Or they want to tweak a number, swap a phrase, or redo a whole section. That’s common in content work.

You handle edits by sending the new content through Command(resume=...). The interrupt hands back whatever the person typed — it doesn’t have to be just a word.

python
class EditableState(TypedDict):
    message: str
    status: str

def generate_draft(state: EditableState) -> EditableState:
    """Generate an initial draft."""
    draft = f"Dear customer, {state['message']}"
    print(f"Generated: {draft}")
    return {"message": draft, "status": "drafted"}

def review_and_edit(state: EditableState) -> EditableState:
    """Let human review and optionally edit the message."""
    response = interrupt({
        "current_draft": state["message"],
        "instructions": "Reply 'approve' or provide an edited version"
    })

    if response.lower() == "approve":
        print("Human approved without changes")
        return {"status": "approved"}
    else:
        print(f"Human edited to: {response}")
        return {"message": response, "status": "approved"}

def send_message(state: EditableState) -> EditableState:
    """Send the final message."""
    print(f"SENT: {state['message']}")
    return {"status": "sent"}

checkpointer_edit = MemorySaver()

builder_edit = StateGraph(EditableState)
builder_edit.add_node("generate", generate_draft)
builder_edit.add_node("review", review_and_edit)
builder_edit.add_node("send", send_message)

builder_edit.add_edge(START, "generate")
builder_edit.add_edge("generate", "review")
builder_edit.add_edge("review", "send")
builder_edit.add_edge("send", END)

graph_edit = builder_edit.compile(checkpointer=checkpointer_edit)

print("Edit-capable graph compiled")
python
Edit-capable graph compiled
python
config_edit = {"configurable": {"thread_id": "thread-edit"}}

# Phase 1: generate and pause
graph_edit.invoke(
    {"message": "your order has shipped", "status": "new"},
    config=config_edit
)

# Phase 2: human provides an edited version
result = graph_edit.invoke(
    Command(resume="Dear valued customer, your order #12345 has shipped and arrives Friday."),
    config=config_edit
)

print(f"\nFinal status: {result['status']}")
python
Generated: Dear customer, your order has shipped
Human edited to: Dear valued customer, your order #12345 has shipped and arrives Friday.
SENT: Dear valued customer, your order #12345 has shipped and arrives Friday.

Final status: sent

The person swapped the bland draft for a custom note. The graph used the new text for everything after that. This trick works for any kind of fix — changing words, tuning numbers, or swapping config values.

How Do You Review Tool Calls Before They Run?

Here’s the pattern you’ll reach for most in live systems: checking tool calls before they fire. The agent picks a tool and sets its inputs. A person looks at that plan. Only calls that get a thumbs-up go through.

Why bother? Because you want to see “I’m about to call delete_user(id=42)” before it runs for real.

python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
from langgraph.types import interrupt, Command

@tool
def search_database(query: str) -> str:
    """Search the customer database with a SQL query."""
    return f"Results for: {query} -> 3 records found"

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a customer."""
    return f"Email sent to {to}: {subject}"

tools = [search_database, send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

print("Tools and LLM configured")
python
Tools and LLM configured

We have two tools. One is low-risk (a database search). The other is high-risk (sending emails). The graph below asks for a sign-off on all tool calls. I’ll show you how to be picky about it next.

The call_llm node gets the model’s reply. human_approve_tools catches any tool-call requests and freezes for review. run_tools runs the calls that get the green light. A routing function checks if we need tools or we’re done.

python
def call_llm(state: MessagesState) -> MessagesState:
    """Call the LLM, which may request tool calls."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def human_approve_tools(state: MessagesState) -> MessagesState:
    """Intercept tool calls for human approval."""
    last_message = state["messages"][-1]

    if not hasattr(last_message, "tool_calls") or not last_message.tool_calls:
        return state

    tool_info = [
        {"tool": tc["name"], "arguments": tc["args"], "id": tc["id"]}
        for tc in last_message.tool_calls
    ]

    decision = interrupt({
        "pending_tool_calls": tool_info,
        "question": "Approve these tool calls? (yes/no)"
    })

    if decision.lower() != "yes":
        rejection_msg = AIMessage(content="Tool calls were rejected by human reviewer.")
        return {"messages": [rejection_msg]}

    return state

def run_tools(state: MessagesState) -> MessagesState:
    """Execute approved tool calls."""
    tool_node = ToolNode(tools)
    return tool_node.invoke(state)

def should_continue(state: MessagesState) -> str:
    """Check if we need to run tools or finish."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"

print("Node functions ready for tool-approval graph")
python
Node functions ready for tool-approval graph
python
checkpointer_tools = MemorySaver()

builder_tools = StateGraph(MessagesState)
builder_tools.add_node("llm", call_llm)
builder_tools.add_node("approve", human_approve_tools)
builder_tools.add_node("tools", run_tools)

builder_tools.add_edge(START, "llm")
builder_tools.add_conditional_edges("llm", should_continue, {
    "tools": "approve",
    "end": END
})
# After approval, the last message still carries tool_calls, so we run them.
# After rejection, the last message is the plain rejection note, so we end --
# an unconditional approve -> tools edge would crash ToolNode on rejection.
builder_tools.add_conditional_edges("approve", should_continue, {
    "tools": "tools",
    "end": END
})
builder_tools.add_edge("tools", "llm")

graph_tools = builder_tools.compile(checkpointer=checkpointer_tools)

print("Tool-approval graph compiled")
python
Tool-approval graph compiled
python
config_tools = {"configurable": {"thread_id": "thread-tools-1"}}

result = graph_tools.invoke(
    {"messages": [HumanMessage(content="Search our database for customers named Smith")]},
    config=config_tools
)

snapshot = graph_tools.get_state(config_tools)
print(f"Paused at: {snapshot.next}")
interrupt_data = snapshot.tasks[0].interrupts[0].value
print(f"Pending calls: {interrupt_data['pending_tool_calls']}")
python
Paused at: ('approve',)
Pending calls: [{'tool': 'search_database', 'arguments': {'query': 'customers named Smith'}, 'id': '...'}]

The agent wants to search for “customers named Smith.” The interrupt caught it and is waiting for a ruling.

python
result = graph_tools.invoke(
    Command(resume="yes"),
    config=config_tools
)

print(f"Agent response: {result['messages'][-1].content}")
python
Agent response: I found 3 records matching "customers named Smith" in the database.

The tool ran, the output went back to the LLM, and the agent gave its final answer. If you’d said “no,” the graph would have stopped with the “rejected” note as its last message instead.

[UNDER-THE-HOOD]
What goes on inside Command(resume=...): LangGraph pulls the saved state from the checkpointer and re-runs the interrupted node from its first line. When execution reaches the interrupt() call again, it doesn’t pause — it returns the value you passed to resume, and the node carries on. One catch: any side effects before the interrupt() (prints, API calls) run a second time, so keep them idempotent or move them after the interrupt. Feel free to skip this if you’re just starting out.
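If that replay behavior feels abstract, here’s a toy model in plain Python. It is not LangGraph’s actual implementation, just a sketch of the idea: the node function runs from the top on every pass, and a stand-in `interrupt()` raises to pause on the first pass but returns the saved answer on the replay.

```python
# Toy model of interrupt/resume -- illustration only, NOT LangGraph internals.

class GraphInterrupt(Exception):
    """Raised to pause the 'graph' and carry the payload to the caller."""
    def __init__(self, payload):
        self.payload = payload

class ToyRuntime:
    def __init__(self):
        self.resume_value = None  # set by the caller before the replay

    def interrupt(self, payload):
        if self.resume_value is None:
            raise GraphInterrupt(payload)  # first pass: pause
        return self.resume_value           # replay: hand back the answer

runtime = ToyRuntime()

def review_node(state):
    print("entering review_node")  # note: this runs on BOTH passes
    decision = runtime.interrupt({"question": "Approve?"})
    return {**state, "approved": decision == "yes"}

# Pass 1: the node pauses at the interrupt
try:
    review_node({"draft": "hi"})
except GraphInterrupt as e:
    print("paused with:", e.payload)

# Pass 2: resume -- the node re-runs from the top, then finishes
runtime.resume_value = "yes"
result = review_node({"draft": "hi"})
print(result["approved"])  # True
```

Notice that “entering review_node” prints twice. That’s the replay caveat in miniature: code before the interrupt executes again on resume.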

How Do You Auto-Approve Safe Tools but Gate Risky Ones?

Asking for a sign-off on every tool call gets old fast. In practice, you let safe calls fly and only flag the risky ones. Here’s how.

python
TOOLS_REQUIRING_APPROVAL = {"send_email"}

def selective_approval(state: MessagesState) -> MessagesState:
    """Only interrupt for tools that need human approval."""
    last_message = state["messages"][-1]

    if not hasattr(last_message, "tool_calls") or not last_message.tool_calls:
        return state

    risky_calls = [
        tc for tc in last_message.tool_calls
        if tc["name"] in TOOLS_REQUIRING_APPROVAL
    ]

    if not risky_calls:
        # All safe -- auto-approve
        print(f"Auto-approved: {[tc['name'] for tc in last_message.tool_calls]}")
        return state

    tool_info = [{"tool": tc["name"], "args": tc["args"]} for tc in risky_calls]

    decision = interrupt({
        "risky_tools": tool_info,
        "auto_approved": [
            tc["name"] for tc in last_message.tool_calls
            if tc["name"] not in TOOLS_REQUIRING_APPROVAL
        ],
        "question": "Approve the risky tool calls above?"
    })

    if decision.lower() != "yes":
        rejection = AIMessage(content="Risky tool calls rejected by reviewer.")
        return {"messages": [rejection]}

    return state

print("Selective approval defined")
print(f"Gated tools: {TOOLS_REQUIRING_APPROVAL}")
python
Selective approval defined
Gated tools: {'send_email'}

Database lookups sail through on their own. Email sends get flagged. You set a policy, and the graph sticks to it. This is what most live setups look like.

[BEST-PRACTICE]
Start tight, then loosen up. Ship with interrupts on every tool call. Track how often each one gets a “yes.” When a tool clears 99% of the time, move it to the auto-approve list. Let the data guide you.
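One way to act on that advice is to track approval rates per tool. The sketch below is hypothetical — `ApprovalStats`, its threshold, and the minimum-sample rule are not a LangGraph feature, just plain Python you could wire into your own approval node.

```python
# Hypothetical approval-rate tracker for graduating tools off the review list.
from collections import defaultdict

class ApprovalStats:
    def __init__(self, threshold=0.99, min_samples=50):
        self.threshold = threshold      # approval rate needed to drop the gate
        self.min_samples = min_samples  # don't trust tiny sample sizes
        self.counts = defaultdict(lambda: [0, 0])  # tool -> [approved, total]

    def record(self, tool, approved):
        """Call this each time a human rules on a tool call."""
        self.counts[tool][1] += 1
        if approved:
            self.counts[tool][0] += 1

    def needs_gate(self, tool):
        """True if this tool should still pause for human review."""
        approved, total = self.counts[tool]
        if total < self.min_samples:
            return True  # not enough data yet: keep the gate on
        return approved / total < self.threshold

stats = ApprovalStats(threshold=0.99, min_samples=3)
for _ in range(3):
    stats.record("search_database", approved=True)
stats.record("send_email", approved=True)
stats.record("send_email", approved=False)
stats.record("send_email", approved=True)

print(stats.needs_gate("search_database"))  # False -- 3/3 approvals
print(stats.needs_gate("send_email"))       # True -- 2/3 is below 99%
```

You could consult needs_gate() inside a node like selective_approval to build the gated-tool set from data instead of hardcoding TOOLS_REQUIRING_APPROVAL.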

How Do Multi-Step Approval Chains Work?

Some flows need more than one human check. Maybe the agent drafts a report, gets content notes, revises, and then needs a final okay. Each review point is its own interrupt().

python
from typing import List

class ReviewState(TypedDict):
    content: str
    review_notes: List[str]
    stage: str

def generate_report(state: ReviewState) -> ReviewState:
    """Generate initial report content."""
    report = f"Q4 Revenue Report: {state['content']}"
    print(f"Generated: {report[:50]}...")
    return {"content": report, "stage": "draft", "review_notes": []}

def first_review(state: ReviewState) -> ReviewState:
    """First checkpoint -- content accuracy."""
    feedback = interrupt({
        "stage": "Content Review",
        "content": state["content"],
        "question": "Is the content accurate? Provide notes or 'approve'"
    })

    if feedback.lower() == "approve":
        print("First review: approved")
        return {"stage": "reviewed", "review_notes": ["Content approved"]}
    else:
        print(f"First review notes: {feedback}")
        return {
            "content": f"{state['content']}\n[REVISED: {feedback}]",
            "stage": "revised",
            "review_notes": [feedback]
        }

def final_review(state: ReviewState) -> ReviewState:
    """Final checkpoint -- publication approval."""
    decision = interrupt({
        "stage": "Final Approval",
        "content": state["content"],
        "review_history": state["review_notes"],
        "question": "Approve for publication? (yes/no)"
    })

    approved = decision.lower() == "yes"
    status = "published" if approved else "rejected"
    print(f"Final review: {status}")
    return {"stage": status}

checkpointer_review = MemorySaver()

builder_review = StateGraph(ReviewState)
builder_review.add_node("generate", generate_report)
builder_review.add_node("first_review", first_review)
builder_review.add_node("final_review", final_review)

builder_review.add_edge(START, "generate")
builder_review.add_edge("generate", "first_review")
builder_review.add_edge("first_review", "final_review")
builder_review.add_edge("final_review", END)

graph_review = builder_review.compile(checkpointer=checkpointer_review)

print("Multi-step review graph compiled")
python
Multi-step review graph compiled

Let’s walk through the full two-stop flow.

python
config_review = {"configurable": {"thread_id": "thread-review"}}

graph_review.invoke(
    {"content": "Total revenue: $2.4M", "review_notes": [], "stage": "new"},
    config=config_review
)

snapshot = graph_review.get_state(config_review)
print(f"Paused at: {snapshot.next}")
print(f"Current stage: {snapshot.values['stage']}")
python
Generated: Q4 Revenue Report: Total revenue: $2.4M...
Paused at: ('first_review',)
Current stage: draft
python
# First review: provide corrections
graph_review.invoke(
    Command(resume="Revenue should be $2.6M -- update the figure"),
    config=config_review
)

snapshot = graph_review.get_state(config_review)
print(f"Paused at: {snapshot.next}")
print(f"Current stage: {snapshot.values['stage']}")
python
First review notes: Revenue should be $2.6M -- update the figure
Paused at: ('final_review',)
Current stage: revised

The graph took the fix at the first stop, updated the content, and froze again at the second stop. Each interrupt stands on its own.

python
result = graph_review.invoke(
    Command(resume="yes"),
    config=config_review
)

print(f"Final stage: {result['stage']}")
python
Final review: published
Final stage: published

Two human checks, one smooth flow. The checkpointer holds onto everything between stops.

What Are interrupt_before and interrupt_after?

So far you’ve called interrupt() inside nodes. LangGraph also has interrupt_before and interrupt_after. You set these at compile time, and the graph pauses right before or right after a given node runs.

When should you use each? Here’s the quick guide:

| Feature | `interrupt()` function | `interrupt_before` / `interrupt_after` |
|---|---|---|
| Where you set it | Inside the node code | At compile time |
| Data exchange | Pass data to caller AND receive resume value | Inspect state only |
| Use case | Interactive approval, collecting input | State inspection checkpoints |
| Modify state | Via resume value | Via `update_state()` |
python
def step_one(state: State) -> State:
    """First processing step."""
    print("Step one executing")
    return {"message": f"Processed: {state['message']}", "approved": False}

def step_two(state: State) -> State:
    """Second processing step."""
    print("Step two executing")
    return {"message": f"Final: {state['message']}", "approved": True}

checkpointer_static = MemorySaver()

builder_static = StateGraph(State)
builder_static.add_node("step_one", step_one)
builder_static.add_node("step_two", step_two)
builder_static.add_edge(START, "step_one")
builder_static.add_edge("step_one", "step_two")
builder_static.add_edge("step_two", END)

# Pause BEFORE step_two runs
compiled_static = builder_static.compile(
    checkpointer=checkpointer_static,
    interrupt_before=["step_two"]
)

print("Graph compiled with interrupt_before=['step_two']")
python
Graph compiled with interrupt_before=['step_two']
python
config_static = {"configurable": {"thread_id": "thread-static"}}

result = compiled_static.invoke(
    {"message": "Hello", "approved": False},
    config=config_static
)

snapshot = compiled_static.get_state(config_static)
print(f"Paused before: {snapshot.next}")
print(f"Current message: {snapshot.values['message']}")
python
Step one executing
Paused before: ('step_two',)
Current message: Processed: Hello

Step one ran. The graph froze before step two. You can look at or change the state before moving on.

python
# Resume -- no Command needed for static interrupts
result = compiled_static.invoke(None, config=config_static)

print(f"Final message: {result['message']}")
print(f"Approved: {result['approved']}")
python
Step two executing
Final message: Final: Processed: Hello
Approved: True

[COMMON-MISTAKE]
Don’t mix up interrupt() with interrupt_before/interrupt_after. interrupt() freezes mid-node and lets you swap data both ways. The compile-time options freeze between nodes and only let you peek at state. Reach for interrupt() when you need a dialog. Reach for interrupt_before/interrupt_after when you just need a look.

How Do You Edit State Before the Next Node Runs?

With interrupt_before, you can patch the state with update_state() right before the next node kicks off. This is handy for quick fixes without building a whole review node.

python
checkpointer_modify = MemorySaver()

class TaskState(TypedDict):
    task: str
    priority: int

def process_task(state: TaskState) -> TaskState:
    print(f"Processing '{state['task']}' with priority {state['priority']}")
    return state

def execute_task(state: TaskState) -> TaskState:
    print(f"Executing: {state['task']} (priority={state['priority']})")
    return state

builder_mod = StateGraph(TaskState)
builder_mod.add_node("process", process_task)
builder_mod.add_node("execute", execute_task)
builder_mod.add_edge(START, "process")
builder_mod.add_edge("process", "execute")
builder_mod.add_edge("execute", END)

graph_mod = builder_mod.compile(
    checkpointer=checkpointer_modify,
    interrupt_before=["execute"]
)

config_mod = {"configurable": {"thread_id": "thread-modify"}}
graph_mod.invoke({"task": "Deploy to staging", "priority": 3}, config=config_mod)

print(f"Current priority: {graph_mod.get_state(config_mod).values['priority']}")
python
Processing 'Deploy to staging' with priority 3
Current priority: 3
python
# Human changes priority before execution continues
graph_mod.update_state(config_mod, {"priority": 1})

print(f"Updated priority: {graph_mod.get_state(config_mod).values['priority']}")

result = graph_mod.invoke(None, config=config_mod)
print(f"Final: task='{result['task']}', priority={result['priority']}")
python
Updated priority: 1
Executing: Deploy to staging (priority=1)
Final: task='Deploy to staging', priority=1

The person bumped the priority from 3 to 1 before the job ran. update_state() writes straight to the saved state. The next node sees the new value.


Exercise 2: Content Moderation with Selective Routing

Build a content moderation graph. The analyze node classifies a comment as “safe”, “flagged”, or “blocked”. Safe comments publish directly. Blocked comments get discarded immediately. Flagged comments pause for human review via interrupt(), where the human can approve or reject.

Hint 1

Define a state with `comment`, `classification`, and `decision` fields. Use a conditional edge after the analyze node to route based on classification.

Hint 2

The review node calls `interrupt()` with the comment and classification. After review, add another conditional edge that routes to “publish” or “discard” based on the human’s decision.

Solution
python
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

class ModerationState(TypedDict):
    comment: str
    classification: str
    decision: str

def analyze(state: ModerationState) -> ModerationState:
    comment = state["comment"].lower()
    if any(w in comment for w in ["spam", "scam"]):
        return {"classification": "blocked"}
    elif any(w in comment for w in ["maybe", "borderline"]):
        return {"classification": "flagged"}
    return {"classification": "safe"}

def review(state: ModerationState) -> ModerationState:
    decision = interrupt({
        "comment": state["comment"],
        "classification": state["classification"]
    })
    return {"decision": decision.lower()}

def publish(state: ModerationState) -> ModerationState:
    print(f"Published: {state['comment']}")
    return {"decision": "published"}

def discard_it(state: ModerationState) -> ModerationState:
    print(f"Discarded: {state['comment']}")
    return {"decision": "discarded"}

def route_analysis(state: ModerationState) -> str:
    c = state["classification"]
    if c == "safe": return "publish"
    if c == "flagged": return "review"
    return "discard_it"

def route_review(state: ModerationState) -> str:
    return "publish" if state["decision"] == "approve" else "discard_it"

cp = MemorySaver()
b = StateGraph(ModerationState)
b.add_node("analyze", analyze)
b.add_node("review", review)
b.add_node("publish", publish)
b.add_node("discard_it", discard_it)
b.add_edge(START, "analyze")
b.add_conditional_edges("analyze", route_analysis)
b.add_conditional_edges("review", route_review)
b.add_edge("publish", END)
b.add_edge("discard_it", END)
graph = b.compile(checkpointer=cp)

# Test flagged comment
cfg = {"configurable": {"thread_id": "mod-1"}}
graph.invoke({"comment": "This is borderline", "classification": "", "decision": ""}, cfg)
result = graph.invoke(Command(resume="approve"), cfg)
print(result["decision"])  # published

This exercise combines two concepts: conditional routing (from Post 8) and HITL interrupts. The tricky part is having two layers of conditional edges — one after analysis, one after review.


What Mistakes Do People Make with HITL?

I see the same HITL bugs pop up again and again. Here are the ones that’ll save you the most time.

Mistake 1: No checkpointer

python
# WRONG -- interrupt() fails without a checkpointer
# graph = builder.compile()

# RIGHT
graph = builder.compile(checkpointer=MemorySaver())

If there’s no checkpointer, interrupt() has no place to save state. You’ll get a ValueError.

Mistake 2: Wrong thread_id on resume

python
# WRONG
# graph.invoke({"msg": "hi"}, {"configurable": {"thread_id": "abc"}})
# graph.invoke(Command(resume="yes"), {"configurable": {"thread_id": "xyz"}})

# RIGHT -- same thread_id
config = {"configurable": {"thread_id": "abc"}}
graph.invoke({"msg": "hi"}, config)
graph.invoke(Command(resume="yes"), config)

Mistake 3: Data that can’t turn into JSON

python
# WRONG -- lambda can't be JSON-serialized
# decision = interrupt({"callback": lambda x: x})

# RIGHT -- only JSON-serializable values
decision = interrupt({"message": "Approve?", "options": ["yes", "no"]})

Mistake 4: Tossing the return value

python
# WRONG -- human input goes nowhere
def bad_review(state):
    interrupt({"question": "approve?"})  # Return value ignored!
    return {"approved": True}  # Always approves!

# RIGHT -- capture and use the return value
def good_review(state):
    decision = interrupt({"question": "approve?"})
    return {"approved": decision.lower() == "yes"}

If you don’t grab what interrupt() gives back, the person’s input vanishes. The node keeps going as if no one said a thing.

When Should You Skip HITL?

HITL adds wait time. Every interrupt means the flow stalls until a person replies — that could be seconds, minutes, or hours. Don’t add pauses where they slow things down without adding safety.

Skip HITL when:

- The task only reads data (no writes, no side effects)
- You can easily undo the action
- You’re running batch jobs on thousands of items
- The stakes are low (styling text, making internal notes)

Use HITL when:

- The action touches outside systems (emails, data changes)
- There’s no undo or it’s costly to reverse
- Rules or laws require a human check
- You’re early in your rollout and still earning trust
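These rules can be expressed as a small gating policy in plain Python. In the sketch below, the tool names and the $50 cost threshold are made-up examples, not anything LangGraph prescribes:

```python
# A minimal gating policy: decide per tool call whether to pause for review.
# Tool names and the cost threshold are hypothetical examples.
SAFE_TOOLS = {"search_docs", "get_weather"}       # read-only: run freely
ALWAYS_REVIEW = {"send_email", "delete_records"}  # side effects: always pause

def needs_approval(tool_name: str, args: dict) -> bool:
    """Return True if a human should review this call before it runs."""
    if tool_name in SAFE_TOOLS:
        return False
    if tool_name in ALWAYS_REVIEW:
        return True
    # Spend cap: pause any job whose estimated cost crosses the threshold
    return args.get("estimated_cost_usd", 0) > 50

print(needs_approval("search_docs", {}))                         # False
print(needs_approval("send_email", {"to": "all-clients"}))       # True
print(needs_approval("run_batch", {"estimated_cost_usd": 120}))  # True
```

In a graph, a check like this would sit in front of your tool executor: calls that return True go through an interrupt() gate, the rest run straight through.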

Summary

HITL in LangGraph boils down to three pieces: a checkpointer to save state, interrupt() to freeze and share data, and Command(resume=...) to carry on with the person’s input.

You picked up five patterns in this post:

  1. Basic approval — freeze, get a yes/no, then send or drop
  2. User edits — freeze, get new content, keep going with the fix
  3. Tool call review — catch tool calls, look at the inputs, okay or block
  4. Picky approval — let safe tools run, flag the risky ones
  5. Multi-step review — set up more than one freeze point in a single flow

The next post digs into Persistence and Checkpointing — how to save graph state to a database, pick up after a server restart, and juggle many chats at once. That’s the backbone that makes HITL work in the real world.

FAQ

Can I have more than one interrupt in the same node?

Yes. Call interrupt() as many times as you like in one node. Each call freezes and waits for a reply, and they run one after the other. One caveat: every resume re-runs the node from the top. Earlier interrupt() calls replay their saved answers instead of pausing again, but any other code before the pause executes again, so keep side effects out of nodes that interrupt.

python
def multi_interrupt_node(state):
    step1 = interrupt({"step": 1, "question": "approve step 1?"})
    # ... process step 1 based on response ...
    step2 = interrupt({"step": 2, "question": "approve step 2?"})
    # ... process step 2 ...
    return state

What if my process crashes between the interrupt and the resume?

With MemorySaver, the state is gone. With a disk-backed saver like PostgresSaver, the state lives on. Your app restarts, you resume with the same thread_id, and the graph picks up right where it was.
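As a rough sketch, swapping in a durable saver looks like this. It assumes the separate langgraph-checkpoint-postgres package and a placeholder connection string; treat it as a shape, not a drop-in:

```python
# Sketch: compile with a Postgres-backed checkpointer so a pending
# interrupt survives a process restart. Assumes the extra
# langgraph-checkpoint-postgres package; db_uri is a placeholder.
def run_app(builder, db_uri):
    from langgraph.checkpoint.postgres import PostgresSaver

    # Keep the saver open for the lifetime of your app: checkpoints
    # written here live in Postgres, not process memory.
    with PostgresSaver.from_conn_string(db_uri) as checkpointer:
        checkpointer.setup()  # creates the checkpoint tables on first run
        graph = builder.compile(checkpointer=checkpointer)
        # ... serve requests here. After a crash + restart, resuming
        # with the SAME thread_id picks up at the pending interrupt:
        # graph.invoke(Command(resume="yes"),
        #              {"configurable": {"thread_id": "abc"}})
```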

Can I set a time limit on an interrupt?

LangGraph has no built-in timeout for interrupts. The pause lasts until you resume it. Build timeouts in your own code — for instance, a cron job that auto-rejects any graph that’s been waiting longer than 24 hours.
