LangGraph Human-in-the-Loop — How to Add Approval Steps to AI Agents

Written by Selva Prabhakaran | 27 min read

Your LangGraph agent runs on its own. It picks tools, routes between nodes, and spits out results. But what happens when it’s about to blast an email to your whole customer list? Or fire a SQL query that wipes a table?

You don’t want that running unsupervised.

Human-in-the-loop (HITL) fixes this. You tell LangGraph: “Pause here. Show the human what’s about to happen. Only keep going if they say yes.” The agent freezes, waits for a thumbs-up, and picks up right where it stopped.

For any agent that touches the real world, HITL is what turns a risky toy into a tool you can trust.

Why Does Human-in-the-Loop Matter?

An agent with no guardrails is a ticking clock. LLMs make things up. They misread what you asked. They take bold action based on wrong facts.

A chatbot giving a bad answer? Annoying. An agent that fires off emails, moves money, or runs destructive queries on the wrong assumption? That’s a real problem. HITL acts as your safety net. Day-to-day tasks still fly at full speed, but you drop human checkpoints into the high-stakes spots.

Where would you actually want a pause button?

  • Tool call review: The agent is about to call an API. You eyeball the request before it goes out.

  • Content review: The agent drafted a customer reply. You scan it before it ships.

  • Data changes: The agent cooked up a SQL query. You make sure it won’t trash your production table.

  • Budget control: The agent wants to run an expensive job. You approve the spend first.

Key Insight: HITL isn’t about mistrusting the agent. It’s about earning trust over time. Gate every action at first. As patterns prove safe, pull checkpoints out one by one. That’s how real-world agents grow up.

Before You Start

  • Python: 3.10+
  • Packages: langgraph 0.4+, langchain-openai 0.3+, langchain-core 0.3+
  • Install: pip install langgraph langchain-openai langchain-core python-dotenv
  • API key: OPENAI_API_KEY in a .env file
  • Background: Posts 1–12, especially Tool Calling and State Management
  • Time: 25–30 minutes

How Does LangGraph Pause a Running Graph?

Two pieces make it work: a checkpointer and an interrupt.

The checkpointer saves the graph’s state after each node runs — like a save slot in a video game. If the graph stops, you load that save and carry on from the exact same spot.

The interrupt marks where to stop. When the graph reaches an interrupt, it writes state, halts, and gives control back to your code. You collect the human’s answer, then kick the graph off again.

Here’s the bare-minimum setup. We spin up a checkpointer with MemorySaver, which stores state in RAM. Perfect for learning and quick prototypes. For production, swap in PostgresSaver — same interface, persistent storage.

python
import os
from dotenv import load_dotenv
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

load_dotenv()

# Create checkpointer -- saves state between pauses
checkpointer = MemorySaver()

# Define state
class State(TypedDict):
    message: str
    approved: bool

print("Checkpointer ready -- MemorySaver stores state in memory")
python
Checkpointer ready -- MemorySaver stores state in memory

Tip: MemorySaver wipes clean when the process shuts down. For real apps, reach for langgraph-checkpoint-postgres or langgraph-checkpoint-sqlite. Your code stays the same — only the checkpointer object changes.

How Do You Pause and Resume a Graph for the First Time?

Let’s build a graph that drafts a message, pauses for approval, then either publishes or drops it.

The interrupt() function accepts any value that can be turned into JSON. That value gets sent to whoever called the graph, so you can show the human exactly what needs a decision. When you resume, whatever value you pass back becomes the return value of interrupt().

Three node functions drive the flow. draft_message writes the text. human_review freezes the graph with interrupt() and grabs the human’s answer. publish finishes the job based on that answer.

python
def draft_message(state: State) -> State:
    """Simulate drafting a message."""
    draft = f"DRAFT: {state['message']}"
    print(f"Drafted message: {draft}")
    return {"message": draft, "approved": False}

def human_review(state: State) -> State:
    """Pause for human approval."""
    decision = interrupt({
        "draft": state["message"],
        "question": "Approve this message? (yes/no)"
    })
    approved = decision.lower() == "yes"
    print(f"Human decision: {decision} -> approved={approved}")
    return {"approved": approved}

def publish(state: State) -> State:
    """Publish the approved message."""
    if state["approved"]:
        print(f"PUBLISHED: {state['message']}")
    else:
        print("Message rejected -- not published")
    return state

print("Node functions defined")
python
Node functions defined

Now connect the nodes and compile with the checkpointer attached. Passing checkpointer=checkpointer is mandatory for interrupt-based graphs. Skip it and LangGraph has no way to save state or pick up later.

python
builder = StateGraph(State)
builder.add_node("draft", draft_message)
builder.add_node("review", human_review)
builder.add_node("publish", publish)

builder.add_edge(START, "draft")
builder.add_edge("draft", "review")
builder.add_edge("review", "publish")
builder.add_edge("publish", END)

graph = builder.compile(checkpointer=checkpointer)

print("Graph compiled with checkpointer")
python
Graph compiled with checkpointer

How Does the Two-Phase Run Pattern Work?

Every HITL workflow splits into two phases. Phase 1: run the graph until the interrupt fires. Phase 2: feed the human’s answer back in and resume. A thread_id in the config links both phases to the same run.

python
# Phase 1: Run until interrupt
config = {"configurable": {"thread_id": "thread-1"}}

result = graph.invoke(
    {"message": "Hello customers!", "approved": False},
    config=config
)

print(f"\nGraph paused. Result keys: {list(result.keys())}")
python
Drafted message: DRAFT: Hello customers!

Graph paused. Result keys: ['message', 'approved']

The graph executed draft, then ran into the interrupt() inside human_review and froze. State got saved. The draft is sitting there, waiting for a human to look at it.

What data did the interrupt expose? Let’s peek at the snapshot.

python
snapshot = graph.get_state(config)
print(f"Next node to run: {snapshot.next}")
print(f"Interrupt payload: {snapshot.tasks[0].interrupts[0].value}")
python
Next node to run: ('review',)
Interrupt payload: {'draft': 'DRAFT: Hello customers!', 'question': 'Approve this message? (yes/no)'}

The snapshot shows the exact pause point and what the interrupt surfaced. In a live app, you’d render this in a UI. Now pass the human’s answer back in.

python
# Phase 2: Resume with human input
result = graph.invoke(
    Command(resume="yes"),
    config=config  # Same thread_id!
)

print(f"\nFinal state: approved={result['approved']}")
python
Human decision: yes -> approved=True
PUBLISHED: DRAFT: Hello customers!

Final state: approved=True

Command(resume="yes") injects "yes" as the return value of interrupt(). The node picks up, finishes its logic, and the remaining graph runs to the end.

Warning: You must resume with the same thread_id. A different ID doesn’t point at the paused run, so your resume value never reaches it. The old graph just hangs in limbo.

Quick check: What if you passed {"configurable": {"thread_id": "thread-DIFFERENT"}} when resuming? LangGraph would look for a thread that doesn’t exist and throw an error. The original paused graph on "thread-1" stays frozen.
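One cheap way to avoid this class of bug is to derive the thread_id from something stable, like a conversation or ticket ID, so phase 1 and phase 2 can never drift apart. A minimal sketch (the sessions dict and config_for helper are illustrative, not LangGraph API):

```python
import uuid

# Illustrative helper: mint one thread_id per conversation and reuse it
# for every invoke on that conversation, including resumes.
sessions: dict[str, str] = {}

def config_for(conversation_id: str) -> dict:
    thread_id = sessions.setdefault(conversation_id, str(uuid.uuid4()))
    return {"configurable": {"thread_id": thread_id}}

# Both phases go through config_for, so the IDs always match:
# graph.invoke({"message": "Hello!", "approved": False}, config=config_for("ticket-123"))
# graph.invoke(Command(resume="yes"), config=config_for("ticket-123"))
```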

How Do You Route Based on Approval or Rejection?

Getting a “yes” is only half the story. You also need a clean path for “no.” A conditional edge right after the review node forks the graph based on the human’s verdict.

python
checkpointer_v2 = MemorySaver()

def conditional_after_review(state: State) -> str:
    """Route based on approval status."""
    if state["approved"]:
        return "publish"
    return "discard"

def discard(state: State) -> State:
    """Handle rejected messages."""
    print(f"DISCARDED: {state['message']}")
    return state

builder2 = StateGraph(State)
builder2.add_node("draft", draft_message)
builder2.add_node("review", human_review)
builder2.add_node("publish", publish)
builder2.add_node("discard", discard)

builder2.add_edge(START, "draft")
builder2.add_edge("draft", "review")
builder2.add_conditional_edges("review", conditional_after_review)
builder2.add_edge("publish", END)
builder2.add_edge("discard", END)

graph_v2 = builder2.compile(checkpointer=checkpointer_v2)

print("Graph v2 compiled -- with approve/reject routing")
python
Graph v2 compiled -- with approve/reject routing
python
# Run and reject
config_v2 = {"configurable": {"thread_id": "thread-reject"}}

graph_v2.invoke(
    {"message": "Buy now!! Limited offer!!!", "approved": False},
    config=config_v2
)

result = graph_v2.invoke(
    Command(resume="no"),
    config=config_v2
)

print(f"\nFinal: approved={result['approved']}")
python
Drafted message: DRAFT: Buy now!! Limited offer!!!
Human decision: no -> approved=False
DISCARDED: DRAFT: Buy now!! Limited offer!!!

Final: approved=False

The spammy draft got flagged and tossed out. The conditional edge routed to discard instead of publish. This yes/no fork is the building block for nearly every HITL workflow you’ll see.
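One nice property of this pattern: conditional-edge functions are plain Python, so you can sanity-check the fork without compiling or invoking a graph at all.

```python
# The same routing logic used above, exercised in isolation.
def conditional_after_review(state: dict) -> str:
    if state["approved"]:
        return "publish"
    return "discard"

assert conditional_after_review({"approved": True}) == "publish"
assert conditional_after_review({"approved": False}) == "discard"
print("Routing logic verified")
```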

Exercise 1: Build an Approval Gate

Build a graph with three nodes: generate, review, and execute. The generate node creates a task description from the input. The review node uses interrupt() to pause for approval. If approved, execute runs the task. If rejected, route to a cancel node.

Hint 1

Define a state with task (str) and status (str) fields. The review node should return the human’s decision in the status field.

Hint 2
Use a conditional edge after review that checks state["status"]. Route to “execute” if status is “approved” and “cancel” if status is “rejected”.

Solution

python
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

class TaskState(TypedDict):
    task: str
    status: str

def generate(state: TaskState) -> TaskState:
    return {"task": f"Task: {state['task']}", "status": "pending"}

def review(state: TaskState) -> TaskState:
    decision = interrupt({"task": state["task"], "question": "Approve? (yes/no)"})
    status = "approved" if decision.lower() == "yes" else "rejected"
    return {"status": status}

def execute(state: TaskState) -> TaskState:
    print(f"Executed: {state['task']}")
    return {"status": "completed"}

def cancel(state: TaskState) -> TaskState:
    print(f"Cancelled: {state['task']}")
    return {"status": "cancelled"}

def route(state: TaskState) -> str:
    return "execute" if state["status"] == "approved" else "cancel"

cp = MemorySaver()
b = StateGraph(TaskState)
b.add_node("generate", generate)
b.add_node("review", review)
b.add_node("execute", execute)
b.add_node("cancel", cancel)
b.add_edge(START, "generate")
b.add_edge("generate", "review")
b.add_conditional_edges("review", route)
b.add_edge("execute", END)
b.add_edge("cancel", END)
graph = b.compile(checkpointer=cp)

cfg = {"configurable": {"thread_id": "ex1"}}
graph.invoke({"task": "Deploy v2.0", "status": ""}, cfg)
result = graph.invoke(Command(resume="yes"), cfg)
print(result["status"])  # completed

The big idea: interrupt() grabs the decision, the conditional edge routes on it. Two concerns, kept apart.

How Do You Let Humans Edit State Mid-Flow?

A binary yes/no doesn’t always cut it. The reviewer might want to fix a typo, change a number, or rewrite a whole section. Content teams run into this constantly.

The fix: pass the corrected version through Command(resume=...). The interrupt returns whatever the human sends — it’s not limited to a short string.

python
class EditableState(TypedDict):
    message: str
    status: str

def generate_draft(state: EditableState) -> EditableState:
    """Generate an initial draft."""
    draft = f"Dear customer, {state['message']}"
    print(f"Generated: {draft}")
    return {"message": draft, "status": "drafted"}

def review_and_edit(state: EditableState) -> EditableState:
    """Let human review and optionally edit the message."""
    response = interrupt({
        "current_draft": state["message"],
        "instructions": "Reply 'approve' or provide an edited version"
    })

    if response.lower() == "approve":
        print("Human approved without changes")
        return {"status": "approved"}
    else:
        print(f"Human edited to: {response}")
        return {"message": response, "status": "approved"}

def send_message(state: EditableState) -> EditableState:
    """Send the final message."""
    print(f"SENT: {state['message']}")
    return {"status": "sent"}

checkpointer_edit = MemorySaver()

builder_edit = StateGraph(EditableState)
builder_edit.add_node("generate", generate_draft)
builder_edit.add_node("review", review_and_edit)
builder_edit.add_node("send", send_message)

builder_edit.add_edge(START, "generate")
builder_edit.add_edge("generate", "review")
builder_edit.add_edge("review", "send")
builder_edit.add_edge("send", END)

graph_edit = builder_edit.compile(checkpointer=checkpointer_edit)

print("Edit-capable graph compiled")
python
Edit-capable graph compiled
python
config_edit = {"configurable": {"thread_id": "thread-edit"}}

# Phase 1: generate and pause
graph_edit.invoke(
    {"message": "your order has shipped", "status": "new"},
    config=config_edit
)

# Phase 2: human provides an edited version
result = graph_edit.invoke(
    Command(resume="Dear valued customer, your order #12345 has shipped and arrives Friday."),
    config=config_edit
)

print(f"\nFinal status: {result['status']}")
python
Generated: Dear customer, your order has shipped
Human edited to: Dear valued customer, your order #12345 has shipped and arrives Friday.
SENT: Dear valued customer, your order #12345 has shipped and arrives Friday.

Final status: sent

The reviewer replaced the bland draft with a personal message. The graph picked up the edited version and used it for every step that followed. This approach works for any kind of fix — rewriting copy, changing numbers, or tweaking config values.
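Because Command(resume=...) accepts any JSON-serializable value, the reviewer isn’t limited to one string. As a sketch, a review node could accept either a plain "approve" or a structured edit dict that patches several fields at once (the apply_review_response helper below is hypothetical, not LangGraph API):

```python
# Hypothetical helper: interpret whatever the reviewer sent back.
def apply_review_response(response) -> dict:
    if isinstance(response, str) and response.lower() == "approve":
        return {"status": "approved"}                        # no changes
    if isinstance(response, dict):
        return {**response, "status": "approved"}            # structured edit
    return {"message": str(response), "status": "approved"}  # free-text rewrite

assert apply_review_response("approve") == {"status": "approved"}
edited = apply_review_response({"message": "Hi Sam", "priority": 1})
assert edited["priority"] == 1 and edited["status"] == "approved"
```

Inside the review node, you’d call this on interrupt()’s return value and return the result as the state update.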

How Do You Review Tool Calls Before They Execute?

This is the HITL pattern you’ll reach for the most: reviewing tool calls before they fire. The agent decides which tool to use and what arguments to pass. A human looks it over. Only approved calls execute.

Why bother? Because you want to see “I’m about to run delete_user(id=42)” before that function actually touches your database.

python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
from langgraph.types import interrupt, Command

@tool
def search_database(query: str) -> str:
    """Search the customer database with a SQL query."""
    return f"Results for: {query} -> 3 records found"

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a customer."""
    return f"Email sent to {to}: {subject}"

tools = [search_database, send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

print("Tools and LLM configured")
python
Tools and LLM configured

We have two tools: a low-risk one (database search) and a high-risk one (email sending). The graph below gates every tool call behind approval. I’ll show you how to be selective later.

The call_llm node gets the model’s reply. human_approve_tools catches any tool-call requests and pauses for a human check. run_tools executes whatever passed review. A router checks whether tools are needed or the agent is done.

python
def call_llm(state: MessagesState) -> MessagesState:
    """Call the LLM, which may request tool calls."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def human_approve_tools(state: MessagesState) -> MessagesState:
    """Intercept tool calls for human approval."""
    last_message = state["messages"][-1]

    if not hasattr(last_message, "tool_calls") or not last_message.tool_calls:
        return state

    tool_info = [
        {"tool": tc["name"], "arguments": tc["args"], "id": tc["id"]}
        for tc in last_message.tool_calls
    ]

    decision = interrupt({
        "pending_tool_calls": tool_info,
        "question": "Approve these tool calls? (yes/no)"
    })

    if decision.lower() != "yes":
        rejection_msg = AIMessage(content="Tool calls were rejected by human reviewer.")
        return {"messages": [rejection_msg]}

    return state

def run_tools(state: MessagesState) -> MessagesState:
    """Execute approved tool calls."""
    tool_node = ToolNode(tools)
    return tool_node.invoke(state)

def should_continue(state: MessagesState) -> str:
    """Check if we need to run tools or finish."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "end"

print("Node functions ready for tool-approval graph")
python
Node functions ready for tool-approval graph
python
checkpointer_tools = MemorySaver()

builder_tools = StateGraph(MessagesState)
builder_tools.add_node("llm", call_llm)
builder_tools.add_node("approve", human_approve_tools)
builder_tools.add_node("tools", run_tools)

builder_tools.add_edge(START, "llm")
builder_tools.add_conditional_edges("llm", should_continue, {
    "tools": "approve",
    "end": END
})
builder_tools.add_edge("approve", "tools")
builder_tools.add_edge("tools", "llm")

graph_tools = builder_tools.compile(checkpointer=checkpointer_tools)

print("Tool-approval graph compiled")
python
Tool-approval graph compiled
python
config_tools = {"configurable": {"thread_id": "thread-tools-1"}}

result = graph_tools.invoke(
    {"messages": [HumanMessage(content="Search our database for customers named Smith")]},
    config=config_tools
)

snapshot = graph_tools.get_state(config_tools)
print(f"Paused at: {snapshot.next}")
interrupt_data = snapshot.tasks[0].interrupts[0].value
print(f"Pending calls: {interrupt_data['pending_tool_calls']}")
python
Paused at: ('approve',)
Pending calls: [{'tool': 'search_database', 'arguments': {'query': 'customers named Smith'}, 'id': '...'}]

The agent wants to look up “customers named Smith.” The interrupt grabbed the call and is waiting for your green light.

python
result = graph_tools.invoke(
    Command(resume="yes"),
    config=config_tools
)

print(f"Agent response: {result['messages'][-1].content}")
python
Agent response: I found 3 records matching "customers named Smith" in the database.

The tool executed, its output flowed back to the LLM, and the agent wrote a final reply. Had we typed “no,” the agent would have received the rejection note instead and wrapped up without calling anything.

[UNDER-THE-HOOD]
What happens inside Command(resume=...): LangGraph reloads the saved state from the checkpointer and re-runs the interrupted node from the top. This time, when execution reaches interrupt(), the call returns your resume value instead of pausing, and the node runs through to the end. One subtlety follows from this: code before interrupt() executes again on resume, so keep side effects after the interrupt or make them idempotent. Feel free to skip this detail if you’re just getting started.

How Do You Auto-Approve Safe Tools and Gate Risky Ones?

Requiring a thumbs-up on every tool call gets tedious quickly. In real systems, you wave low-risk calls through and only gate the risky ones. Here’s how.

python
TOOLS_REQUIRING_APPROVAL = {"send_email"}

def selective_approval(state: MessagesState) -> MessagesState:
    """Only interrupt for tools that need human approval."""
    last_message = state["messages"][-1]

    if not hasattr(last_message, "tool_calls") or not last_message.tool_calls:
        return state

    risky_calls = [
        tc for tc in last_message.tool_calls
        if tc["name"] in TOOLS_REQUIRING_APPROVAL
    ]

    if not risky_calls:
        # All safe -- auto-approve
        print(f"Auto-approved: {[tc['name'] for tc in last_message.tool_calls]}")
        return state

    tool_info = [{"tool": tc["name"], "args": tc["args"]} for tc in risky_calls]

    decision = interrupt({
        "risky_tools": tool_info,
        "auto_approved": [
            tc["name"] for tc in last_message.tool_calls
            if tc["name"] not in TOOLS_REQUIRING_APPROVAL
        ],
        "question": "Approve the risky tool calls above?"
    })

    if decision.lower() != "yes":
        rejection = AIMessage(content="Risky tool calls rejected by reviewer.")
        return {"messages": [rejection]}

    return state

print("Selective approval defined")
print(f"Gated tools: {TOOLS_REQUIRING_APPROVAL}")
python
Selective approval defined
Gated tools: {'send_email'}

Database lookups pass straight through. Email sends get stopped for review. You define the policy; the graph carries it out. Most production setups work exactly this way.

[BEST-PRACTICE]
Start tight, then relax. Ship with interrupts on every tool call. Monitor approval rates. Once a tool gets approved 99% of the time, add it to the auto-approve list. Let data drive trust instead of guessing.
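A minimal sketch of that data-driven loop (the ApprovalTracker class below is hypothetical, not part of LangGraph): log every human verdict, and only promote a tool to the auto-approve set once it clears a threshold with enough samples.

```python
from collections import defaultdict

class ApprovalTracker:
    """Hypothetical helper: decide when a tool has earned auto-approval."""

    def __init__(self, threshold: float = 0.99, min_samples: int = 100):
        self.threshold = threshold
        self.min_samples = min_samples
        self.stats = defaultdict(lambda: [0, 0])  # tool -> [approved, total]

    def record(self, tool_name: str, approved: bool) -> None:
        self.stats[tool_name][0] += int(approved)
        self.stats[tool_name][1] += 1

    def can_auto_approve(self, tool_name: str) -> bool:
        approved, total = self.stats[tool_name]
        return total >= self.min_samples and approved / total >= self.threshold

tracker = ApprovalTracker(threshold=0.9, min_samples=3)
for verdict in (True, True, True):
    tracker.record("search_database", verdict)
print(tracker.can_auto_approve("search_database"))  # 3/3 approvals -> True
```

In selective_approval, you’d then consult tracker.can_auto_approve(tc["name"]) instead of a hardcoded TOOLS_REQUIRING_APPROVAL set.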

How Do Multi-Step Review Chains Work?

Certain workflows call for several human checkpoints, not just one. Picture an agent that writes a report, collects content feedback, makes revisions, and then needs a final sign-off. Each review point is its own interrupt() call.

python
from typing import List

class ReviewState(TypedDict):
    content: str
    review_notes: List[str]
    stage: str

def generate_report(state: ReviewState) -> ReviewState:
    """Generate initial report content."""
    report = f"Q4 Revenue Report: {state['content']}"
    print(f"Generated: {report[:50]}...")
    return {"content": report, "stage": "draft", "review_notes": []}

def first_review(state: ReviewState) -> ReviewState:
    """First checkpoint -- content accuracy."""
    feedback = interrupt({
        "stage": "Content Review",
        "content": state["content"],
        "question": "Is the content accurate? Provide notes or 'approve'"
    })

    if feedback.lower() == "approve":
        print("First review: approved")
        return {"stage": "reviewed", "review_notes": ["Content approved"]}
    else:
        print(f"First review notes: {feedback}")
        return {
            "content": f"{state['content']}\n[REVISED: {feedback}]",
            "stage": "revised",
            "review_notes": [feedback]
        }

def final_review(state: ReviewState) -> ReviewState:
    """Final checkpoint -- publication approval."""
    decision = interrupt({
        "stage": "Final Approval",
        "content": state["content"],
        "review_history": state["review_notes"],
        "question": "Approve for publication? (yes/no)"
    })

    approved = decision.lower() == "yes"
    status = "published" if approved else "rejected"
    print(f"Final review: {status}")
    return {"stage": status}

checkpointer_review = MemorySaver()

builder_review = StateGraph(ReviewState)
builder_review.add_node("generate", generate_report)
builder_review.add_node("first_review", first_review)
builder_review.add_node("final_review", final_review)

builder_review.add_edge(START, "generate")
builder_review.add_edge("generate", "first_review")
builder_review.add_edge("first_review", "final_review")
builder_review.add_edge("final_review", END)

graph_review = builder_review.compile(checkpointer=checkpointer_review)

print("Multi-step review graph compiled")
python
Multi-step review graph compiled

Let’s walk through the full two-interrupt flow.

python
config_review = {"configurable": {"thread_id": "thread-review"}}

graph_review.invoke(
    {"content": "Total revenue: $2.4M", "review_notes": [], "stage": "new"},
    config=config_review
)

snapshot = graph_review.get_state(config_review)
print(f"Paused at: {snapshot.next}")
print(f"Current stage: {snapshot.values['stage']}")
python
Generated: Q4 Revenue Report: Total revenue: $2.4M...
Paused at: ('first_review',)
Current stage: draft
python
# First review: provide corrections
graph_review.invoke(
    Command(resume="Revenue should be $2.6M -- update the figure"),
    config=config_review
)

snapshot = graph_review.get_state(config_review)
print(f"Paused at: {snapshot.next}")
print(f"Current stage: {snapshot.values['stage']}")
python
First review notes: Revenue should be $2.6M -- update the figure
Paused at: ('final_review',)
Current stage: revised

The graph accepted the corrections at the first stop, revised the content, then froze again at the second stop. The two interrupts are fully independent.

python
result = graph_review.invoke(
    Command(resume="yes"),
    config=config_review
)

print(f"Final stage: {result['stage']}")
python
Final review: published
Final stage: published

Two human checkpoints, one seamless workflow. The checkpointer holds the thread together across both pauses.

What Are interrupt_before and interrupt_after?

Up to now we’ve placed interrupt() calls inside node functions. LangGraph also has interrupt_before and interrupt_after — you set these when you compile, and they automatically freeze the graph before or after specific nodes.

When should you use which? Here’s a quick comparison:

  • Where you set it: interrupt() lives inside the node code; interrupt_before / interrupt_after are set at compile time.
  • Data exchange: interrupt() sends data to the caller and gets a resume value back; the compile-time options only let you inspect state.
  • Use case: interrupt() is for interactive approval and collecting input; the compile-time options are for state inspection and debugging checkpoints.
  • Modify state: interrupt() updates state via the resume value; the compile-time options go through update_state().
python
def step_one(state: State) -> State:
    """First processing step."""
    print("Step one executing")
    return {"message": f"Processed: {state['message']}", "approved": False}

def step_two(state: State) -> State:
    """Second processing step."""
    print("Step two executing")
    return {"message": f"Final: {state['message']}", "approved": True}

checkpointer_static = MemorySaver()

builder_static = StateGraph(State)
builder_static.add_node("step_one", step_one)
builder_static.add_node("step_two", step_two)
builder_static.add_edge(START, "step_one")
builder_static.add_edge("step_one", "step_two")
builder_static.add_edge("step_two", END)

# Pause BEFORE step_two runs
compiled_static = builder_static.compile(
    checkpointer=checkpointer_static,
    interrupt_before=["step_two"]
)

print("Graph compiled with interrupt_before=['step_two']")
python
Graph compiled with interrupt_before=['step_two']
python
config_static = {"configurable": {"thread_id": "thread-static"}}

result = compiled_static.invoke(
    {"message": "Hello", "approved": False},
    config=config_static
)

snapshot = compiled_static.get_state(config_static)
print(f"Paused before: {snapshot.next}")
print(f"Current message: {snapshot.values['message']}")
python
Step one executing
Paused before: ('step_two',)
Current message: Processed: Hello

Step one ran. The graph froze right before step two started. At this point you can peek at the state — or change it — before letting it continue.

python
# Resume -- no Command needed for static interrupts
result = compiled_static.invoke(None, config=config_static)

print(f"Final message: {result['message']}")
print(f"Approved: {result['approved']}")
python
Step two executing
Final message: Final: Processed: Hello
Approved: True

[COMMON-MISTAKE]
Don’t mix up interrupt() with interrupt_before/interrupt_after. The interrupt() function pauses mid-node and supports two-way data exchange. The compile-time options pause between nodes and only let you inspect state. Use interrupt() for interactive approval. Use the compile-time options for debugging or simple state checkpoints.

How Do You Edit State Before Resuming?

When you use interrupt_before, you can also patch the state directly with update_state() before the next node kicks off. It’s a shortcut for small fixes that don’t need a dedicated review node.

python
checkpointer_modify = MemorySaver()

class TaskState(TypedDict):
    task: str
    priority: int

def process_task(state: TaskState) -> TaskState:
    print(f"Processing '{state['task']}' with priority {state['priority']}")
    return state

def execute_task(state: TaskState) -> TaskState:
    print(f"Executing: {state['task']} (priority={state['priority']})")
    return state

builder_mod = StateGraph(TaskState)
builder_mod.add_node("process", process_task)
builder_mod.add_node("execute", execute_task)
builder_mod.add_edge(START, "process")
builder_mod.add_edge("process", "execute")
builder_mod.add_edge("execute", END)

graph_mod = builder_mod.compile(
    checkpointer=checkpointer_modify,
    interrupt_before=["execute"]
)

config_mod = {"configurable": {"thread_id": "thread-modify"}}
graph_mod.invoke({"task": "Deploy to staging", "priority": 3}, config=config_mod)

print(f"Current priority: {graph_mod.get_state(config_mod).values['priority']}")
python
Processing 'Deploy to staging' with priority 3
Current priority: 3
python
# Human changes priority before execution continues
graph_mod.update_state(config_mod, {"priority": 1})

print(f"Updated priority: {graph_mod.get_state(config_mod).values['priority']}")

result = graph_mod.invoke(None, config=config_mod)
print(f"Final: task='{result['task']}', priority={result['priority']}")
python
Updated priority: 1
Executing: Deploy to staging (priority=1)
Final: task='Deploy to staging', priority=1

The reviewer bumped priority from 3 to 1 before the execute step kicked off. update_state() patches the saved state in place. The next node sees the updated values right away.

Exercise 2: Content Moderation with Selective Routing

Build a content moderation graph. The analyze node sorts a comment into “safe”, “flagged”, or “blocked”. Safe comments publish right away. Blocked comments get dropped. Flagged comments pause for human review via interrupt(), where the reviewer can approve or reject.

Hint 1

Define a state with comment, classification, and decision fields. Use a conditional edge after the analyze node to route by classification.

Hint 2
The review node calls interrupt() with the comment and classification. After review, add another conditional edge that routes to “publish” or “discard” based on the human’s decision.

Solution

python
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from typing import TypedDict

class ModerationState(TypedDict):
    comment: str
    classification: str
    decision: str

def analyze(state: ModerationState) -> ModerationState:
    comment = state["comment"].lower()
    if any(w in comment for w in ["spam", "scam"]):
        return {"classification": "blocked"}
    elif any(w in comment for w in ["maybe", "borderline"]):
        return {"classification": "flagged"}
    return {"classification": "safe"}

def review(state: ModerationState) -> ModerationState:
    decision = interrupt({
        "comment": state["comment"],
        "classification": state["classification"]
    })
    return {"decision": decision.lower()}

def publish(state: ModerationState) -> ModerationState:
    print(f"Published: {state['comment']}")
    return {"decision": "published"}

def discard_it(state: ModerationState) -> ModerationState:
    print(f"Discarded: {state['comment']}")
    return {"decision": "discarded"}

def route_analysis(state: ModerationState) -> str:
    c = state["classification"]
    if c == "safe": return "publish"
    if c == "flagged": return "review"
    return "discard_it"

def route_review(state: ModerationState) -> str:
    return "publish" if state["decision"] == "approve" else "discard_it"

cp = MemorySaver()
b = StateGraph(ModerationState)
b.add_node("analyze", analyze)
b.add_node("review", review)
b.add_node("publish", publish)
b.add_node("discard_it", discard_it)
b.add_edge(START, "analyze")
b.add_conditional_edges("analyze", route_analysis)
b.add_conditional_edges("review", route_review)
b.add_edge("publish", END)
b.add_edge("discard_it", END)
graph = b.compile(checkpointer=cp)

# Test flagged comment
cfg = {"configurable": {"thread_id": "mod-1"}}
graph.invoke({"comment": "This is borderline", "classification": "", "decision": ""}, cfg)
result = graph.invoke(Command(resume="approve"), cfg)
print(result["decision"])  # published

This exercise blends two ideas: conditional routing (from Post 8) and HITL interrupts. The challenge is stacking two conditional edges — one after analysis, one after the human review step.

What Are the Most Common HITL Mistakes?

I see the same HITL bugs pop up in almost every project. Fixing these upfront saves hours of head-scratching.

Mistake 1: Forgetting the checkpointer

python
# WRONG -- interrupt() fails without a checkpointer
# graph = builder.compile()

# RIGHT
graph = builder.compile(checkpointer=MemorySaver())

Without a checkpointer, interrupt() has nowhere to save state. You’ll get a ValueError.

Mistake 2: Using a different thread_id when resuming

python
# WRONG
# graph.invoke({"msg": "hi"}, {"configurable": {"thread_id": "abc"}})
# graph.invoke(Command(resume="yes"), {"configurable": {"thread_id": "xyz"}})

# RIGHT -- same thread_id
config = {"configurable": {"thread_id": "abc"}}
graph.invoke({"msg": "hi"}, config)
graph.invoke(Command(resume="yes"), config)

Mistake 3: Passing data that can’t be serialized to JSON

python
# WRONG -- lambda can't be JSON-serialized
# decision = interrupt({"callback": lambda x: x})

# RIGHT -- only JSON-serializable values
decision = interrupt({"message": "Approve?", "options": ["yes", "no"]})

Mistake 4: Ignoring the interrupt return value

python
# WRONG -- human input goes nowhere
def bad_review(state):
    interrupt({"question": "approve?"})  # Return value ignored!
    return {"approved": True}  # Always approves!

# RIGHT -- capture and use the return value
def good_review(state):
    decision = interrupt({"question": "approve?"})
    return {"approved": decision.lower() == "yes"}

Skip the capture and the human’s answer disappears into the void. The node plows ahead as if nobody replied.

When Should You Skip Human-in-the-Loop?

HITL costs you latency. Every interrupt stalls the workflow until a person responds — that might take seconds, minutes, or hours. Don’t drop pauses into spots where they add drag without adding safety.

Skip HITL when:
– The operation is read-only (fetching data, searching)
– The action is easy to undo
– The workflow processes thousands of items in batch mode
– The task is low-stakes (formatting text, writing internal summaries)

Use HITL when:
– The action hits external systems (sending messages, changing data)
– The action is hard or impossible to reverse
– Regulations require human oversight
– You’re early in rolling out an agent and still building confidence

Summary

HITL in LangGraph rests on three building blocks: a checkpointer that saves state, interrupt() that pauses the graph and surfaces data, and Command(resume=...) that feeds human input back in.

You learned five patterns in this post:

  • Basic approval — pause, get a yes or no, continue or discard

  • User corrections — pause, receive edited content, keep going with the changes

  • Tool call review — intercept tool calls, inspect arguments, approve or reject

  • Selective gating — auto-approve safe tools, flag risky ones

  • Multi-step review — several interrupt points spread across one workflow

Up next: Persistence and Checkpointing — saving graph state to databases, resuming across server restarts, and juggling many conversations at once. That’s the backbone that lets HITL work at scale.

FAQ

Can I put multiple interrupts in the same node?

Yes. Call interrupt() more than once inside a single node. Each call pauses and waits for a resume. They fire one after another.

python
def multi_interrupt_node(state):
    step1 = interrupt({"step": 1, "question": "approve step 1?"})
    # ... process step 1 based on response ...
    step2 = interrupt({"step": 2, "question": "approve step 2?"})
    # ... process step 2 ...
    return state

What happens if my process crashes between interrupt and resume?

With MemorySaver, the state is gone. With a durable checkpointer like PostgresSaver, the state lives in the database. Your process restarts, you resume with the same thread_id, and the graph picks up right where it stopped.


Can I set a timeout on an interrupt?

LangGraph has no built-in interrupt timeout. The interrupt waits as long as it takes. Build timeouts in your own layer — for example, a background job that auto-rejects paused graphs after 24 hours.

References

  • LangGraph documentation — Human-in-the-loop concepts. Link

  • LangGraph documentation — Interrupts. Link

  • LangGraph documentation — Persistence and checkpointers. Link

  • LangGraph API Reference — interrupt(). Link

  • LangGraph API Reference — Command. Link

  • LangGraph changelog — v0.4: Working with Interrupts. Link

  • LangChain blog — LangGraph v0.2: Checkpointer libraries. Link

  • LangGraph how-to guides — Human-in-the-loop. Link
