1. `interrupt(...)` pauses execution at `human_node`, surfacing the given payload to a human.
2. Any JSON-serializable value can be passed to the `interrupt` function. Here, a dict containing the text to revise.
3. Once resumed, the return value of `interrupt(...)` is the human-provided input, which is used to update the state.
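The pause-and-resume contract those notes describe can be sketched with nothing but the standard library: a node "pauses" by yielding a payload, and the value later sent back in becomes the return value of the pause. This is an illustration of the mechanism, not LangGraph's implementation:

```python
# Stdlib-only sketch of the interrupt/resume contract (not LangGraph's code).
def human_node(state):
    # Yielding stands in for interrupt(...): surface a payload and pause.
    revised = yield {"text_to_revise": state["some_text"]}
    # Once resumed, the sent-in value is used to update the state.
    return {"some_text": revised}

def run_until_interrupt(node, state):
    gen = node(state)
    payload = next(gen)          # advance to the "interrupt"
    return gen, payload

def resume(gen, human_input):
    try:
        gen.send(human_input)    # inject the human's input at the pause point
    except StopIteration as done:
        return done.value        # the node's final result

gen, payload = run_until_interrupt(human_node, {"some_text": "original text"})
# payload == {'text_to_revise': 'original text'}
final = resume(gen, "Edited text")
# final == {'some_text': 'Edited text'}
```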
Once you have a running Agent Server, you can interact with it using the LangGraph SDK:
Python
JavaScript
cURL
```python
from langgraph_sdk import get_client
from langgraph_sdk.schema import Command

client = get_client(url=<DEPLOYMENT_URL>)

# Using the graph deployed with the name "agent"
assistant_id = "agent"

# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]

# Run the graph until the interrupt is hit.
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input={"some_text": "original text"}   # (1)!
)

print(result['__interrupt__'])   # (2)!
# > [
# >     {
# >         'value': {'text_to_revise': 'original text'},
# >         'resumable': True,
# >         'ns': ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
# >         'when': 'during'
# >     }
# > ]

# Resume the graph
print(
    await client.runs.wait(
        thread_id,
        assistant_id,
        command=Command(resume="Edited text")   # (3)!
    )
)
# > {'some_text': 'Edited text'}
```
1. The graph is invoked with some initial state.
2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
3. The graph is resumed with a `Command(resume=...)`, injecting the human's input and continuing execution.
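Those three steps can be strung together into a small driver loop. Below is a runnable sketch in which `StubClient` is a stand-in for the real SDK client (its `wait` method mimics the response shapes shown above), and `get_human_input` would be your UI or prompt:

```python
# Sketch of a run / inspect / resume loop around the responses shown above.
# StubClient is a local stand-in for the SDK client, not part of the SDK.
class StubClient:
    def wait(self, input=None, command=None):
        if command is None:
            # First call: pretend the graph hit an interrupt.
            return {"__interrupt__": [
                {"value": {"text_to_revise": input["some_text"]}}
            ]}
        # Resume call: pretend the graph finished with the edited text.
        return {"some_text": command["resume"]}

def run_with_review(client, initial_input, get_human_input):
    result = client.wait(input=initial_input)        # step 1: start the run
    while "__interrupt__" in result:
        payload = result["__interrupt__"][0]["value"]  # step 2: inspect payload
        result = client.wait(command={"resume": get_human_input(payload)})  # step 3
    return result

out = run_with_review(StubClient(), {"some_text": "original text"},
                      lambda payload: "Edited text")
# out == {'some_text': 'Edited text'}
```

With the real SDK, the two `client.wait(...)` calls would be `await client.runs.wait(thread_id, assistant_id, ...)` as in the block above.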
```javascript
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });

// Using the graph deployed with the name "agent"
const assistantID = "agent";

// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];

// Run the graph until the interrupt is hit.
const result = await client.runs.wait(
  threadID,
  assistantID,
  { input: { "some_text": "original text" } }   // (1)!
);

console.log(result["__interrupt__"]);   // (2)!
// > [
// >   {
// >     value: { text_to_revise: 'original text' },
// >     resumable: true,
// >     ns: ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
// >     when: 'during'
// >   }
// > ]

// Resume the graph
console.log(
  await client.runs.wait(
    threadID,
    assistantID,
    { command: { resume: "Edited text" } }   // (3)!
  )
);
// > { some_text: 'Edited text' }
```
1. The graph is invoked with some initial state.
2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
3. The graph is resumed with a `{ resume: ... }` command object, injecting the human's input and continuing execution.
Create a thread:
```bash
curl --request POST \
  --url <DEPLOYMENT_URL>/threads \
  --header 'Content-Type: application/json' \
  --data '{}'
```
1. `client.runs.wait` is called with the `interrupt_before` and `interrupt_after` parameters. This is a run-time configuration and can be changed for every invocation.
2. `interrupt_before` specifies the nodes where execution should pause before the node is executed.
3. `interrupt_after` specifies the nodes where execution should pause after the node is executed.
1. `client.runs.wait` is called with the `interruptBefore` and `interruptAfter` parameters. This is a run-time configuration and can be changed for every invocation.
2. `interruptBefore` specifies the nodes where execution should pause before the node is executed.
3. `interruptAfter` specifies the nodes where execution should pause after the node is executed.
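The notes above describe a call of roughly the following shape. This is a sketch: `node_a`, `node_b`, and `node_c` are placeholder node names, and the client/thread setup is elided:

```python
# Run-time breakpoint configuration (sketch; node names are placeholders).
run_kwargs = {
    "input": {"some_text": "original text"},
    "interrupt_before": ["node_a"],           # pause before node_a executes
    "interrupt_after": ["node_b", "node_c"],  # pause after these nodes execute
}

# result = await client.runs.wait(thread_id, assistant_id, **run_kwargs)
```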
The following example shows how to add static interrupts:
Python
JavaScript
cURL
```python
from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)

# Using the graph deployed with the name "agent"
assistant_id = "agent"

# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]

# Run the graph until the breakpoint
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input=inputs   # (1)!
)

# Resume the graph
await client.runs.wait(
    thread_id,
    assistant_id,
    input=None   # (2)!
)
```
1. The graph is run until the first breakpoint is hit.
2. The graph is resumed by passing in `None` for the input. This will run the graph until the next breakpoint is hit.
```javascript
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });

// Using the graph deployed with the name "agent"
const assistantID = "agent";

// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];

// Run the graph until the breakpoint
const result = await client.runs.wait(
  threadID,
  assistantID,
  { input: input }   // (1)!
);

// Resume the graph
await client.runs.wait(
  threadID,
  assistantID,
  { input: null }   // (2)!
);
```
1. The graph is run until the first breakpoint is hit.
2. The graph is resumed by passing in `null` for the input. This will run the graph until the next breakpoint is hit.
Create a thread:
```bash
curl --request POST \
  --url <DEPLOYMENT_URL>/threads \
  --header 'Content-Type: application/json' \
  --data '{}'
```
Common patterns: learn how to implement patterns like approving/rejecting actions, requesting user input, tool call review, and validating human input.
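As an illustration of the first of those patterns, an approve/reject review can ride on the same interrupt mechanism: the surfaced payload carries the proposed action, and the resume value carries the decision. The payload and decision shapes below are assumptions for this stdlib-only sketch, not a library API:

```python
# Stdlib sketch of an approve/reject review step (shapes are assumed).
def review_node(proposed_action):
    # Yield stands in for interrupt(...): surface the action for review.
    decision = yield {"action_to_review": proposed_action}
    if decision == "approve":
        return {"status": "executed", "action": proposed_action}
    return {"status": "rejected", "action": proposed_action}

def run_review(proposed_action, decision):
    gen = review_node(proposed_action)
    payload = next(gen)              # pause with the proposed action
    try:
        gen.send(decision)           # resume with the human decision
    except StopIteration as done:
        return payload, done.value

payload, result = run_review("delete_record", "reject")
# result == {'status': 'rejected', 'action': 'delete_record'}
```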