
LangGraph Tutorial: Build Stateful AI Agents with Python

Dive into LangGraph to build robust, stateful AI agents using Python. This comprehensive tutorial covers everything from basic graph construction to advanced multi-agent systems, ensuring your AI applications maintain context and make intelligent decisions in 2026. Learn how to leverage its power for complex, real-world scenarios.

Introduction: Mastering Stateful AI Agents with LangGraph

The landscape of artificial intelligence continues to evolve rapidly, and in late 2025 and early 2026, the demand for more sophisticated, context-aware AI agents has never been higher. Traditional LLM applications often operate in a stateless manner, forgetting past interactions and requiring users to re-provide context. This limitation significantly hinders their ability to perform complex, multi-step tasks or engage in extended conversations effectively. This is precisely where LangGraph steps in, offering a powerful solution for building truly stateful AI agents capable of maintaining memory, executing conditional logic, and orchestrating intricate workflows. This comprehensive LangGraph Tutorial will guide you through the process of developing such advanced systems using Python, enabling you to create AI agents that are not only intelligent but also persistent and adaptive.

LangGraph, an extension of the popular LangChain framework, provides a graph-based approach to defining agent behavior. This visual and programmatic paradigm allows developers to explicitly map out the flow of information and decision-making within an AI system. By representing an agent's logic as a series of nodes and edges, you gain unparalleled control over its execution path, state management, and interaction with various tools and LLMs. This tutorial is designed for developers, researchers, and AI enthusiasts eager to move beyond basic chatbot implementations and build stateful AI agents that can tackle real-world challenges, from automated customer support to complex research assistants. We will explore its core concepts and walk through practical examples to illustrate its capabilities.

Understanding LangGraph's Core Concepts

Before we dive into coding, it is crucial to grasp the fundamental building blocks of LangGraph. At its heart, LangGraph models agents as finite state machines, where each 'state' represents a particular stage in the agent's workflow, and 'transitions' define how the agent moves between these states. This architecture is particularly well-suited for creating multi-turn conversations, multi-agent systems, and workflows that require persistent memory and conditional branching. Key concepts include graphs, nodes, edges, and state, which collectively enable the creation of highly dynamic and intelligent stateful AI agents. Understanding these elements is the first step in mastering this powerful framework for sophisticated AI development.

  • State: The central component. It defines the information that needs to be passed between nodes and persists across different steps of the agent's execution. This can be as simple as a dictionary or a more complex Pydantic model.
  • Nodes: These are the individual units of work within the graph. A node can be an LLM call, a tool invocation, a human intervention, or any custom Python function. Each node receives the current state, performs an action, and returns an update to the state.
  • Edges: Edges connect nodes and define the flow of execution. There are two main types: 'normal edges' that always route to a fixed next node, and 'conditional edges' that choose the next node based on the current state or a node's output.
  • Graph: The overall structure comprising nodes and edges, along with a defined entry and exit point. LangGraph compiles this graph into an executable workflow.
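
The state-merging behavior is worth seeing in isolation before wiring a real graph. The sketch below is plain Python with no LangGraph dependency; `merge_update` is a hypothetical helper, not a LangGraph API. It mimics how a reducer such as `operator.add` folds each node's partial update into the shared state, which is the mechanism behind `Annotated[List[BaseMessage], operator.add]` used later in this tutorial.

```python
import operator
from typing import Any, Callable, Dict

# Illustrative sketch (plain Python, no LangGraph dependency): how an
# additive reducer merges each node's partial update into the shared state.
def merge_update(state: Dict[str, Any],
                 update: Dict[str, Any],
                 reducers: Dict[str, Callable]) -> Dict[str, Any]:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)  # e.g. list concat
        else:
            merged[key] = value  # plain overwrite when no reducer is declared
    return merged

state = {"messages": ["user: hi"]}
# A node returns only its delta; the reducer appends rather than replaces.
state = merge_update(state, {"messages": ["ai: hello!"]},
                     {"messages": operator.add})
print(state["messages"])  # ['user: hi', 'ai: hello!']
```

The key design point: nodes never mutate the state directly, they return deltas, and the reducer decides whether a key accumulates or is overwritten.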

Why Stateful Agents?

Stateful agents remember past interactions and context, allowing for more coherent and effective long-running conversations and complex task execution. This is a significant improvement over stateless models that treat each request as new, and it makes them well suited to enterprise deployments; Klarna, for example, has reportedly saved $60 million using similar agentic approaches [Firecrawl.dev](https://www.firecrawl.dev/blog/best-open-source-agent-frameworks).

LangGraph Tutorial: Building Your First Stateful Agent

This section provides a step-by-step guide to constructing a basic stateful AI agent using LangGraph. We will start by defining a simple state, then create nodes for LLM interaction and tool usage, and finally, connect them with edges to form a functional graph. For this tutorial, we will use a general-purpose LLM like GPT-5.3-Codex or Gemini 3.1 Pro Preview for text generation, demonstrating how to integrate powerful models into your agent workflows. This foundational example will lay the groundwork for more complex agent designs, illustrating the core principles of the framework.

Step-by-Step: Simple Stateful Agent

  1. Step 1: Install LangGraph and LangChain

     Begin by installing the necessary libraries. LangGraph builds upon LangChain, so both are essential. Ensure your Python environment is set up correctly. This step is critical for accessing all the functionalities required to build your agent.

  2. Step 2: Define the Agent State

     Create a Pydantic model or a simple TypedDict to represent the state of your agent. This state will hold all the relevant information that needs to be passed between nodes in your graph. For our example, we'll keep it simple with just 'messages'.

  3. Step 3: Define Agent Nodes (LLM and Tool)

     Implement the functions that will act as nodes in your graph. One node will be responsible for calling an LLM, and another for executing a tool. For instance, the LLM node will generate responses, while a tool node might search the web or interact with an external API. Consider using models like Qwen3 Max Thinking for robust LLM interactions.

  4. Step 4: Create the Graph with Conditional Routing

     Instantiate `StateGraph` and add your defined nodes. Then, define the edges, including a conditional edge that decides whether to call the LLM or a tool based on the LLM's output. This is where the 'stateful' aspect truly shines, allowing dynamic decision-making.

  5. Step 5: Compile and Invoke the Agent

     Compile your graph into a runnable agent with `workflow.compile()`. Once compiled, you can invoke your agent with an initial state and observe its behavior. This step demonstrates the full execution flow of your newly built stateful agent.

simple_agent.py
from typing import TypedDict, Annotated, List
import operator
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

# Step 1: Define the Agent State
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

# Initialize LLM (using a Multi AI platform model)
llm = ChatOpenAI(model="gpt-5-3-codex", temperature=0.7, base_url="https://api.multi-ai.ai/v1", api_key="YOUR_MULTI_AI_KEY")

# Step 2: Define Agent Nodes
def call_llm(state: AgentState):
    messages = state["messages"]
    response = llm.invoke(messages)
    return {"messages": [response]}

def tool_node(state: AgentState):
    # Simulate a tool call, e.g., a web search or calculator
    last_message = state["messages"][-1].content
    if "weather" in last_message.lower():
        tool_result = "The weather in London is 10°C and cloudy."
    else:
        tool_result = "Tool did not find a specific answer for your query."
    return {"messages": [HumanMessage(content=tool_result, name="tool_output")]}

# Step 3: Define conditional logic
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    # Simple conditional: if LLM suggests a tool, use it
    if "tool_call" in last_message.content.lower(): # Simplified for example
        return "continue_tool"
    return "end"

# Step 4: Create the Graph
workflow = StateGraph(AgentState)

workflow.add_node("llm", call_llm)
workflow.add_node("tool", tool_node)

workflow.add_edge(START, "llm")
workflow.add_conditional_edges(
    "llm",
    should_continue,
    {"continue_tool": "tool", "end": END}
)
workflow.add_edge("tool", END) # For simplicity, tool usage ends the interaction

# Step 5: Compile and Invoke
app = workflow.compile()

# Example usage
initial_state = {"messages": [HumanMessage(content="What is the weather like in London? (tool_call)")]}
output = app.invoke(initial_state)
print(output)

initial_state_no_tool = {"messages": [HumanMessage(content="Tell me a joke.")]}
output_no_tool = app.invoke(initial_state_no_tool)
print(output_no_tool)

Advanced LangGraph Techniques: Loops and Multi-Agent Orchestration

Beyond simple linear flows, LangGraph excels at handling more complex scenarios, including iterative processes and coordinating multiple specialized agents. The ability to define 'loops' within your graph is crucial for tasks requiring refinement, self-correction, or repeated actions based on evolving state. Imagine an agent that iteratively refines a document or a research agent that performs multiple searches until a satisfactory answer is found. Furthermore, LangGraph's architecture naturally supports multi-agent systems, where different agents, each with specific roles and tools, collaborate to achieve a common goal. This capability allows you to build stateful AI agents that mimic human team dynamics, enhancing problem-solving efficiency and robustness. Read also: How to Build AI Agents with LangChain: Complete Guide 2026
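
The loop pattern described above boils down to "route back to the same node until a check passes or a step budget runs out". The plain-Python sketch below captures that control flow; the `refine_until_done` helper and the string-based quality check are illustrative stand-ins for an LLM-based critic, not LangGraph APIs. In a real graph, a conditional edge pointing back at the same node plays this role, with the runtime's recursion limit acting as the step budget.

```python
# Hypothetical sketch of the refinement loop a cyclic graph expresses:
# keep revising a draft until a quality check passes or a budget runs out.
def refine_until_done(draft: str, max_steps: int = 5) -> str:
    for step in range(max_steps):
        if "TODO" not in draft:      # stand-in for an LLM-based quality check
            return draft
        # Stand-in for a self-correction step that fills one gap per pass
        draft = draft.replace("TODO", f"section {step + 1}", 1)
    return draft  # budget exhausted; return the best effort so far

print(refine_until_done("Intro. TODO. TODO. Conclusion."))
# Intro. section 1. section 2. Conclusion.
```

Capping the number of iterations is the important part: a cyclic graph without a termination condition or step limit can loop indefinitely.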

To illustrate, consider a multi-agent system where one agent is a 'Researcher' using search tools, and another is an 'Editor' refining the researcher's output. LangGraph allows you to define nodes for each agent, and conditional edges can route the state between them. For example, the Researcher might produce an initial draft, which then gets passed to the Editor. The Editor might then send it back to the Researcher for more information if a gap is identified, creating a powerful feedback loop. This kind of sophisticated orchestration is what makes LangGraph a leading framework for building advanced AI applications in 2026, leveraging models like DeepSeek V3.2 or Qwen3 Coder Plus for specialized tasks.

multi_agent_flow.py
from typing import TypedDict, Annotated, List
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# Define a more complex state for multi-agent interaction
class MultiAgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]
    research_needed: bool
    edit_needed: bool

# Initialize LLMs for different roles
research_llm = ChatOpenAI(model="gemini-3-1-pro-preview", temperature=0.5, base_url="https://api.multi-ai.ai/v1", api_key="YOUR_MULTI_AI_KEY")
edit_llm = ChatOpenAI(model="gpt-5-3-codex", temperature=0.3, base_url="https://api.multi-ai.ai/v1", api_key="YOUR_MULTI_AI_KEY")

# Researcher Agent Node
def researcher_node(state: MultiAgentState):
    print("---RESEARCHER--- ")
    messages = state["messages"]
    response = research_llm.invoke([HumanMessage(content="Perform research on: " + messages[-1].content)])
    # Simulate a research finding; flag follow-up work for 'complex' queries
    if "complex" in messages[-1].content.lower():
        research_content = (f"Research finding for '{messages[-1].content}': {response.content}\n"
                            f"More info needed for a complete answer.")
    else:
        research_content = f"Research complete: {response.content}"

    return {"messages": [AIMessage(content=research_content, name="researcher")],
            "research_needed": "complex" in messages[-1].content.lower(),
            "edit_needed": True}

# Editor Agent Node
def editor_node(state: MultiAgentState):
    print("---EDITOR--- ")
    messages = state["messages"]
    last_research = next((msg.content for msg in reversed(messages) if msg.name == "researcher"), "")
    
    edited_content = edit_llm.invoke([HumanMessage(content=f"Review and refine this research content for clarity and completeness: {last_research}")]).content
    
    # Simulate editor deciding if more research is needed
    needs_more_research = "more info needed" in last_research.lower()
    
    return {"messages": [AIMessage(content=edited_content, name="editor")],
            "research_needed": needs_more_research,
            "edit_needed": not needs_more_research}

# Conditional routing is defined inline via lambdas on each edge below.

# Build the multi-agent workflow
multi_agent_workflow = StateGraph(MultiAgentState)

multi_agent_workflow.add_node("researcher", researcher_node)
multi_agent_workflow.add_node("editor", editor_node)

multi_agent_workflow.set_entry_point("researcher")

multi_agent_workflow.add_conditional_edges(
    "researcher",
    lambda x: "editor" if x["edit_needed"] else "researcher", # Editor always reviews, but researcher might need to loop for more info
    {"editor": "editor", "researcher": "researcher"}
)
multi_agent_workflow.add_conditional_edges(
    "editor",
    lambda x: "researcher" if x["research_needed"] else "end",
    {"researcher": "researcher", "end": END}
)

app_multi_agent = multi_agent_workflow.compile()

# Example usage
print("\n--- Running Complex Task --- ")
initial_multi_state = {"messages": [HumanMessage(content="Write a detailed report on the impact of quantum computing by 2030 (complex task).")]}
final_output = app_multi_agent.invoke(initial_multi_state)
print("\nFinal Output:")
print(final_output["messages"][-1].content)

Integrating Tools and External Services with LangGraph

A truly capable AI agent needs to interact with the outside world. LangGraph makes it straightforward to integrate various tools and external services, extending the capabilities of your LLM beyond its training data. Whether it's performing web searches, querying databases, or interacting with custom APIs, tools are essential for building practical and useful stateful AI agents. LangChain's extensive tool ecosystem is directly compatible with LangGraph, allowing you to leverage a vast array of pre-built integrations or define your own custom tools. This seamless integration ensures that your agents can access real-time information and perform specific actions, making them incredibly versatile.

To integrate a tool, you typically define a function that wraps the external service and expose it to your agent. The LLM can then be prompted to decide when and how to use these tools. LangGraph's conditional edges can be configured to detect tool calls in the LLM's output and route the execution to the appropriate tool-handling node. This iterative process of LLM thinking, tool execution, and state update is central to the ReACT (Reasoning and Acting) paradigm, which LangGraph supports natively. Incorporating models like Z.AI: GLM 4.6V or AionLabs: Aion-2.0 can further enhance the agent's ability to discern when and how to use tools effectively.
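
The detect-and-route step can be illustrated without any framework at all. The sketch below is plain Python, not LangChain or LangGraph API; the dict-shaped "LLM output", the `route` function, and both tool stubs are hypothetical. It shows the same decision a conditional edge makes: if the model's output names a tool, execute it; otherwise return the final answer.

```python
# Minimal ReAct-style dispatch sketch (plain Python; all names illustrative).
def search_web(query: str) -> str:
    return f"[stub search results for: {query}]"

def calculator(expression: str) -> str:
    # Toy arithmetic evaluator for the demo; never eval untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search_web": search_web, "calculator": calculator}

def route(llm_output: dict) -> str:
    call = llm_output.get("tool_call")
    if call:  # conditional-edge analogue: tool call detected -> tool node
        return TOOLS[call["name"]](call["args"])
    return llm_output["content"]  # no tool call -> final answer, end

print(route({"tool_call": {"name": "calculator", "args": "2 + 3"}}))  # 5
print(route({"content": "No tool needed."}))
```

In a real agent, the structured tool call comes from the LLM's function-calling output, and the tool result is appended to the state so the LLM can reason over it on the next pass.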

  • 🔧 Tool Integration: seamless with LangChain tools.
  • 🧠 State Management: persistent and customizable.
  • ⚙️ Workflow Control: fine-grained and graph-based.
  • 👥 Multi-Agent Support: native orchestration.

Best Practices for Building Robust LangGraph Agents in 2026

As you delve deeper into creating complex stateful AI agents with LangGraph, adhering to best practices will ensure your systems are robust, maintainable, and efficient. One key aspect is careful state management; design your `AgentState` to be as minimal yet comprehensive as possible, only including information truly necessary for decision-making and context. Overloading the state can lead to performance issues and increased token usage with LLMs. Another crucial practice involves modularizing your nodes. Each node should ideally perform a single, well-defined task, making debugging and testing significantly easier. This approach also promotes reusability, allowing you to compose different workflows from a set of common nodes. Read also: CrewAI Tutorial: Build AI Teams to Automate Complex Tasks
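
One concrete way to keep the state minimal is a reducer that appends new messages but retains only a recent window, bounding token usage on every LLM call. The sketch below is illustrative plain Python; `windowed_add` is a hypothetical helper, though LangGraph does accept any two-argument callable as a reducer in an `Annotated[...]` state field.

```python
# Hedged sketch: append new messages but keep only the most recent window,
# so the state stays minimal and per-call token usage stays bounded.
def windowed_add(existing: list, new: list, max_len: int = 6) -> list:
    return (existing + new)[-max_len:]

history = []
for turn in range(5):
    # Each turn adds a user message and an AI reply
    history = windowed_add(history, [f"user {turn}", f"ai {turn}"])
print(len(history))  # 6: only the three most recent turns are retained
```

For production agents you would usually summarize older turns rather than drop them outright, but the principle is the same: the state should carry only what the next decision needs.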

Furthermore, effective error handling and retry mechanisms are vital for production-grade agents. LangGraph allows you to define custom error handling within nodes or as part of conditional transitions. Implementing exponential backoff for external API calls within tool nodes, for example, can greatly improve the resilience of your agents. Finally, consider the human-in-the-loop (HITL) paradigm for critical decision points or ambiguous situations. LangGraph can easily integrate nodes that pause execution and prompt for human input, ensuring that complex or sensitive tasks are handled with appropriate oversight. Leveraging LLMs like GPT-5 Chat for reflective reasoning within your agent's nodes can also help improve decision quality and reduce errors.
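
The backoff pattern mentioned above fits naturally inside a tool node. The following is a minimal sketch, assuming a transient `ConnectionError` as the failure mode; the helper name, delays, and jitter scheme are illustrative choices, not LangGraph APIs.

```python
import time
import random

# Sketch: exponential backoff with jitter around a flaky call inside a node.
def call_with_backoff(fn, max_retries: int = 4, base_delay: float = 0.01):
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the graph's error handling take over
            # Double the wait each failure, plus jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

attempts = {"n": 0}
def flaky_api():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(call_with_backoff(flaky_api))  # succeeds on the third attempt: ok
```

Keeping the retry logic inside the node means the graph itself only ever sees a success or a final failure, which keeps the routing logic simple.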

LangGraph for Stateful AI Agents

Pros

  • Explicit control over agent flow and state transitions.
  • Excellent for building complex, multi-step, and multi-agent workflows.
  • Native support for loops and conditional logic, enabling self-correction.
  • Seamless integration with LangChain's extensive tool ecosystem.
  • Provides a visual, graph-based mental model for agent design.
  • Highly performant for stateful orchestration, often faster than alternatives [AIMultiple](https://aimultiple.com/agentic-frameworks).
  • Supports human-in-the-loop for critical decision points.
  • Ideal for enterprise-grade applications requiring fine-grained control.

Cons

  • Steeper learning curve compared to simpler agent frameworks.
  • Requires careful state management to avoid complexity and performance issues.
  • Debugging complex graphs can be challenging without proper logging.
  • Over-reliance on conditional logic can lead to 'spaghetti code' if not well-structured.

FAQ: LangGraph for Stateful AI Development

Frequently Asked Questions

Q: What is the main advantage of LangGraph over LangChain's standard agent abstractions?

The primary advantage of LangGraph is its explicit state management and graph-based workflow definition. While LangChain provides agent abstractions, LangGraph gives you fine-grained control over how the agent moves between different steps, maintains context, and makes decisions. This is crucial for building stateful AI agents that require complex, multi-turn interactions or multi-agent collaboration, enabling robust and predictable behavior that standard agents might struggle to maintain. It also handles cyclic graphs and persistent memory much more effectively.

Conclusion: The Future of Stateful AI Agents with LangGraph

As we navigate the advancements of 2026, the ability to build stateful AI agents is no longer a luxury but a necessity for creating truly intelligent and effective AI applications. LangGraph empowers developers with the tools to design, implement, and deploy sophisticated agentic systems that can maintain context, execute complex logic, interact with external services, and even orchestrate multi-agent collaborations. By providing a clear, graph-based paradigm, LangGraph simplifies the development of what would otherwise be incredibly intricate systems, paving the way for a new generation of autonomous and highly capable AI. This LangGraph Tutorial has provided you with the foundational knowledge and practical steps to begin your journey in building these advanced AI solutions.

The flexibility and power of LangGraph, combined with the ever-improving capabilities of large language models like GPT-5.3-Codex and Gemini 3.1 Pro Preview, open up endless possibilities. From intelligent virtual assistants that remember your preferences to complex research agents that iteratively refine their findings, the potential impact of stateful AI agents is immense. We encourage you to experiment with the concepts and code examples provided, and to explore the extensive documentation and community resources available for LangGraph. The future of AI is stateful, and with LangGraph, you are well-equipped to be a part of it. Read also: LlamaIndex Tutorial: Build Knowledge Base with Local LLMs

Multi AI Editorial

Published: March 1, 2026