
AI
LangGraph
A journey from single LLM calls to recursive agent state machines.
What is LangGraph?
A framework for building controllable, persistent agent workflows with built-in support for human interaction, streaming, and state management. It achieves this by modeling workflows as a graph data structure.
Levels of Autonomy in LLM applications
To understand LangGraph, we first need to look at how LLM applications have evolved in terms of autonomy.
1. Code
- Code has zero autonomy and is 100% deterministic.
- Everything is hard-coded; it is not a cognitive architecture.
- Disadvantage: You'd need to write rules for every possible scenario - making it impossible to handle real-world complexity.
2. LLM call
- A single LLM call means your app basically does one main thing - you give it an input, it processes it, and gives you back an output.
- Think of chatbots that just take your message and respond, or apps that translate text.
- This was a huge leap from hard-coded rules, but it's still simple and sits at only the second level of autonomy.
- Disadvantage: Trying to get everything done in one shot often leads to confused or mixed responses.
3. Chains
- Think of chains like having multiple specialists instead of one generalist. We break it down into steps where each AI is really good at one thing.
- Imagine a customer service chatbot: The first AI reads your complaint, the second AI finds the right solution from help docs, and the third AI turns that solution into a friendly response.
- Disadvantage: These fixed sequences are like a rigid assembly line - they always follow the same steps defined by the human.
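The customer service chain above can be sketched in plain Python. The three functions are hypothetical stand-ins: in a real chain each step would be an LLM call, but the fixed-pipeline structure is the same.

```python
def classify_complaint(message: str) -> str:
    # Step 1: a real chain would have an LLM extract the issue category.
    return "billing" if "charged" in message else "general"

def find_solution(category: str) -> str:
    # Step 2: a real chain would have an LLM search the help docs;
    # a lookup table stands in here.
    solutions = {
        "billing": "Refunds are processed within 5 business days.",
        "general": "Please contact support with more details.",
    }
    return solutions[category]

def draft_reply(solution: str) -> str:
    # Step 3: a real chain would have an LLM rephrase in a friendly tone.
    return f"Thanks for reaching out! {solution}"

# The rigid assembly line: every request passes through the same three
# steps, in the same order, defined ahead of time by the human.
reply = draft_reply(find_solution(classify_complaint("I was charged twice")))
```

Note that the control flow (which function runs, and in what order) is fixed in code; the "AI" only decides the output of each step.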
4. Router
- Now this is where it gets interesting - routers are like smart traffic cops. Instead of a fixed path, the AI itself decides what steps to take next.
- Imagine a personal assistant bot: it figures out if you need help with scheduling, research, or calculations, then routes your request to the right tool.
- Disadvantage: While it can choose different paths, it still can't remember previous conversations or learn from mistakes.
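A minimal sketch of the assistant router, again with hypothetical stand-in functions: a real router would ask an LLM to classify the request, where simple keyword matching is used here.

```python
def route(request: str) -> str:
    # A real router would have an LLM pick the branch.
    if "meeting" in request or "calendar" in request:
        return "scheduling"
    if any(ch.isdigit() for ch in request):
        return "calculations"
    return "research"

def handle_scheduling(request: str) -> str:
    return "Added to calendar: " + request

def handle_calculations(request: str) -> str:
    return "Crunching numbers for: " + request

def handle_research(request: str) -> str:
    return "Searching the web for: " + request

handlers = {
    "scheduling": handle_scheduling,
    "calculations": handle_calculations,
    "research": handle_research,
}

# Unlike a chain, the path taken depends on the routing decision;
# but each request still flows through exactly once: no loops, no memory.
request = "book a meeting for Friday"
result = handlers[route(request)](request)
```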
5. State Machine (Agent)
- This builds on the Router level by adding loops (cycles).
- Agent ~= control flow controlled by an LLM.
- Features: Human-in-the-loop (approval), Multi-agent systems, Advanced memory management, and the ability to go back in history.
This is where LangGraph comes into the picture.
| Level | Category | Description | Decide Output of Step | Decide Which Steps to Take | Decide What Steps are Available to Take |
|---|---|---|---|---|---|
| 1 | Human-Driven | Code | 👩‍💻 | 👩‍💻 | 👩‍💻 |
| 2 | | LLM Call (one step only) | 🦾 | 👩‍💻 | 👩‍💻 |
| 3 | | Chain (multiple steps) | 🦾 | 👩‍💻 | 👩‍💻 |
| 4 | | Router (no cycles) | 🦾 | 🦾 | 👩‍💻 |
| 5 | Agent-Executed | State Machine (cycles) | 🦾 | 🦾 | 👩‍💻 |
| 6 | Autonomous | | 🦾 | 🦾 | 🦾 |
Agents in LangGraph
The architecture of an agent in LangGraph can be visualized as a cycle of reasoning and action:
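One way to picture that cycle is a loop that reasons, acts, and observes until it decides to stop. This is a hedged sketch, not LangGraph's API: `choose_action` and `lookup` are hypothetical stubs standing in for an LLM's reasoning step and a real tool.

```python
def choose_action(state: dict) -> str:
    # Reasoning step: a real agent would ask the LLM what to do next,
    # given the question and the observations gathered so far.
    return "lookup" if not state["observations"] else "finish"

def lookup(query: str) -> str:
    # Tool stub: a real tool might call a search API.
    return f"search result for {query!r}"

def run_agent(question: str) -> str:
    state = {"question": question, "observations": []}
    while True:                                # the cycle
        action = choose_action(state)          # reason
        if action == "finish":
            return f"Answer based on: {state['observations']}"
        observation = lookup(state["question"])  # act
        state["observations"].append(observation)  # observe
```

The loop is what separates an agent from a router: the model can keep going around, accumulating observations in state, until it judges the task done.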
Core Building Blocks
While the Agent Loop is the high-level architecture, LangGraph implements this using three core components and a standardized implementation workflow:
Key features of LangGraph
- Looping and Branching Capabilities: Supports conditional statements and loop structures, allowing dynamic execution paths based on state.
- State Persistence: Automatically saves and manages state, supporting pause and resume for long-running conversations.
- Human-in-the-Loop Support: Allows pausing execution for human review, including editing and modifying state before resuming.
- Streaming Processing: Supports streaming output and real-time feedback on execution status.
- Seamless Integration with LangChain: Reuses existing LangChain components and supports LCEL expressions.