LangGraph Supervisor
LangGraph Supervisor is a Python library that simplifies building hierarchical multi-agent systems with LangGraph. It provides a central supervisor agent that orchestrates specialized worker agents, manages communication flow, and delegates tasks. The library is currently at version 0.0.31 and receives frequent updates, indicating active development.
Warnings
- deprecated The LangGraph team now recommends implementing the supervisor pattern directly via tools for most use cases, rather than using this dedicated `langgraph-supervisor` library. As a result, this library may be less actively maintained and receive fewer new features than the manual approach.
- breaking In versions prior to 0.0.26, the `state_schema` parameter of `create_supervisor` defaulted to `AgentState`. From 0.0.26 onwards it defaults to `None`. If your application relied on the implicit `AgentState` default, you may see state-related errors after upgrading.
- gotcha The library is still on `0.0.x` versions, meaning the API is not yet stable. Breaking changes and significant shifts in functionality can occur without a major version bump.
- breaking Version 0.0.31 includes a fix for `v1 ToolNode compat`, suggesting prior versions might have had compatibility issues with the `ToolNode` structure introduced in `langgraph` v1.x.
- gotcha LangGraph Supervisor requires Python version 3.10 or higher. Using older Python versions will result in installation or runtime errors.
Install
pip install langgraph-supervisor langchain-openai
Imports
- create_supervisor
from langgraph_supervisor import create_supervisor
- ChatOpenAI
from langchain_openai import ChatOpenAI
- create_react_agent
from langgraph.prebuilt import create_react_agent
Quickstart
from langchain_openai import ChatOpenAI
from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent
# Expects the OPENAI_API_KEY environment variable to be set.
# Initialize the LLM for agents and supervisor
model = ChatOpenAI(model="gpt-4o")
# Define simple tools
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def web_search(query: str) -> str:
    """Search the web for information."""
    # Placeholder for a real web search integration
    return f"Found results for '{query}': Example search data."
# Create specialized agents
math_agent = create_react_agent(
    model=model,
    tools=[add],
    name="math_expert",
)
research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
)
# Create supervisor workflow
# The prompt parameter defines the supervisor's role and how to delegate.
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a math expert. "
        "For research tasks, use research_expert. "
        "For math tasks, use math_expert."
    ),
)
# To add memory for multi-turn conversations, pass a checkpointer to workflow.compile().
# For this quickstart, we compile and run a single turn.
app = workflow.compile()
# Example invocation
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "What is 10 + 5 and what's the capital of France?",
        }
    ]
})
print(result["messages"][-1].content)