LangGraph Swarm
LangGraph Swarm provides a high-level API for creating and managing a swarm of AI agents, making it easier to build complex multi-agent systems using LangGraph. It is designed to abstract away common patterns in multi-agent orchestration. The current version is 0.1.0; releases are expected to follow LangGraph's development cadence, with new versions as significant features are added.
Common errors
- ModuleNotFoundError: No module named 'langgraph_swarm'
  - cause: The `langgraph-swarm` library has not been installed in your Python environment.
  - fix: Run `pip install langgraph-swarm` to install the library.
- ValueError: Missing `OPENAI_API_KEY` environment variable.
  - cause: You are using `langchain_openai.ChatOpenAI` or a similar LLM without providing the necessary API key.
  - fix: Set the `OPENAI_API_KEY` environment variable (e.g., `export OPENAI_API_KEY='your_key'`) or pass it directly to the LLM constructor if supported.
- TypeError: 'str' object cannot be interpreted as an AgentNode
  - cause: You are passing agent names (strings) where `AgentNode` instances are expected. The `add_agent` and `add_workflow` methods typically expect actual `AgentNode` objects.
  - fix: Pass `AgentNode` instances directly, not their string `name` attributes, when methods expect `AgentNode` objects. Example: `graph.add_agent(my_agent_instance)` instead of `graph.add_agent(my_agent_instance.name)`.
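The missing-key error above can be caught early, before any swarm is built. Below is a minimal sketch using only the standard library; the helper name `require_api_key` is hypothetical, not part of `langgraph-swarm`:

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fail fast with a clear message if a required API key is missing."""
    key = os.environ.get(var, "")
    if not key:
        raise ValueError(f"Missing `{var}` environment variable.")
    return key

# Call once at startup, before constructing any LLM-backed agents, e.g.:
# llm = ChatOpenAI(model="gpt-4o", api_key=require_api_key())
```

Failing at startup gives a single clear error instead of one raised deep inside a swarm run.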
Warnings
- breaking: As a new library (v0.1.0), the API for LangGraph Swarm is subject to rapid change. Breaking changes may occur in minor versions as the project evolves and stabilizes.
- gotcha: A solid understanding of LangGraph core concepts (nodes, edges, state, graph compilation) is essential for effective use of `langgraph-swarm`, which builds on top of LangGraph.
- gotcha: Agent nodes often require LLM instances (e.g., `ChatOpenAI`). For remote LLMs, the appropriate API keys (e.g., `OPENAI_API_KEY`) must be set as environment variables.
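Given the breaking-change warning above, one pragmatic option is to pin the exact version in your requirements file until the API stabilizes (the version shown is the current 0.1.0 release noted in the introduction):

```text
langgraph-swarm==0.1.0
```

Loosen the pin (e.g., to a compatible-release specifier) once the project signals API stability.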
Install
pip install langgraph-swarm
Imports
- AgentSwarm
from langgraph_swarm import AgentSwarm
- AgentNode
from langgraph_swarm import AgentNode
- SwarmGraph
from langgraph_swarm import SwarmGraph
Quickstart
import os
from langchain_openai import ChatOpenAI
from langgraph_swarm import AgentSwarm, AgentNode, SwarmGraph

# Requires an OpenAI API key in the environment
if not os.environ.get("OPENAI_API_KEY"):
    print("Warning: OPENAI_API_KEY environment variable not set. Skipping quickstart.")
else:
    llm = ChatOpenAI(model="gpt-4o", temperature=0)

    # Define the agents (nodes in the swarm)
    research_agent = AgentNode(
        name="Researcher",
        description="Researches given topics and provides factual information.",
        llm=llm,  # Example LLM; replace with actual agent logic if needed
        tools=[],
    )
    writer_agent = AgentNode(
        name="Writer",
        description="Writes creative content based on research.",
        llm=llm,
        tools=[],
    )

    # Create a SwarmGraph and register the agents
    graph = SwarmGraph()
    graph.add_agent(research_agent)
    graph.add_agent(writer_agent)

    # Define the workflow (how agents interact)
    graph.add_workflow(
        entry_point=research_agent.name,
        edges={research_agent.name: writer_agent.name},
        # The writer agent should only activate once research is complete;
        # add conditional logic or trigger messages in a real scenario.
        exit_point=writer_agent.name,
    )

    # Create the AgentSwarm instance
    swarm = AgentSwarm(graph=graph, llm=llm)  # LLM for internal swarm coordination if needed

    # Invoke the swarm with an initial task
    task = "Write a short summary about the benefits of multi-agent systems."
    print(f"\n--- Invoking swarm with task: '{task}' ---\n")
    result = swarm.invoke({"messages": [("user", task)]})
    print("\n--- Swarm execution complete ---\n")
    print(f"Final result: {result}")
    # The output structure may vary, but it should contain the agents' messages.
    # print(result["messages"][-1].content)  # Example access to the final message
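The commented access pattern above assumes a LangGraph-style result of the form `{"messages": [...]}`. A small helper can cope with both tuple-style and message-object entries; note that both the helper name `final_message_content` and the result shape are assumptions here, not documented guarantees:

```python
def final_message_content(result: dict) -> str:
    """Return the content of the last message in a swarm result.

    Assumes a LangGraph-style result of the form {"messages": [...]},
    where each entry is either a (role, content) tuple or an object
    with a `.content` attribute. This shape is an assumption.
    """
    last = result["messages"][-1]
    if isinstance(last, tuple):
        return last[-1]
    return getattr(last, "content", str(last))

# Example with tuple-style messages:
# final_message_content({"messages": [("user", "hi"), ("assistant", "hello")]})
# returns "hello"
```

Centralizing this access in one helper keeps the rest of your code insulated if the result shape changes between 0.1.x releases.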