AG2 (formerly AutoGen): Open-Source AgentOS for AI Agents
AG2, distributed via the `autogen` PyPI package, is an open-source programming framework for building AI agents and enabling cooperation among multiple agents to solve tasks. It aims to streamline the development and research of agentic AI, offering features such as multi-agent conversations, support for a range of large language models (LLMs), tool use, and both autonomous and human-in-the-loop workflows. The library is actively maintained with frequent minor releases (currently version 0.11.5) and is on a roadmap toward a major v1.0 release.
Common errors
- pydantic_core._pydantic_core.ValidationError: 1 validation error for _LLMConfig
  - cause: Incorrect or incomplete configuration for `LLMConfig`, often missing required fields or having invalid types.
  - fix: Review your `LLMConfig` settings. Ensure all necessary keys like `api_type`, `model`, and `api_key` are present and correctly formatted, matching the requirements of your chosen LLM provider.
- UnboundLocalError: cannot access local variable 'task' where it is not associated with a value
  - cause: This error typically arises in complex multi-agent orchestrations when a variable like 'task' is referenced before it has been assigned a value within a scope, often due to logic gaps in custom agent behaviors or conversation flows.
  - fix: Inspect the agent's reply logic or custom functions where the 'task' variable is used. Ensure 'task' is always initialized or passed correctly before being accessed. Debug the agent's conversation flow to pinpoint when and why 'task' might be unassigned.
- TypeError on ToolCall return type
  - cause: A type mismatch or unexpected return type from a tool call, potentially due to an API change or an issue in how the tool's output is processed by the agent.
  - fix: Upgrade to `ag2` version 0.11.1 or newer, as this specific `TypeError` was addressed in that release. If the issue persists with the latest version, verify the tool's expected return format against its actual output.
- ValueError: Agent names in a GroupChat must be unique.
  - cause: When creating a `GroupChat`, two or more agents were instantiated with identical names, which prevents proper identification within the chat.
  - fix: Ensure that every agent participating in a `GroupChat` has a unique `name` attribute. For example: `agent1 = ConversableAgent(name="Coder", ...)` and `agent2 = ConversableAgent(name="Reviewer", ...)`.
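The UnboundLocalError above is ordinary Python scoping and can be reproduced without AG2. A minimal sketch with a hypothetical reply handler, showing the bug and the initialize-first fix:

```python
def broken_reply(messages):
    # Bug: 'task' is only assigned inside the branch, so an empty
    # message list leaves it unbound when the return statement runs.
    if messages:
        task = messages[-1]
    return task  # raises UnboundLocalError when messages is empty

def fixed_reply(messages):
    task = None  # initialize before any branch that may skip assignment
    if messages:
        task = messages[-1]
    return task
```

The same pattern applies inside custom agent reply functions: give 'task' a default before any conditional logic that might not assign it.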
Warnings
- breaking The current `autogen.agentchat` framework is being transitioned to maintenance mode. The `autogen.beta` framework (with `Agent` as the core class) will become the official `AG2 v1.0`. Expect deprecation notices and a migration guide in upcoming minor versions (v0.12, v0.13, v0.14) before v1.0.
- gotcha Starting from `ag2` version 0.8, the `openai` package (and other LLM provider packages) is no longer installed by default. You must explicitly include it as an extra dependency during installation.
- gotcha When installing AG2 with extra dependencies on macOS, a 'no matches found' error may occur because zsh (the default macOS shell) treats the square brackets as a glob pattern. Enclose the package name with extras in quotes, e.g. `pip install "ag2[openai]"`.
- gotcha AG2 agents, by default, prefer to perform code execution within a Docker container. If Docker is not running or properly configured, code execution tasks might fail.
Install
- pip install "ag2[openai]"
- pip install "autogen[openai]"
Imports
- ConversableAgent
from autogen import ConversableAgent
- LLMConfig
from autogen import LLMConfig
- UserProxyAgent
from autogen import UserProxyAgent
- GroupChat
from autogen import GroupChat
- GroupChatManager
from autogen import GroupChatManager
Quickstart
import os
from autogen import ConversableAgent, LLMConfig
# Set your OpenAI API key as an environment variable
# Example: export OPENAI_API_KEY="YOUR_API_KEY"
llm_config = LLMConfig(
    api_type="openai",
    model="gpt-5-nano",  # or any other supported model
    api_key=os.environ.get("OPENAI_API_KEY", ""),
)
# Create our LLM agent
my_agent = ConversableAgent(
    name="poetic_assistant",
    system_message="You are a poetic AI assistant, respond in rhyme.",
    llm_config=llm_config,
)
# Run the agent with a prompt and process the response
response = my_agent.run(
    message="In one sentence, what's the big deal about AI?",
    max_turns=3,
    user_input=False,  # set to True for interactive input
)
response.process()  # iterate the run's events and print the conversation