AutoGen Core
AutoGen Core provides the foundational interfaces and agent runtime implementation for the AutoGen multi-agent conversation framework. It offers core components such as `ConversableAgent` and `GroupChat` for basic agent communication and management. While `autogen-core` focuses on the underlying framework, the broader `autogen` package provides a more complete, high-level multi-agent system with additional tools and a UI. Current version: 0.7.5. Releases are frequent, often tied to updates of the broader AutoGen project.
Warnings
- gotcha The `autogen-core` library provides the `autogen.agentchat` module. For most users building multi-agent applications, installing the broader `autogen` package (`pip install autogen`) is recommended as it includes `autogen-core` along with additional high-level agents, tools, and features. Installing `autogen-core` alone only provides the foundational agent chat capabilities.
- gotcha Despite being named `autogen-core`, its modules are exposed under the `autogen` namespace, specifically `autogen.agentchat`. For example, `ConversableAgent` is imported via `from autogen.agentchat import ConversableAgent`, not `from autogen_core.agentchat import ConversableAgent`.
- gotcha Many core functionalities and examples for `autogen-core` agents implicitly rely on Large Language Model (LLM) providers (e.g., OpenAI, Azure OpenAI). Ensure `llm_config` is set correctly; the LLM client typically needs an API key such as `OPENAI_API_KEY` configured in your environment variables in order to authenticate.
- breaking The AutoGen ecosystem is under active and rapid development. While efforts are made to maintain backward compatibility for `autogen-core`, internal API structures and even module placements (e.g., `GroupChat` moving to a submodule) can change between minor versions. This can lead to unexpected import errors or behavior changes.
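The API-key gotcha above can be guarded at startup rather than discovered mid-conversation. A minimal sketch, assuming the `llm_config` dict shape used by `ConversableAgent`; the helper name `make_llm_config` and the default model are illustrative, not part of the library:

```python
import os


def make_llm_config(model="gpt-4o-mini", temperature=0.7):
    """Build an llm_config dict, failing fast if the API key is missing.

    Hypothetical helper: the returned dict matches the llm_config shape
    passed to ConversableAgent; the function itself is not part of
    autogen-core.
    """
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; the LLM client cannot authenticate."
        )
    return {
        "config_list": [{"model": model, "api_key": api_key}],
        "temperature": temperature,
    }
```

Failing fast with a clear message beats the opaque authentication errors the LLM client would otherwise raise on the first call.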
Install
-
pip install autogen-core
Imports
- ConversableAgent
from autogen.agentchat import ConversableAgent
- UserProxyAgent
from autogen.agentchat import UserProxyAgent
- GroupChat
from autogen.agentchat.groupchat import GroupChat
- GroupChatManager
from autogen.agentchat.groupchat import GroupChatManager
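The group-chat classes above are typically wired together as agents → `GroupChat` → `GroupChatManager`. A hedged sketch, assuming the import paths listed in this section; the agent roles and the helper name `build_group_chat` are illustrative. Imports are deferred into the function so it only requires `autogen` when actually called:

```python
def build_group_chat(llm_config, max_round=10):
    """Assemble two LLM agents plus a manager into a group chat (sketch).

    Illustrative only: role names and system messages are placeholders,
    and the imports follow the paths documented in this section.
    """
    from autogen.agentchat import ConversableAgent
    from autogen.agentchat.groupchat import GroupChat, GroupChatManager

    planner = ConversableAgent(
        name="Planner",
        system_message="You break problems into concrete steps.",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    writer = ConversableAgent(
        name="Writer",
        system_message="You turn plans into clear prose.",
        llm_config=llm_config,
        human_input_mode="NEVER",
    )
    group_chat = GroupChat(agents=[planner, writer], messages=[], max_round=max_round)
    manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)
    return planner, manager


# Usage (requires a configured llm_config and an OPENAI_API_KEY):
# planner, manager = build_group_chat(llm_config)
# planner.initiate_chat(manager, message="Draft a short post about AutoGen.")
```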
Quickstart
import os

from autogen.agentchat import ConversableAgent

# Configure the LLM client. For a runnable example, ensure OPENAI_API_KEY
# is set in your environment.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",  # or "gpt-4", "gpt-3.5-turbo"
            "api_key": os.environ.get("OPENAI_API_KEY", ""),
        }
    ],
    "temperature": 0.7,
}

# Create two agents. The termination predicate coerces "content" to a
# string first, since it may be None (e.g., for tool-call messages).
agent_a = ConversableAgent(
    name="AgentA",
    system_message="You are a helpful AI assistant. You can chat and respond to queries.",
    llm_config=llm_config,
    is_termination_msg=lambda msg: "terminate" in (msg.get("content") or "").lower(),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
)

agent_b = ConversableAgent(
    name="AgentB",
    system_message="You are an expert in explaining complex concepts clearly.",
    llm_config=llm_config,
    is_termination_msg=lambda msg: "terminate" in (msg.get("content") or "").lower(),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
)

def run_chat():
    print("\n--- Initiating chat between AgentA and AgentB ---")
    # initiate_chat is synchronous; use a_initiate_chat for the async variant.
    chat_result = agent_a.initiate_chat(
        agent_b,
        message="Explain the concept of quantum entanglement in simple terms.",
    )
    print("\n--- Chat Summary ---")
    # print(chat_result.chat_history)  # Uncomment to see the full history

if __name__ == "__main__":
    # Warn early if the API key is missing; LLM calls will fail without it.
    if not os.environ.get("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY environment variable not set. LLM calls may fail.")
        print("Please set it for the quickstart to fully function.")
    run_chat()
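The `is_termination_msg` lambdas used in the quickstart can be factored into one reusable, defensive predicate. A sketch (the function name is illustrative); the key point is that a message's `"content"` field may be `None`, so it must be coerced to a string before calling `.lower()`:

```python
def is_termination_msg(msg):
    """Return True when a message asks to end the conversation.

    Messages are dicts; "content" may be None (e.g., tool-call messages),
    so coerce it to an empty string before lowercasing.
    """
    content = msg.get("content") or ""
    return "terminate" in content.lower()


print(is_termination_msg({"content": "TERMINATE now"}))  # True
print(is_termination_msg({"content": None}))             # False
```

Passing a named function instead of repeating the lambda keeps both agents' termination behavior in sync.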