AutoGen Core

0.7.5 · active · verified Sat Apr 11

AutoGen Core provides the foundational interfaces and event-driven agent runtime for the AutoGen multi-agent framework. It focuses on the underlying messaging and runtime layer; higher-level components such as `ConversableAgent` and `GroupChat` are supplied by the broader `autogen` (AgentChat) package built on top of it, along with tools and UI integrations. Current version: 0.7.5. Releases are frequent and typically tied to updates of the broader AutoGen project.

Warnings

Install
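The package names below are assumptions based on common PyPI conventions: `autogen-core` for the core runtime and `pyautogen` for the classic high-level API that provides `ConversableAgent` (used in the quickstart).

```shell
# Core runtime (foundational interfaces and agent runtime)
pip install autogen-core

# Classic high-level API providing ConversableAgent / GroupChat,
# which the quickstart below imports
pip install pyautogen
```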

Imports

Quickstart

This quickstart demonstrates a basic conversation between two `ConversableAgent` instances. Note that `ConversableAgent` comes from the high-level `autogen` AgentChat API rather than from `autogen-core` itself, which supplies the runtime underneath. Agents are configured with an LLM backend (e.g., OpenAI) and communicate asynchronously. Ensure the `OPENAI_API_KEY` environment variable is set so the LLM configuration is valid.

import os
import asyncio
from autogen.agentchat import ConversableAgent

# Configure LLM (e.g., OpenAI API Key)
# For a runnable example, ensure OPENAI_API_KEY is set in your environment
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini", # Or "gpt-4", "gpt-3.5-turbo"
            "api_key": os.environ.get("OPENAI_API_KEY", "")
        }
    ],
    "temperature": 0.7
}
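As an alternative to hard-coding the config, classic AutoGen can load a `config_list` from a JSON file, conventionally named `OAI_CONFIG_LIST`, via `autogen.config_list_from_json`. The sketch below builds and reads such a file with the standard library only, so it runs without AutoGen installed; the file name and helper mentioned in the comment follow the classic API's convention.

```python
import json
import os
import tempfile

# An OAI_CONFIG_LIST-style file is simply a JSON array of model configs.
config_list = [
    {"model": "gpt-4o-mini", "api_key": os.environ.get("OPENAI_API_KEY", "")},
]

path = os.path.join(tempfile.mkdtemp(), "OAI_CONFIG_LIST")
with open(path, "w") as f:
    json.dump(config_list, f)

# Classic AutoGen would load this with:
#   config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
# Here we just verify the round trip with the standard library.
with open(path) as f:
    loaded = json.load(f)
print(loaded[0]["model"])  # → gpt-4o-mini
```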

# Create two agents
agent_a = ConversableAgent(
    name="AgentA",
    system_message="You are a helpful AI assistant. You can chat and respond to queries.",
    llm_config=llm_config,
    is_termination_msg=lambda msg: "terminate" in (msg.get("content") or "").lower(),  # guard: content may be None
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3
)

agent_b = ConversableAgent(
    name="AgentB",
    system_message="You are an expert in explaining complex concepts clearly.",
    llm_config=llm_config,
    is_termination_msg=lambda msg: "terminate" in (msg.get("content") or "").lower(),  # guard: content may be None
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3
)
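The `is_termination_msg` predicate receives a message dict and should tolerate messages whose `content` is `None` (e.g., pure tool-call messages). This standalone sketch shows a guarded variant of the predicate and runs without AutoGen installed.

```python
# Same predicate shape the agents above use: a message dict in, bool out.
is_termination_msg = lambda msg: "terminate" in (msg.get("content") or "").lower()

print(is_termination_msg({"content": "Great, TERMINATE."}))  # True
print(is_termination_msg({"content": "Keep going."}))        # False
print(is_termination_msg({"content": None}))                 # False: None content must not crash
```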

async def run_chat():
    print("\n--- Initiating chat between AgentA and AgentB ---")
    chat_result = await agent_a.a_initiate_chat(  # async variant of initiate_chat
        agent_b,
        message="Explain the concept of quantum entanglement in simple terms."
    )
    print("\n--- Chat Summary ---")
    print(chat_result.summary)
    # print(chat_result.chat_history)  # Uncomment to see the full message history

if __name__ == "__main__":
    # Ensure the API key is present for the example to work
    if not os.environ.get("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY environment variable not set. LLM calls will fail.")
        print("Please set it for the quickstart to fully function.")
    asyncio.run(run_chat())
