AutoGen

0.10.0 · active · verified Wed Apr 15

AutoGen is a framework for building LLM applications with multiple agents that converse with one another to solve tasks. The `pyautogen` PyPI package (current version 0.10.0) acts as a proxy, primarily installing `autogen-agentchat` to provide the core multi-agent conversation capabilities. Releases are relatively rapid: minor versions often ship monthly or more frequently and can introduce both new features and breaking changes.

Warnings

Install
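As noted above, the package is published on PyPI as `pyautogen`, so a typical install (assuming a standard pip setup) would be:

```shell
# Installs pyautogen, which in turn pulls in autogen-agentchat
pip install pyautogen
```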

Imports
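The quickstart below relies on just two imports: the top-level `autogen` package and the standard-library `os` module for reading the API key from the environment.

```python
import autogen  # top-level package installed via pyautogen
import os       # used to read OPENAI_API_KEY from the environment
```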

Quickstart

This quickstart sets up two basic agents: an `AssistantAgent` powered by GPT-4o and a `UserProxyAgent` that initiates the conversation and manages code execution. It configures the LLM via a `config_list`, reads the API key from an environment variable, and sets basic code-execution and termination conditions. Remember to set the `OPENAI_API_KEY` environment variable before running.

import autogen
import os

# Configure API key from environment variable or direct config_list
config_list = [
    {
        "model": "gpt-4o",
        "api_key": os.environ.get("OPENAI_API_KEY")
    }
]

# Define agents
assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config={
        "config_list": config_list,
        "temperature": 0.7
    }
)

user_proxy = autogen.UserProxyAgent(
    name="UserProxy",
    human_input_mode="NEVER", # Set to "ALWAYS" or "TERMINATE" for human interaction
    max_consecutive_auto_reply=10,
    # End the chat when the assistant's reply ends with "TERMINATE"
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding", # Create a 'coding' directory for code execution
        "use_docker": False # Set to True to use Docker for safer execution
    }
)

# Start a chat
user_proxy.initiate_chat(
    assistant,
    message="What is the capital of France?"
)
