Qwen-Agent Library
Qwen-Agent is a Python library that equips Large Language Models (LLMs) with capabilities such as agent workflows, Retrieval-Augmented Generation (RAG), function calling, and a code interpreter. It is actively developed: version 0.0.34 was released recently, and the project maintains a rapid release cadence with frequent minor updates.
Warnings
- gotcha API Key Configuration: Users often forget to set the correct environment variables (e.g., `DASHSCOPE_API_KEY` or `OPENAI_API_KEY`) or pass them incorrectly to the LLM configuration, leading to authentication errors.
- gotcha Model Context Protocol (MCP) support is an optional dependency. Features that rely on MCP servers (e.g., external tool integrations) may not work without additional setup or installation steps (e.g., `pip install "qwen-agent[mcp]"`).
- gotcha Default LLM-call and input-token limits can cause unexpected truncation or early termination of agent runs. The defaults are `QWEN_AGENT_MAX_LLM_CALL_PER_RUN=20` and `QWEN_AGENT_DEFAULT_MAX_INPUT_TOKENS=58000` (58k); both are environment variables and can be overridden.
- gotcha GUI features depend on specific `gradio` versions, which have changed across releases. If the GUI misbehaves, check for a `gradio` version incompatibility first.
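The limit-related gotchas above can be worked around by overriding the environment variables before Qwen-Agent is imported. A minimal sketch (the variable names match the defaults listed above; the values are illustrative, and whether they are read at import time or call time may vary by version):

```python
import os

# Set these before importing qwen_agent, which typically reads them
# when its settings module loads.
os.environ['QWEN_AGENT_MAX_LLM_CALL_PER_RUN'] = '40'         # raise the per-run call budget (illustrative)
os.environ['QWEN_AGENT_DEFAULT_MAX_INPUT_TOKENS'] = '30000'  # lower the cap for smaller-context models (illustrative)
```

Only after these assignments should `qwen_agent` modules be imported.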
Install
- pip install qwen-agent
- pip install "qwen-agent[gui]"
Imports
- Assistant
from qwen_agent.agents import Assistant
- Custom tools (base class and registration decorator)
from qwen_agent.tools.base import BaseTool, register_tool
- Built-in tools such as the code interpreter and RAG retrieval are referenced by string name (e.g. 'code_interpreter', 'retrieval') in an agent's `function_list` rather than imported directly.
Quickstart
import os
from qwen_agent.agents import Assistant

# Configure your LLM access.
# Ensure DASHSCOPE_API_KEY is set in your environment variables.
llm_cfg = {
    'model': 'qwen-turbo',
    'model_server': 'dashscope',  # or the base URL of an OpenAI-compatible endpoint
    'api_key': os.environ.get('DASHSCOPE_API_KEY', '')
}
if not llm_cfg['api_key']:
    raise ValueError("Please set the DASHSCOPE_API_KEY environment variable.")

# Initialize the agent; built-in tools are referenced by name.
bot = Assistant(llm=llm_cfg, function_list=['code_interpreter'])

# Run the agent; run() yields the growing response list as it streams.
messages = [{'role': 'user', 'content': 'Please write a python code snippet to calculate the sum of 123 and 456.'}]
response = []
for response in bot.run(messages=messages):
    pass
print(response)
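Depending on the version, the agent's `run()` streams a growing list of message dicts rather than returning a plain string. A hedged sketch for pulling out the final assistant text (the message shape below is an assumption based on Qwen-Agent's documented `{'role': ..., 'content': ...}` format; `last_assistant_text` is a hypothetical helper, not part of the library):

```python
def last_assistant_text(response):
    """Return the text of the last assistant message in a response list, or ''."""
    for msg in reversed(response):
        if msg.get('role') == 'assistant' and isinstance(msg.get('content'), str):
            return msg['content']
    return ''

# Mock response shaped like a streamed Qwen-Agent message list (assumed format):
mock = [
    {'role': 'assistant', 'content': 'print(123 + 456)'},
]
print(last_assistant_text(mock))  # → print(123 + 456)
```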