Nano SWE Agent
mini-SWE-agent is a minimalist yet powerful AI software-engineering agent designed to solve GitHub issues and assist with command-line tasks. It is built on a radically simple ~100-line Python core, executes actions primarily through bash, and supports a wide range of models via LiteLLM. Used by teams at institutions such as Meta and Stanford, it emphasizes performance and easy deployment across environments (local, Docker), and is actively developed with frequent updates.
Warnings
- breaking Dependency `litellm` versions `1.82.7` and `1.82.8` were compromised. The `mini-swe-agent` project explicitly excludes these versions in its dependencies; upgrade to a release that does so, or verify that your installed `litellm` is not one of them.
- gotcha `openai` versions `1.100.0` and `1.100.1` are excluded due to known issues. Ensure your installed `openai` package is not one of these versions.
- gotcha Versions prior to v2.2.5 could raise `FormatError`s, especially with weaker/smaller language models that make tool-calling mistakes, potentially hanging the agent.
- gotcha The output format of trajectory files changed with v2.0 (from `trajectory_format: mini-swe-agent-1.0` to `mini-swe-agent-1.1`). If you have existing scripts or tools parsing v1 trajectory files, they will need updates.
- gotcha In versions prior to `v2.2.4`, invoking `mini` for the first time after installation and calling `setup` could lead to an exception due to a missing default model name, as the configuration wasn't reloaded correctly.
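The excluded-version warnings above can be checked programmatically. A minimal sketch (the blocked version sets are copied from the warnings above; the helper name is illustrative) using the standard library's `importlib.metadata`:

```python
from importlib import metadata

# Versions flagged in the warnings above: compromised litellm releases
# and openai releases excluded due to known issues.
BLOCKED = {
    "litellm": {"1.82.7", "1.82.8"},
    "openai": {"1.100.0", "1.100.1"},
}

def check_blocked_versions(packages: dict = BLOCKED) -> list:
    """Return warning strings for installed packages pinned to a blocked version."""
    problems = []
    for name, bad_versions in packages.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # package not installed, nothing to check
        if installed in bad_versions:
            problems.append(f"{name}=={installed} is a known-bad version")
    return problems
```

Running `check_blocked_versions()` after an upgrade gives a quick sanity check before launching the agent.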
Install
- From PyPI
pip install mini-swe-agent
- From source (editable, inside a cloned repo)
pip install -e .
Imports
- DefaultAgent
from minisweagent.agents.default import DefaultAgent
- LitellmModel
from minisweagent.models.litellm_model import LitellmModel
- LocalEnvironment
from minisweagent.environments.local import LocalEnvironment
Quickstart
import os
from minisweagent.agents.default import DefaultAgent
from minisweagent.models.litellm_model import LitellmModel
from minisweagent.environments.local import LocalEnvironment
# Set your LLM API key and model name
os.environ["OPENAI_API_KEY"] = os.environ.get("MINI_SWE_AGENT_OPENAI_KEY", "sk-YOUR_OPENAI_KEY")
model_name = os.environ.get("MINI_SWE_AGENT_MODEL_NAME", "gpt-4o-mini")  # or any LiteLLM-supported model name
agent = DefaultAgent(
LitellmModel(model_name=model_name),
LocalEnvironment(),
)
task = "Write a Python function to calculate the nth Fibonacci number, with tests."
result = agent.run(task)
print(f"Agent finished with status: {result.get('exit_status')}")
print(f"Submission: {result.get('submission')}")
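To persist the agent's output from the quickstart above, a small helper can write the submission to disk. This is a sketch: the `submission` key mirrors the quickstart's result dict and the helper name and default path are illustrative, not part of the library's API.

```python
from pathlib import Path

def save_submission(result: dict, path: str = "submission.patch") -> bool:
    # Write the agent's submission to disk when the run produced one.
    # The "submission" key mirrors the quickstart's result dict above.
    submission = result.get("submission")
    if submission:
        Path(path).write_text(submission)
        return True
    return False
```

Pairing this with the quickstart (`save_submission(result)`) keeps a copy of the patch even after the process exits.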