Atomic Agents
v2.7.4 · verified Tue May 12 · auth: no · python · install: verified · quickstart: stale
Lightweight agent framework by BrainBlend-AI built on top of instructor and pydantic. Focuses on typed, composable agents with explicit schemas. v2.0 introduced major breaking changes: all class renames, .lib removed from import paths, generic type parameters added. Current version: 2.7.4 (Mar 2026). Uses instructor library as LLM abstraction layer — not a direct OpenAI/Anthropic dependency.
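The typed-schema pattern at the core of the framework can be sketched with plain pydantic (BaseIOSchema is itself a pydantic model; the class and field names below are hypothetical stand-ins, not part of the library):

```python
from pydantic import BaseModel, Field

# Stand-in sketch: real atomic-agents code would subclass BaseIOSchema
# (from atomic_agents import BaseIOSchema); plain pydantic shows the shape.
class SearchInput(BaseModel):
    """Input schema for a hypothetical search agent."""
    query: str = Field(..., description="The user's search query")
    max_results: int = Field(5, description="How many results to return")

inp = SearchInput(query="quantum computing")
print(inp.max_results)  # → 5
```

Because schemas are ordinary pydantic models, validation errors surface at construction time rather than inside the LLM call.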
pip install atomic-agents

Common errors
error ModuleNotFoundError: No module named 'atomic_agents.lib'
cause v2.0 removed the '.lib' subpackage from import paths.
fix
Update imports to drop '.lib', e.g. 'from atomic_agents import BaseIOSchema'; see the full path mapping under Warnings.
error ImportError: cannot import name 'Agent' from 'atomic_agents'
cause v2.0 renamed the core classes; there is no 'Agent' class. The v1 'BaseAgent' is now 'AtomicAgent'.
fix
Import the v2 name: 'from atomic_agents import AtomicAgent'.
error TypeError: 'AtomicAgent' object is not subscriptable
cause v2.0 made 'AtomicAgent' generic over input and output schemas; the type parameters go on the class, not on an instance.
fix
Parameterize the class before instantiating, e.g. 'AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](config=...)'.
error AttributeError: module 'atomic_agents' has no attribute 'run'
cause 'run' is an instance method of 'AtomicAgent', not a module-level function.
fix
Instantiate an agent and call 'run' on it, e.g. 'agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](config=config); agent.run(input_schema)'.
error ValueError: Missing required field 'chat_message' in input schema
cause 'chat_message' is a required field on 'BasicChatInputSchema'.
fix
Always pass 'chat_message' when constructing the input, e.g. 'BasicChatInputSchema(chat_message="Hello")'.
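The required-field behavior comes from pydantic validation. A stand-in sketch (hypothetical class name, plain pydantic instead of the real BasicChatInputSchema) shows what omitting the field does:

```python
from pydantic import BaseModel, ValidationError

class ChatInputLike(BaseModel):  # stand-in for BasicChatInputSchema
    chat_message: str

try:
    ChatInputLike()  # no chat_message: pydantic rejects it
except ValidationError as exc:
    errors = exc.errors()
    print(errors[0]["loc"])  # → ('chat_message',)
```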
Warnings
breaking v2.0 renamed all core classes: BaseAgent → AtomicAgent, BaseAgentConfig → AgentConfig, AgentMemory → ChatHistory, BaseAgentInputSchema → BasicChatInputSchema, BaseAgentOutputSchema → BasicChatOutputSchema. All v1 imports raise ImportError.
fix See full rename mapping at github.com/BrainBlend-AI/atomic-agents/blob/main/UPGRADE_DOC.md
breaking All .lib import paths removed in v2. 'from atomic_agents.lib.base.base_io_schema import BaseIOSchema' → 'from atomic_agents import BaseIOSchema'. 'from atomic_agents.lib.components.*' → 'from atomic_agents.context.*'.
fix atomic_agents.lib.base.* → atomic_agents.*; atomic_agents.lib.components.* → atomic_agents.context.*; atomic_agents.lib.factories.* → atomic_agents.connectors.mcp.*
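A mechanical first pass over the path renames can be scripted (a sketch assuming GNU sed; it only rewrites import paths — renamed classes such as AgentMemory → ChatHistory still need manual edits):

```shell
# Demonstrated on a sample line; for a real migration you would run the
# same sed expressions over the source tree (e.g. grep -rl | xargs sed -i).
echo "from atomic_agents.lib.components.agent_memory import AgentMemory" |
  sed -e 's/atomic_agents\.lib\.components/atomic_agents.context/' \
      -e 's/atomic_agents\.lib\.base/atomic_agents/'
# prints: from atomic_agents.context.agent_memory import AgentMemory
```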
breaking run_async() behavior changed in v2. Previously a streaming generator, now returns a complete response. Use run_async_stream() for streaming.
fix Replace async for chunk in agent.run_async() with response = await agent.run_async() or use run_async_stream() for streaming.
breaking BaseTool now uses generic type parameters in v2. Custom tools from v1 break — must add type parameters.
fix See tool migration in UPGRADE_DOC.md for generic type parameter syntax.
gotcha atomic-agents requires instructor as its LLM client wrapper. Raw OpenAI/Anthropic clients do not work — must be wrapped with instructor.from_openai() or equivalent.
fix client = instructor.from_openai(OpenAI()) then pass to AgentConfig(client=client, ...)
gotcha LLMs trained pre-2025 will generate v1 patterns (BaseAgent, .lib imports). These all break on v2.
fix All BaseAgent → AtomicAgent. All atomic_agents.lib.* paths need updating per rename table.
breaking The 'instructor' library, a core dependency of 'atomic-agents', uses Python 3.10+ type annotation syntax (e.g., `str | Path`). On Python versions before 3.10 this raises `TypeError: Unable to evaluate type annotation 'str | Path'` or `TypeError: unsupported operand type(s) for |: 'type' and 'type'`.
fix Upgrade the Python environment to version 3.10 or newer.
gotcha The `openai.OpenAI()` client, used by `instructor` within `atomic-agents`, requires an API key to be configured. This can be done by setting the `OPENAI_API_KEY` environment variable or by passing `api_key='your_key'` directly to the `OpenAI` client constructor.
fix Ensure the `OPENAI_API_KEY` environment variable is set, or initialize the OpenAI client with `client = instructor.from_openai(OpenAI(api_key='YOUR_OPENAI_API_KEY'))`.
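A fail-fast pre-flight check makes the missing-key case obvious instead of surfacing as an opaque auth error mid-call (a sketch; the helper name is made up):

```python
import os

def check_openai_key(env: dict) -> str:
    """Report whether an OpenAI key is present in the given env mapping."""
    return "configured" if env.get("OPENAI_API_KEY") else "missing OPENAI_API_KEY"

# In real code you would pass dict(os.environ).
print(check_openai_key({"OPENAI_API_KEY": "sk-test"}))  # → configured
print(check_openai_key({}))  # → missing OPENAI_API_KEY
```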
Install
pip install atomic-agents openai instructor

Install compatibility (verified; last tested: 2026-05-12)
python  os / libc      install  import  disk
3.10    alpine (musl)  3.11s    3.04s   116.8M
3.10    slim (glibc)   2.33s    2.31s   118M
3.11    alpine (musl)  3.83s    3.81s   127.2M
3.11    slim (glibc)   3.27s    3.29s   129M
3.12    alpine (musl)  3.75s    3.75s   258.7M
3.12    slim (glibc)   3.75s    3.74s   242M
3.13    alpine (musl)  3.50s    3.41s   258.5M
3.13    slim (glibc)   3.51s    3.48s   242M
3.9     alpine (musl)  -        -       -       (install fails: requires Python 3.10+)
3.9     slim (glibc)   -        -       -       (install fails: requires Python 3.10+)
Imports
AtomicAgent (v2 — current)

wrong:
from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig
from atomic_agents.lib.components.agent_memory import AgentMemory

agent = BaseAgent(
    BaseAgentConfig(client=client, model='gpt-4o-mini', memory=AgentMemory())
)

correct:
import instructor
from openai import OpenAI
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema
from atomic_agents.context import ChatHistory, SystemPromptGenerator

client = instructor.from_openai(OpenAI())
history = ChatHistory()
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=client,
        model='gpt-4o-mini',
        history=history
    )
)
response = agent.run(BasicChatInputSchema(chat_message='Hello!'))
print(response.chat_message)

run_async (v2 behavior change)

wrong:
# v1: run_async was a streaming generator
async for chunk in agent.run_async(input_schema):
    print(chunk)

correct:
# v2: run_async returns a complete response
response = await agent.run_async(BasicChatInputSchema(chat_message='Hello'))
print(response.chat_message)

# v2: use run_async_stream for streaming
async for partial in agent.run_async_stream(BasicChatInputSchema(chat_message='Hello')):
    print(partial)
Quickstart (stale; last tested: 2026-04-23)
# pip install atomic-agents openai instructor
import instructor
from openai import OpenAI
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema
from atomic_agents.context import ChatHistory
client = instructor.from_openai(OpenAI())
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=client,
        model='gpt-4o-mini',
        history=ChatHistory()
    )
)
response = agent.run(BasicChatInputSchema(chat_message='What is quantum computing?'))
print(response.chat_message)