Atomic Agents
Lightweight agent framework by BrainBlend-AI built on top of instructor and pydantic. Focuses on typed, composable agents with explicit schemas. v2.0 introduced major breaking changes: all core classes renamed, .lib removed from import paths, and generic type parameters added. Current version: 2.7.4 (Mar 2026). Uses the instructor library as its LLM abstraction layer — not a direct OpenAI/Anthropic dependency.
Warnings
- breaking v2.0 renamed all core classes: BaseAgent → AtomicAgent, BaseAgentConfig → AgentConfig, AgentMemory → ChatHistory, BaseAgentInputSchema → BasicChatInputSchema, BaseAgentOutputSchema → BasicChatOutputSchema. All v1 imports raise ImportError.
- breaking All .lib import paths removed in v2. 'from atomic_agents.lib.base.base_io_schema import BaseIOSchema' → 'from atomic_agents import BaseIOSchema'. 'from atomic_agents.lib.components.*' → 'from atomic_agents.context.*'.
- breaking run_async() behavior changed in v2. Previously a streaming generator, now returns a complete response. Use run_async_stream() for streaming.
- breaking BaseTool now uses generic type parameters in v2. Custom tools from v1 break — must add type parameters.
- gotcha atomic-agents requires instructor as its LLM client wrapper. Raw OpenAI/Anthropic clients do not work — must be wrapped with instructor.from_openai() or equivalent.
- gotcha LLMs trained pre-2025 will generate v1 patterns (BaseAgent, .lib imports). These all break on v2.
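Since the v1 → v2 renames above are purely mechanical, a first migration pass over a codebase can be scripted. A minimal sketch in plain Python (no dependency on atomic-agents itself); the name map is taken directly from the breaking-change notes, everything else is illustrative:

```python
import re

# v1 -> v2 class renames, as listed in the breaking-change notes above.
V1_TO_V2 = {
    "BaseAgent": "AtomicAgent",
    "BaseAgentConfig": "AgentConfig",
    "AgentMemory": "ChatHistory",
    "BaseAgentInputSchema": "BasicChatInputSchema",
    "BaseAgentOutputSchema": "BasicChatOutputSchema",
}

# Longest names first so BaseAgentConfig is matched before BaseAgent.
_PATTERN = re.compile(
    "|".join(rf"\b{re.escape(k)}\b" for k in sorted(V1_TO_V2, key=len, reverse=True))
)

def migrate_source(src: str) -> str:
    """Rewrite v1 identifiers to their v2 names in a source string."""
    return _PATTERN.sub(lambda m: V1_TO_V2[m.group(0)], src)

print(migrate_source("agent = BaseAgent(BaseAgentConfig(memory=AgentMemory()))"))
# -> agent = AtomicAgent(AgentConfig(memory=ChatHistory()))
```

This only covers identifiers; the .lib import-path changes and the new generic type parameters still need review by hand.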
Install
pip install atomic-agents
pip install atomic-agents openai instructor
Imports
- AtomicAgent (v2 — current)
import instructor
from openai import OpenAI
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema
from atomic_agents.context import ChatHistory, SystemPromptGenerator
client = instructor.from_openai(OpenAI())
history = ChatHistory()
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=client,
        model='gpt-4o-mini',
        history=history
    )
)
response = agent.run(BasicChatInputSchema(chat_message='Hello!'))
print(response.chat_message)
- run_async (v2 behavior change)
# v2: run_async returns a complete response
response = await agent.run_async(BasicChatInputSchema(chat_message='Hello'))
print(response.chat_message)
# v2: use run_async_stream for streaming
async for partial in agent.run_async_stream(BasicChatInputSchema(chat_message='Hello')):
    print(partial)
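The run_async change is easiest to see with a stdlib-only model of the two shapes: a coroutine that resolves to one complete object versus an async generator that yields partials. FakeAgent below is a stand-in for illustration, not the real AtomicAgent:

```python
import asyncio

class FakeAgent:
    """Stand-in mirroring the v2 async surface; not the real AtomicAgent."""

    async def run_async(self, message: str) -> str:
        # v2: a single awaitable returning one complete response.
        return f"echo: {message}"

    async def run_async_stream(self, message: str):
        # v2: an async generator yielding partial chunks.
        for word in f"echo: {message}".split():
            yield word

async def main():
    agent = FakeAgent()
    full = await agent.run_async("Hello")                       # complete response
    parts = [p async for p in agent.run_async_stream("Hello")]  # streamed partials
    return full, parts

full, parts = asyncio.run(main())
print(full)   # echo: Hello
print(parts)  # ['echo:', 'Hello']
```

v1 code that iterated over run_async() directly will fail under v2, since the returned object is no longer an async generator.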
Quickstart
# pip install atomic-agents openai instructor
import instructor
from openai import OpenAI
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema
from atomic_agents.context import ChatHistory
client = instructor.from_openai(OpenAI())
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=client,
        model='gpt-4o-mini',
        history=ChatHistory()
    )
)
response = agent.run(BasicChatInputSchema(chat_message='What is quantum computing?'))
print(response.chat_message)
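The AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema] subscript in the quickstart is ordinary Python generics: the two parameters pin the agent's input and output schema types. A stdlib-only sketch of that pattern using typing.Generic (the real library uses pydantic models; the dataclasses and TypedAgent here are illustrative stand-ins):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

InSchema = TypeVar("InSchema")
OutSchema = TypeVar("OutSchema")

@dataclass
class ChatIn:
    chat_message: str

@dataclass
class ChatOut:
    chat_message: str

class TypedAgent(Generic[InSchema, OutSchema]):
    """Toy agent parameterized by input/output schema, mirroring the
    AtomicAgent[In, Out] shape from v2 (illustration only)."""

    def run(self, params: InSchema) -> OutSchema:
        # A real agent would call the LLM; this just echoes the input.
        return ChatOut(chat_message=f"you said: {params.chat_message}")  # type: ignore[return-value]

agent = TypedAgent[ChatIn, ChatOut]()
out = agent.run(ChatIn(chat_message="hi"))
print(out.chat_message)  # you said: hi
```

This is why v1 custom tools break under v2: BaseTool subclasses must now declare their input and output schema parameters the same way.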