Ollama Integration for Microsoft Agent Framework
agent-framework-ollama provides an adapter that integrates Ollama language models with the Microsoft Agent Framework, letting agents use local or remote Ollama instances for generating responses and processing natural language. As a beta package (currently 1.0.0b260409), its API is subject to frequent change; it is part of active development within the broader Microsoft Agent Framework initiative.
Common errors
- `ModuleNotFoundError: No module named 'agent_framework_ollama'`
  - Cause: the `agent-framework-ollama` package is not installed in your Python environment.
  - Fix: install it with pip: `pip install agent-framework-ollama`
- `httpx.ConnectError: Connection refused`
  - Cause: the Ollama server is not running, is not reachable at the configured `base_url`, or is blocked by a firewall.
  - Fix: start the Ollama server (e.g., `ollama serve`), verify that the `base_url` parameter of `OllamaAdapter` (or the `OLLAMA_BASE_URL` environment variable) matches your Ollama instance, and check for firewall restrictions.
- ``ollama.ResponseError: {'error': "model 'your-model-name' not found, try `ollama pull your-model-name`"}``
  - Cause: the specified model is not available on the running Ollama server, or its name is misspelled; the server is asking you to pull it.
  - Fix: pull the required model with the Ollama CLI, e.g. `ollama pull llama3`, and make sure the model name in your code matches the pulled model.
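The missing-model error can be caught before constructing an adapter by checking the server's model list. A minimal sketch, assuming the `/api/tags` response shape documented by Ollama (`{"models": [{"name": "llama3:latest"}, ...]}`); the helper names are illustrative, not part of this package:

```python
import json
import urllib.request


def model_available(tags_response: dict, model: str) -> bool:
    """Check whether `model` appears in an Ollama /api/tags response.

    Names are matched with or without a tag suffix, so "llama3"
    also matches "llama3:latest".
    """
    for entry in tags_response.get("models", []):
        name = entry.get("name", "")
        if name == model or name.split(":", 1)[0] == model:
            return True
    return False


def list_models(base_url: str = "http://localhost:11434") -> dict:
    """Fetch the model list from a running Ollama server.

    Raises urllib.error.URLError if the server is unreachable.
    """
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return json.load(resp)
```

If `model_available(list_models(), "llama3")` is false, run `ollama pull llama3` before starting the agent.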
Warnings
- breaking As a beta package, the API of agent-framework-ollama is subject to frequent changes without strict adherence to semantic versioning. Methods, classes, and configurations may be renamed, altered, or removed in minor or even patch versions.
- gotcha The functionality of agent-framework-ollama relies on a running Ollama server instance and pre-downloaded models. Network connectivity issues, an incorrect `base_url`, or missing models are common causes of runtime errors.
- gotcha This library is tightly coupled with `agent-framework`, which is also in beta. Incompatibility issues can arise when the installed versions of the two packages drift apart, especially if `agent-framework` introduces breaking changes or requires specific underlying dependencies.
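Since a missing or unreachable server is the most common failure mode, a pre-flight reachability check can turn a cryptic `ConnectError` into an early, clear message. A stdlib-only sketch; the function name is an illustrative assumption, and it relies only on the fact that the Ollama server answers plain HTTP GETs on its base URL:

```python
import urllib.error
import urllib.request


def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at base_url.

    A running Ollama server replies on its root path, so a plain GET is
    enough to distinguish "server up" from "connection refused".
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

Calling this before constructing the adapter lets you fail fast with a hint to run `ollama serve`.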
Install
```shell
pip install agent-framework-ollama
```
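You can confirm the install (and see which beta build you got) from Python using the standard library's `importlib.metadata`; the helper name here is illustrative:

```python
from importlib import metadata


def installed_version(package: str):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


# After `pip install agent-framework-ollama`, this should print a version
# such as 1.0.0b260409 rather than None.
print(installed_version("agent-framework-ollama"))
```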
Imports
- `OllamaAdapter`

```python
from agent_framework_ollama import OllamaAdapter
```
Quickstart
```python
import asyncio
import os

from agent_framework.agent import Agent
from agent_framework.user import User
from agent_framework_ollama import OllamaAdapter


async def main():
    # Prerequisites:
    # 1. The Ollama server must be running (e.g., `ollama serve` in your terminal).
    # 2. The desired model must be pulled (e.g., `ollama pull llama3`).

    # Configure the Ollama connection from environment variables, falling back
    # to a local instance and the llama3 model.
    ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    ollama_model = os.environ.get("OLLAMA_MODEL", "llama3")

    print(f"Attempting to connect to Ollama at {ollama_base_url} with model: {ollama_model}")

    try:
        # Initialize the OllamaAdapter
        ollama_adapter = OllamaAdapter(
            model=ollama_model,
            base_url=ollama_base_url,
        )

        # Create an Agent backed by the OllamaAdapter
        agent = Agent(
            name="OllamaAgent",
            adapter=ollama_adapter,
            # Optional: hooks to log messages as they are processed
            pre_process_messages=lambda msgs: print(
                f"\n[OllamaAgent received]: {[m.content for m in msgs if m.content]}"
            ),
            post_process_messages=lambda msgs: print(
                f"[OllamaAgent sent]: {[m.content for m in msgs if m.content]}"
            ),
        )

        # Create a User to interact with the agent
        user = User(
            name="User",
            connect_to=agent,
        )

        print("\n--- Starting conversation with OllamaAgent ---")
        await user.send("Hello, OllamaAgent! Please introduce yourself and tell me what you can do.")
        # In a larger application you would manage message history or await a
        # specific response; in this quickstart, the print hooks show the exchange.
        print("\n--- Conversation complete ---")
    except Exception as e:
        print(f"\nAn error occurred during agent execution: {e}")
        print("Ensure the Ollama server is running, the model is pulled, and the base_url is correct.")


if __name__ == "__main__":
    asyncio.run(main())
```
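The environment-variable handling in the quickstart can be factored into a small helper so the same defaults apply everywhere; the function name is an illustrative assumption, while the variable names and defaults mirror the quickstart:

```python
import os


def resolve_ollama_config(env=None):
    """Resolve (base_url, model) from an environment mapping.

    Falls back to the quickstart defaults: a local Ollama instance on
    port 11434 and the `llama3` model.
    """
    env = os.environ if env is None else env
    base_url = env.get("OLLAMA_BASE_URL", "http://localhost:11434")
    model = env.get("OLLAMA_MODEL", "llama3")
    return base_url, model
```

Passing a plain dict instead of `os.environ` makes the resolution logic easy to unit-test.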