Ollama Integration for Microsoft Agent Framework

1.0.0b260409 · active · verified Thu Apr 16

agent-framework-ollama provides an adapter that integrates Ollama language models with the Microsoft Agent Framework, enabling agents to use local or remote Ollama instances for response generation and natural-language processing. The package is in beta (currently 1.0.0b260409), so its API is subject to frequent change; it is part of active development within the broader Microsoft Agent Framework initiative.

Common errors

Warnings

Install
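Assuming the package is published to PyPI under its documented name, installation should follow the usual pattern:

```shell
pip install agent-framework-ollama
```

The adapter talks to an Ollama server over HTTP, so you will also need Ollama itself installed and a model pulled (e.g., `ollama pull llama3`).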

Imports

Quickstart

This quickstart demonstrates how to set up an `OllamaAdapter` and integrate it with an `Agent` from the `agent-framework`. It assumes an Ollama server is running locally (or at `OLLAMA_BASE_URL`) and the specified model (default: `llama3`) is available. The interaction flow is logged to the console via `pre_process_messages` and `post_process_messages` hooks, providing visibility into the agent's communication.
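The environment-variable handling used in the quickstart can be factored into a small helper. This is a sketch: `resolve_ollama_config` is an illustrative name, not part of the package, and the defaults mirror those used below.

```python
import os

def resolve_ollama_config(env=None):
    """Read Ollama connection settings from the environment, with local defaults.

    Accepts an optional mapping (for testing); falls back to os.environ.
    """
    env = os.environ if env is None else env
    return {
        "base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "model": env.get("OLLAMA_MODEL", "llama3"),
    }
```

Passing a mapping instead of reading `os.environ` directly keeps the helper easy to unit-test without mutating process state.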

```python
import asyncio
import os

from agent_framework.agent import Agent
from agent_framework.user import User
from agent_framework_ollama import OllamaAdapter

async def main():
    # Pre-requisites:
    # 1. Ollama server must be running (e.g., `ollama serve` in your terminal)
    # 2. The desired model must be pulled (e.g., `ollama pull llama3`)

    # Configure the Ollama connection from environment variables, with local defaults
    ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    ollama_model = os.environ.get("OLLAMA_MODEL", "llama3")

    if not ollama_model:
        # Only reachable if OLLAMA_MODEL is set to an empty string
        print("Error: OLLAMA_MODEL is set but empty.")
        print("Please specify an Ollama model, e.g., 'llama3'.")
        return

    print(f"Attempting to connect to Ollama at {ollama_base_url} with model: {ollama_model}")

    try:
        # Initialize the OllamaAdapter
        ollama_adapter = OllamaAdapter(
            model=ollama_model,
            base_url=ollama_base_url,
        )

        # Create an Agent backed by the OllamaAdapter
        agent = Agent(
            name="OllamaAgent",
            adapter=ollama_adapter,
            # Optional: hooks that log messages as they are processed
            pre_process_messages=lambda msgs: print(f"\n[OllamaAgent received]: {[m.content for m in msgs if m.content]}"),
            post_process_messages=lambda msgs: print(f"[OllamaAgent sent]: {[m.content for m in msgs if m.content]}"),
        )

        # Create a User to interact with the agent
        user = User(
            name="User",
            connect_to=agent,
        )

        print("\n--- Starting conversation with OllamaAgent ---")
        await user.send("Hello, OllamaAgent! Please introduce yourself and tell me what you can do.")
        # In a larger application you would manage message history or await a specific response;
        # for this quickstart, the interaction is observable via the print hooks above.
        print("\n--- Conversation complete ---")

    except Exception as e:
        print(f"\nAn error occurred during agent execution: {e}")
        print("Please ensure your Ollama server is running, the model is pulled, and the base_url is correct.")

if __name__ == "__main__":
    asyncio.run(main())
```
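Since most quickstart failures come down to the server simply not running, a stdlib-only pre-flight check can surface that problem before the agent is constructed. This is a sketch: it assumes only that Ollama's root endpoint answers HTTP 200 when the server is up, and `ollama_reachable` is an illustrative name, not part of the package.

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers 200 at base_url.

    A running Ollama server responds on its root endpoint, so this catches
    "server not started" and "wrong base_url" early, before any agent calls.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Calling this at the top of `main()` lets you print the "ensure your Ollama server is running" hint immediately instead of waiting for the adapter to fail.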
