Azure AI Agent Server Core

2.0.0b1 · active · verified Thu Apr 16

Azure AI Agent Server Core (azure-ai-agentserver-core) provides foundational utilities and a host framework for deploying Azure AI Hosted Agents. It lets developers host their agents as containers in the cloud and interact with them via the `azure-ai-projects` SDK. The library is part of the broader Azure SDK for Python; the current version, 2.0.0b1, is a beta release under active development.

Install

pip install azure-ai-agentserver-core

Imports

from azure.ai.agentserver.core import FoundryCBAgent
from azure.ai.agentserver.core.models import CreateResponse

Quickstart

This quickstart demonstrates how to define and run a simple mock agent using `azure-ai-agentserver-core`. It sets up a `FoundryCBAgent` whose `agent_run` callback handles both streaming and non-streaming requests. The example focuses on the server-side implementation of an agent that would be hosted in Azure AI Foundry; for actual deployment, environment variables for Azure authentication and the project endpoint would be required.

import os
import datetime
from azure.ai.agentserver.core import FoundryCBAgent
from azure.ai.agentserver.core.models import (
    CreateResponse,
    Response as OpenAIResponse,
)
from azure.ai.agentserver.core.models.projects import (
    ItemContentOutputText,
    ResponsesAssistantMessageItemResource,
    ResponseTextDeltaEvent,
    ResponseTextDoneEvent,
)

def stream_events(text: str):
    """Yield token-by-token delta events, then a final done event."""
    tokens = text.split(" ")
    assembled = ""
    for i, token in enumerate(tokens):
        # Re-append the space removed by split, except after the last token.
        piece = token if i == len(tokens) - 1 else token + " "
        assembled += piece
        yield ResponseTextDeltaEvent(delta=piece)
    yield ResponseTextDoneEvent(text=assembled)
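The delta/done pattern above can be illustrated without the SDK models: chunk a string into pieces, emit each piece as it is produced, and finish with the reassembled whole. A minimal sketch using plain tuples in place of the event classes (the tuple names here are illustrative, not part of the SDK):

```python
def mock_stream_events(text: str):
    """Stand-in for stream_events: yields ("delta", piece) tuples
    followed by a final ("done", full_text) tuple."""
    tokens = text.split(" ")
    assembled = ""
    for i, token in enumerate(tokens):
        # Last token gets no trailing space, matching the original text.
        piece = token if i == len(tokens) - 1 else token + " "
        assembled += piece
        yield ("delta", piece)
    yield ("done", assembled)

events = list(mock_stream_events("hello streaming world"))
# The deltas concatenate back to exactly the original text,
# and the final event carries the fully assembled string.
deltas = "".join(piece for kind, piece in events if kind == "delta")
```

The invariant a client relies on is that joining every delta reproduces the text carried by the final done event, so a UI can render deltas incrementally and discard them once the done event arrives.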

async def agent_run(request_body: CreateResponse):
    print(f"Agent request received: {request_body.agent}")
    if request_body.stream:
        # Streaming requested: return a generator of delta/done events.
        return stream_events("I am a mock agent with no intelligence in stream mode.")

    # Build assistant output content
    output_content = [
        ItemContentOutputText(
            text="I am a mock agent with no intelligence, responding to your request.",
            annotations=[],
        )
    ]
    response = OpenAIResponse(
        metadata={},
        temperature=0.0,
        top_p=0.0,
        user="mock_user",
        id="mock_id",
        created_at=datetime.datetime.now(),
        output=[
            ResponsesAssistantMessageItemResource(
                status="completed",
                content=output_content,
            )
        ],
    )
    return response
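On the wire, the non-streaming return value above serializes to a Responses-style JSON payload. The following is a rough sketch of the shape a client might receive, built with plain dicts; the field names follow the OpenAI Responses format, and the exact serialization produced by this beta SDK may differ:

```python
import datetime

# Hypothetical serialized form of the OpenAIResponse built above.
payload = {
    "id": "mock_id",
    "created_at": int(datetime.datetime.now().timestamp()),
    "metadata": {},
    "temperature": 0.0,
    "top_p": 0.0,
    "user": "mock_user",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "status": "completed",
            "content": [
                {
                    "type": "output_text",
                    "text": "I am a mock agent with no intelligence, responding to your request.",
                    "annotations": [],
                }
            ],
        }
    ],
}

# A client typically digs out the first output message's text:
text = payload["output"][0]["content"][0]["text"]
```

Note the nesting: `output` is a list of items, each item has a `content` list, and text lives in an `output_text` content entry, which is why the server code wraps the message in both a list of items and a list of content parts.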

my_agent = FoundryCBAgent()
my_agent.agent_run = agent_run

if __name__ == "__main__":
    # Ensure environment variables are set for actual deployment
    # For local testing, you might not need all env vars for simple mock agents.
    # However, for cloud deployment, you'd need specifics like AZURE_AI_PROJECT_ENDPOINT
    # os.environ['AZURE_AI_PROJECT_ENDPOINT'] = 'your_project_endpoint'
    # os.environ['AZURE_CLIENT_ID'] = 'your_client_id'
    # os.environ['AZURE_TENANT_ID'] = 'your_tenant_id'
    # os.environ['AZURE_CLIENT_SECRET'] = 'your_client_secret'

    print("Starting Azure AI Agent Server Core mock agent...")
    my_agent.run()
