OpenTelemetry OpenAI Agents Instrumentation
This library provides official OpenTelemetry instrumentation for the `openai-agents` SDK. It converts the rich trace data emitted by the Agents runtime into the GenAI semantic conventions, enriches spans with request/response payload metadata, and records duration and token-usage metrics. The current version is `0.58.0`; releases are frequent, often tracking changes in the evolving OpenTelemetry Generative AI semantic conventions.
Warnings
- breaking The OpenTelemetry Generative AI semantic conventions are under active development and may change. Frequent updates in minor versions (e.g., 0.53.x to 0.58.x) often include migrations and adjustments to these conventions, which can alter span attributes or names.
- gotcha By default, this instrumentation captures message content (prompts, completions, tool arguments) within span attributes, which can include sensitive user data. This behavior might conflict with privacy requirements or increase trace size significantly.
- gotcha This package (`opentelemetry-instrumentation-openai-agents`) specifically instruments the `openai-agents` SDK. There are other separate instrumentations for the general `openai` client (e.g., `opentelemetry-instrumentation-openai` or `opentelemetry-instrumentation-openai-v2`). Using the wrong instrumentation for your OpenAI integration will result in no telemetry.
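The content-capture gotcha above can usually be addressed before instrumenting. The GenAI semantic conventions define an opt-in environment variable for message content; whether this particular package reads `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` is an assumption to verify against its changelog for your installed version. A hedged sketch:

```python
import os

# Assumption: this instrumentation honors the standard GenAI content-capture
# toggle from the semantic conventions; confirm against the package changelog
# for your installed version. Set this before calling .instrument().
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "false"
```

With content capture disabled, spans should retain timing, model, and token-usage attributes while omitting prompts, completions, and tool arguments.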
Install
```shell
pip install opentelemetry-instrumentation-openai-agents openai-agents opentelemetry-sdk
```
Imports
- OpenAIAgentsInstrumentor
```python
from opentelemetry.instrumentation.openai_agents import OpenAIAgentsInstrumentor
```
Quickstart
```python
import os

from agents import Agent, Runner, function_tool
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai_agents import OpenAIAgentsInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def configure_otel() -> None:
    # Configure an OpenTelemetry TracerProvider that exports over OTLP.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)

    # Instrument the openai-agents runtime.
    OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)


@function_tool
def get_weather(city: str) -> str:
    """Provides the weather for a given city."""
    return f"The forecast for {city} is sunny with pleasant temperatures."


if __name__ == "__main__":
    # Ensure an OTLP collector is running, e.g., with Docker:
    #   docker run -d -p 4317:4317 -p 4318:4318 otel/opentelemetry-collector-contrib
    configure_otel()

    # A real OPENAI_API_KEY must be set for the call below to succeed; the
    # dummy value only lets the example start without raising at construction.
    os.environ.setdefault("OPENAI_API_KEY", "sk-dummy-key-for-example")

    assistant = Agent(
        name="Travel Concierge",
        instructions="You are a concise travel concierge.",
        tools=[get_weather],
    )

    print("Running agent...")
    result = Runner.run_sync(assistant, "I'm visiting Barcelona this weekend. How should I pack?")
    print(f"Agent final output: {result.final_output}")
    print("Traces should now be visible in your configured OpenTelemetry backend.")
```