OpenTelemetry OpenAI Instrumentation
This library provides OpenTelemetry instrumentation for the OpenAI Python SDK, enabling automatic tracing, metrics (e.g., token usage and request duration), and optional logging of prompt and completion content. It is actively maintained as part of Traceloop's OpenLLMetry project and receives frequent updates; the current release is 0.58.0.
Warnings
- gotcha There are two distinct OpenTelemetry OpenAI instrumentation packages: `opentelemetry-instrumentation-openai` (this community package by Traceloop/OpenLLMetry) and `opentelemetry-instrumentation-openai-v2` (the official OpenTelemetry Contrib package). Ensure you install and import the one you intend to use; the two are developed independently and differ in emitted attributes, defaults, and feature coverage.
- gotcha Capturing full prompt and completion content has privacy implications, and whether this package records it is controlled by the `TRACELOOP_TRACE_CONTENT` environment variable. Verify the default for your installed version before shipping; metadata such as token counts and model names is recorded regardless.
- breaking The OpenTelemetry GenAI Semantic Conventions are under active development. Recent versions of this library (`0.55.0` and above) implement `OpenTelemetry GenAI Semantic Conventions 0.5.0` or later, which may change attribute names or span structures. Existing instrumentations using older versions (e.g., v1.36.0 or prior) might not emit the latest conventions by default.
- gotcha Using this instrumentation with pre-forking servers (e.g., Gunicorn with multiple workers) can lead to issues with metric generation and inconsistent telemetry due to how OpenTelemetry SDK components handle background threads and locks after process forking.
- gotcha Some users have reported significant performance degradation (reduced throughput) when this instrumentation is enabled, particularly under load. While improvements have been made, it's essential to benchmark your application.
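If message text must stay out of telemetry, the content-capture switch can be flipped before instrumenting. A minimal sketch, assuming your installed version honors OpenLLMetry's `TRACELOOP_TRACE_CONTENT` environment variable:

```python
import os

# Assumption: OpenLLMetry-based instrumentation reads TRACELOOP_TRACE_CONTENT
# to decide whether prompt/completion text is attached to spans.
# Set it BEFORE calling OpenAIInstrumentor().instrument().
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"  # record metadata only, no message text
```

Setting the variable in the process environment (or your deployment config) before the instrumentor initializes has the same effect.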
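For pre-forking servers, the usual workaround is to defer OpenTelemetry setup until after the fork, so each worker owns its own background exporter threads. A hedged sketch of a `gunicorn.conf.py` using Gunicorn's `post_fork` hook (the exporter wiring here is illustrative; adjust to your backend):

```python
# gunicorn.conf.py -- configuration fragment, not a standalone script
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

def post_fork(server, worker):
    # Runs inside each worker process after forking, so the SDK's
    # background threads and locks belong to the worker, not the master.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
    OpenAIInstrumentor().instrument()
```

The same pattern applies to other pre-forking setups (e.g., uWSGI's `postfork` decorator): initialize providers and instrumentors per worker, never in the parent.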
Install
-
pip install opentelemetry-instrumentation-openai openai opentelemetry-sdk opentelemetry-exporter-otlp
Imports
- OpenAIInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from openai import OpenAI
# Configure OpenTelemetry Tracer Provider
provider = TracerProvider()
processor = SimpleSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
# Instrument the OpenAI SDK
OpenAIInstrumentor().instrument()
# The OpenAI client reads OPENAI_API_KEY from the environment
os.environ.setdefault('OPENAI_API_KEY', 'sk-YOUR_OPENAI_API_KEY')  # replace the placeholder or export the variable
# Initialize OpenAI client and make a call
client = OpenAI()
try:
    print("Making an OpenAI chat completion call...")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a short story about a brave knight."},
        ],
    )
    print("OpenAI response received.")
    print(f"Story: {response.choices[0].message.content}")
except Exception as e:
    print(f"An error occurred: {e}")
    print("Ensure OPENAI_API_KEY is set and valid.")
# You should see OpenTelemetry traces printed to the console.