OpenTelemetry OpenAI Instrumentation
This library provides OpenTelemetry instrumentation for the OpenAI Python SDK, enabling automatic tracing, metric collection (e.g., token usage, duration), and optional logging of prompt and completion content. It is actively maintained by Traceloop/OpenLLMetry and receives frequent updates, currently at version 0.58.0.
Common errors
- `ModuleNotFoundError: No module named 'opentelemetry.instrumentation.openai'`
  - cause: The `opentelemetry-instrumentation-openai` package has not been installed, or it is installed in a different Python environment than the one running the application.
  - fix: Ensure the library is installed in your active environment: `pip install opentelemetry-instrumentation-openai` (for the Traceloop/OpenLLMetry version) or `pip install opentelemetry-instrumentation-openai-v2` (for the official OpenTelemetry contrib version).
- `AttributeError: 'LegacyAPIResponse' object has no attribute 'model'`
  - cause: This typically occurs with `opentelemetry-instrumentation-openai-v2` when using OpenAI's `with_raw_response` methods, because the instrumentation expects a parsed response object while `LegacyAPIResponse` has a different structure.
  - fix: This is a known bug in affected versions of `opentelemetry-instrumentation-openai-v2`. As a workaround, avoid `with_raw_response`, or parse the raw response and extract `model` yourself before the instrumentation processes it. Check for updated versions of `opentelemetry-instrumentation-openai-v2` that may contain a fix.
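To illustrate the workaround, a small helper can read `model` defensively from either a parsed completion or a raw response wrapper. This is a minimal sketch: `response_model` is a hypothetical helper name, and the only SDK-specific assumption is that raw wrappers returned by `with_raw_response` in the openai v1 SDK expose a `parse()` method.

```python
def response_model(response):
    """Return the model name from either a parsed completion object or a
    raw/legacy response wrapper. Hypothetical helper for illustration."""
    # Parsed ChatCompletion objects expose .model directly.
    model = getattr(response, "model", None)
    if model is not None:
        return model
    # Raw wrappers from with_raw_response expose parse() in the openai v1 SDK,
    # which returns the parsed object (useful when you also need headers).
    parse = getattr(response, "parse", None)
    if callable(parse):
        return getattr(parse(), "model", None)
    return None
```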
- No traces are emitted for OpenAI calls
  - cause: The instrumentation was never initialized (`instrument()` was not called), or other OpenTelemetry components (such as an exporter or span processor) are misconfigured or missing, so spans are never generated or sent to a backend.
  - fix: Call `OpenAIInstrumentor().instrument()` exactly once at application startup, before any OpenAI API calls are made. Also verify that your OpenTelemetry SDK, span processor, and exporter are correctly configured to send traces to your observability backend.
- `ImportError: cannot import name 'OpenAIInstrumentor' from 'opentelemetry.instrumentation.openai'`
  - cause: You are importing `OpenAIInstrumentor` from the community package path (`opentelemetry.instrumentation.openai`) while the official OpenTelemetry contrib package (`opentelemetry-instrumentation-openai-v2`) is installed; that package places the instrumentor under `opentelemetry.instrumentation.openai_v2`.
  - fix: If you are using the official contrib package (`opentelemetry-instrumentation-openai-v2`), update your import statement to `from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor`.
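Since the two packages expose the instrumentor under different module paths, one defensive pattern is to try both paths at import time. A minimal sketch (the helper name is ours; it returns `None` when neither package is installed):

```python
import importlib


def load_openai_instrumentor():
    """Return the OpenAIInstrumentor class from whichever package is
    installed, or None if neither can be imported."""
    for module_path in (
        "opentelemetry.instrumentation.openai",     # community (Traceloop/OpenLLMetry)
        "opentelemetry.instrumentation.openai_v2",  # official OpenTelemetry contrib
    ):
        try:
            module = importlib.import_module(module_path)
            return getattr(module, "OpenAIInstrumentor")
        except (ImportError, AttributeError):
            continue
    return None
```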
Warnings
- gotcha There are two distinct OpenTelemetry OpenAI instrumentation packages: `opentelemetry-instrumentation-openai` (this community package by Traceloop/OpenLLMetry) and `opentelemetry-instrumentation-openai-v2` (the official OpenTelemetry Contrib package). Ensure you install and use the intended library for your project as they have different origins and might have subtle differences in implementation or features.
- gotcha Capture of full prompt and completion content is configurable, and the defaults differ by package: the official `opentelemetry-instrumentation-openai-v2` records only metadata (token counts, model names) unless message-content capture is explicitly enabled, while the Traceloop package controls content capture via the `TRACELOOP_TRACE_CONTENT` environment variable. Verify your configuration before emitting telemetry that may contain user data.
- breaking The OpenTelemetry GenAI Semantic Conventions are under active development. Recent versions of this library (`0.55.0` and above) implement `OpenTelemetry GenAI Semantic Conventions 0.5.0` or later, which may change attribute names or span structures. Existing instrumentations using older versions (e.g., v1.36.0 or prior) might not emit the latest conventions by default.
- gotcha Using this instrumentation with pre-forking servers (e.g., Gunicorn with multiple workers) can lead to issues with metric generation and inconsistent telemetry due to how OpenTelemetry SDK components handle background threads and locks after process forking.
- gotcha Some users have reported significant performance degradation (reduced throughput) when this instrumentation is enabled, particularly under load. While improvements have been made, it's essential to benchmark your application.
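For the pre-forking case above, a commonly recommended pattern is to defer OpenTelemetry SDK initialization until after the worker process has forked, e.g. in Gunicorn's `post_fork` server hook, so exporter background threads are created per worker. A sketch for `gunicorn.conf.py`, assuming the OTLP exporter from the Install step and a collector on the default endpoint:

```python
# gunicorn.conf.py -- initialize the tracing pipeline per worker, after fork,
# so background exporter threads belong to the worker process, not the master.
def post_fork(server, worker):
    from opentelemetry import trace
    from opentelemetry.instrumentation.openai import OpenAIInstrumentor
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
    OpenAIInstrumentor().instrument()
```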
Install
```shell
pip install opentelemetry-instrumentation-openai openai opentelemetry-sdk opentelemetry-exporter-otlp
```
Imports
- OpenAIInstrumentor

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
```
Quickstart
```python
import os

from openai import OpenAI
from opentelemetry import trace
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Configure the OpenTelemetry tracer provider with a console exporter
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument the OpenAI SDK (call once, before any OpenAI API calls)
OpenAIInstrumentor().instrument()

# The client reads OPENAI_API_KEY from the environment; replace the
# placeholder fallback with a real key if the variable is not set
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "sk-YOUR_OPENAI_API_KEY"))

try:
    print("Making an OpenAI chat completion call...")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a short story about a brave knight."},
        ],
    )
    print("OpenAI response received.")
    print(f"Story: {response.choices[0].message.content}")
except Exception as e:
    print(f"An error occurred: {e}")
    print("Ensure OPENAI_API_KEY is set and valid.")
```

You should see OpenTelemetry spans printed to the console.
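The Install step also pulls in `opentelemetry-exporter-otlp`, so once the console output looks right you can swap `ConsoleSpanExporter` for an OTLP exporter. A sketch assuming a collector listening on the default gRPC endpoint (`localhost:4317`):

```python
def configure_otlp_tracing(endpoint="http://localhost:4317"):
    """Replace the console pipeline with an OTLP/gRPC export pipeline."""
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    provider = TracerProvider()
    # BatchSpanProcessor queues spans and exports them in the background,
    # which is preferable to SimpleSpanProcessor outside of local debugging.
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint)))
    trace.set_tracer_provider(provider)
```

Call `configure_otlp_tracing()` in place of the console-exporter setup above, before `OpenAIInstrumentor().instrument()`.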