OpenTelemetry OpenAI Instrumentation (v2)
This library provides official OpenTelemetry instrumentation for the OpenAI Python API library (version 1.0.0 and above). It enables automatic tracing of LLM requests, capturing model name, token usage, finish reason, duration, and errors without modifying existing OpenAI client code. It also supports logging of messages and metrics, and is maintained as part of the OpenTelemetry Python Contrib project.
Warnings
- breaking This instrumentation (`opentelemetry-instrumentation-openai-v2`) is designed for the OpenAI Python SDK v1.0.0 and above. The OpenAI SDK v1.0.0 introduced extensive breaking changes, including a complete rewrite of the client API. Users migrating from older OpenAI SDK versions (pre-1.0.0) or older OpenTelemetry OpenAI instrumentations must update their code and use this `-v2` package.
- gotcha The package is in beta (`b0`) status, meaning API stability is not guaranteed and breaking changes may occur in minor versions without notice. Furthermore, semantic convention updates (e.g., the bump to 1.30.0 noted in the `2.3b0` release) can change the attribute names emitted in telemetry.
- gotcha Message content, such as prompts, completions, and function arguments/return values, is not captured by default due to privacy and data sensitivity concerns.
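Content capture can be opted into via an environment variable documented by the package. A minimal sketch, assuming the instrumentation reads the variable when `.instrument()` is called, so it must be set beforehand:

```python
import os

# Opt in to recording prompts/completions in telemetry. Only do this if
# the data sensitivity implications are acceptable for your workload.
# Set this BEFORE calling OpenAIInstrumentor().instrument().
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
```

Leaving the variable unset keeps the default, privacy-preserving behavior of omitting message content.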
- gotcha Early beta versions (up to and including `2.3b0`) shipped fixes for an `AttributeError` when handling `LegacyAPIResponse` (returned by `with_raw_response`) and for crashes when streaming through `with_raw_response`. If you use these raw-response patterns, run a release that includes those fixes, since older versions of the instrumentation could fail on them.
Install
pip install opentelemetry-instrumentation-openai-v2 opentelemetry-sdk openai
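Because the package is in beta and minor versions may introduce breaking changes (see Warnings), it can be prudent to pin the instrumentation version you have validated. The version below is illustrative; substitute the release you actually tested:

```shell
# Pin to a tested release to avoid surprise breaking changes in beta minors.
pip install "opentelemetry-instrumentation-openai-v2==2.3b0" opentelemetry-sdk openai
```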
Imports
- OpenAIInstrumentor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
from openai import OpenAI
# Configure the OpenTelemetry tracer provider and instrument OpenAI
def setup_and_instrument_otel():
    resource = Resource.create({"service.name": "my-openai-app"})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    # Instrument OpenAI; all subsequent client calls are traced automatically
    OpenAIInstrumentor().instrument()
    print("OpenTelemetry and OpenAI instrumentation initialized.")

if __name__ == "__main__":
    # The OpenAI client reads OPENAI_API_KEY from the environment
    if not os.environ.get("OPENAI_API_KEY"):
        print("Please set the OPENAI_API_KEY environment variable.")
        raise SystemExit(1)

    setup_and_instrument_otel()
    client = OpenAI()
    try:
        print("\nMaking an OpenAI chat completion call...")
        chat_completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": "Tell me a short story about OpenTelemetry."}
            ],
        )
        print("OpenAI call successful. Check console for traces.")
        # print(chat_completion.choices[0].message.content)
    except Exception as e:
        print(f"An error occurred during OpenAI call: {e}")

# In production, export spans to a collector (e.g., Jaeger via OTLP) instead of
# ConsoleSpanExporter; this example prints to the console to show basic functionality.
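For a production setup, the console exporter above can be swapped for an OTLP exporter. A configuration sketch, assuming the separate `opentelemetry-exporter-otlp` package is installed and a collector is listening on the default gRPC endpoint `localhost:4317`:

```python
# Sketch only: requires `pip install opentelemetry-exporter-otlp` and a
# running OpenTelemetry Collector (or Jaeger with OTLP enabled).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# BatchSpanProcessor buffers and exports spans asynchronously, which is
# preferable to SimpleSpanProcessor outside of demos.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
```

Instrument OpenAI with `OpenAIInstrumentor().instrument()` after this setup, exactly as in the Quickstart.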