{"id":2163,"library":"opentelemetry-instrumentation-openai","title":"OpenTelemetry OpenAI Instrumentation","description":"This library provides OpenTelemetry instrumentation for the OpenAI Python SDK, enabling automatic tracing, metric collection (e.g., token usage, duration), and optional logging of prompt and completion content. It is actively maintained by Traceloop/OpenLLMetry and receives frequent updates, currently at version 0.58.0.","status":"active","version":"0.58.0","language":"en","source_language":"en","source_url":"https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-openai","tags":["opentelemetry","observability","tracing","openai","llm","ai","instrumentation"],"install":[{"cmd":"pip install opentelemetry-instrumentation-openai openai opentelemetry-sdk opentelemetry-exporter-otlp","lang":"bash","label":"Install core and OpenAI dependencies"}],"dependencies":[{"reason":"Required for interacting with the OpenAI API, which this library instruments.","package":"openai"},{"reason":"Core OpenTelemetry SDK components for tracing and metrics.","package":"opentelemetry-sdk"},{"reason":"Recommended exporter for sending telemetry data to an OTLP-compatible collector.","package":"opentelemetry-exporter-otlp","optional":true}],"imports":[{"symbol":"OpenAIInstrumentor","correct":"from opentelemetry.instrumentation.openai import OpenAIInstrumentor"}],"quickstart":{"code":"import os\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor\nfrom opentelemetry.instrumentation.openai import OpenAIInstrumentor\nfrom openai import OpenAI\n\n# Configure OpenTelemetry Tracer Provider\nprovider = TracerProvider()\nprocessor = SimpleSpanProcessor(ConsoleSpanExporter())\nprovider.add_span_processor(processor)\ntrace.set_tracer_provider(provider)\n\n# Instrument the OpenAI SDK\nOpenAIInstrumentor().instrument()\n\n# Set your 
OpenAI API key (replace with actual key or use env var)\nos.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY', 'sk-YOUR_OPENAI_API_KEY')\n\n# Initialize OpenAI client and make a call\nclient = OpenAI()\n\ntry:\n    print(\"Making an OpenAI chat completion call...\")\n    response = client.chat.completions.create(\n        model=\"gpt-3.5-turbo\",\n        messages=[\n            {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n            {\"role\": \"user\", \"content\": \"Tell me a short story about a brave knight.\"}\n        ]\n    )\n    print(\"OpenAI response received.\")\n    print(f\"Story: {response.choices[0].message.content}\")\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    print(\"Ensure OPENAI_API_KEY is set and valid.\")\n\n# You should see OpenTelemetry traces printed to the console.","lang":"python","description":"This quickstart demonstrates how to set up OpenTelemetry with the `opentelemetry-instrumentation-openai` library. It initializes a basic `TracerProvider` with a `ConsoleSpanExporter` to print traces to the console, then instruments the OpenAI client. Any subsequent OpenAI API calls will automatically generate spans."},"warnings":[{"fix":"Verify which package you intend to use. For new projects, the `opentelemetry-instrumentation-openai-v2` package is often recommended as the official OpenTelemetry implementation.","message":"There are two distinct OpenTelemetry OpenAI instrumentation packages: `opentelemetry-instrumentation-openai` (this community package by Traceloop/OpenLLMetry) and `opentelemetry-instrumentation-openai-v2` (the official OpenTelemetry Contrib package). 
Ensure you install and use the intended library for your project, as the two packages have different origins and may differ subtly in implementation or features.","severity":"gotcha","affected_versions":"All versions"},{"fix":"To enable capturing message content as log events or span attributes, set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` before running your application. Be mindful of potential PII in your telemetry data if you enable this.","message":"By default, this instrumentation does not capture the full content of prompts and completions due to privacy concerns. Only metadata such as token counts and model names is recorded.","severity":"gotcha","affected_versions":"All versions"},{"fix":"To opt into the latest experimental GenAI conventions, set the environment variable `OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental`. Review the OpenTelemetry GenAI semantic conventions documentation for specific changes.","message":"The OpenTelemetry GenAI Semantic Conventions are under active development. Recent versions of this library (0.55.0 and above) implement the OpenTelemetry GenAI Semantic Conventions 0.5.0 or later, which may change attribute names or span structures. Installations on older versions (e.g., v1.36.0 or prior) might not emit the latest conventions by default.","severity":"breaking","affected_versions":"0.55.0 and later"},{"fix":"When using pre-forking servers, consider applying programmatic auto-instrumentation within each worker process, or configure the server to use a single worker for telemetry-sensitive processes. 
Alternatively, explore deployment strategies that avoid forking processes after instrumentation setup.","message":"Using this instrumentation with pre-forking servers (e.g., Gunicorn with multiple workers) can lead to issues with metric generation and inconsistent telemetry, because OpenTelemetry SDK background threads and locks do not survive process forking cleanly.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Regularly update the library to the latest version, as performance optimizations are ongoing. Profile your application with and without instrumentation to understand the overhead. If performance remains an issue, consider selective instrumentation or alternative tracing strategies.","message":"Some users have reported significant performance degradation (reduced throughput) when this instrumentation is enabled, particularly under load. While improvements have been made, it is essential to benchmark your application.","severity":"gotcha","affected_versions":"Older versions (pre-0.53.4) were more affected, but performance impact can still occur."}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}