OpenInference OpenAI Instrumentation

0.1.44 · active · verified Sat Apr 11

OpenInference OpenAI Instrumentation is a Python auto-instrumentation library designed for OpenAI's Python SDK. It automatically generates OpenTelemetry-compatible traces from OpenAI API calls, enabling developers to send these traces to an OpenTelemetry collector, such as Arize Phoenix, for observability and analysis. The library is actively maintained with frequent updates across the OpenInference ecosystem.

Install
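A typical install, assuming pip and the package names used in the quickstart below (the OTLP HTTP exporter and, optionally, Arize Phoenix as the collector):

```shell
# Instrumentation library plus the OpenAI SDK it wraps
pip install openinference-instrumentation-openai openai

# OpenTelemetry SDK and the OTLP/HTTP span exporter used in the quickstart
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

# Optional: Arize Phoenix to receive and visualize the traces
pip install arize-phoenix
```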

Quickstart

This quickstart instruments OpenAI API calls with `openinference-instrumentation-openai` and sends the resulting traces to an OpenTelemetry collector. It sets up a `TracerProvider` that exports traces over OTLP/HTTP, then instruments the OpenAI client. Make sure a collector such as Arize Phoenix (started with `python -m phoenix.server.main serve`) is running to receive the traces, and that your `OPENAI_API_KEY` environment variable is set.

import os
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Fail fast if no OpenAI API key is configured (never hard-code a real key)
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running.")

# Configure OpenTelemetry Tracer Provider to send traces to a collector (e.g., Phoenix)
endpoint = "http://127.0.0.1:6006/v1/traces" # Default Phoenix endpoint
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
# Optionally, also print spans to the console for debugging
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Instrument the OpenAI SDK
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

if __name__ == "__main__":
    client = openai.OpenAI()
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Write a haiku about observability."}],
            max_tokens=20,
            stream=False # Set to True and add stream_options={'include_usage': True} for streaming with token counts
        )
        print("OpenAI API call successful.")
        print(f"Response: {response.choices[0].message.content}")
    except openai.AuthenticationError:
        print("Error: OpenAI API key is missing or invalid. Please set OPENAI_API_KEY.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
