OpenInference Instrumentation

0.1.46 · active · verified Thu Apr 09

OpenInference Instrumentation provides Python utilities for collecting traces from AI/ML applications, extending OpenTelemetry with detailed observability for LLMs and related frameworks. It works with any OpenTelemetry-compatible backend, such as Arize Phoenix or Langfuse. The current version is 0.1.46, and the project maintains an active release cadence with frequent updates across its framework-specific sub-packages.

Install
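
Assuming the standard PyPI package names (openinference-instrumentation-openai for the OpenAI instrumentor, plus the OpenTelemetry SDK, the OTLP exporter, and the openai client used in the quickstart below), installation might look like:

```shell
# Install the OpenAI instrumentor along with the OpenTelemetry SDK,
# the OTLP exporter, and the OpenAI client library
pip install openinference-instrumentation-openai \
    opentelemetry-sdk opentelemetry-exporter-otlp openai
```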

Quickstart

This quickstart demonstrates how to set up OpenInference instrumentation for OpenAI, configure an OpenTelemetry tracer, and use OpenInference context managers for adding session, user, and custom metadata. The traces are exported to an OTLP collector (e.g., Arize Phoenix).

import os
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation import using_session, using_user, using_metadata
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# 1. Configure OpenTelemetry Tracer Provider
# Traces will be sent to an OTLP collector, e.g., Arize Phoenix (default at http://127.0.0.1:6006/v1/traces)
# Ensure your collector is running before executing this code.
endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://127.0.0.1:6006/v1/traces")
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# 2. Instrument your application with OpenAIInstrumentor
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# 3. Set OpenAI API Key (replace with your actual key or environment variable)
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "sk-YOUR_OPENAI_API_KEY")

# 4. Use OpenInference context managers and make an LLM call
client = openai.OpenAI()

with using_session("user_session_abc"), \
     using_user("test_user_123"), \
     using_metadata({"deployment_env": "staging"}):

    print("Making OpenAI chat completion call...")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ]
    )
    print(f"Response: {response.choices[0].message.content}")

print("Traces should now be visible in your OpenTelemetry collector.")
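
The using_* helpers above are nestable context managers that attach attributes to the ambient execution context, so the instrumentor can stamp session, user, and metadata attributes onto every span created inside the with block. A minimal stdlib-only sketch of that mechanism (hypothetical names, not the library's actual implementation):

```python
# Illustrative sketch only: approximating how context managers like
# using_session/using_metadata can attach attributes to an ambient
# context via contextvars. Names here are hypothetical.
import contextvars
from contextlib import contextmanager

_context_attributes: contextvars.ContextVar = contextvars.ContextVar(
    "context_attributes", default={}
)

@contextmanager
def using_attrs(**attrs):
    """Merge attrs into the ambient attribute set for the with-block."""
    merged = {**_context_attributes.get(), **attrs}
    token = _context_attributes.set(merged)
    try:
        yield
    finally:
        # Restore the previous attribute set on exit
        _context_attributes.reset(token)

def current_attrs():
    """Snapshot of the attributes visible at this point in the context."""
    return dict(_context_attributes.get())

with using_attrs(session_id="user_session_abc"):
    with using_attrs(user_id="test_user_123"):
        attrs = current_attrs()
# attrs == {"session_id": "user_session_abc", "user_id": "test_user_123"}
```

Because each context manager restores the previous attribute set on exit, nested blocks compose cleanly and attributes never leak past the with statement that set them.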
