OpenLLMetry (Standalone Instrumentors)
OpenLLMetry is a project name, not a single installable PyPI package. It refers to two related but distinct things that must not be confused:
- traceloop-sdk — the high-level Traceloop SDK (documented separately) that wraps OpenLLMetry auto-instrumentation behind Traceloop.init().
- Standalone opentelemetry-instrumentation-* packages — individual OTel instrumentors for specific LLM providers/frameworks, published from the traceloop/openllmetry GitHub monorepo. These can be used WITHOUT traceloop-sdk in any existing OpenTelemetry setup.
ECOSYSTEM CONFUSION: a second, competing instrumentation ecosystem, OpenInference (from Arize-ai/openinference), publishes openinference-instrumentation-* packages. Both ecosystems instrument the same providers (OpenAI, LangChain, etc.), but they use different span attribute schemas and route to different preferred backends. They are NOT interchangeable.
Warnings
- breaking There is no 'openllmetry' package on PyPI. pip install openllmetry fails. The correct packages are individual opentelemetry-instrumentation-* packages (for standalone use) or traceloop-sdk (for managed use). This is the most common mistake from reading OpenLLMetry documentation without checking package names.
- breaking Two competing ecosystems publish instrumentors for the same providers under similar-looking package names: opentelemetry-instrumentation-openai (OpenLLMetry/Traceloop, imports from opentelemetry.instrumentation.*) vs openinference-instrumentation-openai (OpenInference/Arize, imports from openinference.instrumentation.*). Mixing instrumentors from both ecosystems in the same app generates spans with conflicting attribute schemas. Most backends are only tuned for one schema.
- breaking Instrumentors must be called BEFORE the target library is imported or used. Calling OpenAIInstrumentor().instrument() after openai has already been imported and used will not retroactively patch existing client instances. New instances created after .instrument() may work, but this is unreliable.
- gotcha TRACELOOP_TRACE_CONTENT=false must be set BEFORE .instrument() is called. Setting it after instrumentation has been applied has no effect — the content capture is baked in at patch time.
- gotcha Datadog's LLM Observability supports OpenLLMetry (opentelemetry-instrumentation-*) starting at version 0.47+. It explicitly does NOT support OpenInference (openinference-instrumentation-*). Other backends have different compatibility: Phoenix natively supports OpenInference but can accept OpenLLMetry spans via a span processor bridge.
Install
- pip install opentelemetry-instrumentation-openai
- pip install opentelemetry-instrumentation-anthropic
- pip install opentelemetry-instrumentation-langchain
- pip install opentelemetry-instrumentation-llamaindex
- pip install opentelemetry-instrumentation-chromadb opentelemetry-instrumentation-pinecone opentelemetry-instrumentation-weaviate
Imports
- OpenAI instrumentor (OpenLLMetry)
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
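The other packages from the Install list follow the same import pattern. The class names below match the per-package convention used in the traceloop/openllmetry monorepo, but verify them against the versions you installed; the try/except keeps the snippet runnable when only some packages are present:

```python
# Assumed class names following OpenLLMetry's per-package convention —
# confirm against your installed versions.
try:
    from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
    from opentelemetry.instrumentation.langchain import LangChainInstrumentor
    from opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor
except ImportError:
    pass  # only the instrumentors actually installed will import
```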
Quickstart
# Standalone usage — no traceloop-sdk required
# Requires only: opentelemetry-sdk + opentelemetry-exporter-otlp + individual instrumentors
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# 1. Set up TracerProvider and exporter (any OTLP backend)
tracer_provider = TracerProvider()
exporter = OTLPSpanExporter(
endpoint='http://localhost:4318/v1/traces', # Jaeger, Grafana, Phoenix, Datadog, etc.
)
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)
# 2. Instrument providers — must call before importing/using the library
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.instrumentation.langchain import LangChainInstrumentor
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
# 3. Use your libraries normally — spans are captured automatically
import openai
client = openai.OpenAI(api_key=os.environ['OPENAI_API_KEY'])
response = client.chat.completions.create(
model='gpt-4o',
messages=[{'role': 'user', 'content': 'Hello!'}]
)
# To suppress prompt/completion content from traces (PII/privacy), set
# TRACELOOP_TRACE_CONTENT before step 2 above — it is read at patch time,
# so setting it after .instrument() has been called has no effect:
#   os.environ['TRACELOOP_TRACE_CONTENT'] = 'false'
# Using with traceloop-sdk (alternative — SDK manages the provider):
# from traceloop.sdk import Traceloop
# Traceloop.init(app_name='my-app') # handles all of the above automatically
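The two ordering constraints from the Warnings section (content flag before instrumentation, instrumentation before the target libraries are used) can be enforced in one setup helper. A sketch; the helper name is our own and the instrumentor calls are passed in as callables so it stays library-agnostic:

```python
import os
from typing import Callable, Iterable

def setup_llm_tracing(instrumentors: Iterable[Callable[[], None]],
                      capture_content: bool = True) -> None:
    """Apply OpenLLMetry instrumentors in the required order.

    1. TRACELOOP_TRACE_CONTENT is set first: it is read at patch time,
       so flipping it after .instrument() has no effect.
    2. Each instrumentor then runs, before the target libraries are used.
    """
    os.environ["TRACELOOP_TRACE_CONTENT"] = "true" if capture_content else "false"
    for instrument in instrumentors:
        instrument()
```

With the Quickstart objects in scope this would be called as setup_llm_tracing([lambda: OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)], capture_content=False), before any import of openai at module level that creates clients.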