Traceloop SDK (OpenLLMetry)
Python SDK for OpenLLMetry, an open-source, OpenTelemetry-based observability library for LLM applications built and maintained by Traceloop. Provides auto-instrumentation for 20+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama) and frameworks (LangChain, LlamaIndex, CrewAI), plus manual @workflow/@task/@agent decorators.
KEY RELATIONSHIP: traceloop-sdk is the convenience wrapper around the OpenLLMetry instrumentation packages. 'openllmetry' is NOT a separate installable package; it is the umbrella project name for the GitHub monorepo. The installable SDK is 'traceloop-sdk'.
By default, traces are exported to Traceloop cloud, but the SDK is fully vendor-agnostic via TRACELOOP_BASE_URL: it can export to any OTLP backend (Datadog, Grafana, Jaeger, Honeycomb, etc.) without using Traceloop's platform at all.
Install: pip install traceloop-sdk

Common errors
error ModuleNotFoundError: No module named 'traceloop' → install the SDK with pip install traceloop-sdk; the import path is traceloop.sdk, and there is no PyPI package named 'traceloop'.
error ConnectionRefusedError: [Errno 111] Connection refused → make sure an OTLP collector is actually listening at TRACELOOP_BASE_URL, or verify your TRACELOOP_BASE_URL and TRACELOOP_API_KEY configuration.
error Failed to export spans: HTTP status code: 401 → set the TRACELOOP_API_KEY environment variable with a valid API key, or pass it explicitly to Traceloop.init(api_key='your_api_key').
error traceloop-sdk no traces for openai → make sure Traceloop.init() is called as early as possible in your application's lifecycle, ideally before any LLM client instances are created or used.

Warnings
breaking Traceloop.init() MUST be called before importing provider libraries (openai, anthropic, langchain, etc.). The SDK instruments providers via OpenTelemetry monkey-patching applied at init time. If you import openai before calling Traceloop.init(), the OpenAI client is not patched and zero spans are generated: no error, no warning, just missing traces.
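The failure mode can be illustrated with a library-free toy (an analogy only, not the SDK's actual patching code): wrapping a module attribute at init time cannot retroactively fix references that were bound before the wrap.

```python
import types

# Stand-in for a provider module (e.g. an LLM client library).
provider = types.ModuleType("provider")
provider.complete = lambda: "raw response"

# User code binds the function BEFORE "instrumentation" runs:
early_binding = provider.complete

# Init-time instrumentation: wrap the module attribute (monkey-patching).
_original = provider.complete
provider.complete = lambda: "traced: " + _original()

assert provider.complete() == "traced: raw response"  # patched path
assert early_binding() == "raw response"              # silently untraced
```

The early binding keeps working, which is exactly why the real symptom is missing traces rather than an error.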
breaking @aworkflow decorator is deprecated and will be removed in a future version. @workflow now handles both sync and async functions; use it for both.
gotcha disable_batch defaults to False (batch mode). In production this is correct: batching reduces overhead. In development/local testing, batches are held and flushed on a timer, so traces may not appear in your backend for several seconds after function completion. Worse, the process may exit before the batch flushes, causing traces to vanish entirely.
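The hazard can be sketched with a toy batcher (our own illustration, not the real SDK exporter): spans buffered below the batch threshold are lost if the process exits before anything flushes them.

```python
class ToyBatchExporter:
    """Buffers spans and exports only when the batch fills (or on flush)."""

    def __init__(self, batch_size: int = 5):
        self.batch_size = batch_size
        self.buffer: list[str] = []
        self.exported: list[str] = []

    def on_span_end(self, span: str) -> None:
        self.buffer.append(span)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        self.exported.extend(self.buffer)
        self.buffer.clear()

exporter = ToyBatchExporter()
for i in range(3):                   # fewer spans than batch_size
    exporter.on_span_end(f"span-{i}")

# If the process exited here, these 3 spans would never reach the backend.
assert exporter.exported == []
exporter.flush()                     # what a proper shutdown/force-flush must do
assert exporter.exported == ["span-0", "span-1", "span-2"]
```

disable_batch=True sidesteps this in development by exporting each span as it ends.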
gotcha There is no installable package named 'openllmetry' on PyPI. The project is called OpenLLMetry and the GitHub repo is traceloop/openllmetry, but the Python package is 'traceloop-sdk'. pip install openllmetry will fail or install the wrong package. Individual instrumentors (for specific providers, without the full SDK) are installable as separate packages, e.g. opentelemetry-instrumentation-openai.
gotcha TRACELOOP_HEADERS uses a non-standard key=value format (not HTTP header format): 'Authorization=Bearer token', NOT 'Authorization: Bearer token'. Using colons instead of equals signs causes the header to not be sent, resulting in 401/403 errors at the OTLP endpoint with no clear error message.
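To see why the colon form fails silently, here is a hypothetical parser for the key=value,key=value shape (our own sketch; the SDK's actual parsing may differ): a pair without '=' never becomes a header, and nothing complains.

```python
def parse_headers(raw: str) -> dict[str, str]:
    """Parse 'k1=v1,k2=v2' pairs; malformed pairs are dropped silently."""
    headers = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep and key.strip():
            headers[key.strip()] = value.strip()
    return headers

assert parse_headers("Authorization=Bearer token") == {
    "Authorization": "Bearer token"
}
# HTTP-style colon syntax produces NO headers -> unauthenticated export -> 401/403
assert parse_headers("Authorization: Bearer token") == {}
```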
breaking The Traceloop SDK (or one of its core dependencies) requires the httpx library. ModuleNotFoundError: No module named 'httpx' means httpx was not installed alongside the SDK; reinstalling the SDK, or running pip install httpx, usually resolves it.
breaking ModuleNotFoundError: No module named 'openai' means the openai package is not installed in the environment running the script. The SDK instruments openai when present but does not necessarily install it; add it explicitly with pip install openai.
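Both missing-module failures surface earlier with a small preflight check; missing_modules below is a convenience helper of our own, not part of the SDK.

```python
import importlib.util

def missing_modules(names: list[str]) -> list[str]:
    """Return the subset of top-level module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: check what this app expects before calling Traceloop.init().
missing = missing_modules(["json", "no_such_module_qqq123"])
assert missing == ["no_such_module_qqq123"]
if missing:
    print(f"Install missing packages first: {missing}")
```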
Install compatibility (stale; last tested: 2026-05-12)
Imports
- Traceloop
  wrong:   import openllmetry
  correct: from traceloop.sdk import Traceloop
- decorators
  wrong:   from traceloop.sdk import workflow
  correct: from traceloop.sdk.decorators import workflow, task, agent, tool
Quickstart (stale; last tested: 2026-05-12)
import os
# CRITICAL: Import and init Traceloop BEFORE importing openai, langchain, etc.
# The SDK patches provider modules at init time — late import means no instrumentation.
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task
# Option A: Export to Traceloop cloud
Traceloop.init(
    app_name='my-llm-app',
    api_key=os.environ['TRACELOOP_API_KEY'],
    disable_batch=True,  # dev only: export each span immediately instead of batching
)
# Option B: Export to any OTLP backend (no Traceloop account needed)
# Set via env vars before init:
# TRACELOOP_BASE_URL=http://localhost:4318 (Jaeger, Grafana Agent, etc.)
# TRACELOOP_HEADERS='Authorization=Bearer <token>'
Traceloop.init(app_name='my-llm-app') # reads env vars automatically
# Provider imports AFTER init
import openai
client = openai.OpenAI()
# Manual tracing with decorators
# @task spans nest as children of the enclosing @workflow span
@task(name='format_prompt')
def format_prompt(question: str) -> str:
    return f'Answer concisely: {question}'

@workflow(name='answer_question')
def answer_question(question: str) -> str:
    response = client.chat.completions.create(
        model='gpt-4o',
        messages=[{'role': 'user', 'content': format_prompt(question)}]
    )
    return response.choices[0].message.content

result = answer_question('What is OpenTelemetry?')
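For Option B, the environment setup can be collected into a shell fragment (endpoint and token are placeholders; substitute your collector's actual OTLP/HTTP address):

```shell
# Point the SDK at a local OTLP/HTTP collector instead of Traceloop cloud.
export TRACELOOP_BASE_URL="http://localhost:4318"
# key=value format, NOT 'Header: value' HTTP syntax.
export TRACELOOP_HEADERS="Authorization=Bearer <token>"
python app.py
```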