Traceloop SDK (OpenLLMetry)

0.52.5 · active · verified Sat Feb 28

Python SDK for OpenLLMetry — an open-source, OpenTelemetry-based observability library for LLM applications built and maintained by Traceloop. Provides auto-instrumentation for 20+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama) and frameworks (LangChain, LlamaIndex, CrewAI), plus manual @workflow/@task/@agent decorators. KEY RELATIONSHIP: traceloop-sdk is the convenience wrapper around OpenLLMetry instrumentation packages. 'openllmetry' is NOT a separate installable package — it's the umbrella project name for the GitHub monorepo. The installable SDK is 'traceloop-sdk'. By default, traces are exported to Traceloop cloud, but the SDK is fully vendor-agnostic via TRACELOOP_BASE_URL — it can export to any OTLP backend (Datadog, Grafana, Jaeger, Honeycomb, etc.) without using Traceloop's platform at all.
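The vendor-agnostic export described above is driven entirely by environment variables; a minimal sketch, assuming a self-hosted collector on the conventional OTLP/HTTP port 4318 and a hypothetical entry point `app.py` (values are illustrative):

```shell
# Redirect the SDK's trace export away from Traceloop cloud
# to any OTLP/HTTP backend (Jaeger, Grafana, Honeycomb, ...).
export TRACELOOP_BASE_URL="http://localhost:4318"
# Optional auth header for hosted backends (token is a placeholder):
export TRACELOOP_HEADERS="Authorization=Bearer <token>"
python app.py
```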

Warnings

Traceloop.init() must run before any provider library (openai, langchain, etc.) is imported; modules imported earlier are not instrumented.

Install

pip install traceloop-sdk

Imports

from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task, agent

Quickstart

Import order is load-bearing. Traceloop.init() must be called before importing provider libraries — the SDK uses OTel monkey-patching which only works if the provider hasn't been imported yet. disable_batch=True is strongly recommended in development.

import os

# CRITICAL: Import and init Traceloop BEFORE importing openai, langchain, etc.
# The SDK patches provider modules at init time — late import means no instrumentation.
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task

# Option A: Export to Traceloop cloud
Traceloop.init(
    app_name='my-llm-app',
    api_key=os.environ['TRACELOOP_API_KEY'],
    disable_batch=True  # recommended in dev: spans are sent immediately instead of batched
)

# Option B: Export to any OTLP backend (no Traceloop account needed)
# Set via env vars before init:
# TRACELOOP_BASE_URL=http://localhost:4318  (Jaeger, Grafana Agent, etc.)
# TRACELOOP_HEADERS='Authorization=Bearer <token>'
Traceloop.init(app_name='my-llm-app')  # reads env vars automatically

# Provider imports AFTER init
import openai
client = openai.OpenAI()

# Manual tracing with decorators.
# @task marks a sub-step; its span nests under the enclosing @workflow span.
@task(name='format_prompt')
def format_prompt(question: str) -> str:
    return f'Answer concisely: {question}'

@workflow(name='answer_question')
def answer_question(question: str) -> str:
    response = client.chat.completions.create(
        model='gpt-4o',
        messages=[{'role': 'user', 'content': format_prompt(question)}]
    )
    return response.choices[0].message.content

result = answer_question('What is OpenTelemetry?')
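Conceptually, each decorator wraps its function call in a span and parents it under the currently active span, which is how a @task invoked inside a @workflow shows up as a nested child in the trace. A stdlib-only sketch of that mechanism (an illustration of the idea, not the SDK's actual implementation):

```python
import contextvars
import functools

# The "active span" travels with the call context, like OTel's context propagation.
_current_span = contextvars.ContextVar('current_span', default=None)
finished_spans = []  # collected on exit, for inspection

def _traced(kind, name=None):
    def decorator(fn):
        span_name = name or fn.__name__

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            parent = _current_span.get()
            span = {'name': span_name, 'kind': kind,
                    'parent': parent['name'] if parent else None}
            token = _current_span.set(span)  # becomes the parent of nested calls
            try:
                return fn(*args, **kwargs)
            finally:
                _current_span.reset(token)
                finished_spans.append(span)
        return wrapper
    return decorator

def workflow(name=None):
    return _traced('workflow', name)

def task(name=None):
    return _traced('task', name)

@task(name='format_prompt')
def format_prompt(question):
    return f'Answer concisely: {question}'

@workflow(name='answer_question')
def answer_question(question):
    return format_prompt(question)  # task span nests under the workflow span

answer_question('What is OpenTelemetry?')
```

Running this leaves the task span recorded with the workflow as its parent, mirroring the parent/child nesting the real SDK emits over OTel.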
