Langfuse

3.14.5 · active · verified Sat Feb 28

Open-source LLM observability and evaluation platform. Python SDK provides tracing via @observe decorator, OpenTelemetry integration, and a low-level client for manual trace/span management. Works with any LLM framework — not tied to LangChain. Self-hostable (Docker/Kubernetes) or cloud (EU/US regions). MAJOR VERSION NOTE: SDK was completely rewritten in v3 (released June 2025). v3 is OpenTelemetry-based with a new singleton client pattern. All v2 import paths, class names, and initialization patterns are broken in v3. pip install langfuse installs v3 as of Feb 2026.
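The `@observe` idea can be illustrated without the SDK. The sketch below is a stdlib-only stand-in showing what a tracing decorator conceptually does (record inputs, output, and latency into a buffer); `TRACE` and `traced` are hypothetical names, not Langfuse APIs.

```python
import functools
import time

TRACE = []  # stand-in for the SDK's internal span buffer

def traced(fn):
    """Conceptual stand-in for @observe: capture inputs, output, latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        TRACE.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.monotonic() - start,
        })
        return result
    return wrapper

@traced
def my_llm_call(prompt: str) -> str:
    # A real implementation would call an LLM here.
    return f"echo: {prompt}"

my_llm_call("Hello!")
```

The real decorator additionally nests spans across decorated calls and exports them asynchronously, which is why an explicit flush matters at process exit.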

Warnings
v3 (June 2025) is a full rewrite: v2 import paths such as `from langfuse.decorators import observe` and per-request client construction no longer work. Traces are buffered and exported in the background; exiting without calling `langfuse.flush()` silently drops pending events.

Install
pip install langfuse  # installs v3 as of Feb 2026

Imports
from langfuse import Langfuse, get_client, observe
from langfuse.langchain import CallbackHandler  # LangChain integration

Quickstart

Langfuse() must be called once at startup to initialize the singleton. get_client() retrieves it anywhere. In v3, the client is NOT created per-request. Always call langfuse.flush() before script exit or in shutdown hooks.

import os
os.environ['LANGFUSE_SECRET_KEY'] = 'sk-lf-...'
os.environ['LANGFUSE_PUBLIC_KEY'] = 'pk-lf-...'
os.environ['LANGFUSE_HOST'] = 'https://cloud.langfuse.com'  # EU region

from langfuse import Langfuse, get_client, observe  # v3: observe moved out of langfuse.decorators

# Initialize singleton once at startup
Langfuse()

# Verify connection
langfuse = get_client()
if langfuse.auth_check():
    print('Connected!')

# @observe traces any function
@observe()
def my_llm_call(prompt: str) -> str:
    import openai
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model='gpt-4o',
        messages=[{'role': 'user', 'content': prompt}]
    )
    return response.choices[0].message.content

result = my_llm_call('Hello!')

# Flush traces before exit in short-lived scripts
langfuse.flush()

# LangChain integration (v3 import path)
from langfuse.langchain import CallbackHandler
handler = CallbackHandler()
# Pass handler to chain: chain.invoke({...}, config={'callbacks': [handler]})
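The singleton-plus-flush lifecycle described above can be sketched with the stdlib. `_Client` below is a stand-in for the real Langfuse class, not its implementation, and `atexit` is one way to guarantee a final flush in short-lived scripts.

```python
import atexit

class _Client:
    """Stand-in for Langfuse: one process-wide instance, buffered events."""
    _instance = None

    def __new__(cls):
        # First call creates the instance; later calls return the same one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.buffer = []
        return cls._instance

    def flush(self):
        # Drain the buffer, simulating export of pending events.
        sent, self.buffer = list(self.buffer), []
        return sent

def get_client() -> _Client:
    # Retrieves the singleton from anywhere, mirroring get_client().
    return _Client()

client = _Client()             # initialize once at startup
atexit.register(client.flush)  # guarantee a flush at interpreter exit
get_client().buffer.append("span-1")
flushed = client.flush()       # explicit flush before exit also works
```

In a long-lived server, the equivalent of `atexit.register` is a shutdown hook in the web framework; the point is that flushing happens exactly once, on the single shared client.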
