Langfuse
Open-source LLM observability and evaluation platform. Python SDK provides tracing via @observe decorator, OpenTelemetry integration, and a low-level client for manual trace/span management. Works with any LLM framework — not tied to LangChain. Self-hostable (Docker/Kubernetes) or cloud (EU/US regions). MAJOR VERSION NOTE: SDK was completely rewritten in v3 (released June 2025). v3 is OpenTelemetry-based with a new singleton client pattern. All v2 import paths, class names, and initialization patterns are broken in v3. pip install langfuse installs v3 as of Feb 2026.
Warnings
- breaking SDK v3 (June 2025) is a complete rewrite. v2 import paths, class names, and initialization patterns all changed. The most critical breaks: (1) Langfuse() is now a singleton initializer, not a per-request client. (2) from langfuse.callback import CallbackHandler → from langfuse.langchain import CallbackHandler. (3) @observe moved from langfuse.decorators to the top-level package: from langfuse import observe. pip install langfuse now installs v3.
- breaking Self-hosted Langfuse: Python SDK v3 requires Langfuse platform version ≥ 3.125.0. Running SDK v3 against a self-hosted platform older than 3.125.0 causes silent failures or API errors.
- breaking LANGFUSE_BASE_URL has no universal default. EU cloud uses https://cloud.langfuse.com; US cloud uses https://us.cloud.langfuse.com. Not setting this env var causes all API calls to fail — often with a connection timeout rather than a clear auth error.
- gotcha Traces are sent asynchronously. In short-lived scripts (CLI tools, batch jobs, test suites), the process exits before traces flush, resulting in missing data in the UI with no error.
- gotcha There is a separate stub package 'langfuse-sdk' on PyPI (version 1.0.0) that is NOT the current SDK — it's an old redirect stub that predates the rename. pip install langfuse-sdk installs an abandoned package.
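Given the hard v2/v3 split above, a quick import probe can tell which SDK generation is installed before any traced code runs. This is a diagnostic sketch, not part of the Langfuse API; it uses the v3-only langfuse.langchain module path (named in the warnings above) as the discriminator.

```python
def detect_langfuse_major():
    """Return 3, 2, or None depending on which langfuse SDK generation is importable."""
    try:
        import langfuse  # noqa: F401
    except ImportError:
        return None  # SDK not installed at all
    try:
        # langfuse.langchain exists only in v3 (it was langfuse.callback in v2)
        from langfuse.langchain import CallbackHandler  # noqa: F401
        return 3
    except ImportError:
        return 2
```

Useful as a startup assertion in codebases mid-migration, where a stale `langfuse<3` pin would otherwise surface as confusing ImportErrors deep in request handling.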
Install
- v3 (current)
pip install langfuse
- pin v2 (legacy code)
pip install 'langfuse<3'
Imports
- Langfuse / get_client (v3)
from langfuse import Langfuse, get_client
- @observe decorator (v3 — moved out of langfuse.decorators)
from langfuse import observe
- CallbackHandler for LangChain (v3)
from langfuse.langchain import CallbackHandler
Quickstart
import os
os.environ['LANGFUSE_SECRET_KEY'] = 'sk-lf-...'
os.environ['LANGFUSE_PUBLIC_KEY'] = 'pk-lf-...'
os.environ['LANGFUSE_BASE_URL'] = 'https://cloud.langfuse.com' # EU (US: https://us.cloud.langfuse.com)
from langfuse import Langfuse, get_client, observe
# Initialize singleton once at startup
Langfuse()
# Verify connection
langfuse = get_client()
if langfuse.auth_check():
    print('Connected!')
# @observe traces any function
@observe()
def my_llm_call(prompt: str) -> str:
    import openai
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model='gpt-4o',
        messages=[{'role': 'user', 'content': prompt}]
    )
    return response.choices[0].message.content
result = my_llm_call('Hello!')
# Flush traces before exit in short-lived scripts
langfuse.flush()
# LangChain integration (v3 import path)
from langfuse.langchain import CallbackHandler
handler = CallbackHandler()
# Pass handler to chain: chain.invoke({...}, config={'callbacks': [handler]})
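Putting the handler and the chain call together, a minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set; the model name is illustrative. Imports are deferred into the function so it only requires langfuse/langchain when actually called.

```python
def run_traced_chain(prompt: str):
    """Invoke a LangChain model with Langfuse tracing attached (sketch)."""
    from langfuse.langchain import CallbackHandler  # v3 import path
    from langchain_openai import ChatOpenAI  # assumes langchain-openai installed

    handler = CallbackHandler()
    llm = ChatOpenAI(model='gpt-4o')  # illustrative model name
    # Attach the handler per invocation; this call and its LLM spans are traced
    return llm.invoke(prompt, config={'callbacks': [handler]})
```

As with the Quickstart, call langfuse.flush() before exit in short-lived scripts so the spans emitted by the handler actually reach the backend.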