OpenTelemetry Langchain Instrumentation
This library provides OpenTelemetry instrumentation for applications built with LangChain, enabling tracing of LLM calls, chains, and related components. It is part of the OpenLLMetry project, which extends OpenTelemetry for LLM observability. Releases are frequent, often driven by changes to OpenTelemetry's Generative AI semantic conventions.
Warnings
- breaking Frequent updates to OpenTelemetry Generative AI semantic conventions can introduce breaking changes in span attribute names and trace structures. Recent versions (e.g., v0.53.0, v0.55.0 onwards) specifically align with new semantic conventions.
- gotcha By default, this instrumentation records sensitive data such as prompts, completions, and embeddings in span attributes. This aids debugging but may expose private user data; set the `TRACELOOP_TRACE_CONTENT` environment variable to `false` to opt out.
- gotcha The `LangchainInstrumentor().instrument()` call only enables the instrumentation. For traces to be collected and exported, a complete OpenTelemetry SDK setup (TracerProvider, SpanProcessor, SpanExporter) is required.
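The content-capture opt-out mentioned above is a single environment variable, read by the instrumentation when spans are emitted. A minimal sketch (the variable name comes from the quickstart below):

```python
import os

# Opt out of recording prompts, completions, and embeddings in span
# attributes. Set this before any instrumented chains run.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
```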
Install
pip install opentelemetry-instrumentation-langchain
Imports
- LangchainInstrumentor
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# 1. Configure OpenTelemetry SDK (essential for any OTel instrumentation)
# Set up a basic console exporter for demonstration
resource = Resource.create({"service.name": "my-langchain-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)
# 2. Instrument LangChain
LangchainInstrumentor().instrument()
# Optional: Disable sensitive content logging if needed (e.g., for production)
# os.environ['TRACELOOP_TRACE_CONTENT'] = 'false'
# 3. Use LangChain as usual
# Requires a real OpenAI API key; fail fast if it is missing rather than
# injecting a placeholder that would fail with an authentication error later.
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")
llm = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("user", "Tell me a short story about {topic}."),
])
output_parser = StrOutputParser()
chain = prompt | llm | output_parser
if __name__ == "__main__":
    print("Running LangChain application...")
    response = chain.invoke({"topic": "a cat detective"})
    print("\nLangChain Response:")
    print(response)
    print("\nCheck console output for OpenTelemetry traces.")