OpenTelemetry Langchain Instrumentation

0.58.0 · active · verified Thu Apr 09

This library provides OpenTelemetry instrumentation for applications built with LangChain, enabling comprehensive tracing of LLM interactions and associated components. It is part of the OpenLLMetry project, which extends OpenTelemetry for enhanced LLM observability. The project maintains a rapid release cadence, with frequent updates often driven by advancements in OpenTelemetry's Generative AI semantic conventions.

Warnings

Install

pip install opentelemetry-instrumentation-langchain

Imports

from opentelemetry.instrumentation.langchain import LangchainInstrumentor

Quickstart

This quickstart demonstrates how to instrument a basic LangChain application with `opentelemetry-instrumentation-langchain`. It first performs the OpenTelemetry SDK setup, using a `ConsoleSpanExporter` to print traces to the console, then calls `LangchainInstrumentor().instrument()`. Finally, a simple LangChain Expression Language (LCEL) chain is invoked to generate a story; its execution is traced automatically.

import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Configure OpenTelemetry SDK (essential for any OTel instrumentation)
# Set up a basic console exporter for demonstration
resource = Resource.create({"service.name": "my-langchain-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)

# 2. Instrument Langchain
LangchainInstrumentor().instrument()

# Optional: Disable sensitive content logging if needed (e.g., for production)
# os.environ['TRACELOOP_TRACE_CONTENT'] = 'false'

# 3. Use Langchain as usual
# ChatOpenAI reads OPENAI_API_KEY from the environment; fail fast if it is missing
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running.")

llm = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("user", "Tell me a short story about {topic}.")
])
output_parser = StrOutputParser()

chain = prompt | llm | output_parser

if __name__ == "__main__":
    print("Running LangChain application...")
    response = chain.invoke({"topic": "a cat detective"})
    print("\nLangChain Response:")
    print(response)
    print("\nCheck console output for OpenTelemetry traces.")
