OpenInference LangChain Instrumentation
The `openinference-instrumentation-langchain` library provides automatic instrumentation for LangChain applications, enabling detailed observability for AI workflows. It implements the OpenInference semantic conventions on top of OpenTelemetry, standardizing traces for LLM calls, agent reasoning, tool invocations, and retrieval operations. Traces can therefore be exported to any OpenTelemetry-compatible backend, such as Arize Phoenix, to visualize and analyze your AI application's behavior. The library is actively maintained; the current version is 0.1.62.
Warnings
- Gotcha: Direct `model.invoke()` calls in LangChain may result in unstructured trace output in some UI backends. While traces are captured, the detailed message-by-message history might not be rendered in the expected structured format, appearing instead as raw JSON.
- Gotcha: When using `openinference-instrumentation-langchain` with `langgraph_swarm`, an `AssertionError` can occur because the tracer expects message IDs to be lists but receives `None`. This can disrupt logging, though trace data might still be sent.
- Gotcha: Complex asynchronous flows in LangChain applications may prevent OpenTelemetry's context from propagating automatically across async boundaries, leading to fragmented or orphaned spans. This is a common challenge with OpenTelemetry in highly concurrent Python applications.
Install
- pip install openinference-instrumentation-langchain langchain langchain-openai opentelemetry-sdk opentelemetry-exporter-otlp arize-phoenix
- pip install openinference-instrumentation-langchain langchain-classic langchain-openai opentelemetry-sdk opentelemetry-exporter-otlp arize-phoenix
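After installing, you can point the OTLP exporter at your collector via an environment variable instead of hard-coding the endpoint; the URL below is the Phoenix default used in the quickstart:

```shell
# Point the OTLP exporter at a local Phoenix instance (default port 6006).
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:6006/v1/traces"
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```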
Imports
- LangChainInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.langchain import LangChainInstrumentor
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
# Set up OpenTelemetry
resource = Resource.create({"service.name": "my-langchain-app"})
tracer_provider = TracerProvider(resource=resource)
span_exporter = OTLPSpanExporter(endpoint=os.environ.get('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://localhost:6006/v1/traces'))
tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter))
trace.set_tracer_provider(tracer_provider)
# Instrument LangChain
LangChainInstrumentor().instrument()
# Ensure OpenAI API key is set for the example
os.environ.setdefault("OPENAI_API_KEY", "sk-YOUR_OPENAI_KEY_HERE")  # Replace the placeholder, or set the env var beforehand
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b
@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b
llm = ChatOpenAI(temperature=0, model="gpt-4o-mini")
tools = [multiply, add]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
if __name__ == "__main__":
    print("Running agent...")
    response = agent_executor.invoke({"input": "What is 123 multiplied by 456?"})
    print(f"Agent Response: {response['output']}")
    print("Traces should be visible in your configured OpenTelemetry collector (e.g., Phoenix at http://localhost:6006).")