OpenInference LangChain Instrumentation

0.1.62 · active · verified Sat Apr 11

The `openinference-instrumentation-langchain` library provides automatic instrumentation for LangChain applications, enabling detailed observability for AI workflows. It implements the OpenInference semantic conventions on top of OpenTelemetry, standardizing trace data for LLM calls, agent reasoning, tool invocations, and retrieval operations. Because the output is plain OpenTelemetry, traces can be sent to any OTLP-compatible backend, such as Arize Phoenix, for visualization and analysis.
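Conceptually, each span emitted by the instrumentation carries OpenInference attributes alongside the standard OpenTelemetry fields. The dictionary below is an illustrative sketch of what an instrumented LLM span's attributes look like; the keys follow the published OpenInference semantic conventions, while the values are made up for illustration:

```python
# Representative attributes on an instrumented LLM span (values illustrative).
llm_span_attributes = {
    "openinference.span.kind": "LLM",   # also CHAIN, TOOL, RETRIEVER, AGENT, ...
    "llm.model_name": "gpt-4o-mini",
    "llm.token_count.prompt": 42,
    "llm.token_count.completion": 7,
}

# Backends like Phoenix group and render spans by these attributes,
# e.g. filtering to LLM spans or summing token counts per trace.
llm_spans = [llm_span_attributes]
total_prompt_tokens = sum(s["llm.token_count.prompt"] for s in llm_spans)
```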

Install
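The package is installed from PyPI; the distribution name matches the import namespace used in the quickstart below. The extra packages on the second line are what the quickstart imports (package names assumed from their standard PyPI distributions):

```shell
pip install openinference-instrumentation-langchain
# The quickstart additionally uses LangChain, the OpenAI integration,
# and the OpenTelemetry SDK with its OTLP exporter:
pip install langchain langchain-openai opentelemetry-sdk opentelemetry-exporter-otlp
```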

Imports

The only library-specific import is the instrumentor class; everything else in the quickstart is standard OpenTelemetry SDK and LangChain code:

from openinference.instrumentation.langchain import LangChainInstrumentor

Quickstart

This quickstart instruments a simple LangChain agent with `openinference-instrumentation-langchain`. It configures an OpenTelemetry `TracerProvider` that exports traces over OTLP to a local collector (such as Arize Phoenix, which listens for OTLP/HTTP at `http://localhost:6006/v1/traces`). The `LangChainInstrumentor().instrument()` call then enables automatic tracing of LangChain operations. An `OPENAI_API_KEY` environment variable is required to run the example.

import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter  # HTTP exporter: the /v1/traces path below is an OTLP/HTTP endpoint, not gRPC
from openinference.instrumentation.langchain import LangChainInstrumentor

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# Set up OpenTelemetry
resource = Resource.create({"service.name": "my-langchain-app"})
tracer_provider = TracerProvider(resource=resource)
span_exporter = OTLPSpanExporter(endpoint=os.environ.get('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://localhost:6006/v1/traces'))
tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter))  # exports synchronously; prefer BatchSpanProcessor in production
trace.set_tracer_provider(tracer_provider)

# Instrument LangChain
LangChainInstrumentor().instrument()

# The example requires an OpenAI API key; fail fast if it is missing
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this example.")

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

llm = ChatOpenAI(temperature=0, model="gpt-4o-mini")
tools = [multiply, add]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

if __name__ == "__main__":
    print("Running agent...")
    response = agent_executor.invoke({"input": "What is 123 multiplied by 456?"})
    print(f"Agent Response: {response['output']}")
    print("Traces should be visible in your configured OpenTelemetry collector (e.g., Phoenix at http://localhost:6006).")
