{"id":3722,"library":"openinference-instrumentation-langchain","title":"OpenInference LangChain Instrumentation","description":"The `openinference-instrumentation-langchain` library provides automatic instrumentation for LangChain applications, enabling detailed observability for AI workflows. It implements OpenInference semantic conventions on top of OpenTelemetry, standardizing traces for LLM calls, agent reasoning, tool invocations, and retrieval operations. This allows seamless integration with any OpenTelemetry-compatible backend, such as Arize Phoenix, to visualize and analyze your AI application's performance. The library is actively maintained; the current version is 0.1.62.","status":"active","version":"0.1.62","language":"en","source_language":"en","source_url":"https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-langchain","tags":["observability","opentelemetry","langchain","llm","instrumentation","ai","tracing"],"install":[{"cmd":"pip install openinference-instrumentation-langchain langchain langchain-openai opentelemetry-sdk opentelemetry-exporter-otlp arize-phoenix","lang":"bash","label":"For LangChain 1.x with OpenAI and local Phoenix"},{"cmd":"pip install openinference-instrumentation-langchain langchain-classic langchain-openai opentelemetry-sdk opentelemetry-exporter-otlp arize-phoenix","lang":"bash","label":"For LangChain Classic (0.x) with OpenAI and local Phoenix"}],"dependencies":[{"reason":"Requires Python versions >=3.10, <3.15.","package":"python","optional":false},{"reason":"Core dependency for instrumenting LangChain 1.x applications.","package":"langchain","optional":false},{"reason":"Core dependency for instrumenting legacy LangChain 0.x applications.","package":"langchain-classic","optional":true},{"reason":"Required for OpenTelemetry tracing functionality.","package":"opentelemetry-sdk","optional":false},{"reason":"Required for exporting 
OpenTelemetry traces.","package":"opentelemetry-exporter-otlp","optional":false},{"reason":"Recommended for local visualization and analysis of traces.","package":"arize-phoenix","optional":true}],"imports":[{"note":"The primary class to enable LangChain auto-instrumentation.","symbol":"LangChainInstrumentor","correct":"from openinference.instrumentation.langchain import LangChainInstrumentor"}],"quickstart":{"code":"import os\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\n# Use the HTTP exporter, which matches the /v1/traces endpoint below\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom openinference.instrumentation.langchain import LangChainInstrumentor\n\nfrom langchain.agents import AgentExecutor, create_tool_calling_agent\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.tools import tool\nfrom langchain_openai import ChatOpenAI\n\n# Set up OpenTelemetry\nresource = Resource.create({\"service.name\": \"my-langchain-app\"})\ntracer_provider = TracerProvider(resource=resource)\nspan_exporter = OTLPSpanExporter(endpoint=os.environ.get(\"OTEL_EXPORTER_OTLP_ENDPOINT\", \"http://localhost:6006/v1/traces\"))\ntracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter))\ntrace.set_tracer_provider(tracer_provider)\n\n# Instrument LangChain\nLangChainInstrumentor().instrument()\n\n# Ensure an OpenAI API key is set for the example\nos.environ.setdefault(\"OPENAI_API_KEY\", \"sk-YOUR_OPENAI_KEY_HERE\")  # Replace with your actual key or set the env var\n\n@tool\ndef multiply(a: int, b: int) -> int:\n    \"\"\"Multiply two numbers together.\"\"\"\n    return a * b\n\n@tool\ndef add(a: int, b: int) -> int:\n    \"\"\"Add two numbers together.\"\"\"\n    return a + b\n\nllm = ChatOpenAI(temperature=0, model=\"gpt-4o-mini\")\ntools = [multiply, add]\nprompt = ChatPromptTemplate.from_messages([\n    
(\"system\", \"You are a helpful assistant.\"),\n    (\"human\", \"{input}\"),\n])\nagent = create_tool_calling_agent(llm, tools, prompt)\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n\nif __name__ == \"__main__\":\n    print(\"Running agent...\")\n    response = agent_executor.invoke({\"input\": \"What is 123 multiplied by 456?\"})\n    print(f\"Agent Response: {response['output']}\")\n    print(\"Traces should be visible in your configured OpenTelemetry collector (e.g., Phoenix at http://localhost:6006).\")\n","lang":"python","description":"This quickstart demonstrates how to instrument a simple LangChain agent with `openinference-instrumentation-langchain`. It sets up a basic OpenTelemetry `TracerProvider` to export traces to a local OTLP collector (like Arize Phoenix, typically listening at `http://localhost:6006/v1/traces`). The `LangChainInstrumentor().instrument()` call enables automatic tracing of LangChain operations. An `OPENAI_API_KEY` environment variable is required to run the example successfully."},"warnings":[{"fix":"This is a backend display issue or a limitation in how certain `invoke` patterns are interpreted. Using higher-level LangChain constructs like agents often yields better-formatted traces. Verify your observability backend's display capabilities for raw OpenTelemetry spans.","message":"Direct `model.invoke()` calls in LangChain may result in unstructured trace output in some UI backends. While traces are captured, the detailed message-by-message history might not be rendered in the expected structured format, appearing as raw JSON.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Monitor for updates to `openinference-instrumentation-langchain` or `langgraph_swarm` that address this. If you encounter this, check the official GitHub issues for workarounds or specific version recommendations. 
The issue has been observed with Python 3.13.","message":"When using `openinference-instrumentation-langchain` with `langgraph_swarm`, an `AssertionError` can occur because the tracer expects message IDs to be lists but receives `None` instead. This can disrupt logging, though trace data might still be sent.","severity":"gotcha","affected_versions":"Potentially all versions with `langgraph_swarm` integration."},{"fix":"Manually manage context propagation in complex async scenarios using `context.attach()` and `context.detach()`, or by passing the current `Context` explicitly. Consider using `contextvars` for async-aware context management if not already handled by OpenTelemetry's integration with your async framework.","message":"Complex asynchronous flows in LangChain applications may prevent OpenTelemetry's context from propagating automatically across async boundaries, leading to fragmented or orphaned spans. This is a common challenge with OpenTelemetry in highly concurrent Python applications.","severity":"gotcha","affected_versions":"All versions, due to the nature of async context propagation in OpenTelemetry."}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}