{"id":2146,"library":"openinference-instrumentation","title":"OpenInference Instrumentation","description":"OpenInference Instrumentation provides Python utilities for collecting traces from AI/ML applications, extending OpenTelemetry to offer detailed observability for LLMs and related frameworks. It integrates with any OpenTelemetry-compatible backend like Arize Phoenix or Langfuse. The current version is 0.1.46 and the project maintains an active release cadence, with frequent updates across its various framework-specific sub-packages.","status":"active","version":"0.1.46","language":"en","source_language":"en","source_url":"https://github.com/Arize-ai/openinference/tree/main/python/openinference-instrumentation","tags":["observability","LLM","tracing","OpenTelemetry","AI","ML","instrumentation"],"install":[{"cmd":"pip install openinference-instrumentation openinference-instrumentation-openai openai opentelemetry-sdk opentelemetry-exporter-otlp","lang":"bash","label":"Install core utilities and OpenAI instrumentation"},{"cmd":"pip install openinference-instrumentation # For core utilities like context managers","lang":"bash","label":"Install core library only"}],"dependencies":[{"reason":"Required Python version range.","package":"python","version":"<3.15,>=3.9","optional":false},{"reason":"Core OpenTelemetry SDK for tracing.","package":"opentelemetry-sdk","optional":false},{"reason":"Exporter for sending traces via OTLP (e.g., to Arize Phoenix, Langfuse, or other OTel collectors).","package":"opentelemetry-exporter-otlp","optional":false}],"imports":[{"note":"Context manager for tracking user sessions across requests.","symbol":"using_session","correct":"from openinference.instrumentation.span_data import using_session"},{"note":"Context manager for associating traces with specific users.","symbol":"using_user","correct":"from openinference.instrumentation.span_data import using_user"},{"note":"Context manager for adding custom metadata to 
traces.","symbol":"using_metadata","correct":"from openinference.instrumentation.span_data import using_metadata"},{"note":"Example of an instrumentation class from a framework-specific sub-package.","symbol":"OpenAIInstrumentor","correct":"from openinference.instrumentation.openai import OpenAIInstrumentor"}],"quickstart":{"code":"import os\nimport openai\nfrom openinference.instrumentation.openai import OpenAIInstrumentor\nfrom openinference.instrumentation.span_data import using_session, using_user, using_metadata\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk import trace as trace_sdk\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\n\n# 1. Configure OpenTelemetry Tracer Provider\n# Traces will be sent to an OTLP collector, e.g., Arize Phoenix (default at http://127.0.0.1:6006/v1/traces)\n# Ensure your collector is running before executing this code.\nendpoint = os.environ.get(\"OTEL_EXPORTER_OTLP_ENDPOINT\", \"http://127.0.0.1:6006/v1/traces\")\ntracer_provider = trace_sdk.TracerProvider()\ntracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))\n\n# 2. Instrument your application with OpenAIInstrumentor\nOpenAIInstrumentor().instrument(tracer_provider=tracer_provider)\n\n# 3. Set OpenAI API Key (replace with your actual key or environment variable)\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"sk-YOUR_OPENAI_API_KEY\")\n\n# 4. 
Use OpenInference context managers and make an LLM call\nclient = openai.OpenAI()\n\n# using_metadata takes a dict of metadata key-value pairs\nwith using_session(\"user_session_abc\"), \\\n     using_user(\"test_user_123\"), \\\n     using_metadata({\"deployment_env\": \"staging\"}):\n\n    print(\"Making OpenAI chat completion call...\")\n    response = client.chat.completions.create(\n        model=\"gpt-3.5-turbo\",\n        messages=[\n            {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n        ]\n    )\n    print(f\"Response: {response.choices[0].message.content}\")\n\nprint(\"Traces should now be visible in your OpenTelemetry collector.\")\n","lang":"python","description":"This quickstart demonstrates how to set up OpenInference instrumentation for OpenAI, configure an OpenTelemetry tracer, and use OpenInference context managers for adding session, user, and custom metadata. The traces are exported to an OTLP collector (e.g., Arize Phoenix)."},"warnings":[{"fix":"Ensure `opentelemetry.sdk.trace.TracerProvider` is configured with desired `Resource` attributes and set as the global `TracerProvider` *before* importing or instantiating any OpenInference instrumentors. Use `opentelemetry.sdk.resources.Resource.create({'langfuse.environment': 'your_env'})`.","message":"OpenInference auto-instrumentation (e.g., for CrewAI, LiteLLM) may not inherit `OTEL_RESOURCE_ATTRIBUTES` (like `langfuse.environment`) unless the `TracerProvider` is explicitly configured with these attributes *before* importing and initializing the instrumentors. 
If the instrumentor is imported first, it might create a default `TracerProvider`, causing traces to default to a 'default' environment.","severity":"gotcha","affected_versions":"All versions where auto-instrumentation is used without pre-configuring TracerProvider."},{"fix":"As of the report, there is no direct fix in the library; a workaround might involve taking more manual control over span creation, or conditionally calling the OpenAI API outside the instrumented scope if suppression is critical.","message":"The `openinference-instrumentation-openai` instrumentor might not fully respect the OpenTelemetry `suppress_instrumentation` context flag. Spans might still be created for OpenAI API calls even when `suppress_instrumentation=True` is active in the context.","severity":"gotcha","affected_versions":"Reported in 0.1.x versions (e.g., issue filed January 2026)."},{"fix":"Upgrade the OpenAI SDK to `openai>=1.26` and pass `stream_options={'include_usage': True}` when making streaming chat completion calls.","message":"To correctly obtain token counts when streaming with OpenAI, `openai>=1.26` is required, and `stream_options={'include_usage': True}` must be explicitly passed to the `client.chat.completions.create` method. Without this, token counts for streaming responses may be missing.","severity":"gotcha","affected_versions":"All versions when using streaming with OpenAI SDK < 1.26 or without `stream_options`."},{"fix":"Always install the specific instrumentation package for the framework you are using, e.g., `pip install openinference-instrumentation-openai`.","message":"The base `openinference-instrumentation` package provides core utilities like context managers (`using_session`, `using_metadata`). However, for auto-instrumentation of specific LLM frameworks or SDKs (e.g., OpenAI, LangChain, LlamaIndex), you must install and import the corresponding `openinference-instrumentation-<framework>` sub-package. 
Installing only the base package will not provide framework-specific auto-instrumentation.","severity":"gotcha","affected_versions":"All versions."},{"fix":"Check for updates to `openinference-instrumentation-openai-agents` or consider manual instrumentation to ensure all relevant tool information is captured in agent spans.","message":"Older versions of `openinference-instrumentation-openai-agents` might not log the tools configured on an agent as part of the agent's input. Instead, tools are only logged if they appear in a response (i.e., when a tool is actually called). This can lead to an incomplete view of agent capabilities in the trace UI.","severity":"gotcha","affected_versions":"Reported in older 1.x versions (e.g., issue filed June 2025)."}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}