{"id":2160,"library":"opentelemetry-instrumentation-llamaindex","title":"OpenTelemetry LlamaIndex Instrumentation","description":"This library provides OpenTelemetry tracing for applications built with LlamaIndex. It allows developers to observe the full lifecycle of LLM-based applications, including RAG pipelines, agents, and underlying LLM calls, by generating OpenTelemetry-compliant spans. The project is actively maintained with frequent releases, often aligning with the evolving OpenTelemetry GenAI semantic conventions.","status":"active","version":"0.58.0","language":"en","source_language":"en","source_url":"https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-llamaindex","tags":["opentelemetry","llamaindex","instrumentation","observability","tracing","llm","ai","rag"],"install":[{"cmd":"pip install opentelemetry-instrumentation-llamaindex llama-index-core openai","lang":"bash","label":"Install core and example dependencies"}],"dependencies":[{"reason":"Required for the LlamaIndex core functionality that this library instruments.","package":"llama-index-core","optional":false},{"reason":"Commonly used LLM provider for LlamaIndex applications; required for the quickstart example.","package":"openai","optional":true},{"reason":"Provides the core OpenTelemetry SDK components (TracerProvider, SpanProcessor, etc.) required for a functional tracing setup.","package":"opentelemetry-sdk","optional":false},{"reason":"Requires Python >=3.10,<4.0.","package":"Python","optional":false}],"imports":[{"symbol":"LlamaIndexInstrumentor","correct":"from opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor"}],"quickstart":{"code":"import os\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n    ConsoleSpanExporter,\n    SimpleSpanProcessor,\n)\nfrom opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor\n\n# For the LlamaIndex example\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.llms.openai import OpenAI\n\n# --- OpenTelemetry Setup (for console output) ---\n# Resource identifying your service\nresource = Resource.create({\"service.name\": \"llamaindex-app\"})\n\n# Configure the TracerProvider and register it globally\nprovider = TracerProvider(resource=resource)\ntrace.set_tracer_provider(provider)\n\n# Configure a span exporter that prints traces to the console\nexporter = ConsoleSpanExporter()\nspan_processor = SimpleSpanProcessor(exporter)\nprovider.add_span_processor(span_processor)\n\n# --- Instrument LlamaIndex ---\n# Must run after the TracerProvider is registered\nLlamaIndexInstrumentor().instrument()\nprint(\"LlamaIndex instrumentation enabled.\")\n\n# --- LlamaIndex Application Example ---\n# Ensure an OpenAI API key is set for the example.\n# Replace with your actual key or set it as an environment variable.\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"sk-YOUR_OPENAI_API_KEY\")\n\n# Create a dummy document for LlamaIndex\ndummy_data_dir = \"./data\"\nos.makedirs(dummy_data_dir, exist_ok=True)\nwith open(os.path.join(dummy_data_dir, \"test_doc.txt\"), \"w\") as f:\n    f.write(\"The quick brown fox jumps over the lazy dog. LlamaIndex is great for RAG applications.\")\n\n# Load documents and build an index (indexing uses embeddings, not the LLM)\ndocuments = SimpleDirectoryReader(dummy_data_dir).load_data()\nllm = OpenAI(model=\"gpt-3.5-turbo\")\nindex = VectorStoreIndex.from_documents(documents)\nquery_engine = index.as_query_engine(llm=llm)\n\nprint(\"\\nPerforming LlamaIndex query...\")\nresponse = query_engine.query(\"What is LlamaIndex good for?\")\nprint(f\"LlamaIndex Response: {response}\")\n\nprint(\"\\nTraces should be visible in the console.\")\n\n# Clean up dummy data (optional)\n# import shutil\n# if os.path.exists(dummy_data_dir):\n#     shutil.rmtree(dummy_data_dir)\n","lang":"python","description":"This quickstart demonstrates how to set up OpenTelemetry to collect traces from a LlamaIndex application. It initializes a `TracerProvider` with a `ConsoleSpanExporter` (for easy demonstration), then enables the `LlamaIndexInstrumentor`. A simple LlamaIndex query is performed, and its operations are traced and printed to the console. Remember to install `llama-index-core` and an LLM provider such as `openai`."},"warnings":[{"fix":"Review the latest OpenTelemetry GenAI semantic conventions documentation and update any custom dashboards, alerts, or queries that rely on specific span attribute names. You may need to adjust your observability backend's processing rules.","message":"The OpenTelemetry GenAI semantic conventions are actively evolving. Recent versions (0.53.x and later) of `opentelemetry-instrumentation-llamaindex` have migrated span attributes to align with these newer conventions (e.g., OpenTelemetry GenAI Semantic Conventions 0.5.0).","severity":"breaking","affected_versions":">=0.53.0"},{"fix":"For complete end-to-end tracing of your LLM application, also install and enable the OpenTelemetry instrumentations for your LLM providers (e.g., `opentelemetry-instrumentation-openai`) if you call them directly.","message":"This instrumentation targets the LlamaIndex library only. It does not automatically instrument LLM calls made directly with `openai` or `anthropic` clients, outside of LlamaIndex's abstractions.","severity":"gotcha","affected_versions":"All"},{"fix":"If data privacy is a concern, consult the OpenTelemetry documentation for how to configure redaction or filtering of sensitive attributes. For `opentelemetry-instrumentation-llamaindex`, check for configuration options to disable or mask specific attribute collection if available, or implement custom `SpanProcessor` logic.","message":"By default, the LlamaIndex instrumentation captures sensitive data such as prompts, completions, and embedding inputs/outputs as span attributes. This data will be visible in your tracing backend.","severity":"gotcha","affected_versions":"All"},{"fix":"Always initialize and configure the OpenTelemetry SDK components before enabling any instrumentations. Refer to the OpenTelemetry Python SDK documentation for proper setup of a `TracerProvider`, `Resource`, `SpanProcessor`, and an appropriate `SpanExporter` (e.g., OTLP, Jaeger, Zipkin).","message":"Simply calling `LlamaIndexInstrumentor().instrument()` is insufficient for traces to be collected and exported. A full OpenTelemetry SDK setup, including a `TracerProvider`, `SpanProcessor`, and `SpanExporter`, must be configured and registered.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}