LlamaIndex Instrumentation
The `llama-index-instrumentation` library provides observability tooling for LlamaIndex applications, enabling tracing and logging primarily through OpenTelemetry. It helps developers understand the flow and performance of their RAG pipelines. The current version is 0.5.0, and the package is part of the frequently updated LlamaIndex ecosystem.
Warnings
- gotcha Instrumentation requires a properly configured OpenTelemetry SDK `TracerProvider` to be set globally via `opentelemetry.trace.set_tracer_provider()`. Without this, no traces will be collected or exported, even if `instrumentation.enable()` is called.
- deprecated Python 3.9 support has been deprecated across the LlamaIndex ecosystem. While older versions of `llama-index-instrumentation` might still function, official support is being phased out.
- breaking LlamaIndex has largely migrated from using `ServiceContext` for global configurations (LLM, Embeddings, etc.) to a new `Settings` object. While `ServiceContext` might still exist in some legacy paths, new development should use `Settings`.
Install
- pip install llama-index-instrumentation
- pip install opentelemetry-sdk opentelemetry-exporter-otlp
  (The older `opentelemetry-exporter-jaeger` package is deprecated; modern Jaeger ingests OTLP directly, so the OTLP exporter covers it.)
Imports
- OpenTelemetryInstrumentation
from llama_index_instrumentation.opentelemetry import OpenTelemetryInstrumentation
- Settings
from llama_index.core.settings import Settings
- TracerProvider
from opentelemetry.sdk.trace import TracerProvider
Quickstart
from llama_index.core.llms import MockLLM
from llama_index.core.settings import Settings
from llama_index_instrumentation.opentelemetry import OpenTelemetryInstrumentation
# OpenTelemetry setup (CRITICAL for traces to be visible)
# For a real application, you'd configure an OTLP/Jaeger exporter, etc.
# This example prints traces to console.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
resource = Resource.create({"service.name": "my-llama-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
# 1. Initialize OpenTelemetry Instrumentation
instrumentation = OpenTelemetryInstrumentation(tracer_provider=trace.get_tracer_provider())
# 2. Enable instrumentation
instrumentation.enable()
# 3. Configure LlamaIndex LLM
Settings.llm = MockLLM()
# 4. Perform a LlamaIndex operation
response = Settings.llm.complete("Tell me a short story about a dragon.")
print(f"LLM Response: {response.text[:50]}...")
# 5. (Optional) Disable instrumentation when no longer needed
instrumentation.disable()