OpenTelemetry Instrumentation for Voyage AI
This library provides OpenTelemetry instrumentation for the Voyage AI Python client, enabling automatic collection of traces, metrics, and logs for AI model interactions. Emitted telemetry follows the OpenTelemetry GenAI semantic conventions. The project is actively maintained, with frequent releases (often weekly or bi-weekly) that track the evolving OpenTelemetry standards.
Warnings
- breaking The OpenTelemetry semantic conventions, especially for generative AI (GenAI), are actively evolving. This instrumentation updates frequently to track them, which can change span and attribute names between minor versions.
- gotcha By default, sensitive message content (such as prompts and responses) is not captured in spans, for security and privacy reasons. Capturing it requires an explicit opt-in (see the `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` environment variable in the Quickstart).
- gotcha This instrumentation library (like most OpenTelemetry Python instrumentations) is currently in a beta development status. While stable for many use cases, its APIs and behavior might evolve until a 1.0 stable release.
- gotcha For telemetry data to actually be emitted, you must not only install the instrumentation library but also configure a full OpenTelemetry SDK (`opentelemetry-sdk`) with a `TracerProvider` and a `SpanProcessor` connected to an exporter. Without the SDK, the instrumentation will be a no-op.
Install
pip install opentelemetry-instrumentation-voyageai opentelemetry-sdk opentelemetry-exporter-otlp voyageai
Imports
- VoyageAIInstrumentor
from opentelemetry.instrumentation.voyageai import VoyageAIInstrumentor
Quickstart
import os
import voyageai
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.voyageai import VoyageAIInstrumentor
# 1. Configure OpenTelemetry SDK
resource = Resource.create(attributes={
    "service.name": "voyageai-app",
    "application": "my-llm-app",
})
trace_provider = TracerProvider(resource=resource)
# For simplicity, using OTLP HTTP exporter. Adjust endpoint as needed.
span_exporter = OTLPSpanExporter(
    endpoint=os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318/v1/traces")
)
trace_provider.add_span_processor(BatchSpanProcessor(span_exporter))
trace.set_tracer_provider(trace_provider)
# Optional: Enable capturing full message content (e.g., prompts and responses).
# Set this before instrumenting so the flag is in effect when spans are created.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
# 2. Instrument Voyage AI
VoyageAIInstrumentor().instrument()
# 3. Use Voyage AI client
# Ensure VOYAGE_API_KEY is set in your environment
api_key = os.environ.get("VOYAGE_API_KEY")
if not api_key:
    print("Error: VOYAGE_API_KEY environment variable not set. Please set it to run the example.")
else:
    client = voyageai.Client(api_key=api_key)
    try:
        print("Sending embedding request...")
        embedding_response = client.embed(
            texts=["What is the capital of France?", "Tell me about OpenTelemetry."],
            model="voyage-large-2",
        )
        print("Embedding response received (first 5 dimensions of first embedding):")
        if embedding_response.embeddings:
            print(embedding_response.embeddings[0][:5], "...")
    except Exception as e:
        print(f"An error occurred during Voyage AI embedding: {e}")
# Ensure spans are flushed before exit
trace_provider.shutdown()
print("Tracing finished. Check your OTLP endpoint for traces.")