OpenTelemetry Mistral AI Instrumentation
This library provides OpenTelemetry auto-instrumentation for the official Mistral AI Python SDK, enabling tracing of LLM calls and capturing latency, token usage, and other observability data. It is part of the broader OpenLLMetry project and is released frequently, often weekly or bi-weekly, to keep up with changes in the semantic conventions and the underlying LLM SDKs.
Warnings
- breaking The OpenTelemetry Semantic Conventions for Generative AI (the gen_ai.* attributes) are still actively evolving. This instrumentation updates frequently to conform to the latest conventions, which may rename span attributes or change their structure between releases.
- gotcha By default, this instrumentation records prompts, completions, and embeddings as span attributes. These may contain highly sensitive user data, so make sure your observability backend and retention policies can handle it, or disable content capture.
- gotcha This library only provides the instrumentation for `mistralai`. A full OpenTelemetry setup, including a `TracerProvider` and an exporter (e.g., OTLP exporter), is required to process and send the generated telemetry data to an observability backend. Without this, no traces will be collected or visible.
- gotcha The instrumentation requires the `mistralai` library to be installed and compatible with the instrumentation's version. Incompatible `mistralai` SDK versions may lead to partial or no instrumentation.
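Regarding the content-capture gotcha above: in the OpenLLMetry instrumentation packages, recording of prompts and completions is typically gated by the TRACELOOP_TRACE_CONTENT environment variable. The sketch below shows the idea, with a local `should_send_prompts` helper that mirrors (as an assumption, not the library's exact code) the check performed internally; set the variable before calling `.instrument()`.

```python
import os

# Assumption: like other OpenLLMetry instrumentations, content capture is
# controlled by TRACELOOP_TRACE_CONTENT. Set it to "false" BEFORE calling
# MistralAiInstrumentor().instrument() to keep prompts/completions out of spans.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"

def should_send_prompts() -> bool:
    """Illustrative mirror of the check the instrumentation performs internally."""
    return os.environ.get("TRACELOOP_TRACE_CONTENT", "true").lower() == "true"

print(should_send_prompts())  # → False once the variable is set to "false"
```

With content capture disabled, spans still record model names, token usage, and timing, just not the message bodies.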
Install
pip install opentelemetry-instrumentation-mistralai mistralai
Imports
- MistralAiInstrumentor
from opentelemetry.instrumentation.mistralai import MistralAiInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.mistralai import MistralAiInstrumentor
# Note: these imports target the pre-1.0 `mistralai` SDK; the 1.x SDK
# exposes a different client API (`from mistralai import Mistral`).
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
# Configure OpenTelemetry SDK
resource = Resource.create({"service.name": "mistralai-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)
# Instrument the Mistral AI library
MistralAiInstrumentor().instrument()
# Set your Mistral AI API key (replace with your actual key or env var)
mistral_api_key = os.environ.get("MISTRAL_API_KEY", "YOUR_MISTRAL_API_KEY")
if mistral_api_key == "YOUR_MISTRAL_API_KEY":
    print("Warning: MISTRAL_API_KEY environment variable not set. Using a placeholder.")
    print("Please set MISTRAL_API_KEY to run a real Mistral AI call.")
else:
    client = MistralClient(api_key=mistral_api_key)
    try:
        # Example Mistral AI API call
        chat_response = client.chat(
            model="mistral-tiny",
            messages=[
                ChatMessage(role="user", content="What is the capital of France?")
            ],
        )
        print(f"Mistral AI Response: {chat_response.choices[0].message.content}")
    except Exception as e:
        print(f"Error making Mistral AI call: {e}")
print("Instrumentation initialized. Check console for OpenTelemetry spans.")
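Once an exporter is wired up, each instrumented chat call produces a span whose attributes follow the GenAI semantic conventions mentioned in the Warnings above. The dictionary below is an illustrative sketch of what such a span carries, not output from the library: the attribute names track the current conventions and may change between releases, and the token counts are invented.

```python
# Illustrative only: the kinds of gen_ai.* attributes a chat span carries.
# Names follow the OpenTelemetry GenAI semantic conventions (subject to
# change); all values here are invented for the example.
example_chat_span_attributes = {
    "gen_ai.system": "MistralAI",            # which LLM provider handled the call
    "gen_ai.request.model": "mistral-tiny",  # model requested by the client
    "gen_ai.usage.prompt_tokens": 14,        # tokens in the prompt (invented)
    "gen_ai.usage.completion_tokens": 7,     # tokens in the completion (invented)
}

# Total token usage is the sum of prompt and completion tokens.
total_tokens = (
    example_chat_span_attributes["gen_ai.usage.prompt_tokens"]
    + example_chat_span_attributes["gen_ai.usage.completion_tokens"]
)
print(total_tokens)  # → 21
```

Attributes like these are what make the spans useful for cost and latency dashboards, so check your backend's docs for which gen_ai.* keys it understands.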