OpenTelemetry MCP Instrumentation (OpenLLMetry)

0.58.0 · active · verified Fri Apr 10

The `opentelemetry-instrumentation-mcp` package provides automatic OpenTelemetry instrumentation for a wide array of LLM provider SDKs, vector databases, and other AI frameworks. As part of Traceloop's `openllmetry` project, it acts as a meta-package, consolidating over 30 individual instrumentations for popular libraries such as OpenAI, LangChain, Anthropic, and Pinecone into a single installation. The current release is 0.58.0, and the project follows a rapid release cadence.

Install

Quickstart

This quickstart initializes OpenLLMetry by calling `Traceloop.init()` from the `traceloop-sdk` package after installing the `opentelemetry-instrumentation-mcp` meta-package. It then makes a sample call to OpenAI, which is traced automatically. Ensure `OPENAI_API_KEY` is set as an environment variable and an OpenTelemetry collector is reachable (e.g., via `OTEL_EXPORTER_OTLP_ENDPOINT`).

import os
from openai import OpenAI
from traceloop.sdk import Traceloop

# Initialize the OpenLLMetry SDK to enable the bundled instrumentations
Traceloop.init()

# Ensure OPENAI_API_KEY is set in your environment
openai_api_key = os.environ.get('OPENAI_API_KEY', '')
if not openai_api_key:
    print("Warning: OPENAI_API_KEY not set. Skipping OpenAI call.")
else:
    print("OpenAI API Key found. Making a sample call...")
    client = OpenAI(api_key=openai_api_key)
    try:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": "What is OpenTelemetry?"}
            ]
        )
        print(f"OpenAI response: {completion.choices[0].message.content[:50]}...")
        print("Traces for this call should be visible in your configured OpenTelemetry collector.")
    except Exception as e:
        print(f"Error during OpenAI call: {e}")
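Traces are exported over OTLP, so the collector endpoint should be configured before initialization runs. A minimal sketch using the standard OpenTelemetry environment variables (the endpoint and service name below are placeholder values, not prescribed by this package):

```python
import os

# Standard OpenTelemetry exporter settings; the values below are
# placeholders for a local collector and must match your deployment.
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")
os.environ.setdefault("OTEL_SERVICE_NAME", "openllmetry-quickstart")

print(os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"])
print(os.environ["OTEL_SERVICE_NAME"])
```

`setdefault` keeps any values already set in the environment, so deployment-level configuration still takes precedence over these in-code defaults.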
