OpenTelemetry IBM Watsonx Instrumentation

0.58.0 · active · verified Fri Apr 10

This library provides OpenTelemetry tracing for applications using IBM Watsonx. It instruments interactions with Watsonx AI services, capturing LLM calls, parameters, and responses as OpenTelemetry spans. The project is actively maintained with frequent releases, often aligning with updates to OpenTelemetry GenAI semantic conventions. Current version: 0.58.0.

Warnings

By default this instrumentation records prompts and completions as span attributes. If your traces may contain sensitive data, review where spans are exported before enabling it; the Traceloop instrumentations support the `TRACELOOP_TRACE_CONTENT` environment variable to disable content capture.

Install

pip install opentelemetry-instrumentation-watsonx ibm-watson-machine-learning

Imports

from opentelemetry.instrumentation.watsonx import WatsonxInstrumentor

Quickstart

This quickstart demonstrates how to set up OpenTelemetry with `opentelemetry-instrumentation-watsonx`, initialize the instrumentation, and then use the `ibm-watson-machine-learning` client to generate text. It assumes you have a basic Watsonx setup with an API key and project ID configured. The traces will be printed to the console.

import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.watsonx import WatsonxInstrumentor

# Setup basic OpenTelemetry tracing (output to console)
resource = Resource.create({"service.name": "my-watsonx-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Initialize Watsonx instrumentation (must run before the Watsonx client is imported)
WatsonxInstrumentor().instrument()

# Import the Watsonx client AFTER instrumentation
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

# Watsonx client setup (replace with your actual API key and project ID)
# It's recommended to use environment variables for sensitive info
api_key = os.environ.get("WATSONX_API_KEY", "YOUR_WATSONX_API_KEY")
project_id = os.environ.get("WATSONX_PROJECT_ID", "YOUR_WATSONX_PROJECT_ID")
url = os.environ.get("WATSONX_URL", "https://us-south.ml.cloud.ibm.com")

if api_key == "YOUR_WATSONX_API_KEY" or project_id == "YOUR_WATSONX_PROJECT_ID":
    print("WARNING: Set the WATSONX_API_KEY and WATSONX_PROJECT_ID environment variables to run this example.")

credentials = {
    "url": url,
    "apikey": api_key,
}

# Generation parameters are passed as a dict keyed by GenTextParamsMetaNames
parameters = {
    GenParams.MAX_NEW_TOKENS: 50,
    GenParams.MIN_NEW_TOKENS: 10,
    GenParams.REPETITION_PENALTY: 1.1,
}

# Example model initialization and text generation
try:
    model = Model(
        model_id=ModelTypes.LLAMA_2_70B_CHAT,  # or another supported model
        credentials=credentials,
        params=parameters,
        project_id=project_id,
    )

    prompt = "What is the capital of France?"
    print(f"\nGenerating text for prompt: '{prompt}'")
    response = model.generate_text(prompt=prompt)
    print(f"Generated text: {response}")

except Exception as e:
    print(f"Error during Watsonx API call: {e}")
    print("Ensure your WATSONX_API_KEY, WATSONX_PROJECT_ID, and WATSONX_URL are correctly set and have access.")

# Spans will be printed to console by ConsoleSpanExporter
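For production use you would typically replace the console exporter with an OTLP exporter pointed at a collector. A minimal sketch, assuming the `opentelemetry-exporter-otlp` package is installed and a collector is listening on the default gRPC endpoint (the endpoint URL below is an illustrative placeholder):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# BatchSpanProcessor queues spans and exports them in the background,
# which is preferable to SimpleSpanProcessor outside of demos
provider = TracerProvider(resource=Resource.create({"service.name": "my-watsonx-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
```

The rest of the quickstart (instrumenting and calling the Watsonx client) is unchanged; only the exporter wiring differs.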
