OpenTelemetry IBM Watsonx Instrumentation
This library provides OpenTelemetry tracing for applications using IBM Watsonx. It instruments interactions with Watsonx AI services, capturing LLM calls, parameters, and responses as OpenTelemetry spans. The project is actively maintained with frequent releases, often aligning with updates to OpenTelemetry GenAI semantic conventions. Current version: 0.58.0.
Warnings
- gotcha OpenTelemetry GenAI semantic conventions are actively evolving. Frequent updates to `opentelemetry-instrumentation-watsonx` often include changes to span attribute names, types, or structures to align with these conventions. This can impact existing monitoring dashboards, alerts, or analytics queries if not updated accordingly.
- gotcha The `WatsonxInstrumentor().instrument()` call must occur before any code that interacts with the `ibm-watson-machine-learning` library. If the `ibm-watson-machine-learning` client is imported or used before instrumentation is enabled, its operations will not be traced.
- gotcha This instrumentation package is part of the broader `openllmetry` project, which bundles various LLM instrumentations. While `opentelemetry-instrumentation-watsonx` can be used standalone, ensure compatibility if you are also using other `openllmetry` instrumentations or the `openllmetry` SDK, as version bumps and changes are often coordinated across the ecosystem.
Install
pip install opentelemetry-instrumentation-watsonx
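The instrumentation only produces output once an OpenTelemetry SDK is configured, and it wraps the IBM client library, so the Quickstart below also assumes these companion packages are installed (package names as published on PyPI):

```shell
# Companion packages used by the Quickstart
pip install opentelemetry-sdk ibm-watson-machine-learning
```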
Imports
- WatsonxInstrumentor
from opentelemetry.instrumentation.watsonx import WatsonxInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.watsonx import WatsonxInstrumentor
# Setup basic OpenTelemetry tracing (output to console)
resource = Resource.create({"service.name": "my-watsonx-app"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
# Initialize Watsonx instrumentation (must happen before the client is imported)
WatsonxInstrumentor().instrument()
# Import the Watsonx client AFTER instrumentation
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
# Watsonx client setup (replace with your actual API key and project ID)
# It's recommended to use environment variables for sensitive info
api_key = os.environ.get("WATSONX_API_KEY", "YOUR_WATSONX_API_KEY")
project_id = os.environ.get("WATSONX_PROJECT_ID", "YOUR_WATSONX_PROJECT_ID")
url = os.environ.get("WATSONX_URL", "https://us-south.ml.cloud.ibm.com")
if "YOUR_WATSONX_API_KEY" in api_key or "YOUR_WATSONX_PROJECT_ID" in project_id:
    print("WARNING: Please set WATSONX_API_KEY and WATSONX_PROJECT_ID environment variables for a runnable example.")
credentials = {
    "url": url,
    "apikey": api_key,
}
# Generation parameters are passed as a dict keyed by GenTextParamsMetaNames
parameters = {
    GenParams.MAX_NEW_TOKENS: 50,
    GenParams.MIN_NEW_TOKENS: 10,
    GenParams.REPETITION_PENALTY: 1.1,
}
# Example model initialization and text generation
try:
    model = Model(
        model_id=ModelTypes.LLAMA_2_70B_CHAT,  # Or other supported model
        credentials=credentials,
        params=parameters,
        project_id=project_id,
    )
    prompt = "What is the capital of France?"
    print(f"\nGenerating text for prompt: '{prompt}'")
    response = model.generate_text(prompt=prompt)
    print(f"Generated text: {response}")
except Exception as e:
    print(f"Error during Watsonx API call: {e}")
    print("Ensure your WATSONX_API_KEY, WATSONX_PROJECT_ID, and WATSONX_URL are correctly set and have access.")
# Spans will be printed to console by ConsoleSpanExporter