OpenTelemetry Together AI Instrumentation
This library provides OpenTelemetry instrumentation for applications that call Together AI's endpoints. It is part of the OpenLLMetry project, which extends OpenTelemetry with AI-focused instrumentations that capture LLM-specific data such as prompts, completions, and token usage. The library is actively maintained on a rapid release cadence; version 0.58.0 was released on 2026-04-09.
Warnings
- breaking Frequent updates to OpenTelemetry GenAI semantic conventions (e.g., in versions 0.53.0 to 0.58.0) may introduce breaking changes to span attribute names and structures. Ensure your observability backend and custom dashboards are compatible with the latest semantic conventions.
- gotcha By default, this instrumentation records sensitive data such as prompts, completions, and embeddings in span attributes. This improves visibility but can pose a privacy risk and significantly increase trace size.
- gotcha This library is an OpenTelemetry *instrumentation* and requires a full OpenTelemetry SDK setup (including a `TracerProvider`, `SpanProcessor`, and `SpanExporter`) to actually collect and export traces. Without proper SDK configuration, the instrumentation will run but produce no visible telemetry.
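If the default capture of prompts and completions is a concern, OpenLLMetry instrumentations conventionally honor a `TRACELOOP_TRACE_CONTENT` environment variable. A minimal sketch, assuming this library follows that project-wide convention (verify against your installed version):

```python
import os

# Assumption: OpenLLMetry instrumentations check TRACELOOP_TRACE_CONTENT
# when recording spans; set it to "false" BEFORE calling instrument()
# to keep prompt and completion text out of span attributes.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"

# TogetherAiInstrumentor().instrument() would then still record model
# metadata and token usage, but not message content.
```

Set the variable in the deployment environment rather than in code where possible, so the same build can run with or without content capture.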
Install
-
pip install opentelemetry-instrumentation-together
Imports
- TogetherAiInstrumentor
from opentelemetry.instrumentation.together import TogetherAiInstrumentor
Quickstart
import os
import time
import together
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, BatchSpanProcessor
from opentelemetry.instrumentation.together import TogetherAiInstrumentor

# Configure the OpenTelemetry SDK
provider = TracerProvider()
processor = BatchSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

# Instrument Together AI
TogetherAiInstrumentor().instrument()

# Set your Together AI API key (replace with your actual key or environment variable)
# For demonstration, we'll use a placeholder and mock the API call.
together.api_key = os.environ.get('TOGETHER_API_KEY', 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

# --- Mock Together AI call for a runnable example ---
# In a real application, you would make an actual call to Together AI.
# For testing, you can use a library like 'unittest.mock' or 'pytest-mock'
# to prevent actual API calls.
class MockChoice:
    def __init__(self, text="Mocked Together AI completion.", logprobs=None, finish_reason="stop"):
        self.text = text
        self.logprobs = logprobs
        self.finish_reason = finish_reason

class MockCompletion:
    def __init__(self, choices=None):
        self.choices = choices or [MockChoice()]

# Temporarily patch together.Complete.create for the example
original_create = together.Complete.create

def mock_create(*args, **kwargs):
    print("Mocking together.Complete.create...")
    # Simulate a delay for realism in tracing
    time.sleep(0.1)
    return MockCompletion()

together.Complete.create = mock_create

try:
    print("Making a (mocked) Together AI call...")
    # Example Together AI call (this will be traced)
    response = together.Complete.create(
        prompt="Tell me a short story about a brave knight.",
        model="togethercomputer/llama-2-7b-chat",
        max_tokens=50,
    )
    print(f"Response: {response.choices[0].text}")
finally:
    # Restore the original method after the example
    together.Complete.create = original_create

# Ensure traces are flushed for the console exporter
provider.shutdown()
print("Check the console for OpenTelemetry traces.")