OpenTelemetry Semantic Conventions for AI
This library provides Python constants for OpenTelemetry semantic conventions tailored to AI applications, including Large Language Models (LLMs) and vector databases. It defines standardized attribute names and values so that observability data stays consistent and interoperable across AI services and tools. The current version is 0.5.1, and releases land roughly monthly or bi-monthly as the conventions evolve.
Warnings
- breaking: Version 0.5.0 introduced a significant restructure, removing the instrumentation utilities (such as `langchain_monitor`). The library now provides only semantic convention constants; if you previously used `opentelemetry.instrumentation.*` modules from this package, they are gone.
- gotcha: The semantic conventions are strict. Incorrect attribute names, data types, or values typically will not raise an error, but they produce non-compliant traces that observability tools may misinterpret or drop.
- gotcha: As a `0.x.x` library, the underlying semantic conventions for AI are still evolving. Attribute names, values, or structures may change in future minor versions, potentially requiring updates to your code.
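One practical consequence of the strict conventions is that OpenTelemetry attribute values must be primitive types (or homogeneous lists of primitives). A common pattern for structured data such as chat messages is to flatten them into indexed, dotted attribute keys. The sketch below is illustrative only: the `gen_ai.prompt` prefix and the `flatten_messages` helper are assumptions for demonstration, not this library's API; check the library's constants for the exact names your tooling expects.

```python
def flatten_messages(prefix, messages):
    """Turn [{'role': ..., 'content': ...}, ...] into flat primitive attributes."""
    attrs = {}
    for i, msg in enumerate(messages):
        for key, value in msg.items():
            # Each value is a plain string, which is a valid attribute type.
            attrs[f"{prefix}.{i}.{key}"] = value
    return attrs

attrs = flatten_messages(
    "gen_ai.prompt",
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ],
)
print(attrs["gen_ai.prompt.1.role"])  # -> user
```

Each flattened key can then be passed to `span.set_attribute` individually, keeping every value a primitive.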
Install
```shell
pip install opentelemetry-semantic-conventions-ai
```
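Because the conventions can change between minor versions of a `0.x.x` library, you may want to pin an exact version in your requirements:

```shell
pip install "opentelemetry-semantic-conventions-ai==0.5.1"
```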
Imports
- `SpanAttributes`: `from opentelemetry.semconv.ai import SpanAttributes`
- `LLMRequestTypeValues`: `from opentelemetry.semconv.ai import LLMRequestTypeValues`
- `VectorDBAttributes`: `from opentelemetry.semconv.ai import VectorDBAttributes`
Quickstart
```python
import json
import os

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.semconv.ai import LLMRequestTypeValues, SpanAttributes

# Configure the OpenTelemetry SDK
resource = Resource.create(
    {
        "service.name": os.environ.get("OTEL_SERVICE_NAME", "my-llm-app"),
        "service.instance.id": "instance-1",
    }
)
provider = TracerProvider(resource=resource)

# For demonstration, export spans to the console
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Get a tracer
tracer = trace.get_tracer(__name__)

# Simulate an LLM API call and annotate the span with semantic conventions
with tracer.start_as_current_span("llm.chat.completion") as span:
    span.set_attribute(SpanAttributes.LLM_VENDOR, "OpenAI")
    span.set_attribute(SpanAttributes.LLM_MODEL_NAME, "gpt-4")
    span.set_attribute(SpanAttributes.LLM_REQUEST_TYPE, LLMRequestTypeValues.CHAT.value)
    # Attribute values must be primitives (or homogeneous lists of primitives),
    # so structured message lists are serialized to JSON strings here.
    span.set_attribute(
        SpanAttributes.LLM_PROMPT_MESSAGES,
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Tell me a joke."},
            ]
        ),
    )
    span.set_attribute(SpanAttributes.LLM_RESPONSE_MODEL, "gpt-4-0613")
    span.set_attribute(
        SpanAttributes.LLM_COMPLETIONS,
        json.dumps(
            [
                {
                    "role": "assistant",
                    "content": "Why did the scarecrow win an award? Because he was outstanding in his field!",
                }
            ]
        ),
    )
    span.set_attribute(SpanAttributes.LLM_USAGE_TOTAL_TOKENS, 50)

print("Span with AI semantic conventions created and exported to the console.")
```
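OpenTelemetry attribute values are limited to primitive types and homogeneous lists of primitives, so structured payloads such as chat message lists are commonly serialized to JSON strings before being set on a span, then decoded again by the backend or query layer. A stdlib-only sketch of that round trip:

```python
import json

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke."},
]

# Serialize before calling span.set_attribute(...); decode on the read side.
encoded = json.dumps(messages)
decoded = json.loads(encoded)

assert decoded == messages
print(type(encoded).__name__)  # -> str
```

The trade-off of JSON strings versus flattened per-index keys is queryability: a single string attribute is opaque to most trace backends, while flattened keys can be filtered on directly.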