OpenLit
OpenLit is an OpenTelemetry-native auto-instrumentation library for monitoring LLM applications and GPUs, making it straightforward to add observability to GenAI projects. It provides automatic tracing, metrics, and evaluations for over 50 LLM providers, frameworks, and vector databases. The library is actively maintained with frequent releases, currently at version 1.40.3.
Warnings
- gotcha OpenLit's auto-instrumentation requires `openlit.init()` to be called *before* importing or instantiating any AI library clients (e.g., OpenAI, LangChain). Clients initialized prior to `openlit.init()` will not be instrumented.
- breaking The `application_name` parameter in `openlit.init()` has been deprecated. It is replaced by `service_name` for consistency with OpenTelemetry semantic conventions.
- gotcha Configuration parameters are prioritized: environment variables take precedence over CLI arguments, which take precedence over parameters passed directly to `openlit.init()`. Unexpected behavior might occur if conflicting configurations are present.
- gotcha If `otlp_endpoint` is not provided in `openlit.init()` or via the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable, OpenLit will output traces directly to the console instead of sending them to an external observability backend. This is intended for development but can lead to missing data in production.
- gotcha Large prompts, especially in RAG contexts, can lead to high memory usage due to large span events. This can impact performance and resource consumption.
- gotcha OpenLit is actively developed in the fast-moving GenAI space. Frequent updates, including changes to OpenTelemetry semantic conventions, may require attention to keep observability data consistent across versions.
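The precedence rule above (environment variables over CLI arguments over `openlit.init()` parameters) can be illustrated with a small resolver. This is a sketch of the rule, not OpenLit internals; the function name is hypothetical.

```python
import os

def resolve_setting(env_var, cli_value, init_value):
    """Return the effective value following env > CLI > init() precedence."""
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    if cli_value is not None:
        return cli_value
    return init_value

# With no env var set, the CLI argument wins over the init() parameter;
# setting the env var would override both.
endpoint = resolve_setting("OTEL_EXPORTER_OTLP_ENDPOINT",
                           "http://cli-endpoint:4318",
                           "http://init-endpoint:4318")
```

This is why a stale `OTEL_EXPORTER_OTLP_ENDPOINT` in the shell can silently override an endpoint passed directly to `openlit.init()`.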
Install
- pip install openlit
- pip install openlit[gpu]
Imports
- openlit
import openlit
Quickstart
import os
import openlit

# Configure OpenLit (via env vars here; direct init() arguments also work).
# For local development, omitting the OTLP endpoint prints traces to the console.
os.environ.setdefault('OPENLIT_APPLICATION_NAME', 'my-genai-app')
os.environ.setdefault('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://127.0.0.1:4318')
os.environ.setdefault('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY')  # Replace with a real key or set the env var

# Initialize OpenLit for auto-instrumentation.
# Per the Warnings section, this must run *before* importing or
# instantiating any LLM clients, so the OpenAI import comes after it.
openlit.init()

from openai import OpenAI

client = OpenAI()
try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is OpenTelemetry?"}],
    )
    print(response.choices[0].message.content)
except Exception as e:
    print(f"An error occurred: {e}")
    print("Ensure OPENAI_API_KEY is set and the OTLP endpoint is reachable if not using console output.")
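Instead of environment variables, configuration can be passed directly to `openlit.init()`. A minimal sketch follows; it assumes the `service_name`, `otlp_endpoint`, and `collect_gpu_stats` parameter names described above, which should be verified against your installed OpenLit version.

```python
import openlit

# Explicit configuration (remember: env vars take precedence over these).
openlit.init(
    service_name="my-genai-app",            # replaces the deprecated application_name (see Warnings)
    otlp_endpoint="http://127.0.0.1:4318",  # omit to print traces to the console
    collect_gpu_stats=True,                 # GPU monitoring; requires pip install openlit[gpu]
)
```

Keep in mind that any conflicting environment variables or CLI arguments will override these parameters.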