{"id":4151,"library":"openlit","title":"OpenLit","description":"OpenLit is an OpenTelemetry-native auto-instrumentation library for monitoring LLM applications and GPUs, facilitating the integration of observability into GenAI projects. It offers automatic tracing, metrics, and evaluations for over 50 LLM providers, frameworks, and vector databases. The library is actively maintained with frequent releases, currently at version 1.40.3.","status":"active","version":"1.40.3","language":"en","source_language":"en","source_url":"https://github.com/openlit/openlit/tree/main/openlit/python","tags":["observability","LLM","OpenTelemetry","GPU","AI","GenAI","tracing","metrics","monitoring","auto-instrumentation"],"install":[{"cmd":"pip install openlit","lang":"bash","label":"Basic Installation"},{"cmd":"pip install openlit[gpu]","lang":"bash","label":"Installation with GPU Monitoring (for NVIDIA GPUs)"}],"dependencies":[{"reason":"Requires Python 3.9 or newer, but less than 4.0.0.","package":"python","optional":false},{"reason":"Required for GPU monitoring functionality when using the 'gpu' extra.","package":"nvidia-ml-py","optional":true}],"imports":[{"symbol":"openlit","correct":"import openlit"}],"quickstart":{"code":"import os\nimport openlit\nfrom openai import OpenAI\n\n# Configure OpenLIT (either via env vars or direct arguments to init)\n# For local development, omitting otlp_endpoint will print traces to console.\nos.environ['OPENLIT_APPLICATION_NAME'] = os.environ.get('OPENLIT_APPLICATION_NAME', 'my-genai-app')\nos.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = os.environ.get('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://127.0.0.1:4318')\nos.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY') # Replace with actual key or set env var\n\n# Initialize OpenLIT for auto-instrumentation\n# Make sure this call happens *before* importing/instantiating LLM clients\nopenlit.init()\n\n# Example with OpenAI\nclient = OpenAI()\n\ntry:\n    response = 
client.chat.completions.create(\n        model=\"gpt-4o\",\n        messages=[{\"role\": \"user\", \"content\": \"What is OpenTelemetry?\"}]\n    )\n    print(response.choices[0].message.content)\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    print(\"Ensure OPENAI_API_KEY is set and OTLP endpoint is reachable if not using console output.\")","lang":"python","description":"This quickstart demonstrates how to initialize OpenLit for automatic instrumentation of an OpenAI LLM call. Ensure `openlit.init()` is called before any LLM client instantiation. By default, if `OTEL_EXPORTER_OTLP_ENDPOINT` is not set, traces will be printed to the console for development purposes. For production, configure the `OTEL_EXPORTER_OTLP_ENDPOINT` and authentication headers."},"warnings":[{"fix":"Place `import openlit` and `openlit.init()` at the very beginning of your application's entry point, before any other AI library imports or instantiations.","message":"OpenLit's auto-instrumentation requires `openlit.init()` to be called *before* importing or instantiating any AI library clients (e.g., OpenAI, LangChain). Clients initialized prior to `openlit.init()` will not be instrumented.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Replace `application_name='my-app'` with `service_name='my-app'` in your `openlit.init()` calls. The corresponding environment variable is `OTEL_SERVICE_NAME`.","message":"The `application_name` parameter in `openlit.init()` has been deprecated. It is replaced by `service_name` for consistency with OpenTelemetry semantic conventions.","severity":"breaking","affected_versions":"Versions 1.40.0+"},{"fix":"Be mindful of where you set your configuration. For production, environment variables (`OTEL_EXPORTER_OTLP_ENDPOINT`, `OTEL_EXPORTER_OTLP_HEADERS`, `OTEL_SERVICE_NAME`) are generally recommended. 
For local development, explicit `init` parameters or console output might be preferred.","message":"Configuration parameters are prioritized: environment variables take precedence over CLI arguments, which take precedence over parameters passed directly to `openlit.init()`. Unexpected behavior might occur if conflicting configurations are present.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always explicitly set `otlp_endpoint` in `openlit.init()` or the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable when deploying to a production or staging environment.","message":"If `otlp_endpoint` is not provided in `openlit.init()` or via the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable, OpenLit will output traces directly to the console instead of sending them to an external observability backend. This is intended for development but can lead to missing data in production.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Consider setting `trace_content=False` in `openlit.init()` to disable capturing the full content of prompts and completions, or implement content truncation for very large inputs if detailed content tracing is still required.","message":"Large prompts, especially in RAG contexts, can lead to high memory usage due to large span events. This can impact performance and resource consumption.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Monitor release notes and changelogs for updates. Pinning versions in your `requirements.txt` is advisable to prevent unexpected behavior from automatic updates. 
Regularly test your instrumentation after upgrading.","message":"OpenLit is actively developed in the rapidly evolving GenAI space. Frequent releases may change behavior or OpenTelemetry semantic conventions, which can require attention to keep observability data consistent across upgrades.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}