{"id":6751,"library":"openinference-instrumentation-litellm","title":"OpenInference LiteLLM Instrumentation","description":"OpenInference LiteLLM Instrumentation provides automatic OpenTelemetry-compatible tracing for applications using the LiteLLM SDK or LiteLLM Proxy. It captures traces for key LiteLLM functions, including `completion()`, `acompletion()`, `embedding()`, and `image_generation()`. This library is part of the Arize AI OpenInference project, which maintains a frequent release cadence across its instrumentation packages to keep support for LLM frameworks and providers up to date.","status":"active","version":"0.1.30","language":"en","source_language":"en","source_url":"https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-litellm","tags":["observability","opentelemetry","llm","litellm","instrumentation","tracing","ai"],"install":[{"cmd":"pip install openinference-instrumentation-litellm litellm opentelemetry-sdk opentelemetry-exporter-otlp","lang":"bash","label":"Install with core OpenTelemetry dependencies"}],"dependencies":[{"reason":"This library instruments LiteLLM, so the `litellm` package must be installed.","package":"litellm","optional":false},{"reason":"Required for OpenTelemetry tracing infrastructure.","package":"opentelemetry-sdk","optional":false},{"reason":"Common exporter for sending OpenTelemetry traces to a collector.","package":"opentelemetry-exporter-otlp","optional":false}],"imports":[{"symbol":"LiteLLMInstrumentor","correct":"from openinference.instrumentation.litellm import LiteLLMInstrumentor"},{"symbol":"TracerProvider","correct":"from opentelemetry.sdk.trace import TracerProvider"},{"symbol":"SimpleSpanProcessor","correct":"from opentelemetry.sdk.trace.export import SimpleSpanProcessor"},{"symbol":"OTLPSpanExporter","correct":"from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter"}],"quickstart":{"code":"import os\nimport litellm\nfrom 
opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk.resources import Resource\nfrom openinference.instrumentation.litellm import LiteLLMInstrumentor\n\n# Configure OpenTelemetry Tracer Provider\nresource = Resource.create({\n    \"service.name\": \"my-litellm-app\"\n})\n\ntracer_provider = TracerProvider(resource=resource)\n\n# Example: Export traces to a local OpenTelemetry Collector (e.g., Phoenix)\n# Ensure a collector is running, e.g., 'python -m phoenix.server.main serve'\nOTEL_EXPORTER_OTLP_ENDPOINT = os.environ.get(\"OTEL_EXPORTER_OTLP_ENDPOINT\", \"http://127.0.0.1:6006/v1/traces\")\ntracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(OTEL_EXPORTER_OTLP_ENDPOINT)))\n\n# Set the global tracer provider\nfrom opentelemetry import trace\ntrace.set_tracer_provider(tracer_provider)\n\n# Instrument LiteLLM\nLiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)\n\n# Set LiteLLM API key (e.g., for OpenAI model)\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"YOUR_OPENAI_API_KEY_HERE\")\n\ntry:\n    print(\"Making a LiteLLM completion call...\")\n    completion_response = litellm.completion(\n        model=\"gpt-3.5-turbo\",\n        messages=[{\"content\": \"What's the capital of France?\", \"role\": \"user\"}]\n    )\n    print(\"Completion received:\", completion_response.choices[0].message.content)\n\n    print(\"\\nMaking a LiteLLM embedding call...\")\n    embedding_response = litellm.embedding(\n        model=\"text-embedding-ada-002\",\n        input=[\"Hello, world!\"]\n    )\n    print(\"Embedding received (first 10 chars):\", str(embedding_response.data[0].embedding)[:10] + \"...\")\n\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    print(\"Please ensure your API key is set correctly and the model is 
accessible.\")\n\nfinally:\n    # It's important to shut down the tracer provider to ensure all spans are exported.\n    print(\"\\nShutting down tracer provider...\")\n    tracer_provider.shutdown()\n    print(\"Traces exported.\")","lang":"python","description":"This quickstart demonstrates how to set up OpenInference LiteLLM Instrumentation with a basic OpenTelemetry configuration and make sample LLM calls via LiteLLM. It initializes a `TracerProvider`, configures an `OTLPSpanExporter` (e.g., for a local Phoenix collector), instruments LiteLLM, and then executes sample `completion()` and `embedding()` calls. Remember to set your `OPENAI_API_KEY` environment variable."},"warnings":[{"fix":"This is by design; handle image URLs in your observability tool's UI for visualization.","message":"When tracing image generation calls, the instrumentation currently sets the output as a URL attribute rather than rendering the image directly within the trace. Displaying the image requires a UI-side change in the observability tool.","severity":"gotcha","affected_versions":"All versions up to 0.1.30"},{"fix":"Inspect generated spans in your tracing UI to understand the exact format of 'input' and 'output' attributes for streamed and non-streamed calls.","message":"There may be inconsistencies in how output (parsed vs. raw object) is set in span attributes for streamed versus non-streamed LiteLLM calls. 
While recent updates (v0.1.21) improved full JSON output, manual verification of span attributes for different call types is recommended.","severity":"gotcha","affected_versions":"<0.1.21 (partially resolved), potentially ongoing for specific edge cases"},{"fix":"If tracing `litellm.responses` is critical, you may need to add manual OpenTelemetry spans around these calls or await official instrumentation support.","message":"The `litellm.responses` function (which is labeled as 'beta' in LiteLLM) is not directly instrumented by `openinference-instrumentation-litellm`.","severity":"gotcha","affected_versions":"All versions up to 0.1.30"},{"fix":"Consult the LiteLLM v1.0.0+ migration guide and update your application code to handle the new LiteLLM API surface, ensuring compatibility with the instrumentation.","message":"If you are using LiteLLM version 1.0.0 or higher, be aware that LiteLLM itself introduced breaking changes, including requiring `openai>=1.0.0`, changes to error types (e.g., `openai.InvalidRequestError` to `openai.BadRequestError`), and response objects inheriting from `BaseModel` instead of `OpenAIObject`. While these are LiteLLM's changes, they impact how your application interacts with instrumented LiteLLM calls.","severity":"breaking","affected_versions":"LiteLLM >= 1.0.0"},{"fix":"Ensure the instrumentation setup code is executed at the very beginning of your application's lifecycle, typically immediately after importing necessary modules and before any LiteLLM-specific code runs.","message":"For proper auto-instrumentation, the `LiteLLMInstrumentor().instrument()` call and the OpenTelemetry `TracerProvider` setup must occur *before* any LiteLLM calls are made in your application. 
Loading instrumentation too late can result in untraced operations.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-15T00:00:00.000Z","next_check":"2026-07-14T00:00:00.000Z","problems":[]}