LiteLLM (Unclecode fork)

Version 1.81.13 · verified Mon Apr 27 · Python

A pre-compromise fork of the original LiteLLM, providing a unified interface to call 100+ LLM providers (OpenAI, Anthropic, Cohere, Azure, Bedrock, etc.) with standardized input/output formats. Current version 1.81.13, actively maintained with monthly releases.

pip install unclecode-litellm
error ModuleNotFoundError: No module named 'litellm'
cause The fork is installed under the PyPI name 'unclecode-litellm', but the import name is still 'litellm'. This error means the package was never installed, or the code tried 'import unclecode_litellm' instead.
fix
Run 'pip install unclecode-litellm' and import as 'import litellm' (package name is 'litellm').
error openai.RateLimitError: You exceeded your current quota
cause API key missing or insufficient quota. The fork does not wrap rate limit errors differently.
fix
Verify the API key is set and the account has remaining credits. Catch the error with litellm.exceptions.RateLimitError.
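Since the fork does not wrap rate-limit errors differently, the usual remedy is to retry with backoff. A minimal sketch; the exception class is passed in as a parameter because in practice it would be litellm.exceptions.RateLimitError (an assumption about this fork's error mapping):

```python
import time


def call_with_retry(fn, rate_limit_exc, retries=3, backoff=1.0):
    """Call fn(), retrying with exponential backoff on rate-limit errors.

    rate_limit_exc is the exception class to catch -- in practice
    litellm.exceptions.RateLimitError; any exception class works here.
    """
    for attempt in range(retries):
        try:
            return fn()
        except rate_limit_exc:
            if attempt == retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
```

Usage would look like call_with_retry(lambda: litellm.completion(model="gpt-4", messages=msgs), litellm.exceptions.RateLimitError).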
error AttributeError: module 'litellm' has no attribute 'completion'
cause The Unclecode fork removed some module-level aliases; the core entry point litellm.completion itself is unchanged.
fix
Import directly with 'from litellm import completion' or call litellm.completion (standard path works).
gotcha The LITELLM_LOG environment variable controls the logging level; if it is unset, the library may emit excessive debug output.
fix Set os.environ['LITELLM_LOG'] = 'WARN' before importing litellm.
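For example, at the top of an application's entrypoint (the import is shown commented out here because it must come after the assignment):

```python
import os

# Set the log level BEFORE litellm is first imported -- the variable is
# read at import time, so setting it afterwards has no effect.
os.environ["LITELLM_LOG"] = "WARN"

# import litellm  # import only after LITELLM_LOG is set
```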
breaking This fork changed the default timeout from 600s to 60s. Some long-running completions may fail with timeout.
fix Pass request_timeout=600 to completion() or set litellm.request_timeout = 600.
deprecated The 'litellm.set_verbose' flag is deprecated. Use 'litellm._turn_on_debug()' or set LITELLM_LOG=DEBUG.
fix Use litellm._turn_on_debug() or set environment variable LITELLM_LOG=DEBUG.
gotcha Anthropic models default to a max_tokens of 256. Many users expect 4096 or higher.
fix Always set max_tokens=1024 (or desired value) when using Anthropic models.
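The timeout and max_tokens fixes above can be bundled into one place. A hypothetical helper (the name and defaults are illustrative, not part of the fork's API) that restores the original 600s timeout and sets an explicit Anthropic token limit:

```python
def completion_kwargs(model, messages, max_tokens=1024, request_timeout=600, **extra):
    """Build kwargs for litellm.completion() with explicit safe defaults.

    max_tokens=1024 overrides Anthropic's 256-token default;
    request_timeout=600 restores the timeout this fork lowered to 60s.
    """
    kwargs = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "request_timeout": request_timeout,
    }
    kwargs.update(extra)  # pass through any provider-specific options
    return kwargs
```

Calling litellm.completion(**completion_kwargs("claude-2", msgs)) then applies both defaults consistently.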

Call any provider by changing the model string (e.g., 'claude-2', 'command-nightly').

import os

import litellm

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello world"}],
    api_key=os.environ.get("OPENAI_API_KEY", ""),
)
print(response.choices[0].message.content)