{"id":4972,"library":"langchain-litellm","title":"LangChain LiteLLM Integration","description":"langchain-litellm is an integration package that connects LangChain with LiteLLM, a library designed to simplify calling and managing over 100 Large Language Models (LLMs) from various providers (e.g., Anthropic, Azure, Huggingface). It provides a unified interface for chat models, embeddings, and OCR document loading within the LangChain framework. The library is actively maintained with frequent patch and minor releases, adhering to semantic versioning, and is currently at version 0.6.4.","status":"active","version":"0.6.4","language":"en","source_language":"en","source_url":"https://github.com/langchain-ai/langchain-litellm","tags":["LangChain","LiteLLM","LLM","AI","Integrations","Chat Models","Embeddings","OCR"],"install":[{"cmd":"pip install langchain-litellm","lang":"bash","label":"Install stable version"}],"dependencies":[{"reason":"Required Python version range","package":"python","version":">=3.10, <4.0"},{"reason":"Core LangChain framework dependency","package":"langchain","version":"latest"},{"reason":"Core LiteLLM library dependency for LLM unification","package":"litellm","version":"latest"}],"imports":[{"symbol":"ChatLiteLLM","correct":"from langchain_litellm import ChatLiteLLM"},{"symbol":"ChatLiteLLMRouter","correct":"from langchain_litellm import ChatLiteLLMRouter"},{"symbol":"LiteLLMEmbeddings","correct":"from langchain_litellm import LiteLLMEmbeddings"},{"symbol":"LiteLLMEmbeddingsRouter","correct":"from langchain_litellm import LiteLLMEmbeddingsRouter"},{"symbol":"LiteLLMOCRLoader","correct":"from langchain_litellm import LiteLLMOCRLoader"}],"quickstart":{"code":"import os\nfrom langchain_litellm import ChatLiteLLM\nfrom langchain_core.messages import HumanMessage\n\n# Set your API key for LiteLLM's underlying provider (e.g., OpenAI)\n# For a real application, use a secure method to manage API keys.\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"sk-your-openai-key\")\n\n# Instantiate ChatLiteLLM, specifying the model in LiteLLM's format\n# (e.g., 'openai/gpt-3.5-turbo' for OpenAI)\nchat_model = ChatLiteLLM(model=\"openai/gpt-3.5-turbo\")\n\n# Invoke the chat model\nresponse = chat_model.invoke([HumanMessage(content=\"Hello, how are you?\")])\n\nprint(response.content)\n\n# Example for LiteLLMEmbeddings\nfrom langchain_litellm import LiteLLMEmbeddings\n\n# Note: API key can be passed explicitly if not in environment for embeddings\nembeddings = LiteLLMEmbeddings(\n    model=\"openai/text-embedding-3-small\",\n    api_key=os.environ.get(\"OPENAI_API_KEY\", \"sk-your-openai-key\")\n)\n\ntext = \"This is a test document.\"\nembedding = embeddings.embed_query(text)\nprint(f\"Embedding length: {len(embedding)}\")","lang":"python","description":"This quickstart demonstrates how to instantiate and use ChatLiteLLM for basic chat completions and LiteLLMEmbeddings for text embedding. Ensure the relevant API key (e.g., OPENAI_API_KEY) is set in your environment or passed directly to the constructor. The `model` parameter should specify the desired LLM provider and model in LiteLLM's unified format."},"warnings":[{"fix":"Upgrade `langchain-litellm` to v0.6.2 or later (`pip install -U langchain-litellm`) to ensure malicious `litellm` versions are excluded. Always pin `litellm` and `langchain-litellm` versions in production environments (`litellm==x.y.z`, `langchain-litellm==a.b.c`) and review dependencies.","message":"Critical supply chain attack on `litellm` (versions 1.82.7 and 1.82.8) in March 2026. These versions contained credential-stealing malware. `langchain-litellm` version 0.6.2 and above explicitly excludes these compromised `litellm` versions from its dependencies.","severity":"breaking","affected_versions":"litellm==1.82.7, litellm==1.82.8 (indirectly via older langchain-litellm versions)"},{"fix":"Review the specific behavior with Claude models and tool calling if 'thinking' is enabled. Test tool invocation carefully after upgrading to v0.6.4.","message":"When using Claude models, `tool_choice` might be automatically downgraded to `auto` if 'thinking' is enabled. This can alter expected tool-use behavior.","severity":"gotcha","affected_versions":"v0.6.4 and later"},{"fix":"Ensure the `model` parameter is correctly formatted (e.g., `openai/gpt-3.5-turbo`, `anthropic/claude-3-opus-20240229`) and the corresponding API key (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) is set in environment variables or passed explicitly.","message":"A common error is `litellm.BadRequestError: LLM Provider NOT provided`. This occurs when the underlying LLM provider for LiteLLM is not correctly specified or configured, often due to missing `model` parameters or incorrect API key environment variables.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Consult LiteLLM's documentation on proxy usage and `langchain-litellm`'s specific configurations for connecting to a proxy. You might need to configure LiteLLM directly or wrap the `ChatLiteLLM` instance with custom HTTP client logic.","message":"Integrating `langchain-litellm` with a LiteLLM Proxy often requires special handling for authentication headers (e.g., `Authorization: Bearer <token>`). LangChain's internal HTTP request mechanisms might not easily expose the ability to inject these custom headers, leading to integration difficulties.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure your environment's Pydantic version is compatible. If encountering issues with structured outputs or metadata handling, specifically review changes in `v0.6.4` related to Pydantic and update your code accordingly.","message":"Version `0.6.4` included a fix to `extract reasoning tokens and handle pydantic usage in metadata`. This could imply subtle changes or sensitivities related to Pydantic versions and how structured outputs or metadata are processed, which is a frequent source of issues in the LangChain ecosystem.","severity":"gotcha","affected_versions":"Potentially affects applications relying on specific Pydantic behavior with metadata prior to v0.6.4."}],"env_vars":null,"last_verified":"2026-04-12T00:00:00.000Z","next_check":"2026-07-11T00:00:00.000Z"}