LangChain LiteLLM Integration

0.6.4 · active · verified Sun Apr 12

langchain-litellm is an integration package that connects LangChain with LiteLLM, a library that provides a unified API for calling and managing 100+ Large Language Models (LLMs) from providers such as Anthropic, Azure, and Hugging Face. Within the LangChain framework it exposes chat models, embeddings, and OCR document loading through a single interface. The library is actively maintained with frequent patch and minor releases, follows semantic versioning, and is currently at version 0.6.4.

Warnings

Install
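Assuming the package is published on PyPI under the same name, a typical install looks like:

```shell
pip install -U langchain-litellm
```

This pulls in LiteLLM as a dependency; provider-specific SDKs (e.g., the OpenAI client) are resolved by LiteLLM itself.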

Imports

Quickstart

This quickstart demonstrates how to instantiate and use `ChatLiteLLM` for basic chat completions and `LiteLLMEmbeddings` for text embedding. Ensure the relevant API key (e.g., `OPENAI_API_KEY`) is set in your environment or passed directly to the constructor. The `model` parameter should specify the desired LLM provider and model in LiteLLM's unified format.

import os
from langchain_litellm import ChatLiteLLM
from langchain_core.messages import HumanMessage

# Set your API key for LiteLLM's underlying provider (e.g., OpenAI).
# In a real application, load the key from a secure secret store.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-openai-key")

# Instantiate ChatLiteLLM, specifying the model in LiteLLM's format
# (e.g., 'openai/gpt-3.5-turbo' for OpenAI)
chat_model = ChatLiteLLM(model="openai/gpt-3.5-turbo")

# Invoke the chat model
response = chat_model.invoke([HumanMessage(content="Hello, how are you?")])

print(response.content)
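Because `ChatLiteLLM` plugs into LangChain's standard chat-model interface, the usual Runnable methods such as `stream` should also be available. A minimal sketch, assuming a valid API key is set (this calls a live API, so it will not run offline):

```python
from langchain_litellm import ChatLiteLLM

chat_model = ChatLiteLLM(model="openai/gpt-3.5-turbo")

# Stream the reply token-by-token instead of waiting for the full message.
for chunk in chat_model.stream("Write a one-line haiku about autumn."):
    # Each chunk carries a partial message; print its text as it arrives.
    print(chunk.content, end="", flush=True)
print()
```

Streaming uses the same `model` string as `invoke`; only the consumption pattern changes.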

# Example for LiteLLMEmbeddings
from langchain_litellm import LiteLLMEmbeddings

# Note: API key can be passed explicitly if not in environment for embeddings
embeddings = LiteLLMEmbeddings(
    model="openai/text-embedding-3-small",
    api_key=os.environ.get("OPENAI_API_KEY", "sk-your-openai-key")
)

text = "This is a test document."
embedding = embeddings.embed_query(text)
print(f"Embedding length: {len(embedding)}")
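LangChain embedding classes also expose `embed_documents` for batch embedding, so `LiteLLMEmbeddings` should accept a list of texts as well. A hedged sketch with the same model (again requires a real API key and network access):

```python
from langchain_litellm import LiteLLMEmbeddings

embeddings = LiteLLMEmbeddings(model="openai/text-embedding-3-small")

docs = ["First document.", "Second document."]
vectors = embeddings.embed_documents(docs)
print(len(vectors))  # one embedding vector per input document
```

Batch embedding is the path vector stores use when indexing, so this is the method to reach for when loading many documents.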
