LlamaIndex LangChain LLMs Integration

0.8.0 · active · verified Thu Apr 16

The `llama-index-llms-langchain` package is an integration layer that lets you use LangChain's collection of Large Language Models (LLMs) within the LlamaIndex framework. It acts as a bridge: a LangChain LLM instance is wrapped so that it conforms to LlamaIndex's LLM interface. The current version is `0.8.0`, and the package is updated alongside LlamaIndex core releases.

Install

pip install llama-index-llms-langchain
pip install langchain-openai  # provider package used in the quickstart

Imports

from llama_index.llms.langchain import LangChainLLM

Quickstart

This quickstart demonstrates how to instantiate a LangChain LLM (using `ChatOpenAI` as an example), wrap it using `LangChainLLM` from `llama-index-llms-langchain`, and then configure it as the default LLM within LlamaIndex's `Settings`. It shows both completion and chat interactions.

import os
from langchain_openai import ChatOpenAI
from llama_index.llms.langchain import LangChainLLM
from llama_index.core import Settings

# 1. Initialize a LangChain LLM instance
# Make sure to install 'langchain-openai' (pip install langchain-openai)
# and set your OPENAI_API_KEY environment variable.
# Using os.environ.get for safe execution in environments without the key.
lc_llm = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo", api_key=os.environ.get("OPENAI_API_KEY", "test_key"))

# 2. Wrap the LangChain LLM with LlamaIndex's LangChainLLM wrapper
llm = LangChainLLM(llm=lc_llm)

# 3. Use the wrapped LLM with LlamaIndex
# You can either set it globally or pass it directly to components.
Settings.llm = llm

# Example: Generate a completion
response = Settings.llm.complete("Tell me a short story about a magical cat.")
print(response.text)

# Example: Generate a chat response
from llama_index.core.llms import ChatMessage, MessageRole

chat_response = Settings.llm.chat([
    ChatMessage(role=MessageRole.USER, content="What is the capital of France?")
])
print(chat_response.message.content)
