LangChain Community Integrations
LangChain Community provides a collection of third-party integrations for the LangChain ecosystem. These integrations implement base interfaces defined in LangChain Core, enabling connectivity to various LLM providers, document loaders, vector stores, and other tools within any LangChain application. It is actively maintained and currently at version 0.4.1.
Warnings
- breaking Minor version bumps of `langchain-community` (e.g., 0.x to 0.(x+1)) may introduce breaking changes. Unlike `langchain` and `langchain-core`, it does not strictly adhere to semantic versioning, owing to the volume of community contributions and third-party integrations.
- breaking Many third-party integrations (e.g., specific Document Loaders, Vector Stores, LLMs) were moved from the main `langchain` package to `langchain-community` or dedicated provider packages (e.g., `langchain-openai`, `langchain-anthropic`) starting with LangChain v0.2. Attempting to import from old paths will result in `ImportError`.
- gotcha Using specific integrations within `langchain-community` often requires installing additional, sometimes optional, Python packages (e.g., `beautifulsoup4` for `WebBaseLoader`, `faiss-cpu` for `FAISS`, `openai` for `langchain-openai` classes). A `ModuleNotFoundError` for a seemingly related package means a sub-dependency is missing.
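Because both moved import paths and missing sub-dependencies surface as import errors, it can help to probe for a package before importing from it. A minimal sketch; the `has_integration` helper is illustrative, not part of LangChain:

```python
import importlib.util

def has_integration(module_name: str) -> bool:
    """Return True if the named package is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Guarded import of a community loader (new, post-v0.2 path):
if has_integration("langchain_community"):
    from langchain_community.document_loaders import TextLoader
else:
    print("Missing dependency; run: pip install langchain-community")
```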
Install
- pip install langchain-community
Imports
- TextLoader
from langchain_community.document_loaders import TextLoader
- FakeEmbeddings
from langchain_community.embeddings import FakeEmbeddings
- FAISS
from langchain_community.vectorstores import FAISS
- ChatOpenAI
from langchain_openai import ChatOpenAI
Quickstart
import os
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI # Requires 'pip install langchain-openai'
# Create a dummy text file for demonstration
with open("example.txt", "w") as f:
    f.write("LangChain is a framework for developing applications powered by large language models (LLMs).")
    f.write("\nIt enables applications that are context-aware and can reason over data.")
# Set your OpenAI API key (replace with actual key or environment variable)
# For a real application, use `os.environ.get("OPENAI_API_KEY", "")`
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "sk-YOUR_OPENAI_KEY_HERE")
# 1. Load documents using a loader from langchain-community
loader = TextLoader("example.txt")
documents = loader.load()
print(f"Loaded {len(documents)} document(s).")
# 2. Create embeddings (using a fake one for simplicity in 'community' quickstart)
# For real use, install a provider package like `langchain-openai` and use its embeddings.
embeddings = FakeEmbeddings(size=256)  # 'size' (the embedding dimension) is required
# 3. Create a vector store from documents and embeddings
vectorstore = FAISS.from_documents(documents, embeddings)
print("Vector store created.")
# 4. Perform a similarity search as a retriever
retriever = vectorstore.as_retriever()
# 5. Define a Chat Model (from a dedicated provider package, e.g., langchain-openai)
# Ensure OPENAI_API_KEY is set in your environment.
chat_model = ChatOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
# 6. Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI assistant. Answer the question based ONLY on the provided context."),
    ("human", "Context: {context}\nQuestion: {question}")
])
# 7. Build a RAG chain: the retriever's Documents are joined into one context
#    string, and RunnablePassthrough forwards the raw question unchanged.
from langchain_core.runnables import RunnablePassthrough

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | chat_model
    | StrOutputParser()
)
# 8. Invoke the chain
question = "What is LangChain's primary purpose?"
response = chain.invoke(question)
print(f"\nQuestion: {question}")
print(f"Answer: {response}")
# Clean up the dummy file
os.remove("example.txt")