LangChain Community Integrations

0.4.1 · active · verified Sat Mar 28

LangChain Community provides a collection of third-party integrations for the LangChain ecosystem. These integrations implement the base interfaces defined in LangChain Core, enabling connectivity to LLM providers, document loaders, vector stores, and other tools from any LangChain application.

Install
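The core package installs from PyPI. The quickstart below additionally assumes two optional packages for the OpenAI chat model and the FAISS vector store:

```shell
pip install langchain-community
# Optional packages used by the quickstart below:
pip install langchain-openai faiss-cpu
```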

Quickstart

This quickstart loads a document with `TextLoader` from `langchain-community`, builds a small vector store with `FAISS` and `FakeEmbeddings` (both also from `langchain-community`), and wires them into a basic RAG (Retrieval-Augmented Generation) chain with an OpenAI chat model from `langchain-openai`. It shows `langchain-community` components working alongside a dedicated LLM provider package.

import os
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI # Requires 'pip install langchain-openai'

# Create a dummy text file for demonstration
with open("example.txt", "w") as f:
    f.write("LangChain is a framework for developing applications powered by large language models (LLMs).")
    f.write("\nIt enables applications that are context-aware and can reason over data.")

# Set your OpenAI API key (replace with actual key or environment variable)
# For a real application, use `os.environ.get("OPENAI_API_KEY", "")`
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "sk-YOUR_OPENAI_KEY_HERE")

# 1. Load documents using a loader from langchain-community
loader = TextLoader("example.txt")
documents = loader.load()
print(f"Loaded {len(documents)} document(s).")

# 2. Create embeddings (fake, fixed-size vectors keep this quickstart provider-free)
# For real use, install a provider package such as `langchain-openai` and use its embeddings.
embeddings = FakeEmbeddings(size=256)  # `size` (the vector dimensionality) is a required argument

# 3. Create a vector store from the documents and embeddings
# FAISS requires `pip install faiss-cpu` (or `faiss-gpu`)
vectorstore = FAISS.from_documents(documents, embeddings)
print("Vector store created.")

# 4. Expose the vector store as a retriever for similarity search
retriever = vectorstore.as_retriever()

# 5. Define a Chat Model (from a dedicated provider package, e.g., langchain-openai)
# Ensure OPENAI_API_KEY is set in your environment.
chat_model = ChatOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# 6. Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI assistant. Answer the question based ONLY on the provided context."),
    ("human", "Context: {context}\nQuestion: {question}")
])

# 7. Build a RAG chain: the retriever fills {context}; the raw question passes through
from langchain_core.runnables import RunnablePassthrough

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | chat_model
    | StrOutputParser()
)

# 8. Invoke the chain
question = "What is LangChain's primary purpose?"
response = chain.invoke(question)
print(f"\nQuestion: {question}")
print(f"Answer: {response}")

# Clean up the dummy file
os.remove("example.txt")
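In the chain above, the retriever's list of `Document` objects is stringified directly into the prompt. Production chains usually join each document's `page_content` into a single context string first. A minimal dependency-free sketch of that formatting step, using a stand-in `Doc` class in place of LangChain's `Document`:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document (illustration only)."""
    page_content: str


def format_docs(docs):
    # Join the text of each retrieved document, separated by blank lines,
    # so the prompt receives one readable context string.
    return "\n\n".join(d.page_content for d in docs)


docs = [
    Doc("LangChain is a framework for developing LLM-powered applications."),
    Doc("It enables applications that are context-aware and can reason over data."),
]
print(format_docs(docs))
```

In a real chain, such a function can be piped after the retriever (e.g. `{"context": retriever | format_docs, ...}`), since LCEL coerces plain functions into runnables.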
