Perplexity LangChain Integration
langchain-perplexity is an official LangChain integration package that connects Perplexity AI's powerful conversational models and search capabilities with the LangChain framework. It allows developers to build LLM applications with real-time web search, streaming, and advanced search controls using Perplexity's API. The library is actively maintained as part of the LangChain ecosystem, with current version 1.1.0, and typically follows the release cadence of other LangChain partner packages.
Common errors
- ModuleNotFoundError: No module named 'langchain_perplexity'
  Cause: The `langchain-perplexity` package is not installed in your Python environment.
  Fix: Run `pip install langchain-perplexity` to install the package.
- ValueError: Did not find perplexity_api_key, please add an environment variable `PERPLEXITY_API_KEY` or pass `perplexity_api_key` as a named parameter.
  Cause: The Perplexity API key is not provided through an environment variable or directly in the `ChatPerplexity` constructor.
  Fix: Set the environment variable with `export PERPLEXITY_API_KEY="YOUR_API_KEY"` (or `os.environ["PERPLEXITY_API_KEY"] = "YOUR_API_KEY"` in Python code), or pass it during initialization: `ChatPerplexity(perplexity_api_key="YOUR_API_KEY", model="sonar")`.
- json.decoder.JSONDecodeError: Expecting value: line X column Y (char Z) (or a similar JSON parsing error with structured output)
  Cause: Perplexity models, particularly reasoning-capable ones, may embed non-JSON content (such as internal thought processes in `<think>` tags) in their responses, which interferes with LangChain's `with_structured_output` parsing.
  Fix: Avoid `with_structured_output` with Perplexity models if this occurs. Inspect the raw model output for extraneous text, or implement a custom output parser that strips non-JSON elements before parsing.
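The JSON-parsing workaround can be sketched as a small pre-processing step. This is a minimal illustration using only the standard library; `parse_json_response` is a hypothetical helper, not part of langchain-perplexity, and the exact tag format a given model emits may differ:

```python
import json
import re

def parse_json_response(text: str) -> dict:
    """Strip reasoning blocks (e.g. <think>...</think>) that some
    reasoning models may emit, then parse the remaining JSON."""
    # Remove any <think>...</think> sections, including newlines inside them.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return json.loads(cleaned)

# Simulated raw model output with an embedded reasoning block:
raw = '<think>The user wants city data as JSON.</think>\n{"city": "Paris", "country": "France"}'
print(parse_json_response(raw))  # {'city': 'Paris', 'country': 'France'}
```

A helper like this could also be wrapped in a custom LangChain output parser and chained after the model in place of `with_structured_output`.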
Warnings
- breaking: The LangChain ecosystem, including `langchain-core`, undergoes frequent updates and breaking changes; you may need to update import paths, class names, or method signatures in your `langchain-perplexity` integration code to maintain compatibility.
- gotcha: The Perplexity API key must be correctly configured. The library expects the key either in an environment variable named `PERPLEXITY_API_KEY` (or `PPLX_API_KEY` for older patterns) or passed directly to the `ChatPerplexity` constructor via the `perplexity_api_key` parameter.
- gotcha: Using `ChatPerplexity` with LangChain's `with_structured_output` functionality can fail if Perplexity models (especially reasoning models) include their intermediate thought processes (e.g., text wrapped in `<think>...</think>`) in the output, breaking JSON parsing.
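The key-lookup behaviour described in the second warning can be sketched as a small helper. This is illustrative only; `resolve_perplexity_key` is a hypothetical function, not part of langchain-perplexity, and the library performs its own lookup internally:

```python
import os

def resolve_perplexity_key() -> str:
    """Resolve the API key the way described above: prefer
    PERPLEXITY_API_KEY, fall back to the older PPLX_API_KEY name."""
    key = os.environ.get("PERPLEXITY_API_KEY") or os.environ.get("PPLX_API_KEY")
    if not key:
        raise ValueError(
            "Did not find perplexity_api_key, please add an environment "
            "variable `PERPLEXITY_API_KEY` or pass `perplexity_api_key` "
            "as a named parameter."
        )
    return key

# Demo value only; use a real key in practice.
os.environ["PERPLEXITY_API_KEY"] = "pplx-demo"
print(resolve_perplexity_key())  # pplx-demo
```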
Install
- pip install langchain-perplexity
Imports
- ChatPerplexity
  from langchain_perplexity import ChatPerplexity
- PerplexitySearchRetriever
  from langchain_perplexity.retrievers import PerplexitySearchRetriever
- PerplexitySearchResults
  from langchain_perplexity.tools import PerplexitySearchResults
- WebSearchOptions
  from langchain_perplexity import WebSearchOptions
Quickstart
import os
from langchain_perplexity import ChatPerplexity
from langchain_core.prompts import ChatPromptTemplate
# Set your Perplexity API key as an environment variable.
# In production, load it from a .env file or a secrets manager instead.
os.environ.setdefault("PERPLEXITY_API_KEY", "your_perplexity_api_key_here")
# Initialize the chat model
chat = ChatPerplexity(model="sonar")
# Define a prompt template
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("human", "{input}")
])
# Create a chain
chain = prompt | chat
# Invoke the chain with a question
response = chain.invoke({"input": "What breakthroughs in fusion energy have been announced this year?"})
print(response.content)
# Example with Pro Search (requires 'sonar-pro' model and WebSearchOptions)
# from langchain_perplexity import WebSearchOptions
# pro_chat = ChatPerplexity(
# model="sonar-pro",
# # web_search_options=WebSearchOptions(search_type="pro") # Uncomment if specific search_type needed
# )
# pro_response = pro_chat.invoke("How does the electoral college work?")
# print(pro_response.content)