Perplexity LangChain Integration

1.1.0 · active · verified Thu Apr 16

langchain-perplexity is an official LangChain integration package that connects Perplexity AI's conversational models and real-time web search with the LangChain framework. It lets developers build LLM applications with up-to-date, search-grounded answers, streaming output, and search controls through Perplexity's API. The library is actively maintained as part of the LangChain ecosystem, with current version 1.1.0, and typically follows the release cadence of other LangChain partner packages.

Install

pip install -U langchain-perplexity

Imports

from langchain_perplexity import ChatPerplexity
Quickstart

This quickstart demonstrates how to set up and use `ChatPerplexity` for conversational AI with real-time web search capabilities. It includes setting the API key and invoking a simple chat chain.

import os
from langchain_perplexity import ChatPerplexity
from langchain_core.prompts import ChatPromptTemplate

# Set your Perplexity API key as an environment variable.
# In production, load it from a .env file or a secrets manager
# rather than hard-coding it.
os.environ.setdefault("PERPLEXITY_API_KEY", "your_perplexity_api_key_here")

# Initialize the chat model
chat = ChatPerplexity(model="sonar")

# Define a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}")
])

# Create a chain
chain = prompt | chat

# Invoke the chain with a question
response = chain.invoke({"input": "What breakthroughs in fusion energy have been announced this year?"})
print(response.content)

# Example with the larger search-focused 'sonar-pro' model.
# Additional Perplexity API parameters (e.g. web_search_options)
# can be passed through model_kwargs; consult the Perplexity API
# reference for the currently supported fields.
# pro_chat = ChatPerplexity(
#     model="sonar-pro",
#     model_kwargs={"web_search_options": {"search_context_size": "high"}},
# )
# pro_response = pro_chat.invoke("How does the electoral college work?")
# print(pro_response.content)
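The streaming support mentioned in the overview uses LangChain's standard `.stream()` interface. Below is a minimal sketch; it is guarded so the live API call only runs when the package is installed and `PERPLEXITY_API_KEY` is set, and the `"sonar"` model name follows the quickstart above.

```python
import os

try:
    from langchain_perplexity import ChatPerplexity
except ImportError:  # package not installed in this environment
    ChatPerplexity = None

streamed = []  # collects the chunk texts as they arrive
if ChatPerplexity is not None and os.environ.get("PERPLEXITY_API_KEY"):
    chat = ChatPerplexity(model="sonar")
    # .stream() yields message chunks; print them as they arrive
    for chunk in chat.stream("Give one recent fusion-energy headline."):
        streamed.append(chunk.content)
        print(chunk.content, end="", flush=True)
else:
    print("langchain-perplexity or PERPLEXITY_API_KEY missing; skipping live call.")
```

The same guard pattern works for `.astream()` in async code.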
