LangServe
LangServe is a Python library that simplifies deploying LangChain runnables and chains as REST APIs. It builds on FastAPI to provide a robust server with a built-in playground UI for testing. The library is under active development (version 0.3.3 at the time of writing) and tracks its core dependencies, such as `langchain-core`, closely.
Common errors
- ModuleNotFoundError: No module named 'fastapi' OR ModuleNotFoundError: No module named 'uvicorn'
  cause: `fastapi` and `uvicorn` are peer dependencies for running a server, but not strict requirements of the `langserve` package itself.
  fix: Install them manually: `pip install fastapi uvicorn`
- AttributeError: module 'pydantic' has no attribute 'json_schema' OR TypeError: 'Type[Any]' object is not subscriptable
  cause: A Pydantic version mismatch. LangServe v0.3.0+ requires Pydantic V2, but another dependency might be forcing Pydantic V1, or your code is using V1 syntax.
  fix: Ensure all dependencies requiring Pydantic are compatible with V2. Upgrade your `pydantic` package to `pydantic>=2` and verify other `langchain` ecosystem packages are also updated to their latest versions.
- AttributeError: 'BaseChatModel' object has no attribute 'invoke' OR TypeError: 'RunnableSequence' object is not callable
  cause: Incompatibility between `langserve` and `langchain-core` versions. The `langchain-core` API underwent significant changes (e.g., the introduction of `invoke`, `batch`, and `stream`).
  fix: Upgrade both `langserve` and `langchain-core` to their latest compatible versions (`pip install --upgrade langserve langchain-core`). For `langserve==0.3.3`, ensure `langchain-core>=1.0.0`.
- ModuleNotFoundError: No module named 'langchain_openai'
  cause: Language model integrations live in separate packages (e.g., `langchain-openai`, `langchain-anthropic`) and are not part of `langserve` or `langchain-core`.
  fix: Install the specific integration package you need: `pip install langchain-openai` for OpenAI models.
Warnings
- breaking LangServe v0.3.0 and later upgraded to Pydantic V2. This can cause `TypeError`, `AttributeError`, or `ValidationError` issues if other installed packages (or your own code) are still using Pydantic V1, or if you try to use V1 syntax.
- breaking LangServe v0.3.3 is explicitly compatible with `langchain-core` V1. Older `langserve` versions might not work correctly, leading to `AttributeError` or unexpected behavior when interacting with modern LangChain components.
- gotcha While LangServe integrates with FastAPI, `fastapi` and `uvicorn` are not strict dependencies of the `langserve` package itself. Running a LangServe API server requires these to be installed separately.
- deprecated Prior to v0.2.0, some internal module structures or import paths might have differed due to rapid LangChain ecosystem evolution. Directly importing internal components from older LangChain or LangServe versions might lead to `ModuleNotFoundError` or `ImportError` after upgrading.
- gotcha LangServe (especially in versions <= 0.2) might not fully support OpenAPI docs generation when Pydantic V2 is used, though API endpoints and the playground typically function as expected.
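Since most of the errors above trace back to version skew, a quick diagnostic can save time. The sketch below uses only the standard library's `importlib.metadata`; the package list is just the set discussed above and can be adjusted.

```python
from importlib.metadata import PackageNotFoundError, version


def report_versions(packages):
    """Map each package name to its installed version, or 'not installed'."""
    report = {}
    for name in packages:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = "not installed"
    return report


if __name__ == "__main__":
    # Packages whose versions commonly need to be checked together.
    print(report_versions(
        ["langserve", "langchain-core", "pydantic", "fastapi", "uvicorn"]
    ))
```

Comparing the reported versions against the compatibility notes above (e.g., `pydantic>=2` and `langchain-core>=1.0.0` for `langserve==0.3.3`) usually pinpoints the mismatch.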
Install
- pip install langserve
- pip install "langserve[server]" "langserve[client]" langchain-openai
Imports
- add_routes
from langserve import add_routes
- FastAPI
from fastapi import FastAPI
- ChatOpenAI
from langchain_openai import ChatOpenAI
- ChatPromptTemplate
from langchain_core.prompts import ChatPromptTemplate
- Runnable
from langchain.schema.runnable import Runnable  # legacy path
from langchain_core.runnables import Runnable
Quickstart
import uvicorn
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes

# ChatOpenAI reads the OPENAI_API_KEY environment variable by default.
# In production, use proper secrets management rather than hardcoded keys.

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)

# A simple runnable (e.g., just an LLM)
add_routes(
    app,
    ChatOpenAI(),
    path="/openai",
)

# A more complex runnable (e.g., prompt + LLM)
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()
chain = prompt | model

add_routes(
    app,
    chain,
    path="/joke",
)

if __name__ == "__main__":
    # Run directly (python server.py), or with auto-reload:
    # uvicorn server:app --reload
    uvicorn.run(app, host="0.0.0.0", port=8000)
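Once the server is running, its routes can be called from any HTTP client, not only LangServe's own `RemoteRunnable` helper. The sketch below uses only the standard library; the URL assumes the Quickstart server is running locally on port 8000, and the request body follows LangServe's `/invoke` convention of wrapping inputs in an `"input"` key.

```python
import json
from urllib import request


def build_invoke_request(base_url, path, input_value):
    """Build a POST request for a LangServe /invoke endpoint."""
    url = f"{base_url.rstrip('/')}/{path.strip('/')}/invoke"
    body = json.dumps({"input": input_value}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # Requires the Quickstart server to be running on localhost:8000.
    req = build_invoke_request("http://localhost:8000", "/joke", {"topic": "cats"})
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["output"])
```

For streaming, LangServe also exposes a `/stream` endpoint per route, and the playground UI is served under `{path}/playground/`.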