LangServe

0.3.3 · active · verified Thu Apr 16

LangServe is a Python library that simplifies deploying LangChain runnables and chains as REST APIs. It builds on FastAPI to provide a robust server, including a built-in playground UI for interactive testing. The library is under active development; version 0.3.3 is the latest at the time of writing, and releases closely track core dependencies such as `langchain-core`.
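Every route LangServe mounts wraps a LangChain "runnable": an object with `invoke`/`batch`/`stream` methods that can be piped together with `|`. As a stdlib-only sketch of the composition idea (the `Step` class below is a hypothetical stand-in, not LangChain's actual `Runnable`):

```python
# Illustrative sketch of how runnable-style objects compose with `|`.
# `Step` is a hypothetical stand-in, not LangChain's Runnable class.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` yields a new Step that runs a, then feeds its output to b.
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda d: f"Tell me a joke about {d['topic']}")
model = Step(lambda text: f"[model response to: {text}]")
chain = prompt | model

print(chain.invoke({"topic": "cats"}))
# → [model response to: Tell me a joke about cats]
```

The real `prompt | model` in the quickstart below works the same way: each stage's output becomes the next stage's input.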

Install

pip install "langserve[all]"

Imports

from langserve import add_routes

Quickstart

This quickstart sets up a basic LangServe API server with two endpoints: one for a raw OpenAI chat model and one for a simple 'joke' chain (prompt + chat model). It uses FastAPI and uvicorn. Ensure `OPENAI_API_KEY` is set in your environment; the placeholder key below is only a fallback and will not authenticate. You'll need `langserve`, `fastapi`, `uvicorn`, and `langchain-openai` installed.

import os
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes
import uvicorn

# Set your OpenAI API key from environment variable for local testing.
# In a production environment, use proper secrets management.
openai_api_key = os.environ.get('OPENAI_API_KEY', 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)

# A simple runnable (e.g., just an LLM)
add_routes(
    app,
    ChatOpenAI(api_key=openai_api_key),
    path="/openai",
)

# A more complex runnable (e.g., prompt + LLM)
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI(api_key=openai_api_key)
chain = prompt | model

add_routes(
    app,
    chain,
    path="/joke",
)

if __name__ == "__main__":
    # Save this file as e.g. server.py and run it directly (python server.py),
    # or start it with auto-reload via: uvicorn server:app --reload
    uvicorn.run(app, host="0.0.0.0", port=8000)
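For each mounted runnable, `add_routes` exposes a standard set of endpoints, such as POST `/joke/invoke`, POST `/joke/batch`, POST `/joke/stream`, and a playground UI under `/joke/playground/`. The `/invoke` endpoint wraps the runnable's input in a JSON envelope; a stdlib-only sketch of building and reading that envelope (exact response metadata may vary by version):

```python
import json

def build_invoke_request(chain_input: dict) -> str:
    # /invoke endpoints expect the runnable's input under "input";
    # an optional "config" key can carry tags, metadata, etc.
    return json.dumps({"input": chain_input})

def read_invoke_response(body: str):
    # Responses place the runnable's result under "output".
    return json.loads(body)["output"]

payload = build_invoke_request({"topic": "cats"})
# e.g. POST this payload to http://localhost:8000/joke/invoke
print(payload)
# → {"input": {"topic": "cats"}}
```

From the command line, the same call looks like `curl -X POST http://localhost:8000/joke/invoke -H 'Content-Type: application/json' -d '{"input": {"topic": "cats"}}'`. LangServe's client library also provides `RemoteRunnable`, which speaks this protocol for you.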
