{"id":1532,"library":"langgraph-api","title":"LangGraph API","description":"langgraph-api provides a convenient API layer for serving LangGraph agents as RESTful or RPC endpoints. It simplifies the deployment of complex, stateful LLM agents by integrating with FastAPI and Pydantic, making it easy to expose agent functionality over HTTP. It is currently at version 0.7.98 and, as part of the rapidly evolving LangChain ecosystem, receives frequent updates.","status":"active","version":"0.7.98","language":"en","source_language":"en","source_url":"https://github.com/langchain-ai/langgraph-api","tags":["langgraph","fastapi","api","llm","agent","langchain"],"install":[{"cmd":"pip install langgraph-api","lang":"bash","label":"Install core library"},{"cmd":"pip install langchain-openai","lang":"bash","label":"Install an LLM provider (example)"}],"dependencies":[{"reason":"Core dependency for defining agents.","package":"langgraph"},{"reason":"The web framework used to build the API.","package":"fastapi"},{"reason":"The ASGI server used to run the API.","package":"uvicorn"},{"reason":"Used for data validation and serialization.","package":"pydantic"}],"imports":[{"symbol":"create_langgraph_api","correct":"from langgraph_api import create_langgraph_api"},{"note":"Primarily used for type hinting or advanced configuration.","symbol":"LangGraphAPI","correct":"from langgraph_api import LangGraphAPI"}],"quickstart":{"code":"import os\nfrom langchain_openai import ChatOpenAI\nfrom langchain.agents import AgentExecutor, create_react_agent\nfrom langchain_core.prompts import PromptTemplate\nfrom langchain_core.tools import tool\nfrom langgraph_api import create_langgraph_api\nfrom fastapi import FastAPI\n\n# Set the API key for OpenAI (or any other LLM provider).\n# The fallback is a placeholder so the app can be constructed; real calls require a valid key.\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"sk-mock-key-for-test\")\n\n# 1. 
Define a tool\n@tool\ndef multiply(a: int, b: int) -> int:\n    \"\"\"Multiply two integers together.\"\"\"\n    return a * b\n\ntools = [multiply]\n\n# 2. Define an LLM\nllm = ChatOpenAI(temperature=0)\n\n# 3. Define a ReAct prompt template.\n# create_react_agent requires the {tools}, {tool_names}, {input}, and {agent_scratchpad} variables.\nprompt = PromptTemplate.from_template(\"\"\"\nYou are a helpful assistant. Answer the following questions as best you can.\nYou have access to the following tools:\n{tools}\n\nUse the following format:\nQuestion: the input question\nThought: reason about what to do next\nAction: one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (Thought/Action/Action Input/Observation can repeat)\nThought: I now know the final answer\nFinal Answer: the answer to the original question\n\nQuestion: {input}\n{agent_scratchpad}\n\"\"\")\n\n# 4. Create a ReAct agent\nagent = create_react_agent(llm, tools, prompt)\n\n# 5. Wrap it in an AgentExecutor (a LangGraph-compatible runnable)\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n\n# 6. Create the LangGraph API app\napp = FastAPI(title=\"My Agent API\")\nlanggraph_api_app = create_langgraph_api(app, agent_executor, path=\"/agent\")\n\n# To run this application:\n#   1. Save the code as 'main.py' (or any other name).\n#   2. Run from your terminal: uvicorn main:app --port 8000 --reload\n#   3. Access it at http://localhost:8000/agent/invoke or http://localhost:8000/agent/stream_log\n\nprint(\"LangGraph API application created. Run with: uvicorn main:app --port 8000 --reload\")\n","lang":"python","description":"Demonstrates how to create a simple `AgentExecutor` and wrap it with `create_langgraph_api` to expose it as a FastAPI application, making the agent accessible via HTTP endpoints such as `/agent/invoke` and `/agent/stream_log`."},"warnings":[{"fix":"Consult the official documentation and GitHub releases for breaking changes before upgrading. Pin specific patch versions in your `requirements.txt` to avoid unexpected breakage.","message":"The library is in active development (currently in `0.x.x` versions) and does not strictly adhere to semantic versioning. 
This means breaking changes can occur in minor or patch releases, so upgrades may require code changes.","severity":"breaking","affected_versions":"<1.0.0"},{"fix":"Understand LangGraph's state management thoroughly. If shared state or per-session isolation is required, implement custom `state_getter` and `state_setter` functions when creating the API, ensuring state is correctly scoped per user or session.","message":"By default, `langgraph-api` manages agent state per request/session. Without careful configuration (e.g., providing a custom `state_getter`), concurrent users or requests may not have isolated state and can interact with the agent's state in unexpected ways, leading to cross-talk or incorrect responses.","severity":"gotcha","affected_versions":"All"},{"fix":"Implement robust authentication and authorization using FastAPI's security features (e.g., OAuth2, API keys) or integrate with an API gateway. This is critical for any production deployment.","message":"The API layer does not provide built-in authentication or authorization. Deploying a `langgraph-api` endpoint without external security measures can expose your agent to unauthorized access.","severity":"gotcha","affected_versions":"All"},{"fix":"Ensure the runnable passed to `create_langgraph_api` is a properly configured `AgentExecutor` or a custom runnable that matches the expected `input_messages`/`output_messages` interface.","message":"`create_langgraph_api` is designed to wrap `BaseAgentExecutor` or other LangGraph runnables that conform to a particular input/output schema. Wrapping generic LangChain runnables or custom graphs that do not meet these expectations can lead to runtime errors or unexpected behavior.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}