LangChain MCP Adapters
This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph agents. It converts MCP tools into LangChain-compatible tools, manages connections to multiple MCP servers, and integrates the loaded tools into LangChain workflows. The current version is 0.2.2, and the project is actively developed, with version 0.2.0 released in December 2025 and updates landing regularly.
Warnings
- gotcha Connecting to multiple MCP servers can lead to high token consumption due to all tool schemas being preloaded into the LLM's system prompt. This 'token overhead' can be significant, especially with many verbose tool definitions.
- gotcha Each MCP server can have distinct authentication requirements (API keys, OAuth, etc.), leading to 'Auth Fragmentation' and complex credential management across multiple development and production environments.
- gotcha Mismatches between an MCP tool's input/output JSON Schema and what LangChain expects, or an invalid connection configuration passed to `MultiServerMCPClient`, can surface as validation errors (e.g. a `ZodError` from a TypeScript-based server) or as silent failures.
- gotcha Managing different transport protocols (stdio, HTTP, SSE) and handling connection issues (e.g., server startup delays, unreachable servers) can add complexity to setup and debugging.
- gotcha When connecting to multiple MCP servers, tools from different servers might have conflicting names, leading to ambiguity or unexpected behavior if not properly handled.
Install
pip install langchain-mcp-adapters langchain-core
Imports
- MultiServerMCPClient
from langchain_mcp_adapters.client import MultiServerMCPClient
- load_mcp_tools
from langchain_mcp_adapters.tools import load_mcp_tools
- ClientSession
from mcp import ClientSession
- StdioServerParameters
from mcp import StdioServerParameters
Quickstart
import os
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

# NOTE: For this example to be runnable, you need an MCP server.
# For a 'math' server, you could use fastmcp:
#
# # math_server.py
# from fastmcp import FastMCP
#
# mcp = FastMCP("Math")
#
# @mcp.tool()
# def add(a: int, b: int) -> int:
#     """Add two numbers"""
#     return a + b
#
# if __name__ == "__main__":
#     mcp.run(transport="stdio")
#
# With the stdio transport below, the client spawns the server as a
# subprocess for you; just point "args" at the script.

async def main():
    # Set your LLM API key as an environment variable,
    # e.g. export OPENAI_API_KEY="your_key_here"
    # or, for Anthropic: export ANTHROPIC_API_KEY="your_key_here"
    if not os.environ.get("OPENAI_API_KEY") and not os.environ.get("ANTHROPIC_API_KEY"):
        print("Please set the OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable.")
        return

    client = MultiServerMCPClient(
        {
            "math": {
                "transport": "stdio",            # local subprocess communication
                "command": "python",             # interpreter used to launch the server
                "args": ["/path/to/your/math_server.py"],  # ABSOLUTE path to your math_server.py
            },
            "weather": {
                "transport": "streamable_http",  # HTTP-based remote server
                "url": "http://localhost:8000/mcp",  # ensure your weather server is running on port 8000
            },
        }
    )

    # Retrieve tools from the configured MCP servers
    tools = await client.get_tools()
    print(f"Loaded {len(tools)} tools.")

    # Create an agent using LangChain's create_agent.
    # Choose your LLM, e.g. "openai:gpt-4o" or "anthropic:claude-3-opus-20240229".
    agent = create_agent("openai:gpt-4o", tools)

    # Invoke the agent with a message that uses a tool
    print("\nInvoking agent for math query...")
    math_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what's (3 + 5) x 12?")]}
    )
    print("Math Agent Response:", math_response)

    print("\nInvoking agent for weather query (fails if the weather server is not running)...")
    weather_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what is the weather in nyc?")]}
    )
    print("Weather Agent Response:", weather_response)

if __name__ == "__main__":
    asyncio.run(main())