LangChain MCP Adapters
This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph agents. It automatically converts MCP tools into LangChain tools, manages connections to multiple MCP servers, and integrates them into LangChain workflows. At the time of writing the current version is 0.2.2; the project releases frequently, so check the changelog (and pin a version in production) before upgrading.
Common errors
- `ModuleNotFoundError: No module named 'langchain_mcp_adapters'`
  - Cause: the `langchain-mcp-adapters` package is not installed in the active Python environment, or your virtual environment is not activated.
  - Fix: install the package with pip: `pip install langchain-mcp-adapters`
- `ValueError: Unsupported transport: streamable-http. Must be one of: 'stdio', 'sse', 'websocket', 'streamable_http'.`
  - Cause: the transport type for Streamable HTTP was written with a hyphen (`streamable-http`) instead of an underscore (`streamable_http`); the library enforces the exact spelling.
  - Fix: use the underscore form in your client configuration: `"transport": "streamable_http"`
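A minimal connection config using the accepted spelling (the server name and URL below are placeholders):

```python
# Connection config as passed to MultiServerMCPClient.
# Note the underscore in "streamable_http" -- the hyphenated
# form "streamable-http" raises the ValueError above.
connections = {
    "weather": {
        "transport": "streamable_http",   # not "streamable-http"
        "url": "http://localhost:8000/mcp",  # placeholder URL
    }
}
```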
- `MCPToolConversionError: Failed to get tools from MCP server: 404`
  - Cause: `MultiServerMCPClient` or `load_mcp_tools` could not reach the MCP server, usually because the URL is wrong, the server is not running, or the client's transport does not match the server's (e.g. the client uses `stdio` while the server expects `streamable_http` or `sse`).
  - Fix: confirm the server is running and reachable at the configured URL, and that the client's `transport` matches what the server speaks (e.g. `"url": "http://localhost:8000/mcp", "transport": "sse"` for an SSE server).
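Before debugging the client config, it can help to verify the server is even listening. This preflight helper is not part of langchain-mcp-adapters; it is a local sketch using only the standard library:

```python
import socket
from urllib.parse import urlparse

def server_reachable(url: str, timeout: float = 2.0) -> bool:
    """Best-effort TCP check that the host/port in an MCP server URL
    accepts connections. A True result does not guarantee the MCP
    endpoint or transport is correct -- only that something is listening."""
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

# A local port with nothing listening fails fast:
print(server_reachable("http://127.0.0.1:9", timeout=0.5))
```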
- `TypeError: Type is not msgpack serializable: ToolMessage`
  - Cause: the `ToolMessage` returned by an MCP tool carries non-text content (artifacts such as an `EmbeddedResource` containing an `AnyUrl`) that the `msgpack` library cannot serialize, typically surfacing when a LangGraph `checkpointer` tries to persist state.
  - Fix: this is an underlying serialization incompatibility. If the artifacts are not needed by the `checkpointer`, strip or flatten them to plain text before the message is persisted. Otherwise, update `langchain-mcp-adapters`, `langchain`, and `langgraph` to their latest versions, since serialization issues are frequently fixed in new releases.
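One hedged workaround for the strip-or-flatten approach, sketched on plain-dict message representations (a real implementation would operate on `ToolMessage` objects, and the `content`/`artifact` field names here are assumptions modeled on LangChain's message fields):

```python
def strip_artifacts(message: dict) -> dict:
    """Drop non-text artifacts from a tool-message dict so the remainder
    is msgpack-friendly. Field names are assumptions for illustration."""
    cleaned = dict(message)
    cleaned.pop("artifact", None)  # drop binary/URL artifacts entirely
    # Flatten any non-string content to plain text
    if not isinstance(cleaned.get("content"), str):
        cleaned["content"] = str(cleaned.get("content", ""))
    return cleaned

msg = {"content": ["chunk1", "chunk2"], "artifact": {"uri": "file:///tmp/x"}}
print(strip_artifacts(msg))
```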
Warnings
- gotcha: Connecting to multiple MCP servers can drive up token consumption, because every tool schema is preloaded into the LLM's system prompt. This token overhead can be significant with many verbose tool definitions.
- gotcha: Each MCP server can have distinct authentication requirements (API keys, OAuth, etc.), leading to auth fragmentation and complex credential management across development and production environments.
- gotcha: Schema misalignment between an MCP tool's input/output JSON schemas and LangChain's expectations, or an invalid connection configuration for `MultiServerMCPClient`, can produce validation errors (e.g. a `ZodError` raised by a TypeScript-based MCP server) or silent failures.
- gotcha: Managing different transport protocols (stdio, Streamable HTTP, SSE) and handling connection issues (server startup delays, unreachable servers) adds complexity to setup and debugging.
- gotcha: When connecting to multiple MCP servers, tools from different servers may have conflicting names, causing ambiguity or unexpected behavior if not handled.
- breaking: `langchain-mcp-adapters` requires Python 3.10 or newer. Installing or running it on Python 3.9 or older fails.
- breaking: The library depends on `langchain` (or a similar AI framework library), which must be installed separately. A `ModuleNotFoundError` indicates a required dependency is missing.
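The name-collision gotcha can be checked before handing tools to an agent. A small sketch over plain tool-name strings (the names below are illustrative; in practice you would collect `tool.name` from the loaded tools):

```python
from collections import Counter

def find_name_collisions(tool_names: list[str]) -> list[str]:
    """Return tool names that appear more than once across servers."""
    counts = Counter(tool_names)
    return sorted(name for name, n in counts.items() if n > 1)

# e.g. two servers both exposing a "search" tool:
names = ["add", "search", "get_weather", "search"]
print(find_name_collisions(names))  # -> ['search']
```

Raising an error (or renaming with a server prefix) on a non-empty result avoids silently ambiguous tool routing.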
Install
-
pip install langchain-mcp-adapters langchain-core
Imports
- MultiServerMCPClient
from langchain_mcp_adapters.client import MultiServerMCPClient
- load_mcp_tools
from langchain_mcp_adapters.tools import load_mcp_tools
- ClientSession
from mcp import ClientSession
- StdioServerParameters
from mcp import StdioServerParameters
Quickstart
import os
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

# NOTE: For this example to be runnable, you need a 'math' MCP server.
# You could write one with fastmcp:
#
#   # math_server.py
#   from fastmcp import FastMCP
#
#   mcp = FastMCP("Math")
#
#   @mcp.tool()
#   def add(a: int, b: int) -> int:
#       """Add two numbers"""
#       return a + b
#
#   if __name__ == "__main__":
#       mcp.run(transport="stdio")
#
# With the stdio transport, the client spawns the server as a subprocess;
# you do not need to start it yourself.

async def main():
    # Set your LLM API key as an environment variable, e.g.
    #   export OPENAI_API_KEY="your_key_here"
    # or, for Anthropic:
    #   export ANTHROPIC_API_KEY="your_key_here"
    if not os.environ.get("OPENAI_API_KEY") and not os.environ.get("ANTHROPIC_API_KEY"):
        print("Please set OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable.")
        return

    client = MultiServerMCPClient(
        {
            "math": {
                "transport": "stdio",  # local subprocess communication
                "command": "python",   # interpreter used to launch the server
                "args": ["/path/to/your/math_server.py"],  # ABSOLUTE path to math_server.py
            },
            "weather": {
                "transport": "streamable_http",  # HTTP-based remote server (underscore, not hyphen)
                "url": "http://localhost:8000/mcp",  # weather server must be running on port 8000
            },
        }
    )

    # Retrieve tools from the configured MCP servers
    tools = await client.get_tools()
    print(f"Loaded {len(tools)} tools.")

    # Create an agent with LangChain's create_agent.
    # Choose your LLM, e.g. "openai:gpt-4o" or an Anthropic model id.
    agent = create_agent("openai:gpt-4o", tools)

    # Invoke the agent with a message that uses a tool
    print("\nInvoking agent for math query...")
    math_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what's (3 + 5) x 12?")]}
    )
    print("Math Agent Response:", math_response)

    print("\nInvoking agent for weather query (fails if the server is not running)...")
    weather_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what is the weather in nyc?")]}
    )
    print("Weather Agent Response:", weather_response)

    # Note: recent versions of MultiServerMCPClient open a fresh session per
    # tool call, so there is no persistent connection to close here.

if __name__ == "__main__":
    asyncio.run(main())