LangChain MCP Adapters

0.2.2 · active · verified Thu Apr 09

This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph agents. It automatically converts MCP tools into LangChain-compatible tools, manages connections to multiple MCP servers, and integrates them seamlessly into LangChain workflows. The current version is 0.2.2, and the project has an active release cadence, with version 0.2.0 released in December 2025 and ongoing updates.
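To make "converting MCP tools" concrete, here is a stdlib-only conceptual sketch: an MCP server describes each tool with a name, a description, and a JSON input schema, and the adapter wraps that description plus a call function into an object an agent can invoke. The names below (`ConvertedTool`, `convert_mcp_tool`) are illustrative, not the library's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ConvertedTool:
    name: str
    description: str
    args_schema: dict          # JSON schema advertised by the MCP server
    func: Callable[..., Any]   # invokes the tool over the MCP session

def convert_mcp_tool(mcp_tool: dict, call_fn: Callable[..., Any]) -> ConvertedTool:
    # Map the MCP tool description onto a LangChain-style tool object.
    return ConvertedTool(
        name=mcp_tool["name"],
        description=mcp_tool.get("description", ""),
        args_schema=mcp_tool.get("inputSchema", {}),
        func=call_fn,
    )

add_spec = {
    "name": "add",
    "description": "Add two numbers",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    },
}
tool = convert_mcp_tool(add_spec, lambda a, b: a + b)
print(tool.name, tool.func(3, 5))  # → add 8
```

The real adapter performs the same mapping but routes `func` through an MCP client session rather than a local lambda.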

Install
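The library is published on PyPI; assuming the package name matches the project title:

```shell
pip install langchain-mcp-adapters
```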

Quickstart

This quickstart demonstrates how to set up `MultiServerMCPClient` to connect to multiple (mock) MCP servers and use their tools with a LangChain agent. It shows configuration for both `stdio` (local subprocess) and HTTP-based transports. Replace `/path/to/your/math_server.py` with the absolute path to your math server script, and make sure the API key for your chosen LLM is set as an environment variable.

import os
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

# NOTE: For this example to be runnable, you need a running MCP server.
# For a 'math' server, you could use fastmcp:
# # math_server.py
# from fastmcp import FastMCP
# mcp = FastMCP("Math")
# @mcp.tool()
# def add(a: int, b: int) -> int:
#     """Add two numbers"""
#     return a + b
# if __name__ == "__main__":
#     mcp.run(transport="stdio")

# You do NOT need to start this server yourself: with the stdio transport,
# the client launches it as a subprocess using the configured command/args.

async def main():
    # Set your LLM API key as an environment variable
    # e.g., export OPENAI_API_KEY="your_key_here"
    # Or, for Anthropic: export ANTHROPIC_API_KEY="your_key_here"
    if not os.environ.get('OPENAI_API_KEY') and not os.environ.get('ANTHROPIC_API_KEY'):
        print("Please set OPENAI_API_KEY or ANTHROPIC_API_KEY environment variable.")
        return

    client = MultiServerMCPClient(
        {
            "math": {
                "transport": "stdio", # Local subprocess communication
                "command": "python", # Path to python interpreter
                "args": ["/path/to/your/math_server.py"], # ABSOLUTE path to your math_server.py
            },
            "weather": {
                "transport": "streamable_http", # Streamable-HTTP-based remote server
                "url": "http://localhost:8000/mcp", # Ensure your weather server is running on port 8000
            }
        }
    )

    # Retrieve tools from the connected MCP servers
    tools = await client.get_tools()
    print(f"Loaded {len(tools)} tools.")

    # Example: Create an agent using LangChain's create_agent
    # Choose your LLM. For example, "openai:gpt-4o" or "anthropic:claude-3-opus-20240229"
    agent = create_agent("openai:gpt-4o", tools)

    # Invoke the agent with a message that uses a tool
    print("\nInvoking agent for math query...")
    math_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what's (3 + 5) x 12?")]}
    )
    print("Math Agent Response:", math_response)

    print("\nInvoking agent for weather query (may fail if server not running)...")
    weather_response = await agent.ainvoke(
        {"messages": [HumanMessage(content="what is the weather in nyc?")]}
    )
    print("Weather Agent Response:", weather_response)

    # Note: get_tools() opens short-lived sessions per call, so there is no
    # persistent connection (or close() method) in recent client versions.

if __name__ == "__main__":
    asyncio.run(main())
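The `stdio` transport used for the math server above works by spawning the server as a subprocess and exchanging newline-delimited JSON messages over its stdin/stdout. The following stdlib-only sketch illustrates that mechanism; real MCP uses JSON-RPC 2.0 framing and this is not the actual wire protocol.

```python
import json
import subprocess
import sys

# A toy "server" that reads one JSON request per line from stdin and
# writes one JSON response per line to stdout, like a stdio MCP server.
CHILD = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    if req["method"] == "add":
        result = req["params"]["a"] + req["params"]["b"]
        print(json.dumps({"id": req["id"], "result": result}), flush=True)
"""

def call_stdio_server(method: str, params: dict):
    # Spawn the server as a subprocess, as the stdio transport does with
    # the configured command/args, then exchange one request/response.
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    request = {"id": 1, "method": method, "params": params}
    response_line, _ = proc.communicate(json.dumps(request) + "\n")
    return json.loads(response_line)["result"]

print(call_stdio_server("add", {"a": 3, "b": 5}))  # → 8
```

This is why the `stdio` entry in the client config needs only a `command` and `args`: the client owns the server's lifecycle, which is also why terminating the client cleanly matters.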
