MCP Server Fetch

2025.4.7 · active · verified Mon Apr 13

MCP Server Fetch is a Python library that provides a server implementation of the Model Context Protocol (MCP). It fetches, processes, and converts web content into a format suitable for consumption by Large Language Models (LLMs), making it a practical building block for web data ingestion in an LLM ecosystem. The project follows a rapid, roughly monthly release cadence, with versions following a YYYY.M.D calendar scheme; the current release is `2025.4.7`.
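The YYYY.M.D versioning scheme can be handled mechanically. The sketch below, using only the standard library, shows one way to turn a release string like `2025.4.7` into a date for comparison or sorting; `parse_calver` is an illustrative helper, not part of the library's API.

```python
from datetime import date

def parse_calver(version: str) -> date:
    """Parse a YYYY.M.D calendar-version string into a date."""
    year, month, day = (int(part) for part in version.split("."))
    return date(year, month, day)

release = parse_calver("2025.4.7")
print(release.isoformat())  # → 2025-04-07
```

Because each version encodes its release date, ordering releases reduces to ordering dates, which is convenient when auditing how far behind an installed version is.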

Warnings

Install
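The library is distributed as a Python package; the PyPI distribution name below is an assumption based on the module name `mcp_server_fetch`, so check PyPI for the canonical name before relying on it.

```shell
# Install from PyPI (distribution name assumed from the module name)
pip install mcp-server-fetch

# Or pin the release documented on this page
pip install mcp-server-fetch==2025.4.7
```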

Imports

Quickstart

This quickstart demonstrates how to start the MCP Server Fetch application programmatically with `uvicorn`. It first sets environment variables (such as the allowed CORS origins) that the server reads at startup, then creates the FastAPI application instance with `create_app`, and finally serves it via `uvicorn`. To interact with the running server, install and use the separate `mcp-client` library.

import asyncio
import os

import uvicorn

from mcp_server_fetch.server import create_app

# Configure necessary environment variables, e.g., for CORS or debug mode
os.environ['CORS_ORIGINS'] = 'http://localhost:3000,http://127.0.0.1:3000'
os.environ['MCP_DEBUG'] = 'true'

# Create the FastAPI application instance
app = create_app()

async def run_server():
    print("Starting MCP Server Fetch on http://127.0.0.1:8000")
    print("Use Ctrl+C to stop the server.")
    config = uvicorn.Config(app, host="127.0.0.1", port=8000, log_level="info")
    server = uvicorn.Server(config)
    await server.serve()

if __name__ == "__main__":
    # This demonstrates programmatic startup. More commonly, you'd run from CLI:
    # uvicorn mcp_server_fetch.server:app --host 127.0.0.1 --port 8000
    try:
        asyncio.run(run_server())
    except KeyboardInterrupt:
        print("\nServer stopped.")
