AWS Labs CloudWatch MCP Server

0.0.24 · deprecated · verified Thu Apr 16

The `awslabs-cloudwatch-mcp-server` is an AWS Labs Model Context Protocol (MCP) server designed to provide AI assistants with comprehensive tools for monitoring, analyzing, and troubleshooting AWS services through CloudWatch. It enables AI agents to perform root cause analysis, investigate performance metrics, and track SLO compliance using natural language queries, eliminating the need for custom API integrations. The current version is 0.0.24, and its release cadence is tied to updates within the broader AWS Labs MCP ecosystem. As of early 2026, it is marked as deprecated in favor of `cloudwatch-applicationsignals-mcp-server` for new implementations.

Quickstart

The `awslabs-cloudwatch-mcp-server` is designed to be consumed by AI assistants (LLM clients) rather than imported directly into Python applications. This quickstart shows how to configure an LLM client (such as Amazon Q or Claude Code) to use the server by defining its command and environment variables in the client's `mcp.json` configuration. As prerequisites, ensure AWS credentials are configured and that either `uv` (used by the example below) or Docker is installed.

# 1. Ensure Docker is installed and AWS credentials are configured (e.g., via ~/.aws/credentials or env vars)
#    export AWS_PROFILE="your-aws-profile"
#    export AWS_REGION="us-east-1"

# 2. Build the Docker image (if not already done via install instructions)
#    git clone https://github.com/awslabs/mcp.git
#    cd mcp/src/cloudwatch-mcp-server/
#    docker build -t awslabs/cloudwatch-mcp-server:latest .

# 3. Configure your LLM client (e.g., Amazon Q CLI, Claude Code) with the following in its mcp.json config:
#    For Amazon Q CLI (e.g., ~/.aws/amazonq/mcp.json) or similar:
import os

mcp_config = {
  "mcpServers": {
    "awslabs.cloudwatch-mcp-server": {
      "disabled": False,
      "timeout": 60,
      "type": "stdio",
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.cloudwatch-mcp-server@latest",
        "awslabs.cloudwatch-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "AWS_PROFILE": os.environ.get('AWS_PROFILE', 'your-aws-profile'),
        "AWS_REGION": os.environ.get('AWS_REGION', 'us-east-1')
      }
    }
  }
}

# In a real scenario, this 'mcp_config' would be written to a JSON file
# and picked up by the LLM client. For example, if you were setting
# up Amazon Q CLI:
# import json
# with open(os.path.expanduser('~/.aws/amazonq/mcp.json'), 'w') as f:
#     json.dump(mcp_config, f, indent=2)

print("MCP server configuration snippet generated. Place this in your LLM client's mcp.json file.")
