Laminar Claude Code Proxy
A thin proxy server that routes requests from Claude Code to various Large Language Model (LLM) providers while adding Laminar tracing for observability. The current version is 0.1.19; releases appear to follow an as-needed cadence for updates and bug fixes.
Warnings
- breaking Breaking changes in the official Claude API or Claude Code client (e.g., Anthropic Messages API format) can render the proxy non-functional, requiring updates to the proxy implementation. Historically, Claude Code has had frequent updates to its internal workings.
- gotcha Properly securing API keys and environment variables is critical. Proxies handle sensitive credentials, and misconfigurations (e.g., in `.env` files or project settings) have led to API key exfiltration vulnerabilities in related proxy projects (CVE-2026-21852).
- gotcha Claude Code's internal token consumption and session management can be complex, leading to unexpected rate limit hits or high costs, even when using a proxy. Recent changes in Claude Code's behavior regarding prompt caching and session resume have exacerbated these issues.
- gotcha The functionality and reliability of proxying Claude Code to non-Anthropic models (e.g., OpenAI, Gemini) heavily depend on the proxy's ability to accurately translate between API formats (Anthropic Messages API to target provider's API and back). Incomplete or incorrect translation can lead to reduced functionality or unexpected model behavior.
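To illustrate the translation problem described in the last warning, here is a minimal sketch of converting an Anthropic Messages-style request into an OpenAI Chat Completions-style request. This is not the proxy's actual implementation; the function name is hypothetical, and a real translator must also handle tool use, streaming, and image content blocks.

```python
def anthropic_to_openai(request: dict) -> dict:
    """Hypothetical sketch: translate an Anthropic Messages API request
    body into an OpenAI Chat Completions request body."""
    messages = []
    # Anthropic carries the system prompt in a top-level field;
    # OpenAI expects it as the first chat message.
    if request.get("system"):
        messages.append({"role": "system", "content": request["system"]})
    for msg in request.get("messages", []):
        content = msg["content"]
        # Anthropic content may be a list of typed blocks; flatten
        # the text blocks into the single string OpenAI expects.
        if isinstance(content, list):
            content = "".join(
                block["text"] for block in content if block.get("type") == "text"
            )
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": request["model"],
        "messages": messages,
        "max_tokens": request.get("max_tokens", 1024),
    }
```

The reverse direction (OpenAI response back into an Anthropic Messages response, including stop reasons and usage accounting) is where incomplete translation most often degrades Claude Code's behavior.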
Install
pip install lmnr-claude-code-proxy
Imports
- run_proxy_server
from lmnr_claude_code_proxy import run_proxy_server
Quickstart
import os
from lmnr_claude_code_proxy import run_proxy_server
# API key for the target LLM provider (here OpenAI; substitute the key
# for whichever provider the proxy routes to).
os.environ.setdefault('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY_HERE')
# Claude Code reads ANTHROPIC_BASE_URL, so set this in the environment
# where Claude Code runs to route its requests through the proxy.
os.environ['ANTHROPIC_BASE_URL'] = 'http://localhost:8082'
# run_proxy_server blocks, so in practice run it in a separate process
# or via a CLI command rather than inline in application code:
# run_proxy_server(host='0.0.0.0', port=8082)
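Once the proxy is listening, Claude Code (or any Anthropic-compatible client) sends ordinary Anthropic Messages API requests to it. The sketch below builds such a request against the local proxy using only the standard library; the payload shape is the public Messages API format, while the assumption that the proxy exposes the same `/v1/messages` path as `api.anthropic.com` (and the model name) are illustrative.

```python
import json
import urllib.request

# A minimal Anthropic Messages API payload, as Claude Code would send it.
payload = {
    "model": "claude-sonnet-4",  # model name is illustrative
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello through the proxy"}],
}

# The proxy mimics the Anthropic API surface, so the client posts to the
# same /v1/messages path it would use against api.anthropic.com
# (assumed here; check the proxy's docs for the exact routes it serves).
request = urllib.request.Request(
    "http://localhost:8082/v1/messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={"content-type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment once the proxy is running
```

Sending the request is left commented out because it requires the proxy to be up; the point is that no client-side changes are needed beyond `ANTHROPIC_BASE_URL`.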