Helicone
raw JSON → helicone-helpers 1.0.3 (optional stub) verified Tue May 12 auth: yes python install: verified quickstart: stale
Open-source LLM observability platform using a proxy-based architecture. Unlike LangSmith or Langfuse, Helicone requires NO Python SDK install for core tracing — it works by routing requests through its AI gateway (https://ai-gateway.helicone.ai) via a base_url override on the OpenAI/Anthropic client. All logging happens at the proxy layer. The 'helicone' PyPI package (helicone-helpers) is a thin optional helper with minimal functionality. Primary integration is via HTTP headers and base_url, not a Python library.
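The proxy model can be illustrated without any SDK at all: the request body is the normal OpenAI chat payload; only the URL and one extra header change. A minimal sketch using only the standard library (the API keys below are placeholders, and the request is only constructed, not sent):

```python
import json
import urllib.request

payload = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hi"}],
}).encode()

req = urllib.request.Request(
    "https://ai-gateway.helicone.ai/openai/v1/chat/completions",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-...",           # provider key (placeholder)
        "Helicone-Auth": "Bearer sk-helicone-...",  # Helicone key (placeholder)
    },
    method="POST",
)
# req.full_url targets Helicone, not api.openai.com — logging happens at the proxy.
```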
pip install helicone-helpers
Common errors
error 400 Bad Request ↓
cause This error typically indicates that your API request to Helicone or the underlying LLM provider is malformed, has incorrect headers, or contains an invalid payload format.
fix Verify your request payload, ensure all required HTTP headers (Content-Type, Helicone-Auth, anthropic-version for Anthropic) are correctly set, and confirm the base_url points to the correct Helicone endpoint (e.g., https://oai.helicone.ai/v1 for OpenAI or https://anthropic.helicone.ai for Anthropic).
error 401 Unauthorized ↓
cause This error signifies that the API keys provided are either missing or invalid, preventing authentication with Helicone or the upstream LLM provider.
fix Ensure HELICONE_API_KEY is correctly set and, if using direct header integration, that the Helicone-Auth header includes the 'Bearer' prefix (e.g., Helicone-Auth: Bearer YOUR_HELICONE_API_KEY). Also verify that your underlying provider's API key (e.g., OPENAI_API_KEY) is valid.
error 500 Internal Server Error ↓
cause This error indicates a problem on the server-side, either within the Helicone gateway or originating from the upstream LLM provider.
fix Check the status of your LLM provider and review Helicone's dashboard or logs for more specific error details. Setting Helicone-Retry-Enabled: true in your request headers can mitigate transient provider issues by automatically retrying failed requests.
error Auth failed! Network connection lost ↓
cause This specific error often occurs in self-hosted Helicone deployments when the worker proxy cannot successfully log data to the Helicone Jawn service due to authentication or network connectivity issues between internal components.
fix For self-hosted Docker deployments, ensure SUPABASE_SERVICE_ROLE_KEY is configured with a real JWT signed with the same secret PostgREST uses, then restart the worker services. Verify network connectivity between your Helicone containers.
Warnings
gotcha There is no meaningful Python package to install. The 'helicone' and 'helicone-helpers' PyPI packages are thin stubs with minimal utility. Most Helicone documentation and features are implemented via base_url + HTTP headers, not Python code. Searching PyPI for 'helicone' and expecting a rich SDK like LangSmith or Langfuse will lead to confusion. ↓
fix Do not expect a Python instrumentation library. Helicone's Python integration is: (1) change base_url to https://ai-gateway.helicone.ai, (2) add Helicone-Auth header.
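The two-step fix above can be captured in a tiny helper (the function name is ours, not part of any Helicone package — it simply builds the kwargs the Quickstart below passes to the client constructor):

```python
def helicone_openai_kwargs(helicone_api_key: str, provider_api_key: str) -> dict:
    """Build constructor kwargs that route an openai.OpenAI client through
    the Helicone gateway: (1) override base_url to the AI gateway,
    (2) add the Helicone-Auth header with the required 'Bearer ' prefix."""
    return {
        "api_key": provider_api_key,
        "base_url": "https://ai-gateway.helicone.ai/openai/v1",
        "default_headers": {
            "Helicone-Auth": f"Bearer {helicone_api_key}",
        },
    }

# Usage: client = openai.OpenAI(**helicone_openai_kwargs(hk, ok))
```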
gotcha All LLM requests route through Helicone's cloud proxy. This adds ~10ms latency (Cloudflare Workers) and means all prompt/response content passes through Helicone's infrastructure. Not suitable for environments with strict data residency requirements without the self-hosted option. ↓
fix For data residency requirements, use Helicone self-hosted (Docker/Helm) and point base_url at your own instance. EU users: check if the EU region endpoint satisfies GDPR requirements.
gotcha Helicone headers are silently ignored if misspelled or if the base_url is wrong. For example, passing the Helicone-Auth header but keeping the original OpenAI base_url sends the header directly to OpenAI — no error, no tracing, and OpenAI ignores the unknown header. ↓
fix Verify the gateway is active by checking your Helicone dashboard after the first request. Confirm base_url is set AND Helicone-Auth header is present.
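Beyond eyeballing the dashboard, one programmatic check is possible, assuming the gateway adds a `helicone-id` response header (worth confirming against current Helicone docs for your deployment): a response without it did not pass through the proxy.

```python
def went_through_helicone(response_headers) -> bool:
    """Return True if the response carries Helicone's tracing header.
    HTTP header names are case-insensitive, so normalize before checking."""
    return "helicone-id" in {k.lower() for k in response_headers}

# With openai>=1.x you can reach the raw response headers like this (sketch):
#   raw = client.chat.completions.with_raw_response.create(...)
#   assert went_through_helicone(raw.headers)
```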
gotcha Prompt caching via Helicone-Cache-Enabled returns cached responses that bypass your LLM provider entirely. In development/testing, this can cause confusing stale results. The cache key is based on the full request body — small prompt changes bypass the cache. ↓
fix Disable caching in dev/test by omitting the Helicone-Cache-Enabled header or setting it to 'false'. Enable only in production for cost savings.
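A small environment-aware helper makes the dev/prod split explicit (the function name and environment values are ours, for illustration):

```python
def helicone_cache_headers(env: str) -> dict:
    """Return Helicone cache headers for the given environment.
    Dev/test omit Helicone-Cache-Enabled entirely, so every request
    reaches the real provider; production opts in for cost savings."""
    if env == "production":
        return {"Helicone-Cache-Enabled": "true"}
    return {}

# Merge the result into default_headers alongside Helicone-Auth.
```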
breaking The `openai` Python package is required to call OpenAI's API through Helicone but is not installed automatically; without it the quickstart fails with ModuleNotFoundError: No module named 'openai'. ↓
fix Install it with `pip install openai` (and `pip install anthropic` for the Anthropic example) before running the script.
Install
# No pip install required for core functionality
Install compatibility verified last tested: 2026-05-12
python os / libc status wheel install import disk
3.9 alpine (musl) - - - 72.8M
3.9 slim (glibc) - - - 144M
3.10 alpine (musl) - - - 73.9M
3.10 slim (glibc) - - - 145M
3.11 alpine (musl) - - - 81.1M
3.11 slim (glibc) - - - 152M
3.12 alpine (musl) - - - 71.4M
3.12 slim (glibc) - - - 143M
3.13 alpine (musl) - - - 67.5M
3.13 slim (glibc) - - - 141M
Imports
- No SDK import required
wrong
import helicone; helicone.instrument()
correct
# Just override base_url on your existing OpenAI/Anthropic client
Quickstart stale last tested: 2026-05-11
import os
import openai
# Core integration: change base_url, add auth header
# No pip install needed beyond openai
client = openai.OpenAI(
api_key=os.environ['OPENAI_API_KEY'],
base_url='https://ai-gateway.helicone.ai/openai/v1',
default_headers={
'Helicone-Auth': f'Bearer {os.environ["HELICONE_API_KEY"]}',
}
)
response = client.chat.completions.create(
model='gpt-4o',
messages=[{'role': 'user', 'content': 'Hello!'}]
)
# Request is now logged in your Helicone dashboard
# Add metadata via headers
client_with_metadata = openai.OpenAI(
api_key=os.environ['OPENAI_API_KEY'],
base_url='https://ai-gateway.helicone.ai/openai/v1',
default_headers={
'Helicone-Auth': f'Bearer {os.environ["HELICONE_API_KEY"]}',
'Helicone-User-Id': 'user-123', # per-user tracking
'Helicone-Session-Id': 'session-abc', # session grouping
'Helicone-Cache-Enabled': 'true', # response caching
}
)
# Anthropic integration
import anthropic
client_anthropic = anthropic.Anthropic(
api_key=os.environ['ANTHROPIC_API_KEY'],
base_url='https://ai-gateway.helicone.ai/anthropic',
default_headers={
'Helicone-Auth': f'Bearer {os.environ["HELICONE_API_KEY"]}',
}
)