Portkey.ai Python Client
The Portkey.ai Python client library provides an interface to the Portkey API, a unified AI gateway for managing, monitoring, and routing large language model (LLM) requests. It offers features such as observability, caching, load balancing, and prompt management across many LLM providers. The library is actively maintained.
Warnings
- breaking Major version 2.0.0 introduced significant internal changes, including vendoring a specific version of the OpenAI SDK. While Portkey aims for compatibility, direct dependencies or specific behaviors of the underlying OpenAI client might have changed. This could require adjustments for complex integrations that rely on specific OpenAI client versions or internal workings.
- gotcha Portkey utilizes its own `PORTKEY_API_KEY` for authenticating with the Portkey gateway. Separately, your actual LLM provider API keys (e.g., OpenAI, Anthropic) must be configured within Portkey's 'Virtual Keys' or 'Model Catalog' dashboard, or provided via the `Authorization` parameter in the `Portkey` client constructor. Simply passing a provider's API key to `Portkey(api_key="...")` is incorrect for LLM provider authentication.
- gotcha When using the `provider` parameter, Portkey often expects models to be specified in a `@provider-slug/model-name` format (e.g., `@openai-prod/gpt-4o`). Simply providing `model="gpt-4o"` without the appropriate provider slug prefix, or without a `config` object defining the routing, may lead to errors or incorrect routing.
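The model-naming gotcha above can be made concrete with a small helper. This is an illustrative sketch, not part of the Portkey SDK: `qualified_model` is a hypothetical function, and `openai-prod` is an example provider slug you would replace with one from your own Portkey Model Catalog.

```python
def qualified_model(provider_slug: str, model: str) -> str:
    """Build the '@provider-slug/model-name' string Portkey expects
    when routing by provider slug (e.g. '@openai-prod/gpt-4o')."""
    # Tolerate slugs passed with or without the leading '@'
    slug = provider_slug.lstrip("@")
    return f"@{slug}/{model}"

print(qualified_model("openai-prod", "gpt-4o"))   # @openai-prod/gpt-4o
print(qualified_model("@openai-prod", "gpt-4o"))  # @openai-prod/gpt-4o
```

Passing the resulting string as `model=` (or setting `provider="@openai-prod"` on the client and a bare `model="gpt-4o"`, as in the Quickstart below) avoids the ambiguous-routing errors described above.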
Install
- pip install portkey-ai
Imports
- Portkey
from portkey_ai import Portkey
- AsyncPortkey
from portkey_ai import AsyncPortkey
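`AsyncPortkey` mirrors the synchronous client for use with `asyncio`. A minimal sketch, assuming a `PORTKEY_API_KEY` environment variable and a hypothetical `@openai-prod` provider slug configured in your Portkey Model Catalog (the network call is skipped if no key is set):

```python
import asyncio
import os

async def main() -> None:
    # Imported here so the sketch can be read without the library installed
    from portkey_ai import AsyncPortkey

    client = AsyncPortkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="@openai-prod",  # assumption: slug from your Model Catalog
    )
    response = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Hello"}],
        model="gpt-4o",
    )
    print(response.choices[0].message.content)

if os.environ.get("PORTKEY_API_KEY"):
    asyncio.run(main())
```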
Quickstart
import os
from portkey_ai import Portkey
# Set your Portkey API key as an environment variable: export PORTKEY_API_KEY="pk-sk-..."
# If using a direct provider (e.g., OpenAI) without Portkey Virtual Keys,
# you might also need its API key, e.g.: export OPENAI_API_KEY="sk-..."
portkey_client = Portkey(
    api_key=os.environ.get("PORTKEY_API_KEY", ""),
    # Use a provider slug from your Portkey Model Catalog,
    # e.g. "@openai-prod" if configured in Portkey
    provider="@openai-prod",
)

try:
    response = portkey_client.chat.completions.create(
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ],
        # Model name configured under the "@openai-prod" provider in Portkey
        model="gpt-4o",
    )
    print(response.choices[0].message.content)
except Exception as e:
    print(f"An error occurred: {e}")
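Because Portkey exposes an OpenAI-compatible surface, streaming works by passing `stream=True` and iterating the returned chunks. A hedged sketch under the same assumptions as the Quickstart (`PORTKEY_API_KEY` set, hypothetical `@openai-prod` slug); the call is skipped when the library or key is absent:

```python
import os

try:
    from portkey_ai import Portkey
except ImportError:  # library not installed in this environment
    Portkey = None

def stream_reply(prompt: str) -> str:
    # Assumption: "@openai-prod" is a slug from your Portkey Model Catalog
    client = Portkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="@openai-prod",
    )
    parts = []
    # stream=True yields incremental chunks in the OpenAI-compatible shape
    for chunk in client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4o",
        stream=True,
    ):
        delta = chunk.choices[0].delta.content or ""
        parts.append(delta)
        print(delta, end="", flush=True)
    return "".join(parts)

if Portkey is not None and os.environ.get("PORTKEY_API_KEY"):
    stream_reply("What is the capital of France?")
```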