Portkey.ai Python Client

2.2.0 · active · verified Tue Apr 14

The Portkey.ai Python client library provides an interface to the Portkey API, a unified AI gateway for managing, monitoring, and routing large language model (LLM) requests. It offers features like observability, caching, load balancing, and prompt management across various LLM providers. The library is actively maintained, with frequent updates.

Install

pip install portkey-ai

Imports

from portkey_ai import Portkey

Quickstart

This quickstart initializes the Portkey client from the `PORTKEY_API_KEY` environment variable, routes requests through a configured provider slug (e.g., `@openai-prod`), and then makes a chat completion request with a model available via that provider.

import os
from portkey_ai import Portkey

# Set your Portkey API key as an environment variable: export PORTKEY_API_KEY="pk-sk-..."
# If using a direct provider (e.g., OpenAI) without Portkey Virtual Keys,
# you might also need its API key, e.g.: export OPENAI_API_KEY="sk-..."

portkey_client = Portkey(
    api_key=os.environ.get("PORTKEY_API_KEY", ""),
    # Use a provider slug from your Portkey Model Catalog
    # e.g., "@openai-prod" if configured in Portkey
    provider="@openai-prod"
)

try:
    response = portkey_client.chat.completions.create(
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ],
        # Model name configured under the "@openai-prod" provider in Portkey
        model="gpt-4o"
    )
    print(response.choices[0].message.content)
except Exception as e:
    print(f"An error occurred: {e}")
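Because the client mirrors the OpenAI SDK surface, streaming responses should work by passing `stream=True` to the same method. The sketch below is hedged: the provider slug, model name, and prompt are assumptions carried over from the quickstart, and `gather_stream_text` is a hypothetical helper defined here for illustration, not part of the library.

```python
import os


def gather_stream_text(chunks) -> str:
    """Concatenate the content deltas from a chat-completion stream."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        content = getattr(delta, "content", None)
        if content:
            parts.append(content)
    return "".join(parts)


try:
    from portkey_ai import Portkey

    client = Portkey(
        api_key=os.environ.get("PORTKEY_API_KEY", ""),
        provider="@openai-prod",  # assumed provider slug, as in the quickstart
    )
    # stream=True yields incremental chunks instead of one full response
    stream = client.chat.completions.create(
        messages=[{"role": "user", "content": "Name three French cities."}],
        model="gpt-4o",  # assumed model configured under the provider
        stream=True,
    )
    print(gather_stream_text(stream))
except Exception as e:
    print(f"Streaming request failed: {e}")
```

Printing each delta as it arrives (rather than joining at the end) gives the familiar token-by-token output; the helper is used here so the accumulation logic is easy to test in isolation.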
