Aleph Alpha Python Client
The `aleph-alpha-client` is the official Python client for interacting with Aleph Alpha's API endpoints. It provides synchronous and asynchronous interfaces to access various AI capabilities, including completion, embedding, evaluation, and tool calling with large language and multimodal models. The library is actively maintained, with frequent releases, and is currently at version 11.5.1.
Common errors
- ValueError: ALEPH_ALPHA_API_TOKEN environment variable not set or is default.
  - cause: The API client requires an authentication token, typically read from the `ALEPH_ALPHA_API_TOKEN` environment variable, but it is either missing or set to a placeholder.
  - fix: Set the `ALEPH_ALPHA_API_TOKEN` environment variable to your actual Aleph Alpha API key, e.g. `export ALEPH_ALPHA_API_TOKEN="your_secret_token"` in your shell, or pass the token directly to the `Client` constructor.
- aleph_alpha_client.aleph_alpha_client.errors.ModelError: Could not find model with name 'your_model_name' on hosting 'cloud'
  - cause: The specified model name is incorrect, mistyped, or not available on the default ('cloud') or specified hosting for your account.
  - fix: Verify the exact name of the model you intend to use. Methods such as `client.available_models()` list the models and hostings available to your account.
- aleph_alpha_client.aleph_alpha_client.errors.ValidationError: prompt is too long: X tokens > Y maximum
  - cause: The combined length of your prompt exceeds the maximum token limit of the chosen model's context window.
  - fix: Shorten the input prompt: summarize parts of it, split it across multiple requests, or choose a model with a larger context window if one is available.
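To catch the prompt-too-long error before sending a request, you can pre-check prompt size. An exact count requires the API's tokenizer; the sketch below uses a purely offline heuristic (an assumed average of ~4 characters per token for English text — an approximation, not a figure from the library):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough offline estimate: English text averages ~4 characters per token."""
    return max(1, round(len(text) / chars_per_token))

def fits_context(prompt: str, maximum_tokens: int, context_window: int) -> bool:
    """True if the estimated prompt tokens plus the requested output
    budget fit inside the model's context window."""
    return estimate_tokens(prompt) + maximum_tokens <= context_window

prompt = "Provide a short description of AI:"
print(fits_context(prompt, maximum_tokens=64, context_window=2048))  # True
```

For an exact count, tokenize the prompt through the API before submitting rather than relying on this heuristic.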
Warnings
- breaking Breaking changes in v3.0.0 removed `AlephAlphaClient` and `AlephAlphaModel` (use `Client` or `AsyncClient` instead). The class `ImagePrompt` was also removed (use `Image` instead).
- breaking The parameter order for `client.semantic_embed` changed, swapping `hosting` and `request`.
- gotcha The `maximum_tokens` parameter limits the *generated output* length, not the total context window (input + output). Setting it too low can result in truncated responses.
Install
- pip install aleph-alpha-client
Imports
- Client
from aleph_alpha_client import Client
- AsyncClient
from aleph_alpha_client import AsyncClient
- CompletionRequest
from aleph_alpha_client import CompletionRequest
- Prompt
from aleph_alpha_client import Prompt
- Image (`ImagePrompt` was removed in v3.0.0; import `Image` instead)
from aleph_alpha_client import Image
- AlephAlphaClient (removed in v3.0.0; import `Client` instead)
from aleph_alpha_client import Client
Quickstart
import os
from aleph_alpha_client import Client, CompletionRequest, Prompt

# Ensure ALEPH_ALPHA_API_TOKEN is set in your environment variables
api_token = os.environ.get("ALEPH_ALPHA_API_TOKEN", "YOUR_API_TOKEN")
if not api_token or api_token == "YOUR_API_TOKEN":
    raise ValueError(
        "ALEPH_ALPHA_API_TOKEN environment variable not set or is default. "
        "Please set it to your actual API token."
    )

# Instantiate the synchronous client
client = Client(token=api_token)

# Define the prompt and completion request; note that the model name is
# passed to complete(), not to CompletionRequest
request = CompletionRequest(
    prompt=Prompt.from_text("Provide a short description of AI:"),
    maximum_tokens=64,
)

try:
    # Send the completion request
    response = client.complete(request, model="luminous-base")  # or another available model, e.g. "pharia-1-llm-7b-control"
    print(response.completions[0].completion)
except Exception as e:
    print(f"An error occurred: {e}")