VLM Run Python SDK


Official Python SDK for VLM Run, a platform for running vision-language models. The current version is 0.6.2 and requires Python >=3.9; the SDK is under active development with frequent releases.

pip install vlmrun
error AuthenticationError: No API key provided. Please set VLM_RUN_API_KEY environment variable or pass 'api_key' parameter.
cause API key not provided.
fix Set the VLM_RUN_API_KEY environment variable or pass api_key='sk-...' to the VLM constructor.
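The key-resolution order implied by the error message (explicit api_key argument first, then the VLM_RUN_API_KEY environment variable) can be sketched as follows; the helper name resolve_api_key is illustrative, not part of the SDK:

```python
import os
from typing import Optional

def resolve_api_key(api_key: Optional[str] = None) -> str:
    """Illustrative helper: an explicit argument wins, then the environment."""
    key = api_key or os.environ.get("VLM_RUN_API_KEY")
    if not key:
        raise RuntimeError(
            "No API key provided. Set VLM_RUN_API_KEY or pass api_key."
        )
    return key
```

Doing this check yourself before constructing the client gives a clearer failure point than waiting for the first request to raise.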
error ModelNotFoundError: Model 'gpt-4' not found. Available models: ['gpt-4o', 'claude-3-opus', 'claude-3-sonnet']
cause Using an incorrect model name.
fix Use an exact model identifier from the list of available models (e.g. 'gpt-4o').
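To fail fast before issuing a request, the model string can be pre-checked against the list reported in the error above. This allow-list check is a client-side sketch, not an SDK feature, and the live model list may differ:

```python
# Models as reported by the ModelNotFoundError above; the live list may differ.
AVAILABLE_MODELS = ["gpt-4o", "claude-3-opus", "claude-3-sonnet"]

def validate_model(name: str) -> str:
    """Reject model names not in the allow-list; comparison is case-sensitive."""
    if name not in AVAILABLE_MODELS:
        raise ValueError(
            f"Model {name!r} not found. Available models: {AVAILABLE_MODELS}"
        )
    return name
```

Because the comparison is exact, variants like 'gpt-4' or 'GPT-4O' are rejected, mirroring the SDK's case-sensitive matching.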
gotcha API key required; must be set via environment variable VLM_RUN_API_KEY or passed directly to constructor. Missing key leads to authentication error.
fix Set VLM_RUN_API_KEY environment variable or pass api_key parameter.
gotcha The 'infer' method requires both 'model' and 'prompt' parameters; the model string must match an available model exactly (e.g., 'gpt-4o', 'claude-3-opus') and is case-sensitive.
fix Use the model identifier exactly as listed in the docs.
deprecated Version 0.5.0 changed the client initialization signature. Old pattern 'VLM(api_key=..., base_url=...)' still works but base_url is deprecated.
fix Remove base_url usage; use environment or default endpoint.

Initialize client with API key from environment variable and run inference.

import os
from vlmrun import VLM

# Read the key from the environment; a missing key triggers the
# AuthenticationError described above.
client = VLM(api_key=os.environ.get('VLM_RUN_API_KEY', ''))

# Model name must exactly match an available model (case-sensitive).
response = client.infer(
    model="gpt-4o",
    prompt="Describe this image",
    image_url="https://example.com/image.jpg"
)
print(response)
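For transient network failures, a small retry wrapper around the infer call can help. This is a generic stdlib pattern, not an SDK feature, and the exception types worth retrying on are an assumption:

```python
import time

def with_retries(fn, attempts=3, delay=1.0,
                 retry_on=(ConnectionError, TimeoutError)):
    """Call fn(), retrying with a fixed delay on the given exception types."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)

# Usage (hypothetical):
# response = with_retries(lambda: client.infer(
#     model="gpt-4o",
#     prompt="Describe this image",
#     image_url="https://example.com/image.jpg",
# ))
```

Note that authentication and model-name errors should not be retried; only wrap exceptions that can plausibly succeed on a second attempt.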