LLM: CLI Utility and Python Library for Large Language Models
LLM is a Python CLI utility and library for interacting with Large Language Models from various providers like OpenAI, Anthropic, and Google Gemini, as well as local models. It provides a unified interface for running prompts, storing conversations in SQLite, generating embeddings, extracting structured content, and executing tools. Currently at version 0.30, the library maintains an active development and release cadence, frequently adding support for new models and features.
Common errors
- `llm.UnknownModelError: Unknown model: 'gpt-4o-mini'`
  Cause: The requested model ID or alias is not recognized, often because the corresponding plugin (e.g., llm-openai) has not been installed.
  Fix: Install the plugin for the desired model provider. For OpenAI models, run `llm install llm-openai`; for Gemini, `llm install llm-gemini`; and so on.
- `401 Unauthorized` / `AuthenticationError: Invalid API key provided`
  Cause: The API key for the LLM provider is missing, incorrect, or expired.
  Fix: Verify your API key. Set it as an environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or use the CLI command `llm keys set <provider_name>` to store it securely.
- `429 Too Many Requests` / `Rate limit reached for gpt-4o in organization org-xxx. Limit: 500 RPM.`
  Cause: You have exceeded the rate limits (requests per minute or tokens per minute) imposed by the LLM provider.
  Fix: Implement retry logic with exponential backoff and reduce the frequency of your API calls. Check the provider's documentation for current rate limits and consider requesting a quota increase if necessary.
- `ModuleNotFoundError: No module named 'llm_openai'`
  Cause: You are trying to import or use an OpenAI model, but the `llm-openai` plugin is not installed.
  Fix: Install the required plugin with `llm install llm-openai`.
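The backoff advice for 429 errors can be sketched as a small generic wrapper. This is an illustrative pattern, not part of the llm API; `with_retries` and `flaky` are hypothetical names, and `flaky` stands in for a real call like `model.prompt(...).text()`:

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on exception with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt; jitter avoids synchronized retries
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Demo with a stand-in for a model call that fails twice with a simulated
# 429 before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # prints "ok" after two retries
```

In production code a library such as tenacity offers the same pattern with more control (max elapsed time, retry only on specific exception types).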
Warnings
- breaking LLM 0.28 introduced a minimum Python version requirement of 3.10 or higher. Previous versions supported older Python versions.
- gotcha Some LLM plugins that depend on PyTorch (e.g., `llm-sentence-transformers`) may not install cleanly when `llm` itself is installed via Homebrew, due to Python version mismatches with PyTorch's stable releases.
- gotcha API keys for remote LLM providers are crucial and must be configured correctly. The library will look for environment variables (e.g., `OPENAI_API_KEY`) or keys stored via the `llm keys set` CLI command.
- gotcha The `Response.text()` method employs lazy loading. If you inspect the `Response` object before calling `.text()`, it will show '... not yet done ...'. The actual API call is made when `.text()` is invoked.
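The API-key gotcha above can be turned into a fail-fast startup check. `require_api_key` is a hypothetical helper, not part of the llm API; it simply surfaces a missing key before any model call is made:

```python
import os

def require_api_key(env_var="OPENAI_API_KEY"):
    """Fail fast with a clear message if the provider key is not configured."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set. Export it, or store a key with `llm keys set openai`."
        )
    return key
```

Calling this once at startup turns a confusing mid-run `AuthenticationError` into an immediate, actionable failure.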
Install
- pip install llm
- llm install llm-openai
- llm install llm-gemini
Imports
- get_model
  import llm
  model = llm.get_model('gpt-4o-mini')
- Attachment
  from llm import Attachment
- Tool
  from llm import Tool
Quickstart
```python
import llm
import os

# Ensure your API key is set as an environment variable (e.g., OPENAI_API_KEY)
# or store it with `llm keys set openai` from the CLI.
openai_api_key = os.environ.get("OPENAI_API_KEY")
if not openai_api_key:
    print("Warning: OPENAI_API_KEY environment variable not set. Please configure it.")

try:
    # Get a specific model (e.g., gpt-4o-mini). Ensure the corresponding plugin is installed.
    model = llm.get_model("gpt-4o-mini")
    # If the key is not in the environment, assign it to the model directly
    if openai_api_key:
        model.key = openai_api_key

    response = model.prompt("Five surprising names for a pet pelican")
    # The actual API call happens here, when the text is requested (lazy loading)
    print(response.text())

    # Example with a conversation: previous exchanges are sent as context
    conversation = model.conversation()
    response1 = conversation.prompt("Tell me a fun fact about pandas.")
    print(f"Fact 1: {response1.text()}")
    response2 = conversation.prompt("Now, tell me another one.")
    print(f"Fact 2: {response2.text()}")
except llm.UnknownModelError as e:
    print(f"Error: {e}. Make sure you have installed the necessary plugin, e.g., 'llm install llm-openai'.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```
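The intro mentions generating embeddings; once you have vectors (for example from `llm.get_embedding_model(...).embed(text)`, which returns a list of floats), comparing them is usually done with cosine similarity. A stdlib-only sketch with stand-in vectors (the vectors and the `cosine_similarity` helper are illustrative, not part of the llm API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in vectors; real ones come back as lists of floats from an embedding model
v_cat = [0.9, 0.1, 0.0]
v_kitten = [0.85, 0.15, 0.0]
v_car = [0.0, 0.1, 0.9]

# "cat" should be closer to "kitten" than to "car"
print(cosine_similarity(v_cat, v_kitten) > cosine_similarity(v_cat, v_car))  # True
```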