LLM: CLI Utility and Python Library for Large Language Models

0.30 · active · verified Thu Apr 16

LLM is a Python CLI utility and library for interacting with Large Language Models from various providers like OpenAI, Anthropic, and Google Gemini, as well as local models. It provides a unified interface for running prompts, storing conversations in SQLite, generating embeddings, extracting structured content, and executing tools. Currently at version 0.30, the library maintains an active development and release cadence, frequently adding support for new models and features.
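
The same capabilities are exposed on the command line. A minimal sketch of typical CLI usage (assumes an OpenAI key has been configured; the prompt text is illustrative):

```shell
# Run a prompt with the default model
llm "Five great names for a pet pelican"

# Store an API key for a provider
llm keys set openai

# Install a plugin for another provider, e.g. Google Gemini
llm install llm-gemini

# Prompts and responses are logged to SQLite; inspect recent ones
llm logs
```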

Common errors

Warnings

Install

pip install llm

Imports

import llm

Quickstart

This quickstart demonstrates how to use the `llm` Python API to get a model and execute a prompt, both as a single prompt and as a multi-turn conversation. API keys are best managed via environment variables (e.g., `OPENAI_API_KEY`) or the `llm keys set` CLI command. OpenAI models are bundled with LLM itself; models from other providers require installing the corresponding plugin (e.g., `llm-anthropic`, `llm-gemini`).

import llm
import os

# LLM picks up OPENAI_API_KEY from the environment automatically;
# keys can also be stored with `llm keys set openai` on the CLI.
openai_api_key = os.environ.get("OPENAI_API_KEY")

if not openai_api_key:
    print("Warning: OPENAI_API_KEY environment variable not set. Please configure it.")

try:
    # OpenAI models are bundled with LLM; other providers need a plugin.
    model = llm.get_model("gpt-4o-mini")

    # key= is optional -- omit it to fall back to the environment
    # variable or a key stored via `llm keys set`
    response = model.prompt(
        "Five surprising names for a pet pelican",
        key=openai_api_key,
    )

    # Responses are lazy; calling .text() triggers evaluation
    print(response.text())

    # Conversations carry context across successive prompts
    conversation = model.conversation()
    response1 = conversation.prompt("Tell me a fun fact about pandas.")
    print(f"Fact 1: {response1.text()}")
    response2 = conversation.prompt("Now, tell me another one.")
    print(f"Fact 2: {response2.text()}")

except llm.UnknownModelError as e:
    print(f"Error: {e}. Install the plugin that provides this model, e.g. 'llm install llm-gemini'.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
