{"id":9088,"library":"llm","title":"LLM: CLI Utility and Python Library for Large Language Models","description":"LLM is a Python CLI utility and library for interacting with Large Language Models from providers such as OpenAI, Anthropic, and Google Gemini, as well as local models. It provides a unified interface for running prompts, storing conversations in SQLite, generating embeddings, extracting structured content, and executing tools. OpenAI models are supported out of the box; other providers are added via plugins. Currently at version 0.30, the library maintains an active development and release cadence, frequently adding support for new models and features.","status":"active","version":"0.30","language":"en","source_language":"en","source_url":"https://github.com/simonw/llm","tags":["LLM","AI","Large Language Models","CLI","Python API","OpenAI","Anthropic","Gemini","Ollama","Tool Calling"],
"install":[{"cmd":"pip install llm","lang":"bash","label":"Install LLM"},{"cmd":"llm keys set openai","lang":"bash","label":"Configure OpenAI API key (OpenAI models are bundled with LLM core)"},{"cmd":"llm install llm-gemini","lang":"bash","label":"Install Gemini plugin (example)"}],
"dependencies":[{"reason":"OpenAI chat models are bundled with LLM core; this optional plugin adds support for OpenAI models exposed via the Responses API.","package":"llm-openai-plugin","optional":true},{"reason":"Required for interacting with Anthropic models. Installed as a plugin.","package":"llm-anthropic","optional":true},{"reason":"Required for interacting with Google Gemini models. Installed as a plugin.","package":"llm-gemini","optional":true},{"reason":"Required for interacting with local Ollama models. Installed as a plugin.","package":"llm-ollama","optional":true}],
"imports":[{"note":"Used to retrieve a model instance by ID or alias.","symbol":"get_model","correct":"import llm\nmodel = llm.get_model('gpt-4o-mini')"},{"note":"For providing multi-modal input (e.g., images) to models that support it.","symbol":"Attachment","correct":"from llm import Attachment"},{"note":"To define custom tools that LLMs can execute.","symbol":"Tool","correct":"from llm import Tool"}],
"quickstart":{"code":"import llm\nimport os\n\n# Set your API key as an environment variable (e.g. OPENAI_API_KEY)\n# or store it once with `llm keys set openai` from the CLI.\nif not os.environ.get(\"OPENAI_API_KEY\"):\n    print(\"Warning: OPENAI_API_KEY is not set; falling back to any key stored via `llm keys set openai`.\")\n\ntry:\n    # Get a model by ID or alias. OpenAI models work out of the box;\n    # other providers need their plugin (e.g. `llm install llm-gemini`).\n    model = llm.get_model(\"gpt-4o-mini\")\n\n    response = model.prompt(\"Five surprising names for a pet pelican\")\n\n    # Prompts are evaluated lazily: .text() triggers the actual API call.\n    print(response.text())\n\n    # Multi-turn conversation: context is carried between prompts.\n    conversation = model.conversation()\n    response1 = conversation.prompt(\"Tell me a fun fact about pandas.\")\n    print(f\"Fact 1: {response1.text()}\")\n    response2 = conversation.prompt(\"Now, tell me another one.\")\n    print(f\"Fact 2: {response2.text()}\")\n\nexcept llm.UnknownModelError as e:\n    print(f\"Error: {e}. Make sure the required plugin is installed, e.g. 'llm install llm-gemini'.\")\nexcept Exception as e:\n    print(f\"An unexpected error occurred: {e}\")","lang":"python","description":"This quickstart demonstrates how to use the `llm` Python API to get a model and execute a prompt, showing both a single prompt and a conversational flow. API keys are best managed via environment variables (e.g. `OPENAI_API_KEY`) or the `llm keys set` CLI command; many model plugins also accept a `key=` argument to `model.prompt()`. OpenAI models are bundled with LLM core; for other providers, install the relevant plugin (e.g. `llm install llm-gemini`)."},
"warnings":[{"fix":"Upgrade your Python environment to 3.10 or newer if using LLM 0.28 or later. Consider using a virtual environment (venv) for project isolation.","message":"LLM 0.28 introduced a minimum Python version requirement of 3.10. Earlier releases supported older Python versions.","severity":"breaking","affected_versions":">=0.28"},{"fix":"As a workaround, manually install PyTorch within `llm`'s virtual environment using `llm install llm-python` followed by `llm python -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu` before installing the PyTorch-dependent plugin.","message":"Some LLM plugins that depend on PyTorch (e.g., `llm-sentence-transformers`) may not install cleanly when `llm` itself is installed via Homebrew, due to Python version mismatches with PyTorch's stable releases.","severity":"gotcha","affected_versions":"All versions when installed via Homebrew with PyTorch-dependent plugins."},{"fix":"Set API keys as environment variables (e.g., `export OPENAI_API_KEY='sk-...'`) or use `llm keys set <provider_name>` for persistent storage. Pass `key=...` directly to `model.prompt()` only if the specific plugin supports it and the key is not sensitive for your use case.","message":"API keys for remote LLM providers must be configured correctly. The library looks for environment variables (e.g., `OPENAI_API_KEY`) or keys stored via the `llm keys set` CLI command.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always call `response.text()` (or `response.tool_calls()`, etc.) to trigger the actual model interaction and retrieve the result.","message":"Responses are evaluated lazily. If you inspect the `Response` object before calling `.text()`, it will show '... not yet done ...'; the actual API call is made when `.text()` is invoked.","severity":"gotcha","affected_versions":"All versions"}],
"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z",
"problems":[{"fix":"Check the model ID with `llm models`. For providers other than OpenAI, install the corresponding plugin, e.g. `llm install llm-gemini` for Gemini or `llm install llm-anthropic` for Anthropic.","cause":"The requested model ID or alias is not recognized, usually because it is misspelled or the corresponding provider plugin has not been installed.","error":"llm.UnknownModelError: Unknown model: 'gpt-4o-mini'"},{"fix":"Verify your API key. Set it as an environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or store it with `llm keys set <provider_name>`.","cause":"The request was sent without valid credentials; the provider rejected a missing or invalid API key.","error":"401 Unauthorized"},{"fix":"Verify your API key. Set it as an environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or use the CLI command `llm keys set <provider_name>` to store it securely.","cause":"The API key for the LLM provider is missing, incorrect, or expired.","error":"AuthenticationError: Invalid API key provided"},{"fix":"Implement retry logic with exponential backoff and reduce the frequency of your API calls.","cause":"You have exceeded the provider's rate limits (requests per minute or tokens per minute).","error":"429 Too Many Requests"},{"fix":"Implement retry logic with exponential backoff in your application. Reduce the frequency of your API calls. Check the provider's documentation for current rate limits and consider increasing your quota if necessary.","cause":"You have exceeded the rate limits (requests per minute or tokens per minute) imposed by the LLM provider.","error":"Rate limit reached for gpt-4o in organization org-xxx. Limit: 500 RPM."},{"fix":"Install the required plugin using `llm install llm-gemini`.","cause":"You are trying to use a Gemini model, but the `llm-gemini` plugin is not installed.","error":"ModuleNotFoundError: No module named 'llm_gemini'"}]}