llama-index-cli
llama-index-cli was a command-line interface tool for LlamaIndex, primarily designed to facilitate Retrieval-Augmented Generation (RAG) with local files without writing Python code. It allowed users to ingest documents and perform Q&A or chat within the terminal. The package is currently at version 0.5.7, but it has been officially deprecated and is no longer maintained as of April 2, 2026.
Warnings
- deprecated The `llama-index-cli` package is officially deprecated and no longer maintained. It will not receive further updates or bug fixes. Users are advised to consider alternatives such as the `create-llama` CLI tool for scaffolding LlamaIndex projects or interacting directly with the core `llama-index` Python library.
- gotcha By default, `llama-index-cli` uses OpenAI for embeddings and LLM, which requires setting the `OPENAI_API_KEY` environment variable. This means that, by default, any local data ingested with this tool will be sent to OpenAI's API.
- breaking Upgrading the underlying `llama-index` library to v0.10.0 or later introduced significant breaking changes, including a massive packaging refactor, deprecation of `ServiceContext`, and changes to import paths. While `llama-index-cli` itself is deprecated, users might encounter issues if their `llama-index` dependency is updated.
- gotcha Users installing `llama-index-cli` alongside `chromadb` (a default dependency for RAG) may encounter dependency conflicts, particularly with `onnxruntime` in certain environments (e.g., Conda). This can lead to installation failures.
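The v0.10 import-path change can be handled with a fallback import. A minimal sketch, using `VectorStoreIndex` as an illustrative symbol; it degrades gracefully when `llama-index` is not installed at all:

```python
# Sketch of the v0.10 import-path migration: try the new namespaced
# package first, then fall back to the pre-0.10 flat layout.
try:
    from llama_index.core import VectorStoreIndex  # v0.10.0+ path
except ImportError:
    try:
        from llama_index import VectorStoreIndex  # pre-v0.10 path
    except ImportError:
        VectorStoreIndex = None  # llama-index not installed

print("llama-index available:", VectorStoreIndex is not None)
```

Pinning `llama-index<0.10` is another way to sidestep the refactor, at the cost of staying on an unmaintained line.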
Install
-
pip install llama-index-cli
Imports
- RagCLI
from llama_index.cli.rag_cli import RagCLI
Quickstart
import os
import subprocess

# NOTE: llama-index-cli is deprecated. Use the main llama-index library
# or the 'create-llama' CLI for new projects.

# 1. Set your OpenAI API key (required by default for embeddings/LLM)
# Replace 'YOUR_OPENAI_API_KEY' with your actual key or set it as an env var.
os.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY')

# Create a dummy file for demonstration
with open('example.txt', 'w') as f:
    f.write('LlamaIndex is a data framework for LLM applications. '
            'It helps connect custom data sources with LLMs.')

print("Ingesting 'example.txt' and asking a question via llamaindex-cli...")

# 2. Ingest a file and ask a question
try:
    result = subprocess.run(
        ['llamaindex-cli', 'rag', '--files', 'example.txt', '--question', 'What is LlamaIndex?'],
        capture_output=True, text=True, check=True,
    )
    print("--- CLI Output ---")
    print(result.stdout)
    if result.stderr:
        print("--- CLI Errors (if any) ---")
        print(result.stderr)
except FileNotFoundError:
    print("Error: 'llamaindex-cli' command not found. Ensure the package is installed and in your PATH.")
except subprocess.CalledProcessError as e:
    print(f"Error running command: {e}")
    print(f"Stdout: {e.stdout}")
    print(f"Stderr: {e.stderr}")

# Clean up the dummy file
os.remove('example.txt')
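Because the quickstart shells out to the CLI, it can be convenient to assemble the argument list in one place. A small helper sketch: the `--files` and `--question` flags are the ones used above, while the `--chat` flag is an assumption based on the chat mode the tool advertises:

```python
def build_rag_command(files, question=None, chat=False):
    """Assemble an argument list for the deprecated `llamaindex-cli rag` command."""
    cmd = ['llamaindex-cli', 'rag']
    for path in files:
        cmd += ['--files', path]
    if question is not None:
        cmd += ['--question', question]
    if chat:
        cmd.append('--chat')  # assumed flag for interactive chat mode
    return cmd

# Example: the same invocation as in the quickstart above.
print(build_rag_command(['example.txt'], question='What is LlamaIndex?'))
```

The returned list can be passed straight to `subprocess.run(...)` as in the quickstart, keeping the flag handling in one testable function.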