LlamaIndex
0.14.15 · verified Tue May 12 · auth: no · python install: reviewed · quickstart: stale
LlamaIndex is a data framework for building LLM-powered agents over your data. It specializes in RAG pipelines, document parsing, and agent workflows. The core package is llama-index-core; integrations are separate packages installed from LlamaHub.
pip install llama-index

Common errors
error ModuleNotFoundError: No module named 'llama_index.query_engine' ↓
cause LlamaIndex v0.10 and later moved core modules under the llama_index.core namespace.
fix Update your imports to use llama_index.core for core components. For example, change from llama_index.query_engine import RetrieverQueryEngine to from llama_index.core.query_engine import RetrieverQueryEngine.

error ImportError: cannot import name 'ServiceContext' from 'llama_index.core' ↓
cause The ServiceContext abstraction was deprecated in v0.10.0 and replaced by the more modular Settings object for global configuration, or by passing parameters directly.
fix Remove ServiceContext and configure LLMs, embedding models, etc. via the global Settings object or by passing them directly to the relevant components. For example, use from llama_index.core import Settings and then Settings.llm = OpenAI() instead of ServiceContext.from_defaults(llm=OpenAI()).

error ModuleNotFoundError: No module named 'llama_index.llms.huggingface' ↓
cause Since v0.10.0, integrations (LLMs, embedding models, vector stores) ship as separate PyPI packages; the specific integration package is not installed.
fix Install the integration package with pip. For HuggingFace LLMs, run pip install llama-index-llms-huggingface. The general pattern is pip install llama-index-llms-<provider>, llama-index-embeddings-<provider>, or llama-index-vector-stores-<provider>.

error AttributeError: module 'llama_index' has no attribute '__version__' ↓
cause Older code, or a dependency, reads __version__ from the top-level llama_index module; in v0.10.x and later the attribute moved (for instance to llama_index.core.__version__) and is no longer exposed the same way at the top level.
fix Ensure all llama-index packages are updated to compatible versions; libraries you integrate with may also need updating for the new package structure. Checking llama_index.core.__version__ reveals the installed version, but the underlying issue is usually a version mismatch or an outdated dependency expecting the old layout.

error ValueError: LLM must be a FunctionCallingLLM ↓
cause An agent workflow (such as AgentWorkflow) requires an LLM with function-calling support, but the configured LLM does not implement the FunctionCallingLLM interface or does not set is_function_calling_model in its metadata.
fix Use an LLM that explicitly supports function calling (such as OpenAI's models), or ensure your custom LLM sets llm.metadata.is_function_calling_model = True and implements the required function-calling methods. If your LLM cannot support function calling, use an agent that does not require it (for example, a ReActAgent instead of a FunctionAgent).

Warnings
breaking ServiceContext is fully removed. All code using ServiceContext.from_defaults() will fail. ↓
fix Use Settings global object: from llama_index.core import Settings
breaking AgentRunner, AgentWorker, FunctionCallingAgent, OpenAIAgent, StructuredAgentPlanner all removed. ↓
fix Migrate to AgentWorkflow, FunctionAgent, CodeActAgent, or ReActAgent (new workflow-based)
breaking QueryPipeline class removed. ↓
fix Use Workflows-based approach instead
breaking llama-index-legacy package deprecated and removed from repository. ↓
fix Migrate fully to llama-index-core and current integration packages
breaking GPTVectorStoreIndex, GPTSimpleKeywordTableIndex and all GPT-prefixed index names removed. ↓
fix Use VectorStoreIndex, SimpleKeywordTableIndex etc.
gotcha pip install llama-index alone installs OpenAI integrations by default. For other providers install llama-index-core plus the specific integration package. ↓
fix pip install llama-index-core llama-index-llms-anthropic for Claude access
gotcha index.as_chat_engine() default changed to CondensePlusContextChatEngine in 0.14.x. ↓
fix Explicitly pass chat_mode if you relied on previous default behaviour
gotcha SimpleDirectoryReader requires the specified directory (e.g., 'data') to exist at runtime. If the directory is missing, a ValueError will be raised. ↓
fix Ensure the directory specified in SimpleDirectoryReader (e.g., 'data') exists and contains your data files in the environment where the script is executed.
Install

pip install llama-index-core
pip install llama-index-core llama-index-llms-openai llama-index-embeddings-openai

Install compatibility · reviewed · last tested: 2026-05-12
python  os / libc variant  package  wheel  install  import  disk
3.10 alpine (musl) llama-index - - 4.81s 367.1M
3.10 alpine (musl) llama-index-core - - 4.76s 349.6M
3.10 alpine (musl) llama-index-core - - 4.75s 366.9M
3.10 slim (glibc) llama-index - - 3.61s 431M
3.10 slim (glibc) llama-index-core - - 3.62s 414M
3.10 slim (glibc) llama-index-core - - 3.60s 431M
3.11 alpine (musl) llama-index - - 5.96s 401.3M
3.11 alpine (musl) llama-index-core - - 5.91s 382.8M
3.11 alpine (musl) llama-index-core - - 6.00s 401.1M
3.11 slim (glibc) llama-index - - 5.05s 465M
3.11 slim (glibc) llama-index-core - - 4.93s 447M
3.11 slim (glibc) llama-index-core - - 5.04s 464M
3.12 alpine (musl) llama-index - - 5.34s 389.3M
3.12 alpine (musl) llama-index-core - - 5.43s 370.9M
3.12 alpine (musl) llama-index-core - - 5.36s 389.0M
3.12 slim (glibc) llama-index - - 5.28s 453M
3.12 slim (glibc) llama-index-core - - 5.51s 435M
3.12 slim (glibc) llama-index-core - - 5.48s 452M
3.13 alpine (musl) llama-index - - 5.08s 385.1M
3.13 alpine (musl) llama-index-core - - 5.04s 366.8M
3.13 alpine (musl) llama-index-core - - 5.09s 384.9M
3.13 slim (glibc) llama-index - - 5.18s 450M
3.13 slim (glibc) llama-index-core - - 5.04s 433M
3.13 slim (glibc) llama-index-core - - 5.16s 450M
3.9 alpine (musl) llama-index - - - -
3.9 alpine (musl) llama-index-core - - - -
3.9 alpine (musl) llama-index-core - - - -
3.9 slim (glibc) llama-index - - - -
3.9 slim (glibc) llama-index-core - - - -
3.9 slim (glibc) llama-index-core - - - -
Imports

- VectorStoreIndex
  wrong: from llama_index import GPTVectorStoreIndex
  correct: from llama_index.core import VectorStoreIndex
- OpenAI (LLM)
  wrong: from llama_index import OpenAI
  correct: from llama_index.llms.openai import OpenAI
- AgentWorkflow
  wrong: from llama_index.core.agent import AgentRunner
  correct: from llama_index.core.agent.workflow import AgentWorkflow
- Settings
  wrong: from llama_index.core import ServiceContext; service_context = ServiceContext.from_defaults(llm=...)
  correct: from llama_index.core import Settings; Settings.llm = OpenAI(model='gpt-4o')
Quickstart · stale · last tested: 2026-05-12
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# Global defaults (replaces the removed ServiceContext)
Settings.llm = OpenAI(model='gpt-4o')
Settings.embed_model = OpenAIEmbedding(model='text-embedding-3-small')

# Load files from ./data (the directory must exist)
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query('What did the author do growing up?')
print(response)