{"id":1536,"library":"llama-index-core","title":"LlamaIndex Core","description":"LlamaIndex Core provides the foundational interface and components for building LLM-powered applications, enabling users to connect large language models with their private or domain-specific data. It includes data structures, indexing tools, query engines, and basic abstractions for LLMs and embedding models. The current version is 0.14.20, with frequent, often daily, releases across its modular ecosystem.","status":"active","version":"0.14.20","language":"en","source_language":"en","source_url":"https://github.com/run-llama/llama_index","tags":["LLM","RAG","Data Indexing","Framework","AI","Vector Database"],"install":[{"cmd":"pip install llama-index-core","lang":"bash","label":"Install core package"},{"cmd":"pip install 'llama-index-llms-openai' 'llama-index-embeddings-openai'","lang":"bash","label":"Install common integrations (e.g., OpenAI)"}],"dependencies":[{"reason":"Requires Python 3.10 or higher, less than 4.0.","package":"python","optional":false}],"imports":[{"symbol":"VectorStoreIndex","correct":"from llama_index.core import VectorStoreIndex"},{"symbol":"SimpleDirectoryReader","correct":"from llama_index.core.readers import SimpleDirectoryReader"},{"note":"ServiceContext was deprecated and largely replaced by the global Settings object or explicit passing of components.","wrong":"from llama_index.core import ServiceContext","symbol":"Settings","correct":"from llama_index.core import Settings"},{"symbol":"OpenAI","correct":"from llama_index.llms.openai import OpenAI"},{"symbol":"OpenAIEmbedding","correct":"from llama_index.embeddings.openai import OpenAIEmbedding"}],"quickstart":{"code":"import os\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.embeddings.openai import OpenAIEmbedding\n\n# Ensure you have your OpenAI API key set as an environment variable:\n# os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\nOPENAI_API_KEY = os.environ.get('OPENAI_API_KEY', '')\n\nif not OPENAI_API_KEY:\n    raise ValueError(\"OPENAI_API_KEY environment variable not set.\")\n\n# Create a dummy data directory and file\nos.makedirs(\"data\", exist_ok=True)\nwith open(\"data/hello.txt\", \"w\") as f:\n    f.write(\"The quick brown fox jumps over the lazy dog.\\n\")\n    f.write(\"LlamaIndex is a data framework for LLM applications.\")\n\n# 1. Load data\ndocuments = SimpleDirectoryReader(\"data\").load_data()\n\n# 2. Configure global settings (LLM and embedding model)\nSettings.llm = OpenAI(api_key=OPENAI_API_KEY, model=\"gpt-3.5-turbo\")\nSettings.embed_model = OpenAIEmbedding(api_key=OPENAI_API_KEY, model=\"text-embedding-ada-002\")\n\n# 3. Create an index\nindex = VectorStoreIndex.from_documents(documents)\n\n# 4. Create a query engine\nquery_engine = index.as_query_engine()\n\n# 5. Query the index\nresponse = query_engine.query(\"What is LlamaIndex?\")\nprint(response.response)\n","lang":"python","description":"This quickstart demonstrates loading data, configuring the LLM and embedding model via global `Settings`, creating a vector store index, and performing a simple query. Ensure you have the `OPENAI_API_KEY` environment variable set and the `llama-index-llms-openai` and `llama-index-embeddings-openai` packages installed."},"warnings":[{"fix":"Migrate imports and installations to use `llama-index-core` for base classes and explicitly install `llama-index-<component>-<integration_name>` packages for specific integrations. Update import paths from the monolithic `llama_index.<component>` style to the per-package `llama_index.<component>.<integration>` style (e.g., `from llama_index.llms.openai import OpenAI` instead of `from llama_index.llms import OpenAI`).","message":"Major architectural shift to a modular package structure in versions ~0.10.x and onwards. Core functionalities moved to `llama-index-core`, and all LLM, embedding, vector store, etc., integrations became separate packages (e.g., `llama-index-llms-openai`, `llama-index-embeddings-openai`).","severity":"breaking","affected_versions":">=0.10.0"},{"fix":"Replace `ServiceContext.from_defaults(...)` with direct assignments to `Settings.llm`, `Settings.embed_model`, `Settings.chunk_size`, etc. Explicitly pass components where granular control is needed.","message":"The `ServiceContext` class was deprecated in 0.10.x and subsequently removed; the global `Settings` object is now the recommended way to configure LLMs, embedding models, chunk sizes, etc.","severity":"breaking","affected_versions":">=0.10.0"},{"fix":"Upgrade your Python environment to 3.10 or higher. The library currently targets `<4.0,>=3.10`.","message":"Support for Python 3.9 has been officially deprecated and removed.","severity":"deprecated","affected_versions":">=0.14.18"},{"fix":"Always explicitly configure your desired LLM and embedding model via `Settings.llm` and `Settings.embed_model` or by passing them directly to constructors. Ensure relevant integration packages are installed (e.g., `llama-index-llms-anthropic`, `llama-index-embeddings-huggingface`).","message":"Many LlamaIndex components (LLMs, embeddings, vector stores, data loaders) default to using `openai` if not explicitly configured. This often leads to `openai` being a de-facto dependency for basic usage, requiring an API key even if not intended.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Consult the migration guides in the official LlamaIndex documentation when upgrading across major architectural changes. Pay close attention to how document content and metadata are accessed and stored.","message":"When migrating from older versions, `Document` and `Node` structures might have subtle differences in metadata handling and content fields. For instance, `text` vs `content` or `extra_info` vs `metadata`.","severity":"gotcha","affected_versions":"Pre-0.10.x to Post-0.10.x migrations"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}