{"id":9087,"library":"llama-index-vector-stores-pinecone","title":"LlamaIndex Pinecone Vector Store Integration","description":"This library provides the integration for using Pinecone as a vector store backend within LlamaIndex applications. It enables storing and retrieving document embeddings in a Pinecone index for efficient semantic search and Retrieval-Augmented Generation (RAG). As of version 0.8.0, it supports LlamaIndex's modular architecture, requiring separate installation from the core LlamaIndex library. It follows a frequent release cadence, often aligning with LlamaIndex core updates.","status":"active","version":"0.8.0","language":"en","source_language":"en","source_url":"https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/vector_stores/llama-index-vector-stores-pinecone","tags":["LlamaIndex","Pinecone","vector database","vector store","RAG","LLM","embeddings","AI"],"install":[{"cmd":"pip install llama-index-vector-stores-pinecone","lang":"bash","label":"Install only the Pinecone integration"},{"cmd":"pip install llama-index llama-index-vector-stores-pinecone pinecone-client openai","lang":"bash","label":"Install with LlamaIndex core, Pinecone client, and OpenAI (common setup)"}],"dependencies":[{"reason":"Core LlamaIndex functionality such as VectorStoreIndex and StorageContext.","package":"llama-index-core","optional":false},{"reason":"Official Python client for interacting with Pinecone.","package":"pinecone-client","optional":false},{"reason":"Commonly used embedding model integration for LlamaIndex applications.","package":"llama-index-embeddings-openai","optional":true},{"reason":"Often used for generating embeddings with OpenAI's models.","package":"openai","optional":true}],"imports":[{"symbol":"PineconeVectorStore","correct":"from llama_index.vector_stores.pinecone import PineconeVectorStore"},{"symbol":"VectorStoreIndex","correct":"from llama_index.core import VectorStoreIndex"},{"symbol":"SimpleDirectoryReader","correct":"from llama_index.core import SimpleDirectoryReader"},{"symbol":"StorageContext","correct":"from llama_index.core import StorageContext"},{"note":"The Pinecone client itself is imported directly from the `pinecone` package, not from the LlamaIndex integration.","wrong":"from llama_index.vector_stores.pinecone import Pinecone","symbol":"Pinecone","correct":"from pinecone import Pinecone"}],"quickstart":{"code":"import os\nfrom pinecone import Pinecone, ServerlessSpec\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext\nfrom llama_index.vector_stores.pinecone import PineconeVectorStore\n\n# Set your API keys (replace with actual keys or use environment variables)\nos.environ['PINECONE_API_KEY'] = os.environ.get('PINECONE_API_KEY', 'YOUR_PINECONE_API_KEY')\nos.environ['OPENAI_API_KEY'] = os.environ.get('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY')\n\n# Initialize Pinecone\npc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])\n\nindex_name = \"quickstart-index\"\nif index_name not in pc.list_indexes().names():\n    pc.create_index(\n        name=index_name,\n        dimension=1536,  # Dimension for OpenAI's text-embedding-ada-002\n        metric=\"cosine\",\n        spec=ServerlessSpec(cloud=\"aws\", region=\"us-west-2\")\n    )\n\npinecone_index = pc.Index(index_name)\n\n# Initialize PineconeVectorStore\nvector_store = PineconeVectorStore(pinecone_index=pinecone_index)\n\n# Load documents (create a 'data' directory with text files or adjust the path)\n# Note: SimpleDirectoryReader raises ValueError (not FileNotFoundError) when the\n# directory is missing, so catch both to be safe.\ntry:\n    documents = SimpleDirectoryReader(input_dir=\"./data\").load_data()\nexcept (ValueError, FileNotFoundError):\n    print(\"Please create a 'data' directory and add some text files, or modify the SimpleDirectoryReader path.\")\n    documents = []\n\nif documents:\n    # Set up StorageContext\n    storage_context = StorageContext.from_defaults(vector_store=vector_store)\n\n    # Create VectorStoreIndex\n    index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)\n\n    # Query the index\n    query_engine = index.as_query_engine()\n    response = query_engine.query(\"What is this document about?\")\n    print(response.response)\nelse:\n    print(\"No documents loaded. Skipping index creation and query.\")","lang":"python","description":"This quickstart demonstrates how to set up a Pinecone index, initialize `PineconeVectorStore`, load documents using `SimpleDirectoryReader`, and build a `VectorStoreIndex` for querying. It assumes `PINECONE_API_KEY` and `OPENAI_API_KEY` are set as environment variables."},"warnings":[{"fix":"Ensure you `pip install llama-index-core` and `pip install llama-index-vector-stores-pinecone`. Update imports from `from llama_index import ...` to `from llama_index.core import ...` for core components and `from llama_index.vector_stores.pinecone import ...` for this integration. The `ServiceContext` abstraction has also been deprecated; configure LLMs/embeddings directly or use the global `Settings` object.","message":"LlamaIndex v0.10.0 introduced a major packaging refactor. Core components moved to `llama-index-core`, and integrations such as `llama-index-vector-stores-pinecone` are now separate PyPI packages.","severity":"breaking","affected_versions":">=0.10.0 of `llama-index` core (and `llama-index-vector-stores-pinecone` versions compatible with it)"},{"fix":"When creating a Pinecone index (e.g., `pc.create_index`), ensure the `dimension` parameter matches the output dimension of your chosen embedding model (e.g., 1536 for OpenAI's `text-embedding-ada-002`). Refer to your embedding model's documentation for the correct dimension.","message":"Pinecone index dimensions must match the embedding model's output dimension. Mismatched dimensions will lead to errors during upsert operations.","severity":"gotcha","affected_versions":"All"},{"fix":"Verify `PINECONE_API_KEY` is correct and has access to the specified index. Check whether documents were successfully added to Pinecone. Review any `MetadataFilters` applied during querying to ensure they are not inadvertently excluding relevant results. For persistent issues, inspect Pinecone's dashboard to confirm index content and health.","message":"Inconsistent or empty query results from Pinecone often stem from issues with API keys, index state, overly restrictive filters, or problems during document ingestion.","severity":"gotcha","affected_versions":"All"},{"fix":"Use `pip install --upgrade` for specific packages to force the desired versions, or install `pinecone-client` directly with a version constraint that satisfies all dependencies (e.g., `pinecone-client>=4.0.0,<5.0.0`). Check the dependency requirements of all involved libraries.","message":"Compatibility issues can arise when `pinecone-client` is installed alongside other libraries that also depend on it (e.g., `langchain-pinecone`), leading to version downgrades or conflicts.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Remove any explicit usage of `service_context` when initializing `PineconeVectorStore` or `VectorStoreIndex`. Configure LLM and embedding models directly using `Settings` or by passing them as arguments to `VectorStoreIndex.from_documents()`.","cause":"`ServiceContext` was deprecated in LlamaIndex v0.10.0 and `PineconeVectorStore` no longer relies on it directly.","error":"AttributeError: 'PineconeVectorStore' object has no attribute 'service_context'"},{"fix":"Ensure the `dimension` parameter in `pc.create_index()` matches the output dimension of your embedding model. For example, if using OpenAI's `text-embedding-ada-002`, the dimension should be 1536.","cause":"The vector dimension generated by your embedding model does not match the dimension specified when creating the Pinecone index.","error":"pinecone.exceptions.PineconeException: The dimension of the vectors to be upserted (X) does not match the dimension of the index (Y)."},{"fix":"Implement a retry mechanism with a short delay (e.g., `time.sleep(5)`), or check `pc.describe_index(index_name).status` before proceeding with upserts or queries.","cause":"A newly created Pinecone index can take a short time to become active and ready for operations.","error":"Index 'your-index-name' is not ready. Please wait a few seconds and try again."}]}