Qdrant Vector Store for LlamaIndex

0.10.0 · active · verified Thu Apr 16

The `llama-index-vector-stores-qdrant` library provides an integration for using Qdrant as a vector store within the LlamaIndex framework. It lets users store and retrieve vector embeddings efficiently when building Retrieval-Augmented Generation (RAG) applications. The integration supports several Qdrant features, including hybrid search, and is part of the LlamaIndex v0.10 ecosystem, which adopted a modular architecture with separate integration packages.

Install
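Install the integration package together with the Qdrant client and, for the quickstart below, the OpenAI embedding package:

```shell
pip install llama-index-vector-stores-qdrant llama-index-embeddings-openai qdrant-client
```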

Imports
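The quickstart below uses these imports; note that in LlamaIndex v0.10+ the embedding class lives in the separate `llama-index-embeddings-openai` package:

```python
from llama_index.core import VectorStoreIndex, Document, Settings
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.embeddings.openai import OpenAIEmbedding
from qdrant_client import QdrantClient
```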

Quickstart

This quickstart demonstrates how to initialize an in-memory Qdrant client, create a `QdrantVectorStore`, configure the global LlamaIndex `Settings` with an OpenAI embedding model, index a few sample documents, and then query the index. It reflects the modular package layout introduced in LlamaIndex v0.10.

import os
from llama_index.core import VectorStoreIndex, Document
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core import Settings

# Ensure you have your OpenAI API key set as an environment variable
# os.environ["OPENAI_API_KEY"] = "sk-..."

# Fall back to a placeholder key so the snippet can run end-to-end;
# any real OpenAI call will still fail until a valid key is set.
if not os.environ.get("OPENAI_API_KEY"):
    print("Warning: OPENAI_API_KEY not set. Using a placeholder key for example purposes.")
    os.environ["OPENAI_API_KEY"] = "sk-dummy-key"

# Initialize Qdrant client (local in-memory for quick start, or connect to a server)
# For persistent storage, use QdrantClient(path="./qdrant_data") or connect to a running Qdrant instance
client = QdrantClient(location=":memory:") # In-memory Qdrant instance

# Create a QdrantVectorStore instance
vector_store = QdrantVectorStore(client=client, collection_name="my_documents")

# Configure LlamaIndex settings (important for v0.10+)
Settings.embed_model = OpenAIEmbedding()

# Create some sample documents
documents = [
    Document(text="LlamaIndex is a data framework for LLM applications."),
    Document(text="Qdrant is a vector similarity search engine."),
    Document(text="Combining LlamaIndex with Qdrant enables powerful RAG systems."),
]

# Wire the vector store into a StorageContext and build the index;
# from_documents() takes a storage_context, not a vector_store, keyword
from llama_index.core import StorageContext

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What is LlamaIndex?")
print(f"Response: {response}")

response = query_engine.query("What is Qdrant used for?")
print(f"Response: {response}")
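The integration also supports Qdrant's hybrid (dense + sparse) search. A minimal sketch, assuming the optional `fastembed` dependency is installed (`pip install fastembed`) so the store can generate sparse vectors:

```python
# Create the store in hybrid mode; a sparse vector is stored alongside
# the dense embedding for each node
hybrid_store = QdrantVectorStore(
    client=client,
    collection_name="my_documents_hybrid",
    enable_hybrid=True,
)

# When querying an index built on a hybrid store, request hybrid mode:
# query_engine = index.as_query_engine(
#     vector_store_query_mode="hybrid",
#     sparse_top_k=5,  # how many sparse-match candidates to consider
# )
```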
