{"id":9897,"library":"llama-index-embeddings-ollama","title":"LlamaIndex Ollama Embeddings","description":"The `llama-index-embeddings-ollama` library provides an integration for LlamaIndex to use embedding models served via Ollama. It allows developers to leverage local or self-hosted open-source models for text embeddings within their LlamaIndex applications. The current version is `0.9.0`; the package generally follows the release cadence of the main `llama-index-core` library and receives frequent updates.","status":"active","version":"0.9.0","language":"en","source_language":"en","source_url":"https://github.com/run-llama/llama-index/tree/main/llama-index-integrations/embeddings/llama-index-embeddings-ollama","tags":["LlamaIndex","embeddings","Ollama","LLM","local LLM","RAG"],"install":[{"cmd":"pip install llama-index-embeddings-ollama","lang":"bash","label":"Install package"}],"dependencies":[{"reason":"Core LlamaIndex framework required for integration.","package":"llama-index-core","optional":false},{"reason":"Python client for interacting with the Ollama server.","package":"ollama","optional":false}],"imports":[{"note":"Mistaking the LLM module for the embedding module is a common error, even though both might be Ollama-based.","wrong":"from llama_index.llms.ollama import OllamaEmbedding","symbol":"OllamaEmbedding","correct":"from llama_index.embeddings.ollama import OllamaEmbedding"},{"note":"Older LlamaIndex versions sometimes had integrations directly within `llama_index.core`. Modern integrations are typically separate packages.","wrong":"from llama_index.core.embeddings.ollama import OllamaEmbedding","symbol":"OllamaEmbedding","correct":"from llama_index.embeddings.ollama import OllamaEmbedding"}],"quickstart":{"code":"from llama_index.embeddings.ollama import OllamaEmbedding\n\n# Prerequisites:\n# 1. Ensure the Ollama server is running locally: `ollama serve` in your terminal.\n# 2. 
Pull the desired embedding model: `ollama pull llama2` (or another model such as `nomic-embed-text`)\n\nmodel_name = \"llama2\"  # Must already be pulled via `ollama pull llama2`\n\ntry:\n    # Initialize the Ollama embedding model.\n    # base_url is optional and defaults to http://localhost:11434.\n    embed_model = OllamaEmbedding(\n        model_name=model_name,\n        base_url=\"http://localhost:11434\",\n    )\n\n    # Get an embedding for a single piece of text\n    text_to_embed = \"This is an example sentence for LlamaIndex with Ollama embeddings.\"\n    embedding_vector = embed_model.get_text_embedding(text_to_embed)\n\n    print(\"Successfully generated embedding using OllamaEmbedding.\")\n    print(f\"Embedding vector length: {len(embedding_vector)}\")\n    print(f\"First 10 dimensions: {embedding_vector[:10]}\")\n\n    # You can also embed multiple texts in a batch\n    texts_to_embed_batch = [\n        \"LlamaIndex helps build LLM applications.\",\n        \"Ollama runs large language models locally and efficiently.\",\n    ]\n    embedding_vectors_batch = embed_model.get_text_embedding_batch(texts_to_embed_batch)\n    print(f\"\\nGenerated {len(embedding_vectors_batch)} embeddings in batch.\")\n    print(f\"Length of first batch embedding: {len(embedding_vectors_batch[0])}\")\n\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    if \"Connection refused\" in str(e) or \"Failed to connect\" in str(e):\n        print(\"Hint: Ensure the Ollama server is running (run `ollama serve`).\")\n    elif \"not found\" in str(e) or \"no such model\" in str(e):\n        print(f\"Hint: Ensure the model '{model_name}' is pulled (run `ollama pull {model_name}`).\")\n","lang":"python","description":"This quickstart demonstrates how to initialize the `OllamaEmbedding` model and generate embeddings for single texts and batches. 
It includes checks for common setup issues like the Ollama server not running or models not being pulled."},"warnings":[{"fix":"Run `ollama serve` in your terminal before running any code that uses Ollama models.","message":"The Ollama server must be running and accessible at the specified `base_url` (defaulting to `http://localhost:11434`). You typically start it with `ollama serve`.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Execute `ollama pull <model_name>` (e.g., `ollama pull llama2`) in your terminal to download the model.","message":"The model specified via `model_name` for `OllamaEmbedding` must already be pulled and available in your local Ollama instance (e.g., `llama2`, `nomic-embed-text`).","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always install compatible versions. Check `llama-index-embeddings-ollama`'s `pyproject.toml` or `requirements.txt` for `llama-index-core` version constraints, and update both packages simultaneously (e.g., `pip install -U llama-index-embeddings-ollama llama-index-core`).","message":"Major version updates of `llama-index-core` (e.g., from 0.9.x to 0.10.x, or 0.10.x to 0.11.x) often introduce breaking changes that may require updating `llama-index-embeddings-ollama` to a compatible version.","severity":"breaking","affected_versions":">=0.1.0"},{"fix":"Ensure your Python environment meets the `python_requires` specification. Use `pyenv` or `conda` to manage Python versions if necessary.","message":"This package requires Python version `3.10` or higher, but less than `4.0`.","severity":"gotcha","affected_versions":"All versions (earlier versions may have slightly different constraints)"}],"env_vars":null,"last_verified":"2026-04-17T00:00:00.000Z","next_check":"2026-07-16T00:00:00.000Z","problems":[{"fix":"Start the Ollama server by running `ollama serve` in your terminal. 
If it's running on a different port or host, specify it in `OllamaEmbedding(base_url='http://<host>:<port>')`.","cause":"The Ollama server is not running or is not accessible at the default address (`http://localhost:11434`).","error":"httpx.ConnectError: [Errno 61] Connection refused"},{"fix":"Pull the required model using the Ollama CLI: `ollama pull <model_name>` (e.g., `ollama pull llama2` or `ollama pull nomic-embed-text`).","cause":"The embedding model specified in `model_name` (e.g., 'non-existent-model') has not been pulled to your local Ollama instance.","error":"ollama.ResponseError: model 'non-existent-model' not found, try pulling it first"},{"fix":"Correct the import statement to `from llama_index.embeddings.ollama import OllamaEmbedding`.","cause":"Attempting to import `OllamaEmbedding` from the LlamaIndex LLM module for Ollama (`llama_index.llms.ollama`) instead of the dedicated embedding module.","error":"ImportError: cannot import name 'OllamaEmbedding' from 'llama_index.llms.ollama'"}]}