LlamaIndex OpenAI-Like LLM Integration
This package provides a LlamaIndex integration for OpenAI-compatible Large Language Models (LLMs). It is a thin wrapper that lets LlamaIndex applications talk to any API that mimics the OpenAI API, making it easy to target third-party or self-hosted LLM providers. The current version is 0.7.1, with releases typically tied to updates across the broader LlamaIndex ecosystem.
Common errors
- `ModuleNotFoundError: No module named 'llama_index.llms.openai_like'`
  - cause: The `llama-index-llms-openai-like` package is not installed in the current Python environment.
  - fix: Run `pip install llama-index-llms-openai-like` to install the package.
- `NotFoundError: 404 page not found` (when using `OpenAILike` with local LLMs such as Ollama)
  - cause: The `api_base` URL or the request format (headers, body, endpoint path) configured in `OpenAILike` does not match what the local OpenAI-compatible server expects. This often happens with local models (e.g., Ollama) when the `/v1` or `/v1/chat/completions` path is missing from the URL or `is_chat_model` is misconfigured.
  - fix: Verify that `api_base` is the *full* endpoint URL, including any required API version path (e.g., `http://localhost:11434/v1`). Ensure `is_chat_model` is set correctly (`True` for chat models, `False` for completion models) and that the `model` name is one your local server recognizes. If possible, compare your Python request with a successful `curl` command.
- `ModuleNotFoundError: No module named 'llama_index.llms.openai'`
  - cause: Another LlamaIndex integration, or an older LlamaIndex version, is implicitly importing `llama_index.llms.openai` while only `llama-index-llms-openai-like` is installed; in other words, `llama-index-llms-openai` is a required dependency that is missing.
  - fix: If your project depends, directly or indirectly, on `llama-index-llms-openai`, install it with `pip install llama-index-llms-openai`. If you are using only the `openai-like` integration, make sure all relevant LlamaIndex components are configured to use `OpenAILike` and that no lingering imports or dependencies reference the OpenAI-specific package.
Warnings
- breaking LlamaIndex v0.10+ introduced a significant packaging refactor. While `llama-index-llms-openai-like` maintains its direct import path, ensure your other LlamaIndex components (core, embeddings, etc.) are updated to their respective integration packages or use `llama-index-core` to avoid conflicts.
- gotcha `api_key` and `api_base` parameters for `OpenAILike` are crucial. `api_base` must point to the correct OpenAI-compatible API endpoint, and `api_key` must be provided, even if it's a dummy string (e.g., 'fake') for APIs that don't require authentication. Incorrect values will lead to connection or authentication errors.
- gotcha Parameters like `context_window`, `is_chat_model`, and `is_function_calling_model` directly influence how `OpenAILike` interacts with the underlying LLM. Incorrect settings (e.g., `is_chat_model=False` for a chat-only API) can cause unexpected behavior or errors.
Install
```shell
pip install llama-index-llms-openai-like
```
Imports
- `OpenAILike`

```python
from llama_index.llms.openai_like import OpenAILike
```
Quickstart
```python
import os

from llama_index.llms.openai_like import OpenAILike

# Replace with your actual model name and API base URL.
# If your API doesn't require an API key, set it to a dummy string like "fake".
# Set is_chat_model and context_window to match your model's capabilities.
llm = OpenAILike(
    model="my-openai-compatible-model",
    api_base=os.environ.get("OPENAI_COMPATIBLE_API_BASE", "http://localhost:8000/v1"),
    api_key=os.environ.get("OPENAI_COMPATIBLE_API_KEY", "fake-api-key"),
    is_chat_model=True,
    context_window=4096,  # Adjust based on your model's context window
)

response = llm.complete("Tell me a short story about a brave knight.")
print(response.text)
```