OpenAI LLM Integration for LlamaIndex
This package provides the OpenAI Large Language Model (LLM) integration for LlamaIndex (package version 0.7.5). LlamaIndex is a data framework that connects LLMs with your private or domain-specific data, enabling applications such as Retrieval-Augmented Generation (RAG). This integration lets LlamaIndex users leverage OpenAI models for text completion, chat generation, streaming responses, and structured outputs within their LlamaIndex applications. The library is actively maintained, and releases track updates to the broader LlamaIndex ecosystem.
Warnings
- breaking With LlamaIndex v0.10.x and newer, direct imports of LLMs and other integrations from the root `llama_index` package have been removed due to modularization. You must now import `OpenAI` directly from `llama_index.llms.openai`.
- breaking The `ServiceContext` object was deprecated in LlamaIndex v0.10.x and removed entirely in v0.11.x. Global LLM and embedding configuration is now managed through the `Settings` object, so the LLM (such as OpenAI) should be set explicitly via `Settings.llm`.
- gotcha The OpenAI LLM client, by default, expects your OpenAI API key to be set as an environment variable named `OPENAI_API_KEY`. If this variable is not set, you will encounter authentication errors.
- gotcha While `pip install llama-index` includes a starter bundle with `llama-index-core` and `llama-index-llms-openai`, if you're installing only `llama-index-llms-openai` directly, ensure `llama-index-core` is also installed, as it provides fundamental LlamaIndex abstractions.
- gotcha Although a default model (often `gpt-3.5-turbo`) might be used, it is best practice to explicitly specify the `model` parameter when initializing `OpenAI` to ensure consistency and control, especially when using newer models or specific capabilities.
Install
-
pip install llama-index-llms-openai
Imports
- OpenAI
from llama_index.llms.openai import OpenAI
- ChatMessage
from llama_index.core.llms import ChatMessage
Quickstart
import os
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

# Ensure your OpenAI API key is set as an environment variable (OPENAI_API_KEY).
# If it is not, you can pass the key directly to the OpenAI constructor:
# llm = OpenAI(api_key="your_api_key_here", model="gpt-3.5-turbo")
openai_api_key = os.environ.get("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
if openai_api_key == "YOUR_OPENAI_API_KEY":
    print("WARNING: Please set the OPENAI_API_KEY environment variable or replace 'YOUR_OPENAI_API_KEY' with your actual key.")

llm = OpenAI(
    model="gpt-3.5-turbo",
    api_key=openai_api_key,  # uses the env var if set, otherwise the placeholder
)

# Example: generate a completion
resp = llm.complete("Tell me a short story about a brave knight.")
print(f"Completion: {resp}")

# Example: send a chat conversation
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is the capital of Canada?"),
]
chat_resp = llm.chat(messages)
# llm.chat returns a ChatResponse; the generated text lives on .message.content
print(f"Chat Response: {chat_resp.message.content}")
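The streaming responses mentioned in the introduction use the same client. A sketch below: `stream_complete` and the per-chunk `.delta` field are part of the LlamaIndex LLM interface, while `stream_print` is a hypothetical helper name introduced here for illustration.

```python
def stream_print(llm, prompt: str) -> str:
    """Print tokens as they arrive and return the assembled text."""
    text = ""
    # stream_complete yields CompletionResponse chunks as tokens arrive;
    # each chunk's .delta holds only the newly generated text.
    for chunk in llm.stream_complete(prompt):
        print(chunk.delta or "", end="", flush=True)
        text += chunk.delta or ""
    return text

# Usage (requires llama-index-llms-openai and OPENAI_API_KEY):
# from llama_index.llms.openai import OpenAI
# llm = OpenAI(model="gpt-3.5-turbo")
# stream_print(llm, "Name three uses for LlamaIndex.")
```

There is an analogous `stream_chat` method that takes a list of `ChatMessage` objects and yields `ChatResponse` chunks.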