OpenAI LLM Integration for LlamaIndex

0.7.5 · active · verified Thu Apr 09

This package provides the OpenAI Large Language Model (LLM) integration for LlamaIndex (version 0.7.5). LlamaIndex is a data framework designed to connect LLMs with your private or domain-specific data, enabling applications like RAG (Retrieval Augmented Generation). This integration lets LlamaIndex users leverage various OpenAI models for text completion, chat generation, streaming responses, and structured outputs within their LlamaIndex applications. The library is actively maintained, with releases tied to updates across the broader LlamaIndex ecosystem.

Warnings

Install
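The integration is published on PyPI as `llama-index-llms-openai` and can be installed with pip (this pulls in `llama-index-core` as a dependency):

```shell
pip install llama-index-llms-openai
```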

Imports
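The names used in the Quickstart below come from two places: the LLM class from this integration package, and the chat message type from LlamaIndex core:

```python
# Main LLM class provided by this integration package
from llama_index.llms.openai import OpenAI

# ChatMessage lives in the core package, not in this integration
from llama_index.core.llms import ChatMessage
```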

Quickstart

This quickstart demonstrates how to initialize the OpenAI LLM and perform basic text completion and chat interactions. It assumes your OpenAI API key is set as an environment variable (`OPENAI_API_KEY`). You can explicitly pass the `api_key` argument during initialization if preferred. It also shows how to use ChatMessage from `llama_index.core.llms` for chat interactions.

import os
from llama_index.llms.openai import OpenAI

# Ensure your OpenAI API key is set as an environment variable (OPENAI_API_KEY)
# If not, you can pass it directly to the OpenAI constructor:
# llm = OpenAI(api_key="your_api_key_here", model="gpt-3.5-turbo")

openai_api_key = os.environ.get('OPENAI_API_KEY', 'YOUR_OPENAI_API_KEY')
if openai_api_key == 'YOUR_OPENAI_API_KEY':
    print("WARNING: Please set the OPENAI_API_KEY environment variable or replace 'YOUR_OPENAI_API_KEY' with your actual key.")

llm = OpenAI(
    model="gpt-3.5-turbo", 
    api_key=openai_api_key # Uses env var if set, otherwise the placeholder
)

# Example: Generate a completion
resp = llm.complete("Tell me a short story about a brave knight.")
print(f"Completion: {resp}")

# Example: Send a chat message
from llama_index.core.llms import ChatMessage
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is the capital of Canada?")
]
chat_resp = llm.chat(messages)
# chat() returns a ChatResponse; the text lives on its .message attribute
print(f"Chat Response: {chat_resp.message.content}")
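The overview above also mentions streaming responses. As a sketch (assuming the same `llm` instance and `messages` list from the Quickstart, plus a valid API key), both interfaces have streaming counterparts, `stream_complete` and `stream_chat`, which yield incremental chunks:

```python
# Streaming completion: each chunk exposes the newly generated text in `.delta`
for chunk in llm.stream_complete("Tell me a short story about a brave knight."):
    print(chunk.delta, end="", flush=True)
print()

# Streaming chat works the same way over a list of ChatMessage objects
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
print()
```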
