{"id":1538,"library":"llama-index-llms-openai","title":"OpenAI LLM Integration for LlamaIndex","description":"This package provides the OpenAI Large Language Model (LLM) integration for LlamaIndex (version 0.7.5). LlamaIndex is a data framework designed to connect LLMs with your private or domain-specific data, enabling applications like RAG (Retrieval Augmented Generation). This integration allows LlamaIndex users to leverage various OpenAI models for text completion, chat generation, streaming responses, and structured outputs within their LlamaIndex applications. The library is actively maintained and releases are tied to the broader LlamaIndex ecosystem updates.","status":"active","version":"0.7.5","language":"en","source_language":"en","source_url":"https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/llms/llama-index-llms-openai","tags":["LLM","OpenAI","LlamaIndex","AI","NLP","integration","large language models"],"install":[{"cmd":"pip install llama-index-llms-openai","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"This package is an integration for the core LlamaIndex framework. 'llama-index-core' provides the fundamental abstractions.","package":"llama-index-core","optional":false},{"reason":"The underlying Python client library for interacting with OpenAI's API.","package":"openai","optional":false}],"imports":[{"note":"Beginning with LlamaIndex 0.10.x, direct imports from specific integration packages are required due to modularization. 
The older pattern (`from llama_index.llms import OpenAI`) is no longer supported in these versions.","wrong":"from llama_index.llms import OpenAI","symbol":"OpenAI","correct":"from llama_index.llms.openai import OpenAI"},{"note":"ChatMessage is part of the core LlamaIndex LLM abstractions.","symbol":"ChatMessage","correct":"from llama_index.core.llms import ChatMessage"}],"quickstart":{"code":"import os\n\nfrom llama_index.core.llms import ChatMessage\nfrom llama_index.llms.openai import OpenAI\n\n# Ensure your OpenAI API key is set as an environment variable (OPENAI_API_KEY).\n# Alternatively, pass it directly to the OpenAI constructor:\n# llm = OpenAI(api_key=\"your_api_key_here\", model=\"gpt-3.5-turbo\")\n\nopenai_api_key = os.environ.get(\"OPENAI_API_KEY\", \"YOUR_OPENAI_API_KEY\")\nif openai_api_key == \"YOUR_OPENAI_API_KEY\":\n    print(\"WARNING: Set the OPENAI_API_KEY environment variable or replace 'YOUR_OPENAI_API_KEY' with your actual key.\")\n\nllm = OpenAI(\n    model=\"gpt-3.5-turbo\",\n    api_key=openai_api_key,  # uses the env var if set, otherwise the placeholder\n)\n\n# Example: generate a completion\nresp = llm.complete(\"Tell me a short story about a brave knight.\")\nprint(f\"Completion: {resp}\")\n\n# Example: send a chat message\nmessages = [\n    ChatMessage(role=\"system\", content=\"You are a helpful assistant.\"),\n    ChatMessage(role=\"user\", content=\"What is the capital of Canada?\"),\n]\nchat_resp = llm.chat(messages)\n# llm.chat returns a ChatResponse; the generated text lives on its .message attribute\nprint(f\"Chat Response: {chat_resp.message.content}\")\n","lang":"python","description":"This quickstart demonstrates how to initialize the OpenAI LLM and perform basic text completion and chat interactions. It assumes your OpenAI API key is set as an environment variable (`OPENAI_API_KEY`). You can explicitly pass the `api_key` argument during initialization if preferred. 
It also shows how to use ChatMessage from `llama_index.core.llms` for chat interactions."},"warnings":[{"fix":"Update your import statements from `from llama_index.llms import OpenAI` to `from llama_index.llms.openai import OpenAI`.","message":"With LlamaIndex v0.10.x and newer, direct imports of LLMs and other integrations from the monolithic `llama_index` package have been removed due to modularization. You must now import `OpenAI` directly from `llama_index.llms.openai`.","severity":"breaking","affected_versions":"LlamaIndex >= 0.10.0"},{"fix":"Migrate from `ServiceContext` to `Settings`. Instead of `service_context = ServiceContext.from_defaults(llm=OpenAI())`, use `from llama_index.core import Settings; Settings.llm = OpenAI()`. Explicitly set your LLM and embedding model.","message":"The `ServiceContext` object has been deprecated in LlamaIndex v0.10.x and completely removed in v0.11.x. Global LLM and embedding configurations are now managed through the `Settings` object, and explicit setting of LLMs (like OpenAI) is required.","severity":"breaking","affected_versions":"LlamaIndex >= 0.10.0"},{"fix":"Set the `OPENAI_API_KEY` environment variable before running your application (e.g., `export OPENAI_API_KEY='sk-...'`). Alternatively, pass the API key directly to the `OpenAI` constructor: `llm = OpenAI(api_key='sk-...', model='gpt-3.5-turbo')`.","message":"The OpenAI LLM client, by default, expects your OpenAI API key to be set as an environment variable named `OPENAI_API_KEY`. 
If this variable is not set, you will encounter authentication errors.","severity":"gotcha","affected_versions":"All"},{"fix":"If not installing the full `llama-index` package, install both packages explicitly: `pip install llama-index-core llama-index-llms-openai`.","message":"While `pip install llama-index` installs a starter bundle that includes `llama-index-core` and `llama-index-llms-openai`, if you install only `llama-index-llms-openai` directly, ensure `llama-index-core` is also installed, as it provides the fundamental LlamaIndex abstractions.","severity":"gotcha","affected_versions":"All"},{"fix":"Initialize the LLM with the desired model, e.g., `llm = OpenAI(model=\"gpt-4o\")` or `llm = OpenAI(model=\"gpt-3.5-turbo\")`.","message":"Although a default model (often `gpt-3.5-turbo`) is used when none is specified, it is best practice to explicitly set the `model` parameter when initializing `OpenAI` to ensure consistency and control, especially when using newer models or specific capabilities.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}