Google GenAI LLM Integration for LlamaIndex

0.9.1 · active · verified Thu Apr 16

This library provides the official integration for connecting LlamaIndex applications with Google's Generative AI models, including Gemini and Vertex AI. It allows LlamaIndex to leverage Google's LLMs for tasks such as text completion, chat interactions, function calling, and structured prediction. As of version 0.9.1, it is actively maintained with a regular release cadence, often aligning with updates to the core LlamaIndex framework.

Common errors

Warnings

Install
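The integration ships as its own package on PyPI, separate from LlamaIndex core:

```shell
pip install llama-index-llms-google-genai
```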

Imports
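The quickstart below relies on a single import from the integration package:

```python
from llama_index.llms.google_genai import GoogleGenAI
```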

Quickstart

This quickstart demonstrates how to initialize the GoogleGenAI LLM and perform a basic text completion. It assumes the `GOOGLE_API_KEY` environment variable is set; if it is not, replace the `YOUR_GOOGLE_API_KEY_HERE` placeholder with your actual key for testing.

import os
from llama_index.llms.google_genai import GoogleGenAI

# Ensure your Google API key is available; it can also be passed
# directly to the constructor: GoogleGenAI(api_key="...")
if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_KEY_HERE"

llm = GoogleGenAI(model="gemini-1.5-flash")  # or "gemini-1.5-pro", "gemini-2.0-flash", etc.

response = llm.complete("What is the capital of France?")
print(response)
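Beyond one-shot completion, the same client supports multi-turn chat through LlamaIndex's standard `ChatMessage` interface. A minimal sketch (the model name is illustrative; a call requires a valid API key):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-1.5-flash")

# A chat turn is a list of role-tagged messages
messages = [
    ChatMessage(role="system", content="You are a terse assistant."),
    ChatMessage(role="user", content="What is the capital of France?"),
]

# chat() returns a ChatResponse; the reply text lives on .message.content
response = llm.chat(messages)
print(response.message.content)

# Streaming variant: iterate over incremental deltas as they arrive
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="")
```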
