LangChain Google Generative AI

4.2.1 · active · verified Thu Apr 09

LangChain Google Generative AI provides integrations with Google's generative AI models (such as Gemini) for use within the LangChain framework. It offers classes for chat models (`ChatGoogleGenerativeAI`), traditional LLMs (`GoogleGenerativeAI`), and embeddings (`GoogleGenerativeAIEmbeddings`). The current version is 4.2.1, and the package is actively maintained with frequent updates within the larger LangChain Google monorepo.

Warnings

Install
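Assuming the standard PyPI package name for this integration, it can be installed with pip:

```shell
pip install -U langchain-google-genai
```

The package pulls in `langchain-core` automatically as a dependency.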

Imports
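The quickstart below relies on the following imports; this sketch lists the main classes as exposed by `langchain_google_genai`, along with the message types from `langchain_core` used to build chat prompts:

```python
# Chat model, traditional LLM, and embeddings classes
from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    GoogleGenerativeAI,
    GoogleGenerativeAIEmbeddings,
)

# Message types used to construct chat-style prompts
from langchain_core.messages import HumanMessage, SystemMessage
```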

Quickstart

This quickstart demonstrates how to initialize `ChatGoogleGenerativeAI` with a Gemini model and invoke it with a simple prompt. Ensure your `GOOGLE_API_KEY` is set as an environment variable or passed directly via the `google_api_key` parameter. The example also shows the basic message structure for chat interactions and includes a commented-out streaming variant.

import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage

# Ensure GOOGLE_API_KEY is set in your environment variables.
# You can also pass it directly as google_api_key='YOUR_API_KEY'.
api_key = os.environ.get("GOOGLE_API_KEY", "")

if not api_key:
    print("Warning: GOOGLE_API_KEY environment variable not set.")
    print("Please set it or pass it directly to the model constructor.")
else:
    try:
        # Initialize the ChatGoogleGenerativeAI model.
        # "gemini-pro" is used here; newer model names (e.g. "gemini-1.5-pro")
        # may be available depending on your API access.
        chat_model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)

        # Prepare messages for the model
        messages = [
            SystemMessage(content="You are a helpful assistant."),
            HumanMessage(content="What is the capital of France?")
        ]

        # Invoke the model and print the response
        response = chat_model.invoke(messages)
        print("\nModel Response:", response.content)

        # Example of streaming (uncomment to try)
        # print("\nStreaming Response:")
        # for chunk in chat_model.stream(messages):
        #     print(chunk.content, end="")
        # print()

    except Exception as e:
        print(f"An error occurred: {e}")
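Beyond chat, the package also exposes an embeddings class, as noted in the introduction. A minimal sketch, assuming the `models/embedding-001` model name (check Google's current model list) and a valid `GOOGLE_API_KEY`:

```python
import os
from langchain_google_genai import GoogleGenerativeAIEmbeddings

# The model name here is an assumption; consult the Gemini API docs
# for the embedding models available to your account.
embeddings = GoogleGenerativeAIEmbeddings(
    model="models/embedding-001",
    google_api_key=os.environ.get("GOOGLE_API_KEY", ""),
)

# Embed a single query string; the result is a list of floats.
vector = embeddings.embed_query("What is the capital of France?")
print(len(vector))  # dimensionality of the embedding vector
```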
