LangChain Google Generative AI
LangChain Google Generative AI provides integrations with Google's generative AI models (such as Gemini) for use within the LangChain framework. It offers classes for chat models, traditional LLMs, and embeddings, and is actively maintained with frequent releases inside the larger langchain-google monorepo.
Warnings
- gotcha Authentication requires setting the `GOOGLE_API_KEY` environment variable or passing it directly to the model constructor. Alternatively, `GOOGLE_APPLICATION_CREDENTIALS` can be used for service account authentication in Google Cloud environments.
- deprecated Older `GooglePalm` and `ChatGooglePalm` classes found directly in `langchain.llms` or `langchain.chat_models` are deprecated. The functionality has been moved to the `langchain-google-genai` package for better modularity and to support newer models like Gemini.
- gotcha Model naming conventions can vary. Models like `gemini-pro`, `gemini-1.5-flash`, or pinned versions like `gemini-1.0-pro` must be spelled exactly; the default model may change over time or have different capabilities.
Install
-
pip install langchain-google-genai
Imports
- ChatGoogleGenerativeAI
from langchain_google_genai import ChatGoogleGenerativeAI
- GoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAI
- GoogleGenerativeAIEmbeddings
from langchain_google_genai import GoogleGenerativeAIEmbeddings
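Unlike the chat class, `GoogleGenerativeAI` is invoked with a plain string prompt rather than a message list. A guarded sketch (the model name `gemini-1.5-flash` is one of the names mentioned in the warnings; actually running the call requires the package and a valid key):

```python
import os

# Guard the import so the sketch degrades gracefully without the package installed.
try:
    from langchain_google_genai import GoogleGenerativeAI
    have_pkg = True
except ImportError:
    have_pkg = False

api_key = os.environ.get("GOOGLE_API_KEY", "")
if have_pkg and api_key:
    llm = GoogleGenerativeAI(model="gemini-1.5-flash", google_api_key=api_key)
    # invoke() on the LLM class takes a string and returns a string.
    print(llm.invoke("Name the capital of France in one word."))
else:
    print("Skipping: install langchain-google-genai and set GOOGLE_API_KEY to run this.")
```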
Quickstart
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage

# Ensure GOOGLE_API_KEY is set in your environment variables.
# You can also pass it directly as google_api_key='YOUR_API_KEY'.
api_key = os.environ.get("GOOGLE_API_KEY", "")
if not api_key:
    print("Warning: GOOGLE_API_KEY environment variable not set.")
    print("Please set it or pass it directly to the model constructor.")
else:
    try:
        # Initialize the ChatGoogleGenerativeAI model
        chat_model = ChatGoogleGenerativeAI(model="gemini-1.5-flash", google_api_key=api_key)

        # Prepare messages for the model
        messages = [
            SystemMessage(content="You are a helpful assistant."),
            HumanMessage(content="What is the capital of France?"),
        ]

        # Invoke the model and print the response
        response = chat_model.invoke(messages)
        print("\nModel Response:", response.content)

        # Example of streaming (uncomment to try)
        # print("\nStreaming Response:")
        # for chunk in chat_model.stream(messages):
        #     print(chunk.content, end="")
        # print()
    except Exception as e:
        print(f"An error occurred: {e}")
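The third class, `GoogleGenerativeAIEmbeddings`, follows the same construction pattern and returns a list of floats per text. A guarded sketch; the model name `models/embedding-001` is an assumption, and the API call only runs with the package installed and a valid key:

```python
import os

# Guard the import so the sketch degrades gracefully without the package installed.
try:
    from langchain_google_genai import GoogleGenerativeAIEmbeddings
    have_pkg = True
except ImportError:
    have_pkg = False

api_key = os.environ.get("GOOGLE_API_KEY", "")
if have_pkg and api_key:
    embeddings = GoogleGenerativeAIEmbeddings(
        model="models/embedding-001",  # assumed embedding model name
        google_api_key=api_key,
    )
    # embed_query() returns a single vector; embed_documents() embeds a batch.
    vector = embeddings.embed_query("What is the capital of France?")
    print(f"Embedding dimensionality: {len(vector)}")
else:
    print("Skipping: install langchain-google-genai and set GOOGLE_API_KEY to run this.")
```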