LangChain Google Vertex AI Integration
This package provides LangChain integrations for Google Cloud generative models on the Vertex AI platform. It gives access to foundation models (such as Gemini) and third-party models available through Vertex AI Model Garden, along with platform services such as Vector Search. The current version is 3.2.2, with active development and frequent releases adding features and fixing issues.
Warnings
- deprecated The classes `ChatVertexAI`, `VertexAI`, and `VertexAIEmbeddings` are deprecated as of `langchain-google-vertexai` 3.2.0 and are scheduled for removal in 4.0.0. [8, 9, 20]
- gotcha Authentication to Google Cloud Vertex AI typically requires Application Default Credentials (ADC). Incorrect or missing authentication setup is a common source of errors. [2, 3, 6, 7]
- gotcha There can be confusion between `langchain-google-vertexai` and `langchain-google-genai`. While both offer Google LLM integrations, `langchain-google-vertexai` focuses on Vertex AI platform-specific features (e.g., Model Garden, Vector Search, Anthropic models on Vertex AI), whereas `langchain-google-genai` is for direct Google Generative AI (Gemini API) access. [2, 7, 22, 33, 34]
- gotcha In the JavaScript integration (`@langchain/google-vertexai`), when using tools with Gemini models through Vertex AI, certain Zod schema features (e.g., discriminated unions, union types, positive refinements) are not supported or are automatically converted, potentially leading to unexpected behavior or errors. [3, 19]
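Authentication failures are easiest to diagnose before any model call. A minimal, stdlib-only sketch of checking which ADC source appears to be configured (the file path below is the default location `gcloud auth application-default login` writes to on Linux/macOS; treat it as an assumption for your platform):

```python
import os


def describe_adc_setup() -> str:
    """Heuristically report which Application Default Credentials source is configured."""
    key_file = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if key_file:
        # Explicit service-account key file takes precedence in ADC resolution
        return f"service-account key file: {key_file}"
    # Default path written by 'gcloud auth application-default login' (Linux/macOS)
    gcloud_adc = os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json"
    )
    if os.path.exists(gcloud_adc):
        return "gcloud user credentials (application-default login)"
    return "no ADC found; run 'gcloud auth application-default login'"


print(describe_adc_setup())
```

Running this before constructing any model object turns a vague downstream authentication error into an actionable message.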
Install
pip install langchain-google-vertexai
Imports
- ChatVertexAI
from langchain_google_vertexai import ChatVertexAI
- VertexAI
from langchain_google_vertexai import VertexAI
- VertexAIEmbeddings
from langchain_google_vertexai import VertexAIEmbeddings
- ChatAnthropicVertex
from langchain_google_vertexai import ChatAnthropicVertex
Quickstart
import os
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI
# Authenticate with Application Default Credentials, e.g. by running
# 'gcloud auth application-default login' or by pointing
# GOOGLE_APPLICATION_CREDENTIALS at a service-account key file.
# Then make sure your project ID and location are set:
# os.environ["GOOGLE_CLOUD_PROJECT"] = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-gcp-project-id")
# os.environ["GOOGLE_CLOUD_LOCATION"] = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")
# Initialize the chat model
# Note: ChatVertexAI is deprecated in favor of ChatGoogleGenerativeAI from langchain_google_genai
# for most Gemini models, but can still be used for Vertex AI specific deployments.
llm = ChatVertexAI(model="gemini-2.5-flash")  # or "gemini-2.5-pro", etc.
# Invoke the model with a message
response = llm.invoke("What is the capital of France?")
print(response.content)
# Example with multimodal input (current Gemini models accept images natively)
# llm_vision = ChatVertexAI(model="gemini-2.5-flash")
# message_with_image = HumanMessage(
# content=[
# {"type": "text", "text": "What's in this image?"},
# {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
# ]
# )
# response_vision = llm_vision.invoke([message_with_image])
# print(response_vision.content)
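Besides HTTPS URLs, LangChain's `image_url` content blocks also accept base64 data URLs, which is handy for local files. A stdlib-only sketch of building such a block (the helper name is hypothetical, not part of the library):

```python
import base64


def image_block_from_bytes(data: bytes, mime_type: str = "image/png") -> dict:
    """Build a LangChain-style image_url content block from raw image bytes."""
    encoded = base64.b64encode(data).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime_type};base64,{encoded}"},
    }


# The resulting dict can be placed in a HumanMessage content list
# alongside a {"type": "text", ...} block, as in the example above.
block = image_block_from_bytes(b"\x89PNG...", "image/png")
print(block["image_url"]["url"][:22])
```

A data URL keeps the request self-contained, at the cost of a roughly 33% size increase from base64 encoding.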