LangChain Google Vertex AI Integration
This package provides LangChain integrations for Google Cloud generative models on the Vertex AI platform. It gives access to foundation models (such as Gemini) and to third-party models available through Vertex AI Model Garden, along with services such as Vector Search. The current version is 3.2.2, with active development and frequent releases to support new features and address issues.
pip install langchain-google-vertexai

Common errors

error: ModuleNotFoundError: No module named 'langchain_google_vertexai'
cause: The package is not installed, or it is installed in a different Python environment than the one running your code.
fix: Install the package with pip install langchain-google-vertexai, making sure the command runs in the same Python environment your application uses.

error: AttributeError: 'TextGenerationResponse' object has no attribute 'candidates'
cause: Older versions of LangChain or langchain-google-vertexai assume a response-object layout that Vertex AI has since changed, so the 'candidates' attribute is missing or deprecated.
fix: Upgrade langchain-google-vertexai to a version compatible with your LangChain installation (pip install --upgrade langchain-google-vertexai). You may also need to read the generated text via response.text instead of response.candidates.

error: ValueError: Could not resolve project_id
cause: Authentication with Google Cloud Vertex AI failed: the project ID could not be determined. This typically happens when GOOGLE_APPLICATION_CREDENTIALS is unset, or when the environment (e.g., Cloud Run) is not configured for Application Default Credentials.
fix: Point the GOOGLE_APPLICATION_CREDENTIALS environment variable at a valid service-account JSON key file, or authenticate locally with gcloud auth application-default login. When initializing ChatVertexAI, pass the project and location parameters explicitly.

error: ValueError: Cannot get the Candidate text. Response candidate content part has no text.
cause: The installed langchain-google-vertexai is incompatible with google-cloud-aiplatform, most often when using tool-calling features.
fix: Upgrade langchain-google-vertexai to version 1.0.2 or later to pick up the google-cloud-aiplatform SDK's tool-calling updates: pip install --upgrade langchain-google-vertexai.

Warnings
deprecated: Classes `ChatVertexAI`, `VertexAI`, and `VertexAIEmbeddings` are officially deprecated in `langchain-google-vertexai` version 3.2.0 and will be removed in 4.0.0. [8, 9, 20]
fix: Migrate to `ChatGoogleGenerativeAI`, `GoogleGenerativeAI`, and `GoogleGenerativeAIEmbeddings`, respectively, from the `langchain-google-genai` package for most Gemini API interactions. `langchain-google-vertexai` remains supported for Vertex AI platform-specific features (Model Garden, Vector Search, etc.). [8, 9, 22]
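The renames are mechanical; a hedged sketch of the class mapping described above (the model name in the commented before/after is illustrative only):

```python
# Mapping from the classes deprecated in langchain-google-vertexai 3.2.0
# to their langchain-google-genai replacements, per the note above.
DEPRECATED_TO_REPLACEMENT = {
    "ChatVertexAI": "ChatGoogleGenerativeAI",
    "VertexAI": "GoogleGenerativeAI",
    "VertexAIEmbeddings": "GoogleGenerativeAIEmbeddings",
}

# Before (deprecated):
#   from langchain_google_vertexai import ChatVertexAI
#   llm = ChatVertexAI(model="gemini-2.5-flash")
# After:
#   from langchain_google_genai import ChatGoogleGenerativeAI
#   llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
for old, new in sorted(DEPRECATED_TO_REPLACEMENT.items()):
    print(f"{old:20} -> {new}")
```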
gotcha: Authentication to Google Cloud Vertex AI typically requires Application Default Credentials (ADC); incorrect or missing setup is a common source of errors. [2, 3, 6, 7]
fix: Authenticate via `gcloud auth application-default login`, or set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a service-account JSON key file. The project ID and location may also need to be specified in the model constructor or via environment variables. [2, 3, 6, 20]
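A small stdlib sketch of a pre-flight check for the two usual ADC sources (the helper name `find_adc` is ours, not part of any library), so missing auth fails early with a clear message rather than surfacing later as "Could not resolve project_id":

```python
import os
from pathlib import Path
from typing import Optional

def find_adc() -> Optional[str]:
    """Return a path to likely ADC credentials, or None if nothing is found."""
    # 1. Explicit service-account key file.
    explicit = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if explicit and Path(explicit).is_file():
        return explicit
    # 2. Default file written by 'gcloud auth application-default login'
    #    (Linux/macOS default location; Windows uses %APPDATA%\gcloud).
    gcloud_default = Path.home() / ".config/gcloud/application_default_credentials.json"
    if gcloud_default.is_file():
        return str(gcloud_default)
    return None

if find_adc() is None:
    print("No ADC found: run 'gcloud auth application-default login' "
          "or set GOOGLE_APPLICATION_CREDENTIALS")
```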
gotcha: `langchain-google-vertexai` and `langchain-google-genai` are easily confused. Both offer Google LLM integrations, but `langchain-google-vertexai` targets Vertex AI platform-specific features (e.g., Model Garden, Vector Search, Anthropic models on Vertex AI), while `langchain-google-genai` is for direct Google Generative AI (Gemini API) access. [2, 7, 22, 33, 34]
fix: For general Gemini model access (Gemini API), use `langchain-google-genai`. For integrations requiring Vertex AI platform features or third-party models deployed on Vertex AI, use `langchain-google-vertexai`. The official LangChain Google documentation gives guidance on which package to choose. [22]
gotcha: When using tools with Gemini models through Vertex AI, certain Zod schema features (e.g., discriminated unions, union types, positive refinements) are unsupported or are automatically converted, which can lead to unexpected behavior or errors. [3, 19]
fix: Use simple, flat object structures for tool schemas, replace discriminated unions with enums and optional fields, and prefer `min()` over `positive()` for number constraints. Test your tool schemas thoroughly. [3, 19]
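The flattening advice translates to the JSON-schema-style function declarations Gemini tools ultimately receive; a hedged sketch with illustrative field names, replacing a circle/rect discriminated union with one flat object:

```python
# Instead of a discriminated union
#   {"kind": "circle", "radius": ...} | {"kind": "rect", "width": ..., "height": ...}
# declare one flat object: an enum discriminator plus optional fields,
# and express positivity as a minimum bound rather than a refinement.
flat_tool_schema = {
    "type": "object",
    "properties": {
        "kind": {"type": "string", "enum": ["circle", "rect"]},
        "radius": {"type": "number", "minimum": 0},  # used when kind == "circle"
        "width": {"type": "number", "minimum": 0},   # used when kind == "rect"
        "height": {"type": "number", "minimum": 0},  # used when kind == "rect"
    },
    "required": ["kind"],  # only the discriminator is mandatory
}
```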
Install compatibility (last tested: 2026-05-12)

python | os / libc     | status | wheel | install | disk
3.9    | alpine (musl) | sdist  | -     | 16.97s  | 505.7M
3.9    | alpine (musl) | -      | -     | 16.89s  | 504.5M
3.9    | slim (glibc)  | wheel  | 26.6s | 7.02s   | 488M
3.9    | slim (glibc)  | -      | -     | 6.81s   | 487M
3.10   | alpine (musl) | sdist  | -     | 19.86s  | 549.8M
3.10   | alpine (musl) | -      | -     | 21.61s  | 541.5M
3.10   | slim (glibc)  | wheel  | 27.0s | 7.78s   | 527M
3.10   | slim (glibc)  | -      | -     | 9.23s   | 523M
3.11   | alpine (musl) | sdist  | -     | 22.65s  | 593.8M
3.11   | alpine (musl) | -      | -     | 29.68s  | 584.7M
3.11   | slim (glibc)  | wheel  | 25.3s | 13.39s  | 571M
3.11   | slim (glibc)  | -      | -     | 16.85s  | 565M
3.12   | alpine (musl) | sdist  | -     | 20.32s  | 576.7M
3.12   | alpine (musl) | -      | -     | 31.40s  | 567.7M
3.12   | slim (glibc)  | wheel  | 22.5s | 12.26s  | 554M
3.12   | slim (glibc)  | -      | -     | 24.25s  | 548M
3.13   | alpine (musl) | sdist  | -     | 17.45s  | 573.7M
3.13   | alpine (musl) | -      | -     | 30.19s  | 564.8M
3.13   | slim (glibc)  | wheel  | 22.8s | 11.76s  | 551M
3.13   | slim (glibc)  | -      | -     | 24.49s  | 545M
Imports

- ChatVertexAI
  wrong: from langchain.llms import VertexAI or from langchain.chat_models import ChatVertexAI (for older LangChain versions)
  correct: from langchain_google_vertexai import ChatVertexAI
- VertexAI
  wrong: from langchain.llms import VertexAI (for older LangChain versions)
  correct: from langchain_google_vertexai import VertexAI
- VertexAIEmbeddings
  wrong: from langchain.embeddings import VertexAIEmbeddings (for older LangChain versions)
  correct: from langchain_google_vertexai import VertexAIEmbeddings
- ChatAnthropicVertex
  correct: from langchain_google_vertexai import ChatAnthropicVertex
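To verify the wrong/correct split above in a given environment, a quick stdlib check (the helper name `has_package` is ours) that the new-style package is discoverable without actually importing it:

```python
import importlib.util

def has_package(name: str) -> bool:
    # find_spec returns None when the top-level package is absent.
    return importlib.util.find_spec(name) is not None

print("langchain_google_vertexai installed:", has_package("langchain_google_vertexai"))
print("langchain_google_genai installed:", has_package("langchain_google_genai"))
```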
Quickstart (last tested: 2026-04-24)
import os
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI
# Ensure your Google Cloud Project ID and location are set
# or use GOOGLE_APPLICATION_CREDENTIALS for authentication.
# For example, by running 'gcloud auth application-default login'
# os.environ["GOOGLE_CLOUD_PROJECT"] = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-gcp-project-id")
# os.environ["GOOGLE_CLOUD_LOCATION"] = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")
# Initialize the chat model
# Note: ChatVertexAI is deprecated in favor of ChatGoogleGenerativeAI from langchain_google_genai
# for most Gemini models, but can still be used for Vertex AI specific deployments.
llm = ChatVertexAI(model="gemini-pro") # or "gemini-2.5-flash", "chat-bison", etc.
# Invoke the model with a message
response = llm.invoke("What is the capital of France?")
print(response.content)
# Example with multimodal input (requires a vision model like "gemini-pro-vision")
# from langchain_core.messages import HumanMessage
# llm_vision = ChatVertexAI(model="gemini-pro-vision")
# message_with_image = HumanMessage(
# content=[
# {"type": "text", "text": "What's in this image?"},
# {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
# ]
# )
# response_vision = llm_vision.invoke([message_with_image])
# print(response_vision.content)
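The commented multimodal example above builds its message from plain dicts; a sketch of just that payload shape (URL taken from the example), inspectable without credentials or a network call:

```python
# Content parts for a multimodal HumanMessage: one text part plus one
# image_url part, matching the commented gemini-pro-vision example.
message_content = [
    {"type": "text", "text": "What's in this image?"},
    {"type": "image_url",
     "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
]

# Shape check: every part declares its type.
assert [part["type"] for part in message_content] == ["text", "image_url"]
```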