LangChain Google Vertex AI Integration

3.2.2 · active · verified Sun Mar 29

This package provides LangChain integrations for Google Cloud generative models via the Vertex AI platform. It enables access to foundation models (such as Gemini) and third-party models available on Vertex AI Model Garden, along with services like Vector Search. The current version is 3.2.2, with active development and frequent releases to support new features and address issues.

Warnings

Install
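
The package is distributed on PyPI; a minimal install sketch (the package name matches the title above):

```shell
# Install the Vertex AI integration package from PyPI.
pip install langchain-google-vertexai
```

Authentication to Google Cloud (e.g. via `gcloud auth application-default login`) is configured separately, as described in the Quickstart below.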

Imports

Quickstart

This quickstart demonstrates how to instantiate and use the `ChatVertexAI` model for text generation. It also shows an example of multimodal input (commented out) using a vision-capable model. Ensure your Google Cloud project and location are configured, and that you are authenticated to Google Cloud, typically via Application Default Credentials (e.g., `gcloud auth application-default login`) or by setting `GOOGLE_APPLICATION_CREDENTIALS`.

import os
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

# Ensure your Google Cloud Project ID and location are set
# or use GOOGLE_APPLICATION_CREDENTIALS for authentication.
# For example, by running 'gcloud auth application-default login'
# os.environ["GOOGLE_CLOUD_PROJECT"] = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-gcp-project-id")
# os.environ["GOOGLE_CLOUD_LOCATION"] = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")

# Initialize the chat model
# Note: ChatVertexAI targets models served through Vertex AI; the separate
# langchain_google_genai package provides ChatGoogleGenerativeAI for the
# Gemini Developer API instead.
llm = ChatVertexAI(model="gemini-2.5-flash")  # or another Gemini model available on Vertex AI

# Invoke the model with a message
response = llm.invoke("What is the capital of France?")
print(response.content)

# Example with multimodal input (current Gemini models accept image parts natively)
# llm_vision = ChatVertexAI(model="gemini-2.5-flash")
# message_with_image = HumanMessage(
#     content=[
#         {"type": "text", "text": "What's in this image?"},
#         {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
#     ]
# )
# response_vision = llm_vision.invoke([message_with_image])
# print(response_vision.content)
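
Since the multimodal call above is commented out, the shape of its content payload can be checked standalone. This sketch uses plain dicts only, with no Google Cloud access; the URL is the placeholder from the example above.

```python
# Multimodal message content is a list of typed "parts": text parts and
# image parts. This mirrors the structure passed to HumanMessage above.
text_part = {"type": "text", "text": "What's in this image?"}
image_part = {
    "type": "image_url",
    "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"},
}
content = [text_part, image_part]

# Each part declares its type, which the chat model uses to route inputs.
part_types = [part["type"] for part in content]
print(part_types)  # ['text', 'image_url']
```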
