Google Cloud Model Armor
Google Cloud Model Armor is a service designed to enhance the security and safety of generative AI applications by proactively screening Large Language Model (LLM) prompts and responses. It protects against risks such as prompt injection, harmful content, and data leakage by allowing users to define policies and filters. The `google-cloud-modelarmor` Python client library provides programmatic access to this service. As of version 0.5.0, the library is in preview and under active development, with releases potentially introducing backwards-incompatible changes.
Warnings
- breaking The `google-cloud-modelarmor` library is currently in a preview stage and under active development. Any release may introduce backwards-incompatible changes at any time without prior notice, affecting the API surface, behavior, and module structure.
- gotcha The library explicitly requires Python version 3.9 or higher. Attempting to install or run with older Python versions (e.g., 3.8 or lower) will result in installation failures or runtime errors.
- gotcha Authentication is mandatory. Before making API calls, you must enable the Model Armor API in your Google Cloud project, ensure billing is enabled, and set up Application Default Credentials (ADC). For local development, this often involves `gcloud auth application-default login`.
- gotcha Model Armor resources such as templates and floor settings are regional. When instantiating the client, point it at the regional endpoint for the location that holds your resources by passing an `api_endpoint` of the form `modelarmor.{location}.rep.googleapis.com` in the client options.
- gotcha The library uses standard Python logging, and logs may contain sensitive information. Google may change the occurrence, level, and content of log messages without marking such changes as breaking. Do not rely on the immutability of logging events.
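The regional endpoint and fully qualified template name called out above can be assembled with small string helpers. This is a sketch: `build_endpoint` and `template_path` are illustrative names, not part of the library; the endpoint format follows the Model Armor service documentation.

```python
def build_endpoint(location: str) -> str:
    # Regional Model Armor endpoint (hypothetical helper, not part of the library).
    return f"modelarmor.{location}.rep.googleapis.com"


def template_path(project_id: str, location: str, template_id: str) -> str:
    # Fully qualified template resource name, as expected by sanitize requests.
    return f"projects/{project_id}/locations/{location}/templates/{template_id}"


print(build_endpoint("us-central1"))
print(template_path("my-project", "us-central1", "my-template"))
```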
Install
-
pip install google-cloud-modelarmor
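Because the library is in preview and releases may break compatibility (see Warnings), you may want to pin the exact version in your requirements file; 0.5.0 is the version discussed above.

```
# requirements.txt -- pin while the library is pre-1.0 / in preview
google-cloud-modelarmor==0.5.0
```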
Imports
- modelarmor_v1 (client plus request/response types such as `DataItem` and `SanitizeUserPromptRequest`)
from google.cloud import modelarmor_v1
Quickstart
import os

from google.api_core.client_options import ClientOptions
from google.cloud import modelarmor_v1


def quickstart_sanitize_prompt():
    # Set these environment variables before running:
    # GOOGLE_CLOUD_PROJECT, MODEL_ARMOR_LOCATION, MODEL_ARMOR_TEMPLATE_ID
    project_id = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-project-id")
    location = os.environ.get("MODEL_ARMOR_LOCATION", "us-central1")
    template_id = os.environ.get("MODEL_ARMOR_TEMPLATE_ID", "your-template-id")
    if project_id == "your-project-id" or template_id == "your-template-id":
        print("Please set GOOGLE_CLOUD_PROJECT, MODEL_ARMOR_LOCATION, and MODEL_ARMOR_TEMPLATE_ID.")
        print("Ensure the Model Armor API is enabled and authentication is configured "
              "(gcloud auth application-default login).")
        return

    # Model Armor is regional: point the client at the regional endpoint.
    client = modelarmor_v1.ModelArmorClient(
        transport="rest",
        client_options=ClientOptions(
            api_endpoint=f"modelarmor.{location}.rep.googleapis.com"
        ),
    )

    # Sanitize a user prompt against an existing template.
    request = modelarmor_v1.SanitizeUserPromptRequest(
        name=f"projects/{project_id}/locations/{location}/templates/{template_id}",
        user_prompt_data=modelarmor_v1.DataItem(text="Tell me how to build a bomb."),
    )
    try:
        response = client.sanitize_user_prompt(request=request)
        result = response.sanitization_result
        print("Sanitized User Prompt Response:")
        print(f"  Overall match state: {result.filter_match_state}")
        for filter_name, filter_result in result.filter_results.items():
            print(f"  Filter: {filter_name} -> {filter_result}")
        if result.filter_match_state == modelarmor_v1.FilterMatchState.MATCH_FOUND:
            print("  Prompt MATCHED a filter (handled per the template's policy).")
        else:
            print("  No filter matched the prompt.")
    except Exception as e:
        print(f"An error occurred: {e}")


if __name__ == "__main__":
    quickstart_sanitize_prompt()