Azure AI Content Safety Client Library for Python
The Microsoft Azure AI Content Safety Client Library for Python (version 1.0.0) detects harmful user-generated and AI-generated content in applications and services. It offers APIs for analyzing text and images across categories such as sexual content, violence, hate, and self-harm, returning a severity level for each category. The library is actively developed and maintained as part of the broader Azure SDK for Python, with a focus on enterprise-grade content moderation.
Warnings
- breaking Public Preview SDKs (versions prior to 1.0.0) were deprecated by March 31, 2024. Applications using older versions must update to the Generally Available (GA) SDK (1.0.0 or later) as API names and return formats have changed significantly.
- deprecated All API versions of the Azure AI Content Safety service prior to '2024-09-01' (excluding specific preview versions such as '2024-09-15-preview' and '2024-09-30-preview') are scheduled for deprecation by March 1, 2025.
- gotcha Authentication requires either an Azure API key (via `AzureKeyCredential`) or a Microsoft Entra ID (formerly Azure Active Directory) token credential (via `DefaultAzureCredential` from `azure-identity`). An incorrect endpoint, a wrong API key, or insufficient role assignments (e.g., missing the 'Cognitive Services User' role) are common causes of authentication failures.
- gotcha The library provides two distinct client types: `ContentSafetyClient` for analyzing text and images, and `BlocklistClient` for managing custom blocklists. Ensure you are using the correct client for your intended operation.
- gotcha The Content Safety service has input limitations for text and images (e.g., maximum text length). Exceeding these limits or providing unsupported content types (e.g., non-image files to image analysis) will result in `HttpResponseError`.
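The warnings above can be sketched in code. This is a minimal, hedged example of picking the right client and credential; it assumes `azure-ai-contentsafety` >= 1.0.0 and `azure-identity` are installed and that `CONTENT_SAFETY_ENDPOINT` is set for your resource. The `normalize_endpoint` helper is illustrative, not part of the SDK.

```python
# Sketch: client selection and Entra ID auth (assumptions: azure-ai-contentsafety
# and azure-identity installed; CONTENT_SAFETY_ENDPOINT set).
import os

def normalize_endpoint(endpoint: str) -> str:
    """Trim whitespace and any trailing slash; a malformed endpoint is a
    common cause of authentication failures. (Illustrative helper, not SDK.)"""
    return endpoint.strip().rstrip("/")

endpoint = normalize_endpoint(os.environ.get("CONTENT_SAFETY_ENDPOINT", ""))

if endpoint:
    from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
    from azure.identity import DefaultAzureCredential

    credential = DefaultAzureCredential()
    content_client = ContentSafetyClient(endpoint, credential)   # text/image analysis
    blocklist_client = BlocklistClient(endpoint, credential)     # blocklist management
```

`DefaultAzureCredential` tries several auth sources in turn (environment variables, managed identity, Azure CLI login), which makes the same code work locally and in Azure-hosted environments.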
Install
pip install azure-ai-contentsafety
Imports
- ContentSafetyClient
from azure.ai.contentsafety import ContentSafetyClient
- BlocklistClient
from azure.ai.contentsafety import BlocklistClient
- AnalyzeTextOptions
from azure.ai.contentsafety.models import AnalyzeTextOptions
- TextCategory
from azure.ai.contentsafety.models import TextCategory
- AzureKeyCredential
from azure.core.credentials import AzureKeyCredential
- DefaultAzureCredential
from azure.identity import DefaultAzureCredential
Quickstart
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
# Set your Azure Content Safety endpoint and key as environment variables
# e.g., export CONTENT_SAFETY_ENDPOINT="https://<your-resource-name>.cognitiveservices.azure.com/"
# e.g., export CONTENT_SAFETY_KEY="<your-api-key>"
endpoint = os.environ.get("CONTENT_SAFETY_ENDPOINT", "").strip()
key = os.environ.get("CONTENT_SAFETY_KEY", "").strip()
if not endpoint or not key:
    print("Please set the environment variables CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY.")
    raise SystemExit(1)
# Create a Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
# Text to analyze
text_to_analyze = "I hate you. You are an idiot and I will harm you."
# Construct the analysis request
request = AnalyzeTextOptions(text=text_to_analyze)
try:
    response = client.analyze_text(request)
    print(f"Analyzing text: '{text_to_analyze}'")
    for category_result in response.categories_analysis:
        if category_result.severity is not None:
            print(f"  Category: {category_result.category}, Severity: {category_result.severity}")
        else:
            print(f"  Category: {category_result.category}, No severity detected.")
except HttpResponseError as e:
    print(f"Analyze text failed: {e.reason}")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
    raise
print("\nText analysis complete.")