Guardrails AI
Guardrails AI is a Python library that adds guardrails to large language model (LLM) applications, ensuring that LLM outputs are structured, safe, and reliable. It lets you define expected output schemas, validate responses against those schemas, and apply corrective actions or re-prompts when validation fails. At the time of writing, the current version is 0.10.0; the project maintains a regular release cadence, with minor updates and bug fixes typically shipping every few weeks.
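The define-validate-reask loop this describes can be sketched in plain Python. This is a conceptual illustration only, not Guardrails' actual implementation; `validate`, `guarded_call`, and `call_llm` are hypothetical names introduced here:

```python
import json

def validate(raw: str, required_keys: set[str]) -> dict:
    """Parse raw LLM text and check it matches the expected schema."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

def guarded_call(call_llm, prompt: str, required_keys: set[str],
                 max_reasks: int = 2) -> dict:
    """Call the LLM, validate its output, and re-prompt on failure."""
    for _ in range(max_reasks + 1):
        raw = call_llm(prompt)
        try:
            return validate(raw, required_keys)
        except ValueError as err:
            # Re-ask: feed the validation error back so the model can self-correct
            prompt = f"{prompt}\nYour last answer was invalid ({err}). Try again."
    raise RuntimeError("validation failed after all re-asks")
```

Guardrails layers schema inference, rich validators, and configurable on-fail actions on top of this basic pattern.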
Common errors
- `pydantic.v1.error_wrappers.ValidationError: 1 validation error for MyModel`
  - Cause: your environment is using Pydantic v1, but Guardrails AI now requires Pydantic v2.
  - Fix: upgrade Pydantic to version 2: `pip install "pydantic>=2"`. You may also need to update your Pydantic models to v2 syntax if you relied on deprecated v1 features.
- `guardrails.errors.ValidationError: Output validation failed:`
  - Cause: the LLM's output did not conform to the schema or validation rules defined in your RAIL specification.
  - Fix: review your prompt to better guide the LLM toward the desired output format. Examine the detailed error message to identify which validator failed. Consider using `OnFail.reask` or `OnFail.fix` so validators automatically attempt correction.
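The difference between these on-fail actions can be sketched generically. The enum values mirror the spirit of the library's options, but the names and the "fix" logic below are assumptions for illustration, not Guardrails' code:

```python
from enum import Enum

class OnFail(Enum):
    # Names assumed here; they mirror the exception/fix/reask distinction
    EXCEPTION = "exception"
    FIX = "fix"
    REASK = "reask"

def handle_failure(action: OnFail, value: str, error: str):
    """Illustrative dispatch for a failed validation."""
    if action is OnFail.EXCEPTION:
        # Default behavior: halt execution with the validation error
        raise ValueError(f"Output validation failed: {error}")
    if action is OnFail.FIX:
        # Deterministic correction; trimming whitespace is just an example
        return value.strip()
    if action is OnFail.REASK:
        # Signal the caller to re-prompt the LLM, including `error` in the prompt
        return None
```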
- `openai.error.AuthenticationError: Incorrect API key provided: None. You can find your API key at https://platform.openai.com.`
  - Cause: the `OPENAI_API_KEY` environment variable is not set or contains an invalid key, preventing Guardrails from authenticating with OpenAI.
  - Fix: set the `OPENAI_API_KEY` environment variable to a valid OpenAI API key before running the script, e.g. `export OPENAI_API_KEY='your-key-here'`.
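A small stdlib-only helper (hypothetical, not part of Guardrails) makes the missing-key case fail fast with a clear message instead of surfacing later as an OpenAI authentication error:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable or fail with guidance."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Export it before running, e.g. "
            f"export {name}='your-key-here'"
        )
    return value

# Usage: api_key = require_env("OPENAI_API_KEY")
```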
- `ModuleNotFoundError: No module named 'guardrails_ai'`
  - Cause: you are importing from `guardrails_ai` instead of the correct package name `guardrails`, or the package is not installed.
  - Fix: install the package with `pip install guardrails-ai`, then import classes and functions from the `guardrails` package: `from guardrails import Guard`.
Warnings
- breaking Guardrails AI now strictly requires Pydantic v2. Projects using Pydantic v1 will encounter `ValidationError` or `ImportError`.
- deprecated Directly passing Python dictionaries or Pydantic models as `output_schema` to `Guard` is deprecated. RAIL spec strings are now the standard for defining schemas.
- gotcha The default `OnFail` action for validators is `OnFail.exception`, meaning any validation failure will raise a `ValidationError` and stop execution.
- gotcha When using `llm_api='openai'` (or other string providers), ensure the corresponding API key (e.g., `OPENAI_API_KEY`) is set as an environment variable or passed directly to `guard()`.
Install
- `pip install guardrails-ai`
Imports
- Guard
import guardrails  # then use guardrails.Guard
from guardrails import Guard
- OnFail
from guardrails.actions import OnFail
from guardrails import OnFail
- PydanticValidation
from guardrails.validators import PydanticValidation
- BaseModel
from pydantic import BaseModel
Quickstart
import os

from guardrails import Guard
from pydantic import BaseModel, Field


# 1. Define your desired output structure using Pydantic
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


# 2. Define the Guardrails RAIL specification as a string.
# Guardrails automatically infers the output type and generates a prompt
# based on the Pydantic model and ${gr.complete_json_object_prompt}.
# Note: this is a plain string, not an f-string, so the ${...} template
# syntax reaches Guardrails intact.
rail_spec = '''
<rail version="0.1">
<output type="object" name="joke" model="Joke" />
<prompt>
Tell me a joke.

${gr.complete_json_object_prompt}
</prompt>
</rail>
'''

# 3. Initialize Guard with the RAIL specification
guard = Guard.from_rail_string(rail_spec)

# 4. Call the LLM with Guardrails.
# Ensure OPENAI_API_KEY is set in your environment.
openai_api_key = os.environ.get("OPENAI_API_KEY")
if not openai_api_key:
    print("Please set the OPENAI_API_KEY environment variable to run this example.")
else:
    try:
        # The llm_api='openai' string selects the OpenAI client, which is
        # configured via the OPENAI_API_KEY environment variable.
        # ${gr.complete_json_object_prompt} is a Guardrails built-in and is
        # substituted automatically, so no prompt_params are needed here.
        raw_llm_output, validated_output = guard(llm_api="openai")

        print("\n--- Raw LLM Output ---")
        print(raw_llm_output)
        print("\n--- Validated Output ---")
        print(validated_output)
        # RAIL-string specs typically return the validated output as a dict
        print(f"Setup: {validated_output['setup']}")
        print(f"Punchline: {validated_output['punchline']}")
    except Exception as e:
        print(f"An error occurred: {e}. Check your API key and network connection.")