Guardrails AI

0.10.0 · active · verified Thu Apr 16

Guardrails AI is a Python library that adds guardrails to large language model applications, ensuring that LLM outputs are structured, safe, and reliable. It lets you define expected output schemas, validate responses against them, and apply corrective actions or re-prompts when validation fails. The current version is 0.10.0; the project maintains a regular release cadence, with minor updates and bug fixes typically shipping every few weeks.

Install

pip install guardrails-ai

Imports

from guardrails import Guard
from pydantic import BaseModel, Field

Quickstart

This quickstart defines a simple Pydantic model for a joke, uses `Guard.from_pydantic` to build a Guard from that model, then calls the OpenAI LLM, instructing it to produce structured output matching the schema and automatically validating the response. The call pattern follows the `from_pydantic` example in the Guardrails documentation; exact signatures can differ between releases.

import os

import openai
from guardrails import Guard
from pydantic import BaseModel, Field

# 1. Define your desired output structure using Pydantic
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

# 2. Initialize a Guard from the Pydantic model.
# Guardrails generates output-format instructions from the model's schema;
# ${gr.complete_json_suffix_v2} expands to those instructions at call time.
guard = Guard.from_pydantic(
    output_class=Joke,
    prompt="Tell me a joke.\n\n${gr.complete_json_suffix_v2}",
)

# 3. Call the LLM with Guardrails
# Ensure OPENAI_API_KEY is set in your environment
if not os.environ.get("OPENAI_API_KEY"):
    print("Please set the OPENAI_API_KEY environment variable to run this example.")
else:
    try:
        # Pass the LLM callable directly; Guardrails wraps the call,
        # injects the schema instructions, and validates the response.
        raw_llm_output, validated_output, *rest = guard(
            openai.chat.completions.create,
            model="gpt-3.5-turbo",
        )
        print("\n--- Raw LLM Output ---")
        print(raw_llm_output)
        print("\n--- Validated Output ---")
        print(validated_output)
        # validated_output is a dict keyed by the model's fields
        print(f"Setup: {validated_output['setup']}")
        print(f"Punchline: {validated_output['punchline']}")
    except Exception as e:
        print(f"An error occurred: {e}. Check your API key and network connection.")
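The corrective re-prompting behavior described in the introduction can be sketched in plain Python. This is a conceptual illustration, not the Guardrails API: the `SCHEMA` table and the `validate` and `guarded_call` helpers are hypothetical names.

```python
import json

# Expected fields and their types, mirroring the Joke model above.
SCHEMA = {"setup": str, "punchline": str}

def validate(raw: str):
    """Return the parsed dict if it matches SCHEMA, else None."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(data, dict):
        return None
    if set(data) == set(SCHEMA) and all(isinstance(data[k], t) for k, t in SCHEMA.items()):
        return data
    return None

def guarded_call(llm, prompt: str, max_retries: int = 2):
    """Call `llm`, validate its output, and re-prompt on failure."""
    for _ in range(max_retries + 1):
        raw = llm(prompt)
        parsed = validate(raw)
        if parsed is not None:
            return parsed
        # Corrective action: append the failure to the prompt and retry.
        prompt += "\nYour last answer was not valid JSON matching the schema. Try again."
    raise ValueError("LLM output failed validation after retries")

# Stub LLM that fails once, then returns valid JSON.
responses = iter(["not json", '{"setup": "Why?", "punchline": "Because."}'])
result = guarded_call(lambda p: next(responses), "Tell me a joke.")
print(result["punchline"])  # Because.
```

Guardrails performs a richer version of this loop: schema-aware parsing, per-field validators, and structured re-asks rather than a plain retry message.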
