{"id":7281,"library":"guardrails-ai","title":"Guardrails AI","description":"Guardrails AI is a Python library designed to add guardrails to large language models, ensuring that LLM outputs are structured, safe, and reliable. It helps define expected output schemas, validate responses against these schemas, and apply corrective actions or re-prompts when validation fails. The current version is 0.10.0 and it maintains a regular release cadence, with minor updates and bug fixes typically released every few weeks.","status":"active","version":"0.10.0","language":"en","source_language":"en","source_url":"https://github.com/guardrails-ai/guardrails","tags":["LLM","AI","validation","guardrails","moderation","Pydantic"],"install":[{"cmd":"pip install guardrails-ai","lang":"bash","label":"Install core library"}],"dependencies":[],"imports":[{"note":"The main Guard class is imported directly from the top-level 'guardrails' package, not from the imported module itself.","wrong":"import guardrails\nguardrails.Guard","symbol":"Guard","correct":"from guardrails import Guard"},{"note":"While 'OnFail' is an action, it is exposed directly from the top-level 'guardrails' package for convenience.","wrong":"from guardrails.actions import OnFail","symbol":"OnFail","correct":"from guardrails import OnFail"},{"symbol":"PydanticValidation","correct":"from guardrails.validators import PydanticValidation"},{"note":"Pydantic models are typically used to define output schemas.","symbol":"BaseModel","correct":"from pydantic import BaseModel"}],"quickstart":{"code":"import os\nfrom guardrails import Guard\nfrom pydantic import BaseModel, Field\n\n# 1. Define your desired output structure using Pydantic\nclass Joke(BaseModel):\n    setup: str = Field(description=\"The setup of the joke\")\n    punchline: str = Field(description=\"The punchline of the joke\")\n\n# 2. 
Define the Guardrails RAIL specification as a string\n# Guardrails automatically infers the output type and generates a prompt\n# based on the Pydantic model and ${gr.complete_json_object_prompt}.\nrail_spec = f'''\n<rail version=\"0.1\">\n    <output type=\"object\" name=\"joke\" model=\"Joke\" />\n    <prompt>\n        Tell me a joke.\\n\n        {{gr.complete_json_object_prompt}}\n    </prompt>\n</rail>\n'''\n\n# 3. Initialize Guard with the RAIL specification\nguard = Guard.from_string(rail_spec)\n\n# 4. Call the LLM with Guardrails\n# Ensure OPENAI_API_KEY is set in your environment\nopenai_api_key = os.environ.get(\"OPENAI_API_KEY\")\nif not openai_api_key:\n    print(\"Please set the OPENAI_API_KEY environment variable to run this example.\")\nelse:\n    try:\n        # The llm_api='openai' string automatically uses the OpenAI API client\n        # configured via the OPENAI_API_KEY environment variable.\n        raw_llm_output, validated_output = guard(\n            llm_api=\"openai\",\n            prompt_params={\"gr.complete_json_object_prompt\": Joke.schema_json()}\n        )\n        print(\"\\n--- Raw LLM Output ---\")\n        print(raw_llm_output)\n        print(\"\\n--- Validated Output (Pydantic Model) ---\")\n        print(validated_output)\n        print(f\"Setup: {validated_output.setup}\")\n        print(f\"Punchline: {validated_output.punchline}\")\n    except Exception as e:\n        print(f\"An error occurred: {e}. Check your API key and network connection.\")","lang":"python","description":"This quickstart defines a simple Pydantic model for a joke, creates a RAILLPEC string to instruct the LLM and define the output schema, then uses `Guard.from_string` to initialize Guardrails. It then calls the OpenAI LLM, passing the Pydantic schema for structured output and automatically validating the response."},"warnings":[{"fix":"Upgrade Pydantic to version 2 (`pip install \"pydantic>=2\"`). 
If your project heavily relies on Pydantic v1, consider creating a separate virtual environment or adapting your Pydantic models to v2 syntax.","message":"Guardrails AI now strictly requires Pydantic v2. Projects using Pydantic v1 will encounter `ValidationError` or `ImportError`.","severity":"breaking","affected_versions":"Guardrails AI versions >= 0.7.0 (approx.)"},{"fix":"Migrate to defining your output schema using RAIL spec strings (XML-like syntax) and initialize Guard with `Guard.from_rail_string()` or `Guard.from_rail()`.","message":"Directly passing Python dictionaries or Pydantic models as `output_schema` to `Guard` is deprecated. RAIL spec strings are now the standard for defining schemas.","severity":"deprecated","affected_versions":"Guardrails AI versions < 0.7.0 relied more on direct schema passing. While still functional in some cases, RAIL specs are preferred."},{"fix":"Explicitly define `on_fail` for your validators to control behavior, e.g., `OnFail.reask` (re-prompt the LLM), `OnFail.fix` (attempt to fix the output), `OnFail.refrain` (return None), or `OnFail.noop` (return the original output). 
Wrap `guard()` calls in `try...except guardrails.errors.ValidationError` for robust error handling.","message":"The default `OnFail` action for validators is `OnFail.exception`, meaning any validation failure will raise a `ValidationError` and stop execution.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Set the API key as an environment variable (`export OPENAI_API_KEY='...'`) or pass a client that is already configured with the key as `llm_api` (e.g., `guard(llm_api=your_openai_client)`).","message":"When using `llm_api='openai'` (or other string providers), ensure the corresponding API key (e.g., `OPENAI_API_KEY`) is set as an environment variable, or pass a pre-configured client to `guard()`.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Upgrade Pydantic to version 2: `pip install \"pydantic>=2\"`. You may also need to update your Pydantic models to be compatible with v2 syntax if you used deprecated features.","cause":"Your environment is using Pydantic v1, but Guardrails AI now requires Pydantic v2.","error":"pydantic.v1.error_wrappers.ValidationError: 1 validation error for MyModel"},{"fix":"Review your prompt to better guide the LLM towards the desired output format. Examine the detailed error message to identify which validator failed. Consider using `OnFail.reask` or `OnFail.fix` for validators to automatically attempt correction.","cause":"The Large Language Model's output did not conform to the schema or validation rules defined in your RAIL specification.","error":"guardrails.errors.ValidationError: Output validation failed:"},{"fix":"Set your `OPENAI_API_KEY` environment variable with a valid OpenAI API key. 
For example: `export OPENAI_API_KEY='your-key-here'` in your terminal before running the script.","cause":"The `OPENAI_API_KEY` environment variable is not set or contains an invalid key, preventing Guardrails from authenticating with OpenAI.","error":"openai.AuthenticationError: Incorrect API key provided: None. You can find your API key at https://platform.openai.com."},{"fix":"Ensure the package is installed via `pip install guardrails-ai`. Then, import classes and functions from the `guardrails` package: `from guardrails import Guard`.","cause":"You are attempting to import from `guardrails_ai` instead of the correct package name `guardrails`, or the package is not installed.","error":"ModuleNotFoundError: No module named 'guardrails_ai'"}]}