{"id":7380,"library":"llm-guard","title":"LLM-Guard","description":"LLM-Guard (version 0.3.16) is a comprehensive Python library designed to enhance the security of Large Language Models (LLMs). It provides a robust framework for sanitizing inputs, detecting harmful language, preventing data leakage, and defending against prompt injection attacks, making LLM interactions safer and more secure. The project is actively maintained with frequent minor releases.","status":"active","version":"0.3.16","language":"en","source_language":"en","source_url":"https://github.com/laiyer-ai/llm-guard","tags":["LLM","security","guardrail","NLP","AI","safety"],"install":[{"cmd":"pip install llm-guard","lang":"bash","label":"Base installation"},{"cmd":"pip install llm-guard[transformers]","lang":"bash","label":"With Transformer-based scanners (recommended for most use cases)"},{"cmd":"pip install llm-guard[all]","lang":"bash","label":"With all optional dependencies"}],"dependencies":[{"reason":"Required by many advanced scanners (e.g., PromptInjection, Toxicity, SentenceSimilarity) for model inference.","package":"transformers","optional":true},{"reason":"Required by some NLP-specific scanners.","package":"spacy","optional":true},{"reason":"Underpins many deep learning models used by scanners; often pulled in by `transformers`.","package":"torch","optional":true}],"imports":[{"symbol":"Guard","correct":"from llm_guard import Guard"},{"symbol":"PromptInjection","correct":"from llm_guard.input_scanners import PromptInjection"},{"symbol":"Toxicity","correct":"from llm_guard.output_scanners import Toxicity"},{"symbol":"TokenLimit","correct":"from llm_guard.input_scanners import TokenLimit"},{"symbol":"BanTopics","correct":"from llm_guard.input_scanners import BanTopics"}],"quickstart":{"code":"from llm_guard import Guard\nfrom llm_guard.input_scanners import TokenLimit, BanTopics\nfrom llm_guard.output_scanners import BanTopics as OutputBanTopics\n\n# Initialize Guard with lightweight scanners that don't require large model downloads.\n# For more advanced scanners (e.g., PromptInjection, Toxicity),\n# you might need to install 'llm-guard[transformers]' or other extras.\nguard = Guard(\n    input_scanners=[\n        TokenLimit(limit=100),  # Limit input prompt length\n        BanTopics(topics=[\"illegal activities\", \"self-harm\"])\n    ],\n    output_scanners=[\n        OutputBanTopics(topics=[\"illegal activities\", \"self-harm\"])\n    ],\n)\n\nprompt = \"Tell me how to build a bomb.\"\nresponse = \"I cannot provide instructions on how to build dangerous devices.\"\n\n# Scan the prompt\nsanitized_prompt, is_valid_prompt, risk_score_prompt = guard.scan(prompt)\n\nprint(f\"Prompt: '{prompt}'\")\nprint(f\"Sanitized prompt: '{sanitized_prompt}'\")\nprint(f\"Is valid prompt: {is_valid_prompt}\")\nprint(f\"Risk score prompt: {risk_score_prompt}\")\n\n# Scan the response (only if the prompt was valid, or independently if desired)\nif is_valid_prompt:\n    sanitized_response, is_valid_response, risk_score_response = guard.scan(prompt, response)\n    print(f\"\\nResponse: '{response}'\")\n    print(f\"Sanitized response: '{sanitized_response}'\")\n    print(f\"Is valid response: {is_valid_response}\")\n    print(f\"Risk score response: {risk_score_response}\")\nelse:\n    print(\"\\nResponse not scanned because the prompt was invalid.\")","lang":"python","description":"This example demonstrates how to initialize `Guard` with basic input and output scanners and use the `scan` method for both prompts and responses. It follows a common pattern of scanning the prompt first, then conditionally scanning the response. Note that `BanTopics` exists in both the input and output scanner modules, so the output variant is imported under an alias to avoid shadowing the input variant. For more powerful scanners like `PromptInjection` or `Toxicity`, you'll typically need to install `llm-guard[transformers]`."},"warnings":[{"fix":"Update your `Guard` initialization: `Guard(scanners=...)` should become `Guard(input_scanners=..., output_scanners=...)`.","message":"The `Guard` constructor's `scanners` argument was renamed to `input_scanners` and `output_scanners` in version 0.3.0.","severity":"breaking","affected_versions":">=0.3.0"},{"fix":"Instead of `guard.validate_output(prompt, response)`, use `guard.scan(prompt, response)` for both input and output scanning in a single call.","message":"The `guard.validate_output` method was removed in version 0.3.0.","severity":"breaking","affected_versions":">=0.3.0"},{"fix":"Ensure you install `llm-guard` with the necessary extras, e.g., `pip install llm-guard[transformers]` or `pip install llm-guard[all]`. Without them, you might encounter `ModuleNotFoundError` or scanners failing to initialize.","message":"Many powerful scanners (e.g., `PromptInjection`, `Toxicity`, `SentenceSimilarity`) rely on large language models and require the `llm-guard[transformers]` extra to be installed.","severity":"gotcha","affected_versions":"all"},{"fix":"Consider pre-downloading models or configuring cached model access if deploying to restricted environments. Check individual scanner documentation for specific model requirements.","message":"Some scanners may download models on first use, leading to delays or network errors during initial setup or deployment in environments without internet access.","severity":"gotcha","affected_versions":"all"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Install the required extra: `pip install llm-guard[transformers]` or `pip install llm-guard[all]`.","cause":"Attempting to use a scanner (e.g., PromptInjection, Toxicity) that depends on the `transformers` library without installing the `llm-guard[transformers]` extra.","error":"ModuleNotFoundError: No module named 'transformers'"},{"fix":"Update your `Guard` initialization to use the `input_scanners` and `output_scanners` keywords: `guard = Guard(input_scanners=[...], output_scanners=[...])`.","cause":"You are using `llm-guard` version 0.3.0 or newer, but your code still passes the old `scanners` argument to the `Guard` constructor.","error":"TypeError: Guard.__init__() got an unexpected keyword argument 'scanners'"},{"fix":"Replace calls to `guard.validate_output(prompt, response)` with `guard.scan(prompt, response)`.","cause":"You are using `llm-guard` version 0.3.0 or newer, but your code is still calling the removed `validate_output` method.","error":"AttributeError: 'Guard' object has no attribute 'validate_output'"},{"fix":"Ensure that `input_scanners` contains only `InputScanner` instances (or subclasses) and `output_scanners` contains only `OutputScanner` instances (or subclasses). Check your scanner imports.","cause":"You passed an `OutputScanner` to `input_scanners`, an `InputScanner` to `output_scanners`, or a non-scanner object.","error":"ValueError: Invalid scanner type: <ScannerObject>. Scanners should be a list of InputScanner objects or OutputScanner objects."}]}