{"id":7445,"library":"nemoguardrails","title":"NeMo Guardrails","description":"NeMo Guardrails is an open-source toolkit developed by NVIDIA for adding programmable guardrails to LLM-based conversational systems. It lets you define rules, enforce safety policies, and ensure desired behavior for AI assistants. As of version 0.21.0, it integrates with a wide range of LLM providers and frameworks.","status":"active","version":"0.21.0","language":"en","source_language":"en","source_url":"https://github.com/NVIDIA/NeMo-Guardrails","tags":["LLM","Guardrails","NLP","AI","Conversation","Safety"],"install":[{"cmd":"pip install nemoguardrails","lang":"bash","label":"Install core library"}],"dependencies":[],"imports":[{"symbol":"LLMRails","correct":"from nemoguardrails import LLMRails"},{"symbol":"RailsConfig","correct":"from nemoguardrails import RailsConfig"},{"symbol":"RailsClient","correct":"from nemoguardrails.python_client import RailsClient"}],"quickstart":{"code":"import os\nimport asyncio\nfrom nemoguardrails import LLMRails, RailsConfig\n\n# Ensure you have OPENAI_API_KEY or another LLM provider key set in your environment.\n# For OpenAI, set it via: export OPENAI_API_KEY=\"your_api_key_here\"\n# Or in Python: os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n\n# Define the rails configuration\nconfig = RailsConfig.from_content(\n    colang_content=\"\"\"\n        define user express greeting\n            \"hello\"\n            \"hi\"\n\n        define bot express greeting\n            \"Hello, how can I help you today?\"\n\n        define flow greeting\n            user express greeting\n            bot express greeting\n    \"\"\",\n    config={\n        \"models\": [\n            {\n                \"type\": \"main\",\n                \"engine\": \"openai\",\n                \"model\": \"gpt-3.5-turbo\"\n                # The OpenAI engine reads the key from the OPENAI_API_KEY environment variable.\n            }\n        ]\n    }\n)\n\n# Initialize the LLMRails\nrails = LLMRails(config=config)\n\n# Example asynchronous interaction\nasync def main():\n    print(\"\\n--- User: Hello!\")\n    response = await rails.generate_async(messages=[{\"role\": \"user\", \"content\": \"Hello!\"}])\n    print(f\"--- Bot: {response['content']}\")\n\n    print(\"\\n--- User: Tell me about NVIDIA.\")\n    response = await rails.generate_async(messages=[{\"role\": \"user\", \"content\": \"Tell me about NVIDIA.\"}])\n    print(f\"--- Bot: {response['content']}\")\n\nif __name__ == \"__main__\":\n    # Make sure your OPENAI_API_KEY environment variable is set before running.\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        print(\"Warning: OPENAI_API_KEY environment variable not set. The LLM call might fail.\")\n        print(\"Please set it: export OPENAI_API_KEY='sk-...'\\n\")\n\n    asyncio.run(main())\n","lang":"python","description":"This quickstart initializes `NeMo Guardrails` with a basic configuration using OpenAI's GPT-3.5-turbo and defines a simple greeting flow. Ensure your `OPENAI_API_KEY` environment variable is set for the example to successfully interact with the LLM."},"warnings":[{"fix":"Update imports from `from nemoguardrails.rails import ...` to `from nemoguardrails import ...`.","message":"The import path for core classes like `LLMRails` and `RailsConfig` was changed in version 0.9.0.","severity":"breaking","affected_versions":"<0.9.0"},{"fix":"Custom LLM providers should now be registered either by passing them through the `RailsConfig` `config` dictionary or using `rails.register_llm_provider()`.","message":"The method for configuring custom LLM providers was significantly refactored in version 0.8.0. The `configure_llm_model` method was removed.","severity":"breaking","affected_versions":"<0.8.0"},{"fix":"Use `RailsConfig.from_path(path)` to create a `RailsConfig` object, then pass it to the `LLMRails` constructor as `LLMRails(config=...)`.","message":"The `LLMRails` constructor's `config_path` parameter has been deprecated since version 0.17.0 in favor of a more explicit configuration flow.","severity":"deprecated","affected_versions":">=0.17.0"},{"fix":"Always `await` asynchronous methods within an `async` function and run the async function using `asyncio.run()` or a similar mechanism (e.g., in Jupyter).","message":"Many core interaction methods, such as `generate_async`, are asynchronous. Calling them without `await` or outside an `async` context will lead to runtime errors or unexpected behavior.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Change your import statements from `from nemoguardrails.rails import ...` to `from nemoguardrails import ...`.","cause":"The import path for core classes (`LLMRails`, `RailsConfig`) changed directly to `nemoguardrails` in version 0.9.0.","error":"ModuleNotFoundError: No module named 'nemoguardrails.rails'"},{"fix":"Set the API key as an environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or pass it explicitly in the `config` dictionary within your `RailsConfig`.","cause":"The API key required by the configured LLM model (e.g., OpenAI, Hugging Face) is not set in the environment variables or directly in the `RailsConfig`.","error":"KeyError: 'openai_api_key'"},{"fix":"Apply `await` to the coroutine returned by the async method (e.g., `await rails.generate_async(...)`), not to the `LLMRails` object itself, and run the enclosing `async` function with `asyncio.run()`.","cause":"`await` was applied to an object that is not awaitable, for example `await rails` instead of awaiting the coroutine returned by `rails.generate_async(...)`.","error":"TypeError: object LLMRails can't be used in 'await' expression"},{"fix":"If in an environment like Jupyter, use `await` directly in a cell if the environment supports it, or use `nest_asyncio` (`import nest_asyncio; nest_asyncio.apply()`) to allow nested event loops.","cause":"Attempting to start a new `asyncio` event loop (e.g., with `asyncio.run()`) from within an environment where an event loop is already active (e.g., Jupyter notebooks, certain web frameworks).","error":"RuntimeError: Cannot run the event loop while another loop is running"}]}