NeMo Guardrails
NeMo Guardrails is an open-source toolkit from NVIDIA for adding programmable guardrails to LLM-based conversational systems. Guardrails are rules, written in the Colang modeling language, that constrain what an assistant can say and do: topical rails, safety checks, and dialog flows. As of version 0.21.0 it integrates with a range of LLM providers and frameworks and receives regular updates for features and stability.
Common errors
- ModuleNotFoundError: No module named 'nemoguardrails.rails'
  Cause: The core classes (`LLMRails`, `RailsConfig`) moved to the top-level `nemoguardrails` package in version 0.9.0.
  Fix: Change `from nemoguardrails.rails import ...` to `from nemoguardrails import ...`.
- KeyError: 'openai_api_key'
  Cause: The API key for the configured LLM provider (e.g., OpenAI, Hugging Face) is not set in the environment variables or in the `RailsConfig`.
  Fix: Set the API key as an environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or pass it explicitly in the `config` dictionary of your `RailsConfig`.
- TypeError: object LLMRails can't be awaited
  Cause: An asynchronous method (e.g., `generate_async`) was called without the `await` keyword in an asynchronous context.
  Fix: Prefix calls to async methods such as `generate_async` with `await` inside an `async` function, and run that function with `asyncio.run()`.
- RuntimeError: Cannot run the event loop while another loop is running
  Cause: `asyncio.run()` was called in an environment where an event loop is already active (e.g., Jupyter notebooks, certain web frameworks).
  Fix: In environments like Jupyter, `await` the coroutine directly in a cell if supported, or use `nest_asyncio` (`import nest_asyncio; nest_asyncio.apply()`) to allow nested event loops.
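The event-loop conflict above can be detected with the standard library alone. A minimal sketch (the `run_coro` helper and `main` coroutine are illustrative stand-ins, not part of the NeMo Guardrails API):

```python
import asyncio

async def main():
    # Stand-in for an async call such as rails.generate_async(...)
    return "ok"

def run_coro(coro):
    """Run a coroutine whether or not an event loop is already active."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running (plain script): asyncio.run() is safe here.
        return asyncio.run(coro)
    # A loop is already running (e.g., Jupyter): `await` the coroutine
    # in the cell instead, or apply nest_asyncio before asyncio.run().
    raise RuntimeError("await the coroutine directly, or use nest_asyncio")

print(run_coro(main()))  # → ok
```

In a plain script this falls through to `asyncio.run()`; inside Jupyter it surfaces a clear error instead of the opaque "Cannot run the event loop" message.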
Warnings
- breaking The import path for core classes like `LLMRails` and `RailsConfig` was changed in version 0.9.0.
- breaking The method for configuring custom LLM providers was significantly refactored in version 0.8.0. The `configure_llm_model` method was removed.
- deprecated The `LLMRails` constructor's `config_path` parameter has been deprecated since version 0.17.0 in favor of a more explicit configuration flow.
- gotcha Many core interaction methods, such as `generate_async`, are asynchronous. Incorrectly calling them without `await` or outside an `async` context will lead to runtime errors or unexpected behavior.
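The async gotcha above is easy to reproduce with plain `asyncio`; calling a coroutine function without `await` hands back a coroutine object, not the result. A stdlib-only sketch (this `generate_async` is a stand-in, not the library's method):

```python
import asyncio

async def generate_async():
    # Stand-in for an async API method that returns a message dict.
    return {"content": "Hello!"}

coro = generate_async()           # missing await: this is a coroutine object
assert asyncio.iscoroutine(coro)  # not the response dict you expected

response = asyncio.run(coro)      # actually execute it
assert response["content"] == "Hello!"
```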
Install
- pip install nemoguardrails
Imports
- LLMRails
from nemoguardrails import LLMRails
- RailsConfig
from nemoguardrails import RailsConfig
- RailsClient
from nemoguardrails.python_client import RailsClient
Quickstart
import os
import asyncio

from nemoguardrails import LLMRails, RailsConfig

# Ensure the API key for your LLM provider is set in the environment.
# For OpenAI: export OPENAI_API_KEY="your_api_key_here"
# Or in Python: os.environ["OPENAI_API_KEY"] = "sk-..."

# Define the rails configuration: Colang flows plus the model settings.
config = RailsConfig.from_content(
    colang_content="""
define user express greeting
  "hello"
  "hi"

define bot express greeting
  "Hello, how can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
""",
    config={
        "models": [
            {
                "type": "main",
                "engine": "openai",
                "model": "gpt-3.5-turbo",
                # The OpenAI engine reads OPENAI_API_KEY from the environment.
            }
        ]
    },
)

# Initialize the LLMRails
rails = LLMRails(config=config)

# Example asynchronous interaction
async def main():
    print("\n--- User: Hello!")
    response = await rails.generate_async(
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(f"--- Bot: {response['content']}")

    print("\n--- User: Tell me about NVIDIA.")
    response = await rails.generate_async(
        messages=[{"role": "user", "content": "Tell me about NVIDIA."}]
    )
    print(f"--- Bot: {response['content']}")

if __name__ == "__main__":
    # Make sure your OPENAI_API_KEY environment variable is set before running.
    if not os.environ.get("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY environment variable not set. The LLM call might fail.")
        print("Please set it: export OPENAI_API_KEY='sk-...'\n")
    asyncio.run(main())
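Beyond a demo, the configuration usually lives on disk and is loaded with `RailsConfig.from_path`. A minimal sketch of a config directory (the file names shown are conventional; the model settings mirror the Quickstart above):

```yaml
# config/config.yml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
```

With the Colang flows saved alongside it (e.g., in `config/rails.co`), the directory loads as `config = RailsConfig.from_path("./config")`, replacing the inline `from_content` call.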