NeMo Guardrails

0.21.0 · active · verified Thu Apr 16

NeMo Guardrails is an open-source toolkit from NVIDIA for adding programmable guardrails to LLM-based conversational systems. Guardrails let you define rules for what an AI assistant may say and do, improving safety and keeping behavior on track. As of version 0.21.0 it integrates with a range of LLM providers and frameworks, and the project releases updates regularly.


Install
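The package is published on PyPI. Pinning to the version documented on this page is a reasonable default:

```shell
# Install from PyPI, pinned to the version covered by this page.
pip install nemoguardrails==0.21.0
```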

Imports
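These are the two entry points used throughout this page:

```python
# RailsConfig loads or defines a guardrails configuration;
# LLMRails wraps an LLM with the configured rails.
from nemoguardrails import LLMRails, RailsConfig
```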

Quickstart

This quickstart initializes `NeMo Guardrails` with a basic configuration using OpenAI's `gpt-3.5-turbo` model and defines a simple greeting flow. Set the `OPENAI_API_KEY` environment variable before running so the example can reach the LLM.

import os
import asyncio
from nemoguardrails import LLMRails, RailsConfig

# Ensure you have OPENAI_API_KEY or other LLM provider keys set in your environment.
# For OpenAI, set it via: export OPENAI_API_KEY="your_api_key_here"
# Or in Python: os.environ["OPENAI_API_KEY"] = "sk-..."

# Define the rails configuration: Colang flows plus YAML model settings.
# The OpenAI engine reads the API key from the OPENAI_API_KEY environment
# variable; it does not need to appear in the config itself.
config = RailsConfig.from_content(
    colang_content="""
        define user express greeting
            "hello"
            "hi"

        define bot express greeting
            "Hello, how can I help you today?"

        define flow greeting
            user express greeting
            bot express greeting
    """,
    yaml_content="""
        models:
          - type: main
            engine: openai
            model: gpt-3.5-turbo
    """,
)

# Initialize the LLMRails
rails = LLMRails(config=config)

# Example asynchronous interaction
async def main():
    print("\n--- User: Hello!")
    response = await rails.generate_async(messages=[{"role": "user", "content": "Hello!"}])
    print(f"--- Bot: {response['content']}")

    print("\n--- User: Tell me about NVIDIA.")
    response = await rails.generate_async(messages=[{"role": "user", "content": "Tell me about NVIDIA."}])
    print(f"--- Bot: {response['content']}")

if __name__ == "__main__":
    # Make sure your OPENAI_API_KEY environment variable is set before running.
    if not os.environ.get("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY environment variable not set. The LLM call might fail.")
        print("Please set it: export OPENAI_API_KEY='sk-...'\n")

    asyncio.run(main())
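Outside of an event loop, `LLMRails` also exposes a blocking `generate` method that takes the same message format as `generate_async`. A minimal sketch, assuming `rails` was built from the config above and `OPENAI_API_KEY` is set:

```python
# Synchronous variant: rails.generate mirrors generate_async and
# manages the event loop internally, so no asyncio code is needed.
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```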
