Agent Tool-Output Sanitization

Security · updated Thu Feb 26

Validate and scrub data returned by external tools before it enters the LLM context, to prevent indirect prompt injection.

Steps

  1. Validate tool output against an expected JSON/type schema (e.g., with Zod or Pydantic).
  2. Scrub PII, credentials, or internal secrets (tokens, keys) from raw tool responses.
  3. Truncate excessive output strings to prevent context window exhaustion.
  4. Neutralize hidden instructions or prompt-injection triggers within tool data.
  5. Convert complex API objects into flat, LLM-readable text representations.
  6. Log sanitization events where data was altered or dropped for security.
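Steps 1 and 2 can be sketched with stdlib-only checks standing in for a schema library like Zod or Pydantic. The tool name, field names, and secret patterns here are illustrative assumptions, not a complete denylist:

```python
import re

# Hypothetical expected shape of a weather tool's response:
# field name -> required Python type (a stand-in for a real schema model).
WEATHER_SCHEMA = {"city": str, "temperature_c": float, "summary": str}

# Credential-shaped patterns (illustrative examples, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
]

def scrub_secrets(text: str) -> str:
    """Replace credential-shaped substrings with a redaction marker."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def validate_and_scrub(raw: dict) -> dict | None:
    """Return a scrubbed copy of the payload, or None if it is malformed."""
    if set(raw) != set(WEATHER_SCHEMA):
        return None
    for field, expected in WEATHER_SCHEMA.items():
        if not isinstance(raw[field], expected):
            return None
    clean = dict(raw)
    clean["summary"] = scrub_secrets(clean["summary"])
    return clean
```

Rejecting malformed payloads outright (returning `None`) rather than repairing them keeps the failure mode explicit; the agent loop can then retry the tool or report the error instead of feeding suspect data to the model.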
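For steps 3 and 4, a minimal sketch might truncate to a fixed character budget and rewrite common injection phrasings. The budget and marker phrases below are assumptions to tune per deployment; a pattern denylist reduces risk but is not a complete defense against indirect injection:

```python
import re

MAX_CHARS = 4000  # assumed per-result budget; tune to your context window

# Phrases that often mark injected instructions in fetched content
# (an illustrative denylist, not a guarantee).
INJECTION_MARKERS = re.compile(
    r"(ignore (all )?previous instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def truncate(text: str, limit: int = MAX_CHARS) -> str:
    """Cap output length so one tool call cannot exhaust the context."""
    if len(text) <= limit:
        return text
    return text[:limit] + " ...[truncated]"

def neutralize(text: str) -> str:
    """Replace likely injection triggers with a visible filter marker."""
    return INJECTION_MARKERS.sub("[FILTERED]", text)
```

Leaving a visible `[FILTERED]` or `...[truncated]` marker, rather than silently dropping text, lets the model (and a human reviewer) see that the tool output was altered.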
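Steps 5 and 6 can be sketched as follows: recursively flatten nested API objects into `path: value` lines the model can read, and log whenever sanitization altered or dropped data. The logger name and log format are assumptions:

```python
import logging

logger = logging.getLogger("tool_sanitizer")  # assumed logger name

def flatten(obj, prefix: str = "") -> list[str]:
    """Flatten a nested API object into 'path: value' lines for the LLM."""
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            lines.extend(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            lines.extend(flatten(value, f"{prefix}{index}."))
    else:
        lines.append(f"{prefix.rstrip('.')}: {obj}")
    return lines

def log_sanitization(tool_name: str, field: str, action: str) -> None:
    """Record that a field was altered or dropped (step 6)."""
    logger.warning(
        "sanitized tool=%s field=%s action=%s", tool_name, field, action
    )
```

For example, `flatten({"user": {"name": "Ada", "ids": [1, 2]}})` yields the lines `user.name: Ada`, `user.ids.0: 1`, and `user.ids.1: 2`, which are far cheaper for the model to parse than raw nested JSON.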
