{"title":"Agent Hardening: Preventing Prompt Injection & Hijacking","region":"Global","category":"Security","description":"Five-step defenses to prevent prompt injection and hijacking in autonomous tool callers.","lastUpdated":"2026-02-22","steps":["Wrap user-provided data in explicit delimiters and forbid instructions inside those tags.","Enforce schema validation so only expected data types reach tools.","Pre-scan external content for injection patterns before passing to the main agent.","Flag high-risk tool calls for human approval.","Execute untrusted code only in sandboxed environments."],"url":"https://checklist.day/agent-prompt-injection-defense"}
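The first three steps in this checklist can be sketched in code. The following is a minimal Python sketch, not a production implementation: the `<untrusted>` delimiter tag, the regex patterns, and the function names (`wrap_untrusted`, `validate_tool_args`, `scan_for_injection`) are all illustrative assumptions, not part of any specific agent framework.

```python
import re

# Illustrative injection phrasings; a real deployment would use a larger,
# maintained pattern set or a classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def wrap_untrusted(text: str) -> str:
    """Step 1: wrap user-provided data in explicit delimiters that the
    system prompt declares as data-only (instructions inside are ignored)."""
    # Strip delimiter look-alikes so the payload cannot break out of the tags.
    sanitized = text.replace("<untrusted>", "").replace("</untrusted>", "")
    return f"<untrusted>\n{sanitized}\n</untrusted>"

def validate_tool_args(args: dict, schema: dict) -> bool:
    """Step 2: minimal schema check so only expected keys and data types
    reach a tool; reject extra, missing, or mistyped fields."""
    if set(args) != set(schema):
        return False
    return all(isinstance(args[key], typ) for key, typ in schema.items())

def scan_for_injection(text: str) -> list:
    """Step 3: pre-scan external content for known injection phrasing
    before it is passed to the main agent; returns the matched patterns."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

Steps 4 and 5 (human approval for high-risk calls, sandboxed code execution) are policy and infrastructure controls rather than string-level checks, so they are omitted from this sketch.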