Prompt Injection Scanner
(Input scanner)
Last updated
(Input scanner)
Last updated
It is specifically tailored to guard against crafty input manipulations targeting large language models (LLM). By identifying and mitigating such attempts, it ensures the LLM operates securely without succumbing to injection attacks.
The scanner examines user inputs for signs of prompt injection, such as embedded commands or code that could alter the AI’s behavior or output. It looks for unusual patterns or unexpected instructions within the input text.
Prompt Injection Detection Policy for AI Chatbot
Create a new policy as same as shown in LLM Guardrails Policy, for Prompt Injection detection select scanner Prompt Injection.
Optionally, perform a test to ensure the policy is functioning as intended. Check that Prompt Injection is detected and blocked as specified.