LLM Guardrails API Integration
This document describes how any third party can integrate with Swift Security. There are two entities:
Swift Security.
Swift Security Integrator (the third party that integrates with Swift Security).
Prerequisite:
You need to use the Swift Security UI to create and enforce a policy, as illustrated in the screenshots below:
1. Create the policy.
To create a policy, click 'ADD NEW POLICY' in the UI (Policies -> LLM Guardrails). The policy creation process is straightforward and self-explanatory.
2. Select Detectors.
For LLM Guardrail policies, selecting detectors is an important step. The following screenshot illustrates this process.
3. Enforce the policy.
Enforce the policy by toggling the status button to enable the listed policy.
Customize Detector:
System admins should customize the detector by navigating to Policy -> LLM Guardrails -> Customized Detectors.
From this screen, a system admin can enable individual detectors and set them to block.
Authorization:
The Swift Security Integrator and Swift Security must be mutually authenticated and authorized. Each API request from the Integrator carries its authentication key in the "x-api-key" header, as shown in the curl examples below.
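A note on the sample key: the x-api-key value used in the curl examples below is base64-encoded JSON describing the calling application. You can decode the sample key to see what it carries (real keys should be treated as secrets and never logged):

echo 'eyJhcHBsaWNhdGlvbl9uYW1lIjoiQUNNRS1BSSBDaGF0Ym90IiwiYXBwbGljYXRpb25fdHlwZSI6ImludGVybmFsIiwiZGVzY3JpcHRpb24iOiJBQ01FLUFJIENoYXRib3QiLCJvcmdhbml6YXRpb25faWQiOiJzd2lmdCIsInRhZ3MiOlsicHJpdmF0ZSJdfQ==' | base64 -d
# {"application_name":"ACME-AI Chatbot","application_type":"internal","description":"ACME-AI Chatbot","organization_id":"swift","tags":["private"]}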
API Request from the Swift Security Integrator (Input Scanner):
Here is a sample input scanner curl:
curl --location --request POST 'https://policymanager.swiftsecurity.ai/input/policy/scan' \
--header 'Content-Type: application/json' \
--header 'x-api-key: eyJhcHBsaWNhdGlvbl9uYW1lIjoiQUNNRS1BSSBDaGF0Ym90IiwiYXBwbGljYXRpb25fdHlwZSI6ImludGVybmFsIiwiZGVzY3JpcHRpb24iOiJBQ01FLUFJIENoYXRib3QiLCJvcmdhbml6YXRpb25faWQiOiJzd2lmdCIsInRhZ3MiOlsicHJpdmF0ZSJdfQ==' \
--data-raw '{
  "email": "admin@swiftsecurity.ai",
  "prompt": "Shit.",
  "user_group": ["admin", "hr"],
  "user_ip": "14.97.177.30",
  "user_name": "John Doe"
}'
This is an HTTP POST request containing the user details: an "x-api-key" header with the authentication key, and a JSON body with the email address, prompt, user group, user_ip, and user name.
Here, a policy is created with Toxicity as the input scanner detector, and the policy action is Block:
Response of Swift Security:
Response when Policy is violated and Action is Block.
“Toxicity”: true means toxicity was detected, and the “action” field shows “Block”.
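For illustration only, a blocked response would have roughly this shape; the field names below are inferred from the description above, and the actual schema may carry additional fields:

{
  "Toxicity": true,
  "action": "Block"
}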
Response when Policy is not violated.
Response when Policy is violated, with some detectors having policy action Block and others Alert Only.
When multiple detectors are selected in a policy, some with policy action “Alert Only” and some with “Block”, the response looks like the one above: Swift Security returns the list of violated detectors along with each one’s policy action.
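As a rough sketch of that mixed case (the list layout, field names, and the placeholder detector are illustrative assumptions, not the actual schema), such a response might pair each violated detector with its configured action:

{
  "violations": [
    { "detector": "Toxicity", "action": "Block" },
    { "detector": "<another detector>", "action": "Alert Only" }
  ]
}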
Swift Security Integrator curl request for the Output Scanner:
Here is a sample output scanner curl:
curl --location --request POST 'https://policymanager.swiftsecurity.ai/output/policy/scan' \
--header 'Content-Type: application/json' \
--header 'x-api-key: eyJhcHBsaWNhdGlvbl9uYW1lIjoiQUNNRS1BSSBDaGF0Ym90IiwiYXBwbGljYXRpb25fdHlwZSI6ImludGVybmFsIiwiZGVzY3JpcHRpb24iOiJBQ01FLUFJIENoYXRib3QiLCJvcmdhbml6YXRpb25faWQiOiJzd2lmdCIsInRhZ3MiOlsicHJpdmF0ZSJdfQ==' \
--data-raw '{
  "email": "admin@swiftsecurity.ai",
  "output_response": "Don’t talk shit.",
  "prompt": "Hi",
  "user_group": ["admin"],
  "user_ip": "14.97.177.30",
  "user_name": "John Doe"
}'
The parameters mirror the input scanner request, with one addition: output_response, the LLM response to be scanned.
Here is a policy instance for the output scanner:
For this policy, if the Swift Security Integrator sends the POST request with appropriate values, Swift Security scans the output_response and returns results accordingly. For example, if output_response is “Don’t talk shit”, Swift Security detects the response as toxic.
Response when Policy is Violated and Policy Action is Block.
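By analogy with the input scanner, an illustrative blocked response (field names assumed, not taken from the actual schema) might look like:

{
  "Toxicity": true,
  "action": "Block"
}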
Response when Policy is Violated and Policy Action is Alert Only.
Response when Policy is not Violated.
Alert and Event: Swift Security generates an Event for every request, irrespective of any violation. An Alert is generated, and shown in the UI, only when any of the detectors reports a policy violation.