Integration Guide
Get Sentinel Proxy running in front of your LLM in minutes.
1. Get your API key
Sign up and grab an API key from your dashboard. Your key starts with sk_live_ and is shown only once, at creation.
2. Point your OpenAI client to Sentinel
Swap the base URL — that's the only change needed. Works with any OpenAI-compatible SDK.
Python

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://sentinel.ircnet.us/v1/scrub",
    api_key="sk_live_your_key_here",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Node.js / TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://sentinel.ircnet.us/v1/scrub",
  apiKey: "sk_live_your_key_here",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);
```

cURL

```shell
curl -X POST https://sentinel.ircnet.us/v1/scrub \
  -H "Content-Type: application/json" \
  -H "X-Sentinel-Key: sk_live_your_key_here" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

3. Choose a security tier
Sentinel supports two tiers that control how aggressively threats are handled. Pass the tier field in your request body. Defaults to standard if omitted.
```json
{
  "content": "User message to scan...",
  "tier": "standard"
}
```

standard: Balanced for most use cases. Blocks high-confidence attacks, neutralizes likely threats, and flags borderline content for your app to decide.

strict: Lower thresholds for high-security environments. Casts a wider net: more content is neutralized or flagged, at the cost of more false positives.
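When you call the proxy through an SDK rather than with raw JSON, the tier still has to travel in the request body. A minimal sketch of assembling such a body, assuming the proxy reads a top-level tier field alongside the usual chat fields (the build_scrub_body helper is illustrative, not part of Sentinel):

```python
def build_scrub_body(messages, model="gpt-4o", tier="standard"):
    """Assemble a /v1/scrub request body with an explicit security tier."""
    if tier not in ("standard", "strict"):
        raise ValueError(f"unknown tier: {tier}")
    # tier rides alongside the standard chat-completion fields
    return {"model": model, "messages": messages, "tier": tier}

body = build_scrub_body(
    [{"role": "user", "content": "Hello!"}],
    tier="strict",
)
```

With the official OpenAI Python SDK, the same field can be injected per request via the extra_body keyword of client.chat.completions.create, e.g. extra_body={"tier": "strict"}.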
Threat score thresholds by tier:
| Threat Score | Standard | Strict |
|---|---|---|
| > 0.82 | blocked | blocked |
| 0.55 – 0.82 | neutralized | neutralized |
| 0.40 – 0.55 | flagged | neutralized |
| 0.25 – 0.40 | clean | flagged |
| ≤ 0.25 | clean | clean |
High-confidence regex matches bypass scoring and are blocked immediately (threat_score 1.0).
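The thresholds and the regex bypass above can be sketched as a small decision function. This is illustrative only; the real scoring runs server-side inside Sentinel:

```python
def classify(threat_score, tier="standard", regex_hit=False):
    """Map a threat score to the action the given tier would take."""
    if regex_hit or threat_score > 0.82:
        # high-confidence regex matches bypass scoring and are always blocked
        return "blocked"
    if tier == "strict":
        # strict lowers the neutralize/flag thresholds
        if threat_score > 0.40:
            return "neutralized"
        if threat_score > 0.25:
            return "flagged"
        return "clean"
    # standard tier
    if threat_score > 0.55:
        return "neutralized"
    if threat_score > 0.40:
        return "flagged"
    return "clean"
```

Note how a score of 0.5 is merely flagged under standard but neutralized under strict, which is exactly the "wider net" trade-off described above.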
4. Understand responses
Sentinel inspects every request and returns one of four actions:

- clean: No threat detected. The request passes through transparently.
- flagged: Borderline content. The full payload is passed through untouched, but action_taken is set to "flagged" so your application can apply its own logic (e.g. run a secondary classifier, require human review, or add context to the LLM system prompt).
- neutralized: Suspicious content is sanitized before forwarding. The request still completes.
- blocked: High-threat request is rejected, and a 403 response is returned to the caller.
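On the client side, the four actions reduce to a simple branch. A sketch of one way to handle them, assuming the response body carries the action_taken field mentioned above (the response shape beyond that field, and the handler's return values, are illustrative assumptions):

```python
def handle_scrub_result(status_code, payload):
    """Decide what to do with a proxied response based on Sentinel's verdict."""
    if status_code == 403:
        # blocked: the request was rejected outright
        return "rejected"
    action = payload.get("action_taken", "clean")
    if action == "flagged":
        # borderline content passed through untouched; apply your own policy
        return "needs_review"
    # clean and neutralized requests complete normally
    return "ok"
```

A "needs_review" result is where you would plug in the secondary classifier or human-review step suggested above.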
5. n8n workflow integration
Use Sentinel Proxy as a tool inside n8n AI agents. These two workflows work together to give your agent safe web search — all retrieved content is scanned for prompt injection before the LLM sees it.
- Sub-workflow tool: searches Tavily, preprocesses the results, and runs them through Sentinel's POST /v1/scrub endpoint.
- Main agent workflow: chat trigger + LLM + the safe web search tool.

To get started, import both JSON files into your n8n instance, update the X-Sentinel-Key header in the Safe Web Search workflow with your API key, and connect your preferred LLM provider.
6. Monitor in your dashboard
View real-time usage stats, threat reports, and attack analytics in your dashboard. Every request is logged with its threat score, action taken, and latency.