Integration Guide
Get Sentinel Proxy running in front of your LLM in minutes.
1. Get your API key
Sign up and grab an API key from your dashboard. Your key starts with sk_live_ and is only shown once at creation.
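Because the key is displayed only at creation, a common pattern is to keep it out of source code and load it from the environment. A minimal sketch; the variable name SENTINEL_API_KEY is this example's choice, not something Sentinel mandates:

import os

# SENTINEL_API_KEY is an arbitrary variable name chosen for this example.
# Export it in your shell (or load it via a .env loader) before running your app.
sentinel_key = os.environ["SENTINEL_API_KEY"]

You can then pass sentinel_key as the api_key argument in the snippets in step 2 instead of the hardcoded placeholder.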
2. Point your OpenAI client to Sentinel
Swap the base URL — that's the only change needed. Works with any OpenAI-compatible SDK.
Python
from openai import OpenAI
client = OpenAI(
base_url="https://sentinel.ircnet.us/v1/scrub",
api_key="sk_live_your_key_here",
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
Node.js / TypeScript
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://sentinel.ircnet.us/v1/scrub",
apiKey: "sk_live_your_key_here",
});
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);
cURL
curl -X POST https://sentinel.ircnet.us/v1/scrub \
-H "Content-Type: application/json" \
-H "X-Sentinel-Key: sk_live_your_key_here" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
3. Understand responses
Sentinel inspects every request and takes one of three actions:
The request passes through to the upstream LLM unchanged.
Suspicious content is sanitized before forwarding; the request still completes.
A high-threat request is rejected and a 403 response is returned to the caller (see the handling sketch below).
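Because a blocked request surfaces as a regular HTTP 403, you can handle it like any other API error. A minimal sketch, assuming the official openai Python SDK, which maps a 403 response to PermissionDeniedError; the printed message is illustrative, not Sentinel's exact error schema:

from openai import OpenAI, PermissionDeniedError

client = OpenAI(
    base_url="https://sentinel.ircnet.us/v1/scrub",
    api_key="sk_live_your_key_here",
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    # Pass-through and sanitized requests complete normally.
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # A high-threat request was rejected by Sentinel with a 403.
    print(f"Blocked by Sentinel: {err}")

Pass-through and sanitized requests look identical to a normal completion on the client side, so only the blocked case needs explicit handling.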
4. n8n workflow integration
Use Sentinel Proxy as a tool inside n8n AI agents. These two workflows work together to give your agent safe web search — all retrieved content is scanned for prompt injection before the LLM sees it.
Sub-workflow tool: searches Tavily, preprocesses results, and runs them through Sentinel's POST /v1/scrub endpoint.
Main agent workflow: chat trigger + LLM + the safe web search tool.
Import both JSON files into your n8n instance, update the X-Sentinel-Key header in the Safe Web Search workflow with your API key, and connect your preferred LLM provider.
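If you want to reproduce the sub-workflow's scrub step outside n8n, the sketch below approximates what its HTTP Request node does, assuming POST /v1/scrub accepts the same headers and body shape as the cURL example in step 2. The scrub_search_results helper and the way retrieved text is wrapped into a single user message are illustrative choices, not a documented contract:

import requests

SENTINEL_URL = "https://sentinel.ircnet.us/v1/scrub"
SENTINEL_KEY = "sk_live_your_key_here"

def scrub_search_results(retrieved_text: str) -> dict:
    """Send retrieved web content through Sentinel so it is scanned for
    prompt injection before the LLM sees it (mirrors the Safe Web Search
    sub-workflow's call to POST /v1/scrub)."""
    payload = {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": retrieved_text}],
    }
    resp = requests.post(
        SENTINEL_URL,
        headers={
            "Content-Type": "application/json",
            "X-Sentinel-Key": SENTINEL_KEY,
        },
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()  # a 403 here means Sentinel blocked the content
    return resp.json()

# Illustrative placeholder for the text returned by the Tavily search node.
tavily_results = "…search snippets fetched by the Tavily node…"
print(scrub_search_results(tavily_results))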
5. Monitor in your dashboard
View real-time usage stats, threat reports, and attack analytics in your dashboard. Every request is logged with its threat score, action taken, and latency.