We believe every AI decision in a regulated industry should have an immutable, auditable record.
AI agents are making autonomous decisions in regulated industries — handling patient records in healthcare, processing financial transactions in fintech, managing pharmaceutical trial data under FDA oversight. Every one of these decisions is a potential compliance breach, data leak, or privilege escalation. The tooling to secure them didn't exist as a unified product.
Sentinel was born from the realization that compliance, security, and auditability can't be afterthoughts bolted onto an AI deployment. They need to be embedded into the runtime itself — intercepting every prompt, every tool call, every LLM response, and every outgoing message before damage is done.
To make every AI agent in a regulated industry fully auditable, tamper-proof, and compliant — without requiring the developer to change a single line of code. We build the security infrastructure so AI teams can focus on building intelligent agents.
A world where deploying an AI agent in finance, healthcare, or pharma is as safe and auditable as deploying a traditional banking application. Sentinel becomes the unified security layer that every enterprise AI stack includes by default.
Every security decision is logged, explained, and auditable. No black boxes.
SHA-256 hash chains, immutable writes, tamper detection on every read.
One install command. Zero code changes. Auto-instrumentation handles the rest.
The full security engine is open source. Enterprise features layer on top.
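The tamper-evident audit trail above rests on a simple idea: each log entry's SHA-256 hash covers the previous entry's hash, so altering any record invalidates every record after it. A minimal sketch of the pattern — the `AuditLog` class and its methods are illustrative, not Sentinel's actual API:

```python
import hashlib
import json

class AuditLog:
    """Sketch of a SHA-256 hash-chained, append-only log.

    Illustrative only: names are assumptions, not Sentinel's real API.
    """

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis hash

    def append(self, event: dict) -> str:
        # Each entry's hash covers the previous hash, chaining entries
        # so that editing any record breaks everything after it.
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self._entries.append({"event": event, "hash": digest, "prev": self._last_hash})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Tamper detection on read: recompute the chain from genesis.
        prev = "0" * 64
        for entry in self._entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["hash"] != expected or entry["prev"] != prev:
                return False
            prev = entry["hash"]
        return True
```

Verification walks the chain from the genesis hash, so a mutated record — even one deep in history — is caught on the next read.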
We spent months building separate open-source tools for AI agent security: a prompt interceptor, an endpoint monitor, a compliance classifier, a policy engine, and an agent plugin. Each one worked well in isolation. Together, they left gaps — different logging formats, no shared dashboard, inconsistent policy enforcement, and five separate install processes.
We kept seeing the same problems across deployments: prompt injections slipping through because the interceptor didn't talk to the classifier. PII leaking into Slack channels because the DLP scanner and the message hook were in different repos. Compliance auditors asking for a unified audit trail that didn't exist because each tool wrote to its own database.
So we unified everything. Sentinel merges all five repositories into a single platform where every security layer talks to every other, every event feeds the same dashboard, and one install command protects every agent on the machine. The 8-hook plugin, the 3-layer prompt analyzer, the breach compliance engine, the LLM proxy, and the endpoint monitors all share a single event bus, a single audit database, and a single configuration file.
The result: install once, everything is automatic. No gaps. No inconsistencies. No five-repo nightmare.
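The shared event bus that ties the layers together follows a plain publish/subscribe shape: every security layer publishes to one bus, and shared consumers — the audit database, the dashboard — subscribe once. A rough sketch under assumed names (`EventBus`, the topic string, and the handlers are hypothetical):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal pub/sub sketch of the shared-bus pattern described above.

    Hypothetical names: the point is that hooks, analyzer, classifier,
    proxy, and monitors all publish to one bus, and shared consumers
    (audit log, dashboard feed) subscribe in one place.
    """

    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan out to every consumer registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

# One audit sink shared by every layer.
bus = EventBus()
audit_rows = []
bus.subscribe("security.event", audit_rows.append)
bus.publish("security.event", {"layer": "prompt_analyzer", "verdict": "allow"})
```

Because every layer emits to the same bus, a new consumer (say, an alerting integration) sees all five layers' events by registering a single handler.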
Open Roles
Merged five separate repositories into a single monorepo. Designed shared event bus, unified configuration, and single audit database.
Built the unified TypeScript plugin with 8 security hooks covering the entire agent lifecycle — from message_received to before_message_write.
Launched rule-based classifier detecting 40+ breach types across fintech, healthcare, and pharma verticals in under 1ms.
Shipped zero-code monkey-patching for OpenAI, Anthropic, and LangChain. Any Python agent is automatically protected at import time.
Deployed HTTP reverse proxy that intercepts LLM API calls, measures latency, extracts tokens, and computes cost across 16 model variants.
Launched real-time SSE dashboard with Overview, Live Events, Traces, Breach Monitor, Block Rules, and Settings tabs.
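The zero-code protection milestone relies on monkey-patching: replacing an LLM client's methods at import time so every call is scanned before it leaves the process. A minimal sketch of the mechanism — `protect`, `scan`, and `FakeClient` are illustrative stand-ins, not Sentinel's code or any real SDK:

```python
import functools

def protect(client_class, method_name, scan):
    """Sketch of zero-code instrumentation via monkey-patching.

    Illustrative only: wraps a client method so its arguments pass
    through a scanner first. Real auto-instrumentation would apply
    this at import time to the OpenAI/Anthropic/LangChain clients.
    """
    original = getattr(client_class, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        scan(kwargs)  # e.g. DLP / prompt-injection checks before the call
        return original(self, *args, **kwargs)

    setattr(client_class, method_name, wrapper)

# Demo against a stand-in client (not a real SDK):
class FakeClient:
    def create(self, **kwargs):
        return {"echo": kwargs.get("prompt")}

seen = []
protect(FakeClient, "create", seen.append)
FakeClient().create(prompt="hello")
```

Because the patch swaps the method on the class itself, agent code that imports and calls the client is protected without any change on the developer's side.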
Whether you're deploying agents in fintech, healthcare, or pharma — we'd love to hear from you.