Featured
🛡️

Why AI Agents Need Runtime Security — Not Just Guardrails

Static guardrails check prompts before they reach the LLM. But AI agents don't just send prompts — they make tool calls, build dynamic prompts, receive LLM responses, and send messages to external channels. Each of these is an attack surface that guardrails never see.

Sentinel's 8-hook architecture intercepts every decision point in the agent lifecycle: from the moment a message arrives (message_received), through tool calls (before_tool_call, after_tool_call), prompt construction (before_prompt_build), LLM interaction (llm_input, llm_output), to the final channel delivery (message_sending, before_message_write). Each hook enforces policy in real time, creating an unbroken security perimeter. This is the difference between auditing logs after a breach and preventing the breach from happening at all.

Read more →
Latest Posts

From the engineering blog

🎯
April 2026 · Security

Prompt Injection: The #1 Threat to AI Agents

Prompt injection attacks are evolving faster than defenses. We analyze the latest techniques — from indirect injection via tool outputs to multi-turn context poisoning — and show how Sentinel's 3-layer analysis stack detects them in under 1ms.

🔒
March 2026 · Engineering

8-Layer Defense: How Sentinel's Hook System Works

A deep dive into the 8-hook architecture — message_received through before_message_write. How each hook enforces a different security policy, how they share state via the event bus, and why the order matters.

🏥
March 2026 · Healthcare

HIPAA Compliance for AI Agents

AI agents processing patient data must meet HIPAA's Privacy Rule, Security Rule, and Breach Notification Rule. We break down how Sentinel's healthcare vertical detects 14 breach types and generates audit trails that satisfy HHS investigations.

🐍
March 2026 · Deep Dive

Zero-Touch Instrumentation: How sitecustomize.py Changes Everything

Python's sitecustomize.py runs before any user code. We show how Sentinel uses this mechanism to monkey-patch OpenAI, Anthropic, and LangChain at import time — protecting every Python agent without a single code change.

💊
February 2026 · Pharma

FDA 21-CFR-11 and AI: Electronic Records in the Age of Agents

FDA's 21 CFR Part 11 requires electronic records to be attributable, legible, contemporaneous, original, and accurate (ALCOA). We explain how Sentinel's SHA-256 hash chain and immutable audit log satisfy every ALCOA principle for AI-generated records.

📊
February 2026 · Engineering

Building a Cost-Tracking LLM Proxy with 16-Model Pricing

Sentinel's LLM proxy intercepts every API call, extracts token counts, and computes cost across 16 model variants (GPT-4o, Claude 3.5, Gemini Pro, and more). We walk through the architecture, pricing tables, and how trace correlation ties cost back to individual agent sessions.