# BLOG
Notes on AI agent security, vibe coding, and building in public.
- 2026-03-13 · Give it 60 seconds. We generate a honeypot URL, your agent reads it, and we show you exactly which prompt injection attacks it fell for — stealth canary tokens, fake command execution, content suppression, injected marketing claims, and more.
- 2026-03-13 · Check Point spent $300M on Lakera. CrowdStrike, Cloudflare, and AWS all added AI guardrails. But none of them can detect a tool-chain exfiltration attack or stop an agent from drifting across 15 turns of conversation. The reason isn't technical — it's structural.
- 2026-03-10 · OpenAI acquired Promptfoo. Palo Alto bought Protect AI. CrowdStrike got Pangea. $8.5B in AI security acquisitions — but almost all of it is dev-time testing or network-layer filtering. The runtime layer is still missing.
- 2026-03-08 · Three fundamentally different perspectives on agent security — the engineer, the pragmatist, and the researcher — and why they're all right. The question isn't whether agents should have elevated credentials. Companies already decided. The question is what happens when they make mistakes at machine speed.
- 2026-03-08 · 135K GitHub stars. 42,000 exposed instances. 1,184 poisoned skills. 12+ CVEs in two months. OpenClaw is the first major AI agent security crisis — and a preview of what happens when agents go mainstream without runtime security.
- 2026-03-06 · Attackers think in sequences — reconnaissance, trust building, exploitation — but today's AI security checks each request in isolation. Chain detection is the missing layer.
- 2026-03-05 · The future isn't one super-agent — it's 20 agents with cross-system access. When one gets compromised via prompt injection, lateral movement becomes real. Here's why agents need network microsegmentation and zero trust.
- 2026-03-04 · Three real examples showing how Bastion's security pipeline handles prompt injection and API key leaks — from ML detection to regex redaction to session escalation.
- 2026-03-04 · Regex catches known attack patterns, but sophisticated prompt injections use semantic camouflage. We added local ONNX model inference to Bastion — 7–20ms latency, zero cloud dependency, and it coordinates with Tool Guard and DLP for system-wide response.
- 2026-03-02 · AI agent conversations carry the full history on every turn, causing O(N²) DLP scanning overhead. Here's how Bastion solves it with message-level hash caching.
- 2026-03-02 · AI agents are evolving into autonomous executors. We integrated Bastion into OpenClaw to make every LLM interaction visible, auditable, and controllable.
- 2025-06-15 · I monitored my Claude Code sessions for a week. Here's what data it sent to Anthropic — and what could go wrong.
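The O(N²) overhead mentioned in the 2026-03-02 post comes from resending the full conversation each turn: scanning N messages on turn N means 1 + 2 + … + N scans overall. A minimal sketch of message-level hash caching, assuming a hypothetical scanner interface (the names here are illustrative, not Bastion's actual API):

```python
import hashlib

class CachedDLPScanner:
    """Cache DLP results per message hash so each unique message
    is scanned once, even though the agent resends the full
    conversation history on every turn."""

    def __init__(self, scan_fn):
        self.scan_fn = scan_fn   # the expensive regex/ML scan
        self.cache = {}          # sha256 hex digest -> findings
        self.scans_performed = 0

    def scan_conversation(self, messages):
        findings = []
        for msg in messages:
            key = hashlib.sha256(msg.encode("utf-8")).hexdigest()
            if key not in self.cache:       # only scan unseen messages
                self.cache[key] = self.scan_fn(msg)
                self.scans_performed += 1
            findings.extend(self.cache[key])
        return findings

def toy_scan(msg):
    # Stand-in for a real DLP pass: flag anything that looks like a key.
    return ["api_key"] if "sk-" in msg else []

scanner = CachedDLPScanner(toy_scan)
history = []
for turn in range(10):
    history.append(f"message {turn}")
    scanner.scan_conversation(history)  # full history resent each turn

# Naive per-turn scanning would run 55 scans; caching runs 10.
print(scanner.scans_performed)  # 10
```

With the cache, total scan work drops from O(N²) to O(N) in the number of messages; only the hash lookup repeats per turn.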