Independent research and operating notes on AI agent governance.
CAISI / Research + Operating Notes
Independent, reproducible research on AI agent governance for security and platform leaders. Start with the decision you need to make, then follow the evidence.
Role routes
AppSec
Start where runtime behavior, approval, and evidence quality can be measured.
CISO
Start where leadership needs a defensible story for risk, audit, and board review.
Platform security
Start where repo contracts, orchestration, boundaries, and proof become reusable platform work.
Research
The research hub is the canonical entry point for report pages, methodology, and artifact-backed findings.
Published report
A controlled comparison showing what changes when the system moves from prompt-only constraints to enforceable tool-boundary control with evidence capture.
Published report
An 890-target publication subset showing that public AI and agent adoption is easy to detect, but approved, deployable, and well-evidenced use is much harder to prove.
Blog
Use the blog when you need the operating model behind the research: approval packets, repo contracts, boundaries, pilots, and proof.
Executive adoption series
Five posts on platform standards, sanctioned pathways, approval discipline, and how leaders move from AI pilots to governed use.
Framework series
A 10-part framework on repo contracts, orchestration, isolation, evaluation, proof, and maturity.
Benchmark series
Five posts on risk scenarios, control efficacy, proof completeness, and pilot evaluation language for buyers.
About
Every headline claim maps to published artifacts, deterministic queries, and explicit scope limits. The point is to make AI agent control measurable enough for security and platform teams to act on.
Team
Contact
For research questions, publication inquiries, or collaboration around reproducible AI governance work: david@caisi.dev