CAISI / Research + Operating Notes

Centre for AI Security and Integrity

Independent, reproducible research on AI agent governance for security and platform leaders. Start with the decision you need to make, then follow the evidence.

Role routes

Choose the path that matches the meeting you are in.

Platform security

Standards before scale

Start where repo contracts, orchestration, boundaries, and proof become reusable platform work.

Research

Published reports and live builds

The research hub is the canonical entry point for report pages, methodology, and artifact-backed findings.

Published report

AI Tool and Agent Sprawl 2026

An 890-target publication subset showing that public AI and agent adoption is easy to detect, but approved, deployable, and well-evidenced use is much harder to prove.

Blog

Operating notes

Use the blog when you need the operating model behind the research: approval packets, repo contracts, boundaries, pilots, and proof.

Executive adoption series

From AI Pilots to Governed Adoption

Five posts on platform standards, sanctioned pathways, approval discipline, and how leaders move from AI pilots to governed use.

Benchmark series

How to Evaluate Agentic Control

Five posts on risk scenarios, control efficacy, proof completeness, and pilot evaluation language for buyers.

About

Research that can be checked

Every headline claim maps to published artifacts, deterministic queries, and explicit scope limits. The point is to make AI agent control measurable enough for security and platform teams to act on.

Team

CAISI contributors

Contact

Get in touch

For research questions, publication inquiries, or collaboration around reproducible AI governance work: david@caisi.dev