Foundation Model Due Diligence Playbook
Bringing a GPAI model (Claude, GPT, Gemini, Llama, or a sovereign build) into scope: the diligence a regulated deployer is now expected to perform.
How the playbook runs.
Each phase is operated jointly by our compliance agents and our specialists. Agents carry the mechanical steps; specialists own the judgement calls and the sign-off at the boundary between phases.
- 01. Provider due diligence
- 02. Training-data posture
- 03. Safety-testing review
- 04. Contractual flow-down
What you hold at the end.
Signed, dated, tamper-evident, portable. The artifact set reads in PDF, Excel, and JSON, with no platform login required. Your practitioners keep working even if we walk away.
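As an illustration of what "tamper-evident" can mean for a JSON artifact, the sketch below seals a record with a content hash so any later edit is detectable. The field names and the sealing scheme are hypothetical, not the playbook's actual schema.

```python
import hashlib
import json

def digest(artifact: dict) -> str:
    """Hash the artifact's payload (everything except the seal itself)."""
    payload = {k: v for k, v in artifact.items() if k != "sha256"}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def seal(artifact: dict) -> dict:
    """Attach the digest, making the artifact tamper-evident."""
    return {**artifact, "sha256": digest(artifact)}

def verify(artifact: dict) -> bool:
    """True only if the artifact is unchanged since it was sealed."""
    return artifact.get("sha256") == digest(artifact)

# Hypothetical artifact for one diligence phase.
record = seal({
    "phase": "01-provider-due-diligence",
    "signed_by": "lead.specialist@example.com",
    "signed_on": "2025-01-15",
    "findings": ["model card reviewed", "bridge letter on file"],
})

assert verify(record)              # intact as sealed
record["signed_on"] = "2026-01-01"
assert not verify(record)          # any edit breaks the seal
```

Canonical serialization (sorted keys, fixed separators) matters here: two semantically identical artifacts must hash to the same digest, or verification would fail spuriously.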
The regimes this playbook answers.
One playbook, mapped clause-by-clause to every framework in scope. Open any framework below for the primary-source detail and the controls we land against it.
The agents that touch this playbook.
Each agent is bounded, instrumented and auditable. Actions are logged. Thresholds are reviewed. A specialist holds the pen at every decision point that carries supervisory weight.
- Inventory Agent: discovers nth-party AI that arrived through enterprise SaaS, agent marketplaces and sub-processor stacks.
- Risk Tiering Agent: applies the vendor-tiering methodology and flags concentration risk the register should surface to the board.
- Control Mapping Agent: routes each vendor to the clause library — flow-down obligations, audit rights, exit clauses, AI-specific addenda.
- Monitoring Agent: tracks vendor-side incidents, SOC 2 bridge letters and control changes, and alerts when the residual-risk posture shifts.
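The bounded-and-audited pattern described above can be sketched as a minimal decision gate: agent actions below a risk threshold proceed automatically, and anything above it is queued for specialist sign-off. The agent names reuse those in the list; the threshold, risk scores, and log shape are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    """Append-only record of every agent action and its outcome."""
    entries: list = field(default_factory=list)

    def record(self, agent: str, action: str, risk: float, outcome: str):
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "risk": risk,
            "outcome": outcome,
        })

SPECIALIST_THRESHOLD = 0.6  # illustrative value; reviewed, not hard-coded in practice

def route(log: AuditLog, agent: str, action: str, risk: float) -> str:
    """Auto-approve low-risk actions; escalate the rest to a specialist."""
    outcome = "auto-approved" if risk < SPECIALIST_THRESHOLD else "escalated"
    log.record(agent, action, risk, outcome)
    return outcome

log = AuditLog()
route(log, "Risk Tiering Agent", "tier vendor as low-risk", risk=0.2)
route(log, "Control Mapping Agent", "waive audit-rights clause", risk=0.9)
# The first action clears the gate; the second waits for specialist sign-off.
```

The point of the sketch is that the gate is data, not discretion: every action lands in the log with its score and outcome, so the threshold itself can be reviewed against the record.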
Start this playbook on your portfolio.
Tell us which estate it runs against and which supervisory conversation it needs to answer. We will walk it from first discovery to a signed artifact, with the agents doing the assembly and our specialists owning the sign-off.