DORA for AI Systems Playbook
ICT risk management and incident reporting where AI is in the critical path — for EU-facing financial entities.
How the playbook runs.
Each phase is operated jointly by our compliance agents and our specialists. Agents carry the mechanical steps; specialists own the judgement calls and the sign-off at the boundary between phases.
- 01 Critical function mapping
- 02 Third-party register
- 03 Incident classification
- 04 Threat-led testing
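The incident classification phase can be made concrete with a small sketch. This is an illustrative model only: the function name, input fields, and threshold values below are assumptions for demonstration, not the binding criteria, which live in DORA's regulatory technical standards on incident classification.

```python
# Illustrative thresholds only -- placeholders, not the binding DORA RTS criteria.
MAJOR_CRITERIA = {
    "clients_affected_pct": 10.0,  # share of clients impacted
    "duration_hours": 24.0,        # downtime of the affected service
}

def classify_incident(incident: dict) -> str:
    """Return 'major' or 'non-major' for a candidate ICT incident.

    Sketch logic: an incident is treated as major when it touches a
    mapped critical function AND breaches any quantitative threshold.
    """
    if not incident.get("critical_function_hit"):
        return "non-major"
    breached = (
        incident.get("clients_affected_pct", 0.0) >= MAJOR_CRITERIA["clients_affected_pct"]
        or incident.get("duration_hours", 0.0) >= MAJOR_CRITERIA["duration_hours"]
    )
    return "major" if breached else "non-major"
```

Under these assumed thresholds, a 30-hour outage of a critical function classifies as major, while the same outage outside the critical-function map does not.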
What you hold at the end.
Signed, dated, tamper-evident, portable. The artifact set is readable in PDF, Excel and JSON, with no platform login required. Your practitioners keep working even if we walk away.
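One way to picture "signed, dated, tamper-evident, portable" is a plain-JSON record sealed with a content hash. This is a minimal sketch under assumed names (`seal_artifact`, `verify_artifact`) and an assumed record shape, not the actual artifact schema:

```python
import hashlib
import json

def seal_artifact(payload: dict, signed_by: str, signed_on: str) -> dict:
    """Wrap an artifact payload with signer, date and a SHA-256 content hash.

    The hash makes tampering detectable; the plain-JSON shape keeps the
    record portable and readable without any platform login.
    """
    body = {"payload": payload, "signed_by": signed_by, "signed_on": signed_on}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
    return {**body, "sha256": digest}

def verify_artifact(record: dict) -> bool:
    """Recompute the hash over the body and confirm the record is untouched."""
    body = {k: record[k] for k in ("payload", "signed_by", "signed_on")}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
    return digest == record["sha256"]
```

Any edit to the payload, signer or date after sealing changes the recomputed digest, so `verify_artifact` returns `False` on a tampered record.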
The regimes this playbook answers.
One playbook, mapped clause-by-clause to every framework in scope. Open any framework below for the primary-source detail and the controls we land against it.
The agents that touch this playbook.
Each agent is bounded, instrumented and auditable. Actions are logged. Thresholds are reviewed. A specialist holds the pen at every decision point that carries supervisory weight.
- Control Mapping Agent: Lands the canonical control mapping against your active frameworks and identifies the operational gaps a specialist needs to close.
- Evidence Engine Agent: Produces the operational artifacts — manuals, SoA, runbooks, agent cards — against the schema each regulator will accept.
- Validation Agent: Runs effective-challenge reviews on the controls that gate high-risk AI — with a specialist sign-off recorded in the audit log.
- Monitoring Agent: Keeps your control register current as the portfolio, the vendors and the regulator guidance move.
Start this playbook on your portfolio.
Tell us which estate it runs against and which supervisory conversation it needs to answer. We will walk you from first discovery to a signed artifact — with the agents doing the assembly and our specialists owning the sign-off.