Service pillar

AI Governance

Senior advisory on the governance an AI-era regulator will actually examine — policies, roles, risk tiering, validation cadence.

Stand up an AI governance programme grounded in the authoritative regulator record, calibrated to your portfolio, and carried through to the evidence a supervisor can sign. Partners on the engagement; the compliance intelligence layer underneath.

Outcomes we deliver

Each outcome is a signed, dated artifact your regulator, your auditor and your board can read — and that your practitioners can keep working with long after we walk away.

Auditable AI inventory indexed to primary-source obligations
Governance charter, RACI, and human sign-off gates that survive examination
Model and agent cards indexed to regulator clauses
Evidence traceable to source and to the decision that produced it

Compliance agents in this pillar

Each agent is bounded, instrumented and auditable. Our specialists direct, review and sign off; the agents do the mechanical work at a multiple of the pace of traditional firms.

Inventory Agent

Crawls SaaS, platforms, APIs, agent orchestrators and shadow deployments to enumerate every model and AI system in the estate — with owner, vendor, data classes and risk signals.
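As an illustration only, an inventory record of the kind this agent produces might capture fields like the following. The field names here are our assumption for the sketch, not a regulator-mandated schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions,
# not an authoritative schema such as OSFI E-23 Appendix A.
@dataclass
class AIInventoryRecord:
    system_name: str                 # model or agent identifier
    owner: str                       # accountable business owner
    vendor: str                      # builder or external supplier
    data_classes: list[str] = field(default_factory=list)  # e.g. PII, bureau data
    risk_signals: list[str] = field(default_factory=list)  # e.g. customer impact

record = AIInventoryRecord(
    system_name="credit-adjudication-llm",
    owner="Retail Credit Risk",
    vendor="internal",
    data_classes=["PII", "bureau data"],
    risk_signals=["customer-facing", "automated decision"],
)
print(record.owner)  # each record names an accountable owner
```

The point of the structure is that every system, including a shadow deployment, resolves to a named owner and an enumerable set of risk signals the tiering step can consume.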

Risk Tiering Agent

Classifies use cases against your tiering rubric and the regulatory regimes your portfolio answers to — OSFI E-23 material-risk, EU AI Act Annex III, NIST AI RMF profiles.

Policy Drafting Agent

Drafts AI management-system policies, role mandates, HITL gates and escalation paths, anchored to ISO/IEC 42001, OSFI FIFAI II, and the frameworks your regulators actually read.

Validation Agent

Assembles validation files — challenger models, stability tests, bias and fairness measures, documentation of effective challenge — to the depth each framework expects.

Board-Brief Agent

Produces board-facing briefs: risk dashboard, exception register, emerging regulation, control-effectiveness rollups. Short, signed, dated.

Frameworks we cover in this pillar

One control library, mapped clause by clause across the regimes below. Answer many supervisors with one artifact set.
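As a sketch of what "one control, many regimes" can look like in practice — the clause references below are illustrative assumptions, not authoritative citations:

```python
# Hypothetical sketch of a single control mapped across regimes.
# Clause identifiers are illustrative, not verified citations.
control_library = {
    "human-oversight-gate": {
        "description": "A named human approves high-impact AI decisions.",
        "mappings": {
            "EU AI Act": "Article 14 (human oversight)",
            "ISO/IEC 42001": "Annex A human-oversight control",
            "NIST AI RMF": "GOVERN / MANAGE functions",
            "OSFI E-23": "governance and accountability expectations",
        },
    },
}

def regimes_answered(control_id: str) -> list[str]:
    """Return the regimes one control artifact can evidence."""
    return sorted(control_library[control_id]["mappings"])

print(regimes_answered("human-oversight-gate"))
```

One evidenced control, retrieved once, answers every regime in its mapping — which is what lets a single artifact set serve several supervisors.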

ISO/IEC 42001:2023 · International

AI Management System Standard

Published December 2023

The certifiable AI management system standard. Plan, Do, Check, Act across the AI lifecycle.

Open framework →
OSFI E-23 · Canada

Model Risk Management Guideline

Effective May 1, 2027

The 17-field Appendix A model inventory. Applies to FRFIs across all model types — traditional, generative, agentic.

Open framework →
NIST AI RMF 1.0 · United States

AI Risk Management Framework

Published January 2023 · GenAI Profile July 2024

Govern / Map / Measure / Manage. Can be profiled to any jurisdictional overlay.

Open framework →
EU AI Act · European Union

Regulation (EU) 2024/1689

High-risk regime live August 2, 2026

Risk-tiered obligations, Article 15 accuracy/robustness/cybersecurity, Annex IV technical file, GPAI model rules.

Open framework →
BCBS 239 · International

Principles for effective risk data aggregation and risk reporting

Issued January 2013, fully applicable to G-SIBs since 2016

The baseline for data governance and risk reporting capability — directly applicable to AI systems in risk models and aggregation pipelines.

Open framework →
PRA SS1/23 · United Kingdom

Bank of England PRA — Model Risk Management Principles

Effective May 17, 2024

The UK PRA baseline for model risk — governance, lifecycle, validation, MI.

Open framework →
SR 11-7 · United States

Federal Reserve Supervisory Letter — Model Risk Management

In force since 2011

The US MRM baseline, still the reference point for validation, governance and documentation expectations in US banking supervision.

Open framework →

Recommended playbooks

Each playbook walks from first discovery through signed artifact. Phases, controls, evidence. Agents assist the mechanical steps; specialists own the sign-off.

OSFI E-23 · Inventory

OSFI E-23 Readiness Playbook

Stand up the 17-field Appendix A model inventory, map controls to the six principles, and produce the artifact set your supervisor will read before the meeting.

Read the playbook →
EU AI Act · Assessment

EU AI Act High-Risk System Playbook

Classify use cases against Annex III, build the Article 9 risk management system, and compile the Annex IV technical file your conformity assessment will depend on.

Read the playbook →
ISO/IEC 42001 · Controls

ISO/IEC 42001 AIMS Stand-Up Playbook

Build a certifiable AI Management System: scope, policy, objectives, risk, controls, audit. Mapped to your portfolio.

Read the playbook →
Cross-jurisdiction · Documentation

SR 11-7 → OSFI E-23 Crosswalk Playbook

For firms operating across US and Canadian supervisory perimeters, one validation file that answers both.

Read the playbook →
NIST AI RMF 1.0 · Assessment

NIST AI RMF Profile Playbook

Govern / Map / Measure / Manage — profiled to your sector, your use cases and the frameworks your regulators read.

Read the playbook →
Cross-framework · Controls

Agentic AI Governance Playbook

Multi-step autonomous agents, tool-calling chains, and the oversight these systems demand. Agent cards, action budgets, kill switches.

Read the playbook →
Cross-framework · Documentation

RAG Assurance Playbook

Retrieval-augmented generation has its own attack surface — source provenance, index drift, poisoning risk. Control it.

Read the playbook →
Cross-framework · Vendor

Foundation Model Due Diligence Playbook

Bringing a GPAI model (Claude, GPT, Gemini, Llama) or a sovereign model into scope — the diligence a regulated deployer is now expected to perform.

Read the playbook →
Cross-framework · Monitoring

Continuous Control Monitoring Playbook

Drift, performance, outcome and complaint monitoring in one pipeline — outputs a supervisor can act on.

Read the playbook →

Stand up AI governance on an artifact your regulator will read.

Tell us where your portfolio sits today. We will map the frameworks, deploy the compliance agents, and put our specialists beside your second line.