Advanced AI · White paper
Agentic AI Governance
Autonomous, multi-step, tool-calling agents: the controls, artifacts and oversight regulators are beginning to ask for.
Abstract
What this paper is for.
Agentic AI introduces multi-step autonomy, tool use and goal-directed behaviour that traditional model risk frameworks were not designed to cover. This paper defines the governance surface, the artifact schema, and the operating controls (agent cards, action budgets, HITL gating, kill switches, audit trails) that let a second-line-of-defence (2LOD) function bring agentic deployments into regulator-readable compliance.
Key findings
The takeaways our research desk stands behind.
- US federal banking supervisors have not yet issued agentic-AI-specific guidance, while OSFI's Guideline E-23 brings agentic AI into scope by May 1, 2027. The gap is a live operational risk for cross-border banks.
- Action budgets are the most underused control: most agentic systems ship without per-run cost or effect ceilings.
- Deterministic audit trails are impossible for stochastic agents; sampling + replay is the current best practice.
- Multi-agent emergent behaviour is the frontier regulator concern.
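The action-budget finding above is concrete enough to sketch. The following is a minimal illustration, not a prescribed implementation: the class name, fields, and ceiling values are all hypothetical, and in practice the ceilings would be loaded from the agent card rather than hard-coded. The point is that every tool call is debited against per-run ceilings and the run fails closed when a ceiling is hit.

```python
from dataclasses import dataclass


class BudgetExceeded(RuntimeError):
    """Raised when a run attempts to exceed a pre-set ceiling."""


@dataclass
class ActionBudget:
    # Illustrative ceilings; a real deployment would read these
    # from the agent card rather than defaulting them here.
    max_tool_calls: int = 25     # hard cap on tool invocations per run
    max_spend_usd: float = 5.00  # cumulative cost ceiling per run
    tool_calls: int = 0
    spend_usd: float = 0.0

    def charge(self, cost_usd: float) -> None:
        """Debit one tool call against the run's ceilings, failing closed."""
        if self.tool_calls + 1 > self.max_tool_calls:
            raise BudgetExceeded(f"tool-call ceiling {self.max_tool_calls} reached")
        if self.spend_usd + cost_usd > self.max_spend_usd:
            raise BudgetExceeded(f"spend ceiling ${self.max_spend_usd:.2f} reached")
        self.tool_calls += 1
        self.spend_usd += cost_usd
```

In this pattern the dispatcher calls `charge()` before executing each tool, so a `BudgetExceeded` halts the run at the boundary and can be surfaced to HITL review rather than discovered after the effect has landed.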
Table of contents
What is inside.
- Executive summary
- What makes an agent an agent: the definition question
- The agent governance surface
- Agent cards: the artifact schema
- Action budgeting and scope design
- Human-in-the-loop (HITL) gating patterns
- Kill switches and rollback
- Audit trail design for nondeterministic systems
- Monitoring: drift, behavioural tests, red teaming
- Multi-agent orchestration and emergent behaviour
- Regulator posture: OSFI, EU AI Act, NIST AI RMF
- Compliance agent assist for agentic deployments
- Appendix: Agent Card schema (YAML)
Frameworks covered
Regulators and standards in scope.
Intended audience
Chief AI Officers, Heads of AI Platform, model risk leaders, and security architects at banks, insurers, dealers and fintechs who own agentic deployments.