Intelligence · OSFI FIFAI · March 2026

What OSFI is Signalling Through Its AI Governance Workshops — And How FRFIs Should Prepare

OSFI's Financial Industry Forum on AI is a channel — not a rulebook. Read alongside E-23, B-10 and E-21, its signals make the supervisory direction for Canadian FRFIs unambiguous even before any formal AI-specific guideline lands.

What FIFAI is, and what it is not

The Financial Industry Forum on Artificial Intelligence (FIFAI) is OSFI's convening mechanism for dialogue with industry, academia and peer regulators on AI risk and governance in federally regulated financial institutions (FRFIs). It is not a guideline and it does not carry enforceable obligations. What it carries is signal — OSFI's evolving articulation of the questions, themes and supervisory expectations that will shape how AI is reviewed in Canadian FRFIs.

Reading FIFAI as though it were a new rulebook misreads the instrument. The binding obligations sit in OSFI's guidelines — E-23 Model Risk Management, B-10 Third-Party Risk Management, E-21 Operational Resilience and Operational Risk Management, and B-13 Technology and Cyber Risk Management. FIFAI is the forum in which OSFI articulates how those guidelines should be read when AI is the model, the third party, or the technology in question.

For a Canadian FRFI planning its AI governance programme, the practical question is not “what does FIFAI require?” It is: what supervisory themes is OSFI consistently signalling, and how should those themes shape the programme before any formal AI-specific guideline is issued?

The supervisory themes that keep surfacing

Across OSFI's public remarks, speeches and forum outputs, a consistent set of themes appears. None of them are novel in isolation — they are the AI-era expression of principles OSFI has long held. What is novel is the way they compose into a coherent posture that FRFIs should be able to evidence.

AI inventory and supply-chain lineage

OSFI expects FRFIs to know what AI they operate, where it comes from, and what sits beneath it. That includes models developed in-house, models embedded in third-party software, and the foundation-model and data dependencies that sit below those. The supervisory test is whether an institution can walk an examiner through its AI landscape without guessing — which systems are material, how they were trained, what data was used, and which third, fourth and fifth parties are embedded in the supply chain.
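
To make the "walkable landscape" concrete, the sketch below shows one way a single inventory entry with multi-tier lineage might look as a data structure. Field names, provider names and the tier numbering are illustrative assumptions, not OSFI's schema or any specific vendor.

# A minimal sketch of an AI inventory entry with supply-chain lineage.
# All names and fields are hypothetical, not E-23's Appendix A structure.
from dataclasses import dataclass, field

@dataclass
class SupplyChainNode:
    provider: str      # e.g. a fourth-party foundation-model vendor
    role: str          # "foundation-model", "inference", "training-data", ...
    tier: int          # 3 = direct third party, 4 = their dependency, ...
    upstream: list["SupplyChainNode"] = field(default_factory=list)

@dataclass
class AIInventoryEntry:
    system_id: str
    owner: str                        # accountable 1LOD owner
    materiality: str                  # output of material-risk tiering
    origin: str                       # "in-house" or "embedded-in-vendor-software"
    training_data_sources: list[str]
    supply_chain: list[SupplyChainNode]

entry = AIInventoryEntry(
    system_id="credit-adjudication-llm-assist",
    owner="Retail Credit Risk",
    materiality="high",
    origin="embedded-in-vendor-software",
    training_data_sources=["vendor-proprietary", "internal-bureau-extract"],
    supply_chain=[
        SupplyChainNode("VendorCo", "application", 3, upstream=[
            SupplyChainNode("FMProviderCo", "foundation-model", 4, upstream=[
                SupplyChainNode("CloudInferenceCo", "inference", 5)]),
        ]),
    ],
)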

Controls that keep pace with adoption

Policy approved a year ago is not evidence that controls are running today. OSFI's supervisory posture is consistent: the framework must develop in step with AI adoption rather than freeze at the moment a document was signed. That means data-integrity standards, human oversight for high-impact decisions, transparency to consumers, and third-party oversight have to be observable in day-to-day operation — not only describable in policy.

Responsible innovation, not adoption avoidance

OSFI has been explicit in public remarks that the risk of under-adoption is real. Canadian FRFIs are expected to pursue disciplined, responsible innovation — investing in talent and infrastructure, integrating new classes of AI (including agentic systems) with appropriate oversight rather than blocking them outright. Governance programmes that become bottlenecks to well-governed adoption are themselves a supervisory concern.

Workforce and consumer AI literacy

Continuous AI literacy across the institution — board, management, 1LOD developers, 2LOD risk and compliance, 3LOD internal audit — is a recurring theme, alongside tiered disclosure to consumers so they can understand where and how AI is applied. The supervisory test is whether training and disclosure produce ongoing evidence (roles trained, curricula maintained, incident lessons captured, disclosures calibrated to audience) rather than a one-time campaign.

Systemic and ecosystem-level risk

OSFI has consistently flagged third-party concentration and multi-tiered (“nth party”) supply chains as sources of systemic risk when foundation models, AI APIs and shared infrastructure are embedded across the sector. Ecosystem-level themes — concentration, digital identity integrity, real-time threat and incident information sharing — connect AI governance to B-10 third-party risk, E-21 operational resilience and B-13 technology and cyber.

How the binding guidelines carry the AI load

Because FIFAI itself is not enforceable, the operational weight of OSFI's AI posture is carried by guidelines that already are. Three are load-bearing.

E-23 — the model risk spine

Guideline E-23 is effective May 1, 2027. It applies to FRFIs across all model types — traditional, generative and agentic — and defines the model risk management obligations that institutions must meet: model inventory (Appendix A's 17-field structure), material-risk tiering, independent validation, documentation, monitoring cadence, and governance. Whatever else an AI governance programme does, it has to produce the artifacts and running processes E-23 requires.
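
To illustrate how material-risk tiering might be encoded, the sketch below scores three hypothetical factors and maps the total to a tier. The factors, thresholds and tier labels are assumptions; E-23 leaves the calibration to each institution.

# Hypothetical tiering logic in the spirit of E-23's risk-based approach.
def risk_tier(customer_impact: int, autonomy: int, financial_exposure: int) -> str:
    """Each factor scored 1 (low) to 3 (high) during 2LOD review."""
    score = customer_impact + autonomy + financial_exposure
    if score >= 8:
        return "tier-1"   # full independent validation, tightest monitoring cadence
    if score >= 5:
        return "tier-2"   # proportionate validation and monitoring
    return "tier-3"       # lightweight review, periodic attestation

# An agentic system making customer-facing credit decisions lands in tier-1:
assert risk_tier(customer_impact=3, autonomy=3, financial_exposure=3) == "tier-1"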

B-10 — the third-party cascade

Guideline B-10, in effect since 2024, governs third-party risk management. For AI, the material question is the cascade: an FRFI's third party often itself depends on a fourth-party foundation-model provider, a fifth-party inference provider, and so on. B-10's risk-tiering, due-diligence depth and concentration-risk expectations are the mechanisms through which OSFI's ecosystem concerns are operationalised.
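
The cascade lends itself to a simple graph model. Below is a sketch of surfacing nth-party concentration by walking a vendor dependency graph; the provider names and the DEPENDS_ON mapping are hypothetical.

# Two unrelated third parties can resolve to the same upstream provider --
# the concentration pattern B-10 asks FRFIs to see. Names are invented.
from collections import Counter, deque

DEPENDS_ON = {
    "VendorCo":      ["FMProviderCo"],       # third party -> fourth party
    "ChatVendorInc": ["FMProviderCo"],
    "FMProviderCo":  ["CloudInferenceCo"],   # fourth party -> fifth party
}

def nth_party_exposure(direct_vendors: list[str]) -> Counter:
    """Count how many supply chains pass through each upstream provider."""
    exposure: Counter = Counter()
    for vendor in direct_vendors:
        queue, seen = deque(DEPENDS_ON.get(vendor, [])), set()
        while queue:
            upstream = queue.popleft()
            if upstream in seen:
                continue
            seen.add(upstream)
            exposure[upstream] += 1
            queue.extend(DEPENDS_ON.get(upstream, []))
    return exposure

print(nth_party_exposure(["VendorCo", "ChatVendorInc"]))
# Counter({'FMProviderCo': 2, 'CloudInferenceCo': 2}) -> concentration flag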

E-21 — operational resilience when AI is in the critical path

Guideline E-21, revised 2024 with phased implementation, requires FRFIs to map critical operations, set impact tolerances for disruption, and evidence resilience posture. When AI sits in the critical path of a core operation, E-21 is the guideline that asks whether the institution has actually tested what happens when that AI — or its upstream foundation model, or its inference provider — is degraded or unavailable.
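
In practice an exercise of this kind reduces to one question: did the institution recover inside its stated impact tolerance? A minimal sketch, with hypothetical names, timings and tolerances:

# Checking a simulated AI-dependency outage against an impact tolerance.
from dataclasses import dataclass

@dataclass
class DisruptionScenario:
    critical_operation: str
    failed_dependency: str           # e.g. the upstream inference provider
    simulated_outage_minutes: int
    measured_recovery_minutes: int   # observed during the resilience exercise

def within_tolerance(scenario: DisruptionScenario, tolerance_minutes: int) -> bool:
    """The E-21-style question: does recovery stay inside the tolerance?"""
    return scenario.measured_recovery_minutes <= tolerance_minutes

exercise = DisruptionScenario(
    critical_operation="retail-payments-fraud-screening",
    failed_dependency="CloudInferenceCo",
    simulated_outage_minutes=120,
    measured_recovery_minutes=45,    # fallback to rules-based screening
)
assert within_tolerance(exercise, tolerance_minutes=60)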

Key takeaway

A Canadian FRFI that has a defensible E-23 programme, a B-10 programme that reaches nth-party AI dependencies, and an E-21 programme that tests AI-in-critical-path disruption is already meeting the substantive expectations OSFI signals through FIFAI — whether or not an AI-specific guideline is issued.

Why OSFI prefers this posture

OSFI could issue a prescriptive AI-specific guideline with numbered clauses. To date it has not, and the shape of its engagement suggests a deliberate preference for principles-based supervision anchored in existing guidelines. That preference reflects a supervisory judgment that AI risk is best addressed through the same lenses OSFI already applies — model risk, third-party risk, operational resilience, technology risk — sharpened by dialogue in FIFAI and by specific E-23 expectations on model types that include generative and agentic AI.

The posture is also deliberately orthogonal to the model lifecycle. Inventory and lineage, controls that keep pace, responsible innovation, workforce and consumer literacy, and ecosystem-level resilience cut across development, deployment and operation. Strong model validation alone will not answer the supply-chain question; strong vendor management alone will not answer the workforce-literacy question. The programme has to compose across axes.

What a readiness assessment looks like

A readiness assessment is a structured current-state evaluation against the supervisory themes above and the E-23 / B-10 / E-21 obligations that operationalise them. It produces a theme-by-theme maturity rating (is this policy, process, practice, or evidence-backed programme?), a gap analysis, and a sequenced remediation roadmap.
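
One way to picture that scale is as an ordered rating applied per theme, with each gap measured as distance from the evidence-backed target. A sketch with illustrative ratings:

# The four-level maturity scale described above, applied theme by theme.
from enum import IntEnum

class Maturity(IntEnum):
    POLICY = 1      # written down
    PROCESS = 2     # procedures defined
    PRACTICE = 3    # observably running
    EVIDENCED = 4   # running and producing continuous evidence

current = {
    "ai-inventory-and-lineage": Maturity.PROCESS,
    "controls-keep-pace":       Maturity.PRACTICE,
    "workforce-literacy":       Maturity.POLICY,
    "ecosystem-resilience":     Maturity.PROCESS,
}

def gaps(target: Maturity = Maturity.EVIDENCED) -> list[tuple[str, int]]:
    """Themes ordered by distance from the evidence-backed target."""
    return sorted(((theme, target - m) for theme, m in current.items()),
                  key=lambda pair: -pair[1])

print(gaps())   # workforce-literacy first: the widest gap to close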

Most FRFIs we work with are at policy or process on AI inventory with supply-chain lineage (multi-tier visibility is rarely complete), at practice on controls for mature models under existing model risk regimes, and at policy at best on workforce AI literacy and tiered consumer disclosure. The gap is typically not capability; it is the operational evidence that the capability is running continuously as a programme.

Concretely, the deliverables are: a theme-by-theme maturity rating backed by documented evidence; a gap analysis against OSFI's signalled expectations and against E-23, B-10 and E-21; a remediation roadmap sequenced so the highest-risk gaps close first; and an evidence inventory identifying which governance artifacts already exist, which need to be produced, and which must be regenerated from a running process.

Organisational readiness implications

Because the themes cut across model development, data governance, operational risk, workforce development, consumer experience and third-party risk, an AI governance programme cannot be owned by a single function. Controls and validation sit across 2LOD risk and data governance; inventory and lineage touch data governance and third-party registries; literacy and disclosure span HR and customer experience; ecosystem resilience sits across procurement, B-10 third-party risk, and technology and cyber risk; innovation sits with AI strategy, product and the Board.

The FRFIs best positioned for OSFI's posture are those that have already built integrated AI governance forums — standing structures with named representation from 1LOD model developers, 2LOD risk and compliance, 3LOD internal audit, data governance, legal, technology, HR and procurement. The institutions that will struggle are those where AI governance sits exclusively in a model risk function that is not structurally connected to third-party risk, workforce enablement or enterprise innovation.

What this means for fintechs selling into Canadian FRFIs

FIFAI is directed at FRFIs, but it sets the expectations that Canadian bank 2LOD teams will carry into their vendor risk assessments. A fintech that can present its AI governance programme mapped to OSFI's supervisory themes and to the E-23 / B-10 obligations that cascade through them — with evidence, not with slides — is presenting a package the bank's 2LOD team can process efficiently against its own internal frame. A fintech that cannot is presenting a package the bank has to translate, and translation burns cycle time.

How RegCore.AI approaches OSFI readiness

Our AI Governance and AI Advisory practices were built inside Canadian Big 5 bank governance perimeters, under the same 2LOD review standards OSFI now signals in FIFAI. We map governance programmes to OSFI's supervisory themes as a baseline assessment frame, identify the specific themes where evidence is weak, and build the operational governance — RAIOps, Model Cards, Agent Cards, HITL gates, AI inventory with supply-chain lineage, AI-literacy curricula and ecosystem third-party controls — that closes those gaps as continuous programme outputs rather than as point-in-time artifacts.

A Regulatory Readiness Assessment calibrated to OSFI's posture produces the maturity baseline, gap analysis and remediation roadmap needed to move from a policy-level programme to an evidence-backed one — in time for OSFI's expectations to become examination-active.

OSFI FIFAI · OSFI E-23 · AI Governance · Regulatory

Ready to assess your OSFI readiness?

A Regulatory Readiness Assessment calibrated to OSFI's supervisory themes establishes a theme-by-theme baseline and produces a sequenced remediation roadmap.

Request an Assessment