FINTRAC · CIRO · April 2026

FINTRAC 2026 Amendments and CIRO Consolidation: What They Mean for AI in Canadian Financial Services

OSFI E-23 is the headline AI-governance regime in Canadian financial services. It is not the only one. FINTRAC and CIRO have each introduced 2026 obligations that extend the AI evidence perimeter into AML and investment-dealer territory — and firms with diversified footprints cannot run these as separate programs.

Context — the 2026 convergence

The public narrative around AI governance in Canadian financial services has centred on OSFI E-23 and, secondarily, on the EU AI Act. Neither covers the full perimeter. In 2026, four additional regulatory tracks are actively shaping AI-governance obligations for Canadian FSI:

  • FINTRAC — ongoing PCMLTFR amendments through 2024-2026, extending AML obligations into AI-driven transaction monitoring and automated suspicious-transaction detection.
  • CIRO — the consolidated Canadian Investment Regulatory Organization framework in effect for 2026, extending investment-dealer oversight into AI applications in trading surveillance, suitability, and client communications.
  • Quebec Law 25 and PIPEDA — Law 25 is fully in force (Sept 22, 2023) with automated-decision transparency and privacy impact assessment obligations, penalties up to $25M CAD or 4% global revenue. PIPEDA remains the federal baseline after Bill C-27 lapsed (AIDA was withdrawn Feb 11, 2025); CPPA content is expected to return, but no federal AI statute is currently in force.
  • NIST AI RMF + SR 11-7 — for Canadian fintechs with US bank clients or cross-border operations, US federal AI governance baselines apply in parallel, with SR 11-7 model validation and governance expectations flowing through vendor risk programs at Fed/OCC/FDIC-supervised institutions.

Firms with diversified regulatory footprints — a bank-affiliated fintech, an investment dealer with AI-assisted advisory, a payments provider with AML obligations — sit inside two or three of these perimeters simultaneously. Running the AI-governance obligations as separate workstreams produces duplicative evidence, inconsistent documentation, and persistent gaps at the seams.

FINTRAC's ongoing PCMLTFR amendments

FINTRAC — the Financial Transactions and Reports Analysis Centre of Canada — administers the AML and anti-terrorist financing regime under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act. Successive PCMLTFR amendments through 2024-2026 extend enhanced AML obligations to reporting entities, and the practical effect on AI is direct: organizations using machine-learning or agentic systems for suspicious-transaction detection, risk scoring, or customer-risk rating are now expected to document model logic, validation methodology, and ongoing performance monitoring in formats consistent with regulatory examination.

The implications for AI-driven transaction monitoring are concrete:

  • Model documentation — the AML transaction monitoring model is a model for FINTRAC examination purposes, and an explanation of how it identifies suspicious patterns must be defensible to an AML compliance officer and, ultimately, to a FINTRAC examiner.
  • Validation — initial validation of the model's effectiveness against a representative sample of transactions, with documentation of methodology and results.
  • Ongoing performance monitoring — documented monitoring of false-positive and false-negative rates, alert quality, and drift, with defined thresholds and escalation paths when those thresholds are breached.
  • Incident response — protocols for AI-driven monitoring failures that may have caused reportable transactions to go unreported, including how those failures are detected, remediated, and disclosed.
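The ongoing-monitoring expectation above can be made concrete. The sketch below is a hypothetical illustration, not a FINTRAC-prescribed methodology: the metric names, thresholds, and `AlertStats` structure are assumptions about how a reporting entity might document defined thresholds and escalation triggers for an AML transaction-monitoring model.

```python
# Hypothetical sketch of ongoing performance monitoring for an AML
# transaction-monitoring model. Metric names and threshold values are
# illustrative assumptions, not regulatory requirements.
from dataclasses import dataclass

@dataclass
class AlertStats:
    true_positives: int    # alerts confirmed suspicious on review
    false_positives: int   # alerts dismissed on review
    false_negatives: int   # suspicious activity the model missed

# Defined thresholds with documented escalation when breached.
THRESHOLDS = {
    "false_positive_rate": 0.95,  # share of alerts that are noise
    "false_negative_rate": 0.05,  # share of true hits the model missed
}

def evaluate(stats: AlertStats) -> list[str]:
    """Return the list of breached metrics that require escalation."""
    total_alerts = stats.true_positives + stats.false_positives
    total_suspicious = stats.true_positives + stats.false_negatives
    metrics = {
        "false_positive_rate": stats.false_positives / total_alerts,
        "false_negative_rate": stats.false_negatives / total_suspicious,
    }
    return [name for name, value in metrics.items()
            if value > THRESHOLDS[name]]

breaches = evaluate(
    AlertStats(true_positives=40, false_positives=900, false_negatives=5)
)
# 900/940 ≈ 0.957 and 5/45 ≈ 0.111 both exceed their thresholds,
# so both metrics appear in the escalation list.
```

The point of the sketch is the audit trail: each breach is a documented event with a defined threshold behind it, which is the form of evidence an examiner can verify.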

The through-line is familiar to anyone working on OSFI E-23: AI systems performing regulated functions must produce evidence of how they operate, how they are validated, and how they are monitored — in forms that survive examination.

CIRO's 2026 consolidated framework

CIRO — the Canadian Investment Regulatory Organization — emerged from the consolidation of the former IIROC and MFDA into a single self-regulatory body for investment and mutual-fund dealers. The 2026 consolidated framework is the first full operating year under the unified rulebook, and it covers the dealer activities most affected by AI adoption:

  • Trading surveillance — AI-assisted detection of market abuse, front-running, layering, and other manipulation patterns, which dealers are expected to monitor for and escalate.
  • Suitability — AI-assisted suitability assessment and product recommendation tools, which must produce recommendations defensible against CIRO's Know-Your-Client and suitability expectations.
  • Client communications — AI-generated or AI-assisted communications to clients, which must satisfy CIRO's standards for accuracy, fair dealing, and conduct.
  • Complaint handling and conduct surveillance — AI-assisted triage and analysis of client interactions, which must preserve the audit trail necessary for supervisory review.

Each of these domains carries a distinct governance expectation. Trading surveillance is expected to meet something approaching the rigour of a model risk management program; suitability is subject to conduct and record-keeping standards; client communications are subject to fair-dealing obligations that apply regardless of whether the communication was human- or AI-generated. The dealer cannot discharge any of these obligations by pointing to the AI vendor.

Regulatory convergence

The pattern is consistent across OSFI, FINTRAC, and CIRO: where AI performs a regulated function, the organization must produce evidence of model logic, validation, monitoring, and human oversight — in forms that survive examination. The specific rule varies; the evidence expectation is the same.

The cross-regulator convergence pattern

Read together, OSFI E-23, FINTRAC's 2026 amendments, and CIRO's 2026 framework produce a convergent evidence architecture. Each regulator expects, in its own language:

  • Documented model logic and design rationale — Model Cards and Agent Cards in our terminology, evidenced from a running governance process.
  • Independent validation, with documentation sufficient to establish the validation was genuine.
  • Ongoing monitoring as a program — metrics, thresholds, escalations, and evidence those escalations occurred when triggered.
  • Human oversight that is auditable, timestamped, and attributable — HITL gates in the high-stakes contexts, defensible review structures elsewhere.
  • Incident response and remediation protocols appropriate to the regulated activity.
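The "single underlying layer, multiple regulator views" idea can be sketched as a data structure. This is a hypothetical illustration: the `ModelCard` fields and the `evidence_view` projection are assumptions about how one governance record might be reshaped into each regulator's vocabulary, not a published schema.

```python
# Hypothetical sketch: one governance record projected into
# regulator-specific evidence views. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    design_rationale: str
    validated_by: str           # independent validator, not the build team
    validation_date: str
    monitoring_metrics: list[str]
    hitl_gate: bool             # human sign-off required before action?
    regulators: list[str] = field(default_factory=list)

def evidence_view(card: ModelCard, regulator: str) -> dict:
    """Project the shared record into one regulator's examination pack."""
    if regulator not in card.regulators:
        raise ValueError(f"{card.name} is not in scope for {regulator}")
    return {
        "system": card.name,
        "rationale": card.design_rationale,
        "independent_validation": f"{card.validated_by} ({card.validation_date})",
        "monitoring": card.monitoring_metrics,
        "human_oversight": "HITL gate" if card.hitl_gate else "periodic review",
    }

card = ModelCard(
    name="txn-monitoring-v3",
    design_rationale="Pattern detection over wire-transfer activity",
    validated_by="2LOD model validation",
    validation_date="2026-01-15",
    monitoring_metrics=["false_positive_rate", "alert_volume_drift"],
    hitl_gate=True,
    regulators=["OSFI E-23", "FINTRAC"],
)
fintrac_pack = evidence_view(card, "FINTRAC")
```

The design choice is that every view is derived, never hand-maintained: when the underlying card changes, every regulator's pack changes with it, which is what prevents the three-artifact-factory problem described below.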

The architecture is the same. The vocabulary differs. An organization with a well-designed AI governance program can produce the evidence each regulator wants from a single underlying operational layer — so long as that layer was designed with cross-regulator evidence in mind. An organization that builds three separate stacks, each aimed at a specific regulator, will find itself maintaining three artifact factories and reconciling them continuously.

The E-23 interaction for bank-affiliated fintechs

For fintechs that sell AI products into Canadian banks — especially those whose products touch AML, transaction monitoring, investment-dealer activity, or client-facing communications — the interaction effect matters. OSFI E-23 cascades from the bank's model risk management program into the vendor relationship. FINTRAC and CIRO obligations cascade into the bank's AML and dealer arms, and from there into the vendor relationships that support those arms.

Practically, the fintech faces the same evidence request from multiple directions within the same bank. The bank's 2LOD risk function asks for E-23-aligned Model Cards, validation, monitoring, and HITL architecture. The bank's AML function, under FINTRAC expectations, asks for documentation of transaction-monitoring model logic, validation, and performance. The bank's dealer arm, under CIRO, asks for surveillance and suitability documentation. A fintech that built one evidence pack for one regulator will find itself rebuilding for the next conversation within the same enterprise account.

Operational response — integrated, not parallel

The operational response that holds up across the three regimes is the same response that holds up under E-23 alone — scaled to cover the additional regulated activities and cross-linked across the regulators. Specifically:

  • A single AI use-case inventory — AIRSA in our terminology — covering every material AI system with inherent risk ratings, regulatory mappings to each relevant regulator, and governance status.
  • Model Cards and Agent Cards produced at a level of specificity that supports materiality assessment per use case and per regulator.
  • A consolidated monitoring program with regulator-specific views layered on top — AML-specific monitoring metrics for FINTRAC-scoped systems, surveillance-specific metrics for CIRO-scoped systems, model-performance metrics for E-23-scoped systems.
  • HITL gates sized to the highest-stakes decisions each regime cares about, with audit trails that can be produced in the format each regulator expects.
  • A RegWatch capability tracking material developments across OSFI, FINTRAC, CIRO, OSC, and OPC continuously, so that a new consultation or enforcement action does not require manual rediscovery across five inboxes.
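The single-inventory idea above can be sketched in a few lines. This is a hypothetical illustration, not the AIRSA product itself: the inventory entries, risk ratings, and the `view` helper are invented to show how one inventory yields regulator-specific examination views.

```python
# Hypothetical sketch: a single AI use-case inventory filtered into
# regulator-specific views. Entries and risk ratings are invented.
INVENTORY = [
    {"system": "txn-monitoring-v3", "risk": "high",
     "regulators": ["OSFI E-23", "FINTRAC"]},
    {"system": "suitability-assist", "risk": "high",
     "regulators": ["CIRO"]},
    {"system": "client-comms-draft", "risk": "medium",
     "regulators": ["CIRO", "OSFI E-23"]},
]

def view(regulator: str, min_risk: str = "medium") -> list[str]:
    """List in-scope systems for one regulator's examination view."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [entry["system"] for entry in INVENTORY
            if regulator in entry["regulators"]
            and order[entry["risk"]] >= order[min_risk]]

# One inventory, three examination views:
ciro_scope = view("CIRO")        # suitability-assist, client-comms-draft
fintrac_scope = view("FINTRAC")  # txn-monitoring-v3
osfi_high = view("OSFI E-23", min_risk="high")
```

Because every view is a filter over the same list, a system added once appears in every regulator's scope automatically, and a system in no regulator's scope is immediately visible as a coverage gap.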

Treating the 2026 regulatory expansion as an integrated compliance program rather than three separate workstreams is what makes the program sustainable as additional regulators — Quebec CAI under Law 25, any federal AI-law successor after Bill C-27's lapse, and international regimes reached through extraterritoriality — layer on.

How RegCore.AI approaches cross-regulator AI governance

Our practice is structured around integrated evidence: the governance artifacts we produce — Model Cards, Agent Cards, HITL architectures, RAIOps operational frameworks, AIRSA inventories, the RegWatch Agent — are designed to satisfy multiple regulators from a single underlying operational layer. The Evidence Generation Pipeline produces the artifacts each regulator expects, in the format each regulator expects, from a single source of governance truth.

A Regulatory Readiness Assessment establishes current-state posture across OSFI E-23, FINTRAC, CIRO, NIST AI RMF, SR 11-7, and other applicable regimes, identifies the highest-priority convergence gaps, and produces a sequenced remediation roadmap that treats AI governance as an integrated compliance program.

FINTRAC · CIRO · AML · Investment Dealers · 2026

Ready to run AI governance as an integrated program?

A Regulatory Readiness Assessment establishes your current-state posture across OSFI E-23, FINTRAC, CIRO, NIST AI RMF, and SR 11-7 and produces a sequenced remediation roadmap.

Request an Assessment