The evidence pack is the product. Four arms produce and defend it.
The signature deliverable is the OSFI E-23 Appendix A-aligned evidence pack — Model Cards, Agent Cards, HITL architecture, AIRSA and a Governance Operating Model. Portable — PDF, Excel, JSON — readable by OSFI, SR 11-7, ISO/IEC 42001 and EU AI Act examiners without logging into a platform. The four arms below build, defend and run it; eight MCP-bundled agents keep it alive inside your perimeter.
Four arms. One evidence pack. A deployable agent suite.
Each arm names the frameworks it answers to and the artifacts it emits. The pack travels; the platform does not.
Arm 01 AI Governance
OSFI E-23 · SR 11-7 · OSFI FIFAI · ISO/IEC 42001
Model Cards · Agent Cards · HITL gates · RAIOps
Arm 02 Data Governance & Privacy
PIPEDA · Québec Law 25 · EU AI Act Art. 10 · BCBS 239
AI lineage · PIAs / DPIAs · training-data provenance · RAG governance
Arm 03 Compliance Agents
OSFI · CIRO · FINTRAC · OPC · SEC · EU AI Office
MCP-bundled agents · client-deployed
Arm 04 AI Advisory
Board · C-Suite · Vendor-risk · Insurance underwriting
Readiness · vendor DD · board reporting · insurance evidence
AI Governance
Model-risk frameworks aligned to OSFI E-23, SR 11-7, the OSFI FIFAI supervisory themes and ISO/IEC 42001 — delivered as operational systems under 2LOD scrutiny inside regulated-bank perimeters, not as framework documents.
Core Capabilities
- AI Evaluation Framework: SHOW/NO SHOW decision logic — structural gates moving policy outside model context
- AI Incident Response: classification, escalation, regulatory notification, post-incident evidence
- Deployment Readiness Gates: pre-deployment checkpoints for monitoring, failure protocols and rollback
- 2LOD AI Governance Intake: inherent risk assessment and intake methodology
- Board-Level AI Risk Reporting: governance reporting for the Board Risk Committee, KRIs, escalation triggers
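To make the structural-gate idea concrete, here is a minimal Python sketch of a SHOW/NO SHOW gate: the policy is evaluated deterministically on the model's output, outside the model context, so a prompt cannot talk its way past it. The names (`Decision`, `show_gate`) and the account-number pattern are illustrative assumptions, not the shipped implementation.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative policy rule: never show output containing account-number-like strings.
ACCOUNT_PATTERN = re.compile(r"\b\d{7,12}\b")

@dataclass
class Decision:
    show: bool       # SHOW / NO SHOW
    reason: str      # why the gate decided this way
    timestamp: str   # evidence-ready audit field

def show_gate(model_output: str) -> Decision:
    """Deterministic gate applied AFTER generation, outside model context.

    Because the policy never enters the prompt, prompt injection cannot
    negotiate with it: the check runs on the output, not in the model.
    """
    ts = datetime.now(timezone.utc).isoformat()
    if ACCOUNT_PATTERN.search(model_output):
        return Decision(show=False, reason="account-number pattern detected", timestamp=ts)
    return Decision(show=True, reason="no policy match", timestamp=ts)

print(show_gate("Your balance on account 123456789 is $40.").show)  # False
print(show_gate("Rates are reviewed quarterly.").show)              # True
```

The point of the sketch is the shape, not the rule: the gate emits a timestamped, attributable record either way, which is what lets the decision land in an evidence pack.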
Who this is for
- Primary buyers: fintechs with OSFI E-23 vendor risk questionnaires in flight
- Internal teams: bank AI vendor risk teams building assessment frameworks
- Deployment stage: AI systems in production without formal governance artifacts
- Audit posture: upcoming regulatory examination or 2LOD review
Data Governance & Privacy
Data lineage for AI, training-data provenance and Privacy Impact Assessments structured for PIPEDA and Québec Law 25 — with vector-store and foundation-model governance built in. Designed for regulator scrutiny, not project gates.
Core Capabilities
Who this is for
- Primary buyers: fintechs subject to PIPEDA with AI in production without PIAs
- Anticipated regime: organizations preparing for the CPPA's return and sector-specific AI obligations (OSFI E-23, OSFI FIFAI, Law 25)
- Decision surface: AI making automated decisions about customers or applicants
- Data footprint: cross-border US–Canada and EU–Canada AI data flows
Compliance Agents
Customizable, deployable compliance agents — shipped as repo, Docker image, MCP server and output schema. Installed inside your perimeter, governed under the same RAIOps regime you sell. Every agent is born under that regime: inventoried in AIRSA, gated by HITL, policed by the AI Gateway, monitored for drift, closed by an Evidence Pack.
The Agent Suite
- Purpose: materiality-scored regulatory monitoring across Canadian, US and EU regulators
- Buyer: Head of Compliance, fintech or credit union
- Customization: regulator list, materiality rubric, routing
- Output: weekly materiality brief + per-filing note + change log
- Coverage: OSFI · FCAC · FINTRAC · CIRO · OSC · OPC · Québec CAI · SEC · Fed · EU AI Office
- Purpose: 17-field OSFI E-23 Appendix A inventory auto-populated from MLOps
- Buyer: Model Risk Manager, FRFI or bank vendor
- Customization: source mapping, risk-rating rubric, reviewer / approver workflow
- Output: Appendix A spreadsheet + PDF + JSON + chain-of-custody log
- Coverage: OSFI E-23 Appendix A · SR 11-7 §III · OSFI FIFAI inventory signal
- Purpose: portable evidence pack per AI system — PDF, Excel, JSON, tamper-evident
- Buyer: CCO responding to a regulated-vendor-risk questionnaire
- Customization: framework selection, narrative templates, pack layout
- Output: examiner-readable pack with chain-of-custody log
- Coverage: OSFI E-23 · SR 11-7 · NIST AI RMF · EU AI Act · PLD 2024/2853
- Purpose: drift, performance, stability and outcomes analysis — continuous
- Buyer: MRM / ML Ops Lead
- Customization: metrics, thresholds, breach routing, sampling
- Output: monitoring report + timestamped breach records into the pack
- Coverage: OSFI E-23 Principle 5 · SR 11-7 Outcomes · EU AI Act Art. 17
- Purpose: in-VPC policy enforcement on prompt and response traffic
- Buyer: Platform Engineering + CISO, fintech selling into banks
- Customization: PII rules, model allowlist, output filters, jurisdictional routing
- Output: usage log, policy-decision log, redaction record
- Coverage: OSFI B-10 · PIPEDA / CPPA · EU AI Act Art. 15 · OSFI FIFAI controls signal
- Purpose: pending_approval commit-gate for high-risk AI actions
- Buyer: Product + Risk, credit / fraud / execution desks
- Customization: gate triggers, reviewer roster, escalation ladder, SLA
- Output: timestamped approval record, reviewer identity, rationale
- Coverage: OSFI E-23 Principle 3 · EU AI Act Art. 14 · SR 11-7 §V · OSFI FIFAI controls signal
- Purpose: internal Q&A grounded in the client governance corpus — every answer cites its source
- Buyer: mid-level Compliance / Risk analyst
- Customization: corpus, retention, citation rule, persona
- Output: source-cited answers — no source paragraph, no answer
- Coverage: OSFI FIFAI literacy signal · all mapped frameworks via corpus tags
- Purpose: external Q&A bounded to the evidence pack — no answers beyond the evidence
- Buyer: CCO during a B-10 / E-23 examination or insurance bind
- Customization: persona (examiner / vendor-risk / insurer), framework
- Output: answer + pack citation + "unanswerable from pack" flag
- Coverage: OSFI B-10 · OSFI E-23 · AI-insurance underwriting (Armilla / Munich Re)
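The AI Gateway pattern described in the suite — a policy-decision log plus a redaction record on every request — can be sketched in a few lines of Python. The allowlist, the single email-redaction rule and the `gateway` function name are illustrative assumptions; a production gateway would carry jurisdictional routing and output filters as well.

```python
import re
import json

# Hypothetical policy configuration: one model allowlist, one PII rule.
ALLOWED_MODELS = {"internal-llm-1"}
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def gateway(model: str, prompt: str) -> dict:
    """Check prompt traffic against policy and return a loggable decision record."""
    if model not in ALLOWED_MODELS:
        # Policy decision is itself the evidence: record why traffic was refused.
        return {"allowed": False, "reason": "model not on allowlist"}
    # Redact PII before the prompt leaves the perimeter; count redactions for the record.
    redacted, n = EMAIL.subn("[REDACTED-EMAIL]", prompt)
    return {"allowed": True, "prompt": redacted, "redactions": n}

decision = gateway("internal-llm-1", "Contact jane@example.com about the file.")
print(json.dumps(decision))
```

Every call produces a JSON record either way — that record, not the gateway binary, is what an examiner reads.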
Every agent ships as a repo, a Docker image, an MCP (Model Context Protocol) server and an output schema. An engagement installs the agent inside the client’s perimeter — the client becomes the deployer of record. Subscription covers ongoing regulator-mapping maintenance and the RegWatch feed. Customer-deployed, engagement-installed, schema-maintained. You hold the keys; we maintain the mapping.
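An output schema in this sense is checkable code, not a description. As a sketch, here is what validating an agent record against a minimal required-field schema can look like before the record enters an evidence pack — the `REGWATCH_SCHEMA` field names are hypothetical, not the shipped schema.

```python
import json

# Hypothetical minimal output schema: field name -> expected Python type.
REGWATCH_SCHEMA = {
    "regulator": str,
    "filing_id": str,
    "materiality": str,
    "published": str,
}

def validate(record: dict, schema: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the record conforms."""
    errors = [f"missing field: {k}" for k in schema if k not in record]
    errors += [
        f"wrong type for {k}: expected {t.__name__}"
        for k, t in schema.items()
        if k in record and not isinstance(record[k], t)
    ]
    return errors

record = json.loads('{"regulator": "OSFI", "filing_id": "2025-014", '
                    '"materiality": "high", "published": "2025-03-01"}')
print(validate(record, REGWATCH_SCHEMA))  # []
```

Because the schema ships with the agent, the client can re-run this check after every regulator-mapping update without touching the agent's internals.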
Who this is for
- Compliance scale: teams that cannot keep up manually with federal + provincial publication volume
- Evidence cost: organizations with multiple AI systems bearing high per-system artifact cost
- Operating model: functions moving from periodic audit prep to continuous governance
- Vendor posture: fintechs demonstrating systematic infrastructure to enterprise bank clients
AI-augmented delivery. Human-led engagements, digital peers.
Intelligence is a team member, not a demo on a slide.
RegCore engagements are human-led and agent-augmented. Our own RegWatch, AIRSA, Evidence Pack Generator, Model Monitoring, AI Gateway and HITL Gate run inside a governed, observed environment — the same environment pattern we ship to clients. We eat our own dogfood: every artifact we hand a client is produced under the controls we sell, and every internal agent run leaves the same evidence trail we would ask a client’s program to leave.
That means a first-draft pack measured in hours, not the 90-day onboarding the platform category sells. Every prompt is logged; every artifact is signed and versioned; every gate is timestamped and attributable. The compliance agents we would ship to a regulator are the compliance agents watching us.
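"Signed and versioned" has a concrete mechanical shape. A minimal sketch, assuming an HMAC key and a hash chain (the key literal and function names here are illustrative, not the production key-management design): each artifact version commits to the signature of its predecessor, so removing or reordering a version breaks verification downstream.

```python
import hmac
import hashlib

KEY = b"demo-signing-key"  # illustrative only; in practice a managed signing key

def sign_artifact(content: bytes, prev_sig: str, version: int) -> dict:
    """Sign an artifact version and chain it to its predecessor.

    The signature covers version number, content digest and the previous
    signature, making the sequence tamper-evident end to end.
    """
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{version}:{digest}:{prev_sig}".encode()
    return {
        "version": version,
        "sha256": digest,
        "signature": hmac.new(KEY, payload, hashlib.sha256).hexdigest(),
    }

v1 = sign_artifact(b"model card draft", prev_sig="genesis", version=1)
v2 = sign_artifact(b"model card final", prev_sig=v1["signature"], version=2)
print(v2["version"])  # 2
```

Verification is the same computation run by the reader: recompute the digest and HMAC from the stored fields and compare — no platform login required.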
A regulator-mapped intelligence core — OSFI E-23 Appendix A schema, SR 11-7 three pillars, OSFI FIFAI supervisory themes, NIST AI RMF, EU AI Act Annex IV, ISO/IEC 42001 — expressed as the agents that run our delivery and yours. Not a platform; the proprietary kernel.
Each agent is designed to be inspectable by an OSFI examiner on day one — not quarantined behind a platform login. The same Appendix A our clients face applies to our RegWatch agent.
The same evidence-pack pattern governs our own agents’ runs — audit trail included. When a regulated-vendor-risk team sends the questionnaire, we do not improvise the answer; we run the agents that answered it for ourselves first.
AI Advisory
Regulatory Readiness Assessments, vendor due diligence and board-level briefings — plus insurance-underwriting-ready evidence aligned to Armilla and Munich Re aiSure patterns. We answer the bank cascade from both sides.
Engagement Sequence
Assessment
Current-state compliance posture across all applicable frameworks with documented gap analysis.
Strategy
Sequenced remediation roadmap, risk appetite alignment, and target operating model design.
Implementation Design
Reference architecture for HITL gates, Agent Cards, Model Cards, and evidence infrastructure.
Governance Operating Model
Running governance process with 2LOD integration, Board reporting cadence, and audit posture.
Advisory Capabilities
Strategic
- Readiness Assessment: current-state compliance posture and gap remediation roadmap
- Board Risk Strategy: AI governance structures, risk appetite, oversight mechanisms
- AI Governance Program: principles, policies, processes, accountability structures
- Risk Appetite: AI-specific risk appetite integrated with enterprise ERM
Operational
- Vendor Due Diligence: assessment methodology for AI vendors entering bank perimeters
- Bias & Fairness: systematic bias assessment with regulator-ready documentation
- Regulatory Horizon: structured analysis of emerging AI regulations
- Innovation-with-Guardrails: governance enabling AI adoption without compliance debt
Who this is for
- Executives: CEOs, CTOs, CROs who need a credible governance-posture answer
- Evaluators: teams assessing GRC tools or AI governance platform vendor claims
- Market entry: institutions entering new AI-regulated jurisdictions with readiness needs
- Board audiences: Risk Committees requiring defensible AI oversight structures
What buyers and regulators most often ask.
Drawn from bank vendor-risk questionnaires, board briefings and founder conversations.
What is OSFI E-23 and when does it become enforceable?
OSFI Guideline E-23 is the Office of the Superintendent of Financial Institutions' Model Risk Management guideline, which becomes enforceable May 1, 2027 for federally regulated financial institutions. It requires model documentation, independent validation, ongoing monitoring programs, and Board-level governance structures for all material models — explicitly including AI and machine learning systems. The guideline does not tolerate informal processes or retroactive documentation; it expects evidence artifacts produced from a running governance process.
Does OSFI E-23 apply to fintech vendors or only to banks?
E-23 is formally directed at federally regulated financial institutions, but its requirements cascade to fintech vendors through bank procurement and vendor risk management processes. Canadian banks are already requiring E-23-aligned model documentation, validation evidence, and HITL gate architectures from their AI vendors before deployment approvals. In practice, any fintech selling AI into a Big 5 or regional bank must produce governance artifacts that satisfy the bank's 2LOD risk team — which means satisfying E-23 by proxy.
What is OSFI FIFAI, and what does it mean for Canadian FRFIs?
The Financial Industry Forum on Artificial Intelligence (FIFAI) is OSFI's convening mechanism for dialogue with industry, academia and peer regulators on AI risk and governance in federally regulated financial institutions. FIFAI is not a guideline and does not carry enforceable obligations — what it carries is supervisory signal. The consistent themes OSFI has articulated through FIFAI include: AI inventory and supply-chain lineage; controls that keep pace with adoption; responsible innovation rather than adoption avoidance; workforce and consumer AI literacy; and systemic, ecosystem-level risk from foundation-model concentration and multi-tiered vendor chains. Those themes are operationalised through binding OSFI guidelines — E-23 model risk management (effective May 1, 2027), B-10 third-party risk management (revised 2024), E-21 operational risk and resilience (revised 2024), and B-13 technology and cyber risk. A programme built against those guidelines, calibrated to the FIFAI themes, is positioned for OSFI's AI posture whether or not a formal AI-specific guideline is issued.
What is a Model Card, and why do Canadian banks require one?
A Model Card is a structured documentation artifact that captures a model's intended use, training data, validation results, known limitations, performance characteristics, and monitoring requirements. Canadian banks require Model Cards because they are the primary evidence artifact 2LOD review teams use to evaluate whether a model satisfies OSFI E-23 expectations. A Model Card that was written retroactively to satisfy a project gate does not establish the provenance chain E-23 requires — the Model Cards we produce are structured to withstand regulatory examination, not to close a ticket.
What is the difference between a Model Card and an Agent Card?
A Model Card documents a single model — its training data, validation, limitations, and monitoring. An Agent Card documents an AI agent system, which typically orchestrates one or more models against tools, retrieval sources, and decision logic. The Agent Card captures design rationale, operational constraints, escalation logic, and decision boundaries — the reasoning layer that Model Cards alone do not cover. Agentic AI in financial services needs both artifacts because regulators will ask not only what the model does, but why the agent chose to invoke it.
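The two artifacts can be pictured as two linked record types — a minimal sketch, with hypothetical field names chosen for illustration rather than taken from a published card standard:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Documents a single model: what it is, how it was validated, how it is watched."""
    model_id: str
    intended_use: str
    training_data: str
    validation_results: str
    known_limitations: list[str]
    monitoring_requirements: list[str]

@dataclass
class AgentCard:
    """Documents the orchestration layer: why and when the agent invokes which model."""
    agent_id: str
    models_invoked: list[str]   # references ModelCard.model_id entries
    tools: list[str]
    escalation_logic: str
    decision_boundaries: str

mc = ModelCard("m-credit-01", "credit adjudication", "2019-2024 loan book",
               "holdout AUC reported", ["thin-file applicants"], ["monthly stability check"])
ac = AgentCard("a-credit-desk", ["m-credit-01"], ["bureau_pull"],
               "route to analyst above threshold", "no automated decline")
```

The link in `models_invoked` is the point: an examiner can walk from the agent's reasoning layer down to the model's validation evidence without a gap.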
What is a Human-in-the-Loop (HITL) gate, and why does it matter for regulated AI?
A HITL gate is a structural checkpoint where an AI system must enter a pending_approval state and wait for a human decision before an action is committed. This is distinct from a post-hoc alert or a review dashboard — the gate blocks execution, not reviews it after the fact. HITL gates matter for regulated AI because OSFI E-23, SR 11-7, and the EU AI Act all require human oversight that is auditable, timestamped, and attributable. A HITL gate that exists only in policy is not a gate; it is a recommendation that will not survive a compliance audit.
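The distinction between a blocking gate and a post-hoc alert is easiest to see in code. A minimal Python sketch, assuming illustrative names (`commit_with_gate`, `ApprovalRecord`) rather than a shipped API: the action simply does not run until an approved, attributable record exists.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ApprovalRecord:
    approved: bool
    reviewer: str    # attributable: who decided
    rationale: str   # auditable: why
    timestamp: str   # timestamped: when

def commit_with_gate(action: Callable[[], str],
                     is_high_risk: bool,
                     review: Callable[[], ApprovalRecord]) -> Optional[str]:
    """Block execution of a high-risk action until a human decision is recorded.

    Unlike an after-the-fact alert, the system sits in pending_approval
    here; a denied or absent approval means the action never executes.
    """
    if is_high_risk:
        record = review()          # pending_approval state
        if not record.approved:
            return None            # action blocked, evidence retained
    return action()

approve = lambda: ApprovalRecord(True, "j.doe", "within limit", "2025-01-01T00:00:00Z")
print(commit_with_gate(lambda: "wire sent", True, approve))  # wire sent
```

The `ApprovalRecord` fields mirror the oversight expectations named above: timestamped, attributable, auditable.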
What is the status of Bill C-27, CPPA and AIDA for Canadian financial services?
Bill C-27 combined the Consumer Privacy Protection Act (CPPA) — the proposed federal replacement for PIPEDA — and the Artificial Intelligence and Data Act (AIDA). Bill C-27 lapsed on January 6, 2025 at prorogation and AIDA was formally withdrawn on February 11, 2025. Canada currently has no federal AI statute in force. PIPEDA remains the federal private-sector privacy baseline; CPPA's content is expected to return in a future parliamentary cycle. In the interim, AI-specific obligations for Canadian FSIs flow from OSFI E-23 (effective May 1, 2027), OSFI B-10 third-party risk, OSFI E-21 operational resilience, OSFI FIFAI supervisory themes, FINTRAC, CIRO, Quebec Law 25 (in force, up to $25M CAD or 4% global revenue), and cross-border regimes — SR 11-7, forthcoming US federal banking AI guidance, NIST AI RMF, and the EU AI Act. A program built to PIPEDA and OSFI E-23 today, with voluntary NIST AI RMF + ISO/IEC 42001 alignment, is well-positioned for whatever federal AI law returns next.
How does SR 11-7 apply to AI and machine learning models?
SR 11-7 is the US Federal Reserve's Supervisory Guidance on Model Risk Management (issued as Fed SR Letter 11-7 and OCC Bulletin 2011-12 in April 2011, with FDIC adoption via FIL-22-2017 in June 2017). It predates modern AI but its three-pillar structure — conceptual soundness, ongoing monitoring, and outcomes analysis — is being applied directly to AI/ML models by US bank examiners. For Canadian institutions operating in the US or US subsidiaries of Canadian banks, SR 11-7 is the operative model risk standard, and the evidence expectations map closely to OSFI E-23. An AI governance program designed to satisfy E-23 is largely structured correctly for SR 11-7, with additional attention to US-specific validation and documentation conventions.
How does the EU AI Act affect Canadian fintechs?
The EU AI Act reaches full applicability on August 2, 2026, and has extraterritorial reach — it applies to any provider placing an AI system on the EU market or whose AI output is used in the EU, regardless of where the provider is incorporated. For Canadian fintechs with EU customers or EU-resident end users, high-risk AI in financial services triggers conformity assessment and technical documentation obligations, with penalties up to 7% of global revenue. The practical effect is that Canadian AI providers cannot treat the EU AI Act as someone else's problem if any part of their product touches the EU.
How long does a Regulatory Readiness Assessment typically take?
A Regulatory Readiness Assessment establishes current-state compliance posture across OSFI E-23, SR 11-7, NIST AI RMF, EU AI Act, PIPEDA, Quebec Law 25, FINTRAC, CIRO, and other applicable frameworks, producing a gap analysis and sequenced remediation roadmap. The exact duration depends on the number of AI systems in scope, regulatory jurisdictions in play, and the state of existing documentation. The assessment itself is scoped during the initial conversation — every engagement begins there, because governance work without a current-state baseline produces documents rather than evidence.
How is RegCore.AI different from Big 4 advisory firms or AI governance platforms like Credo AI?
Big 4 advisory firms produce governance frameworks and policy documents; AI governance platforms — Credo AI, Holistic AI, ValidMind, Asenion (formerly Fairly AI) — come from data-science or platform backgrounds and provide tooling that assumes governance is already defined. RegCore.AI produces operational governance infrastructure — HITL gates, Agent Cards, evidence pipelines — that draws from governance perimeters at Canadian BFSI institutions, designed to withstand 2LOD review against real regulatory submissions. Our knowledge of bank compliance culture, 2LOD documentation standards, and OSFI examination expectations is first-hand, not acquired from reading guidelines.
How are the compliance agents packaged and deployed?
Every RegCore agent ships as a reference implementation: a repo, a Docker image, an MCP (Model Context Protocol) server exposing its tools, and an output schema mapped to the relevant regulator. An engagement installs and certifies the agent inside the client's perimeter — the client becomes the deployer of record. Subscription covers ongoing regulator-mapping maintenance and RegWatch feed updates. Customer-deployed, engagement-installed, schema-maintained: the client holds the keys; RegCore maintains the mapping.
What does AI-Augmented Delivery mean in practice?
RegCore engagements are human-led and agent-augmented. Our own RegWatch, AIRSA, Evidence Pack Generator, Model Monitoring, AI Gateway and HITL Gate run inside a governed, observed environment — the same environment pattern we ship to clients. Every artifact we produce is signed, versioned and reproducible from its inputs; every agent run is logged; every output cites its source. When we ship an agent to a client, it is an agent we would ship to a regulator, because we run the same agents on ourselves.
Begin with a Regulatory Readiness Assessment.
Phase 1 (3–4 weeks): inventory, lineage map and gap analysis versus E-23, the OSFI FIFAI supervisory themes, SR 11-7, NIST AI RMF and EU AI Act. Phase 2 installs RAIOps, HITL gates, AI Gateway and agents. Phase 3 compiles the portable evidence pack — PDF, Excel, JSON.