Compliance for the AI era

Your board is asking about AI risk. The regulator is next.

We give compliance teams the program and the proof.

Digital compliance officers, senior advisory, and proven playbooks — deployed alongside the teams whose AI governance programs will be examined.

Advisory · Playbooks · Digital compliance officers
White paper · 52 pages · PDF
OSFI E-23 · Effective May 2027

Enterprise Readiness: the 17-field inventory, six principles, and the artifact set your supervisor will read.

A practitioner blueprint for the Appendix A inventory, principle-mapped controls, and the evidence pack that stands up to examination. Cited to the guideline text.

Download white paper →
The intelligence layer

The compliance intelligence layer for AI-enabled firms.

Grounded in authoritative regulator data, the library moves when the regulator moves, and it is carried into every engagement through six capabilities that sit where the work already happens.

  1. Regulatory Copilot

    A direct line into the regulator record.

    Ask the substrate a question, read the answer with its citations attached. Built for practitioners doing primary source work: boardroom memos, counsel review, supervisory preparation.

    • Primary source citations on every answer
    • Canadian and global regulators indexed
    • Grounded in the firm’s methodology
  2. Digital compliance agents

    Personal compliance agents, under human sign-off.

    An agentic compliance layer that delivers dedicated agents: extending your team on recurring work, carrying cited reasoning into every step, and routing every decision to a named accountable owner.

    • Dedicated agent configuration
    • Always under human accountability
    • Scoped to the engagement’s playbook
  3. Custom compliance playbooks

    The playbooks the agents execute.

    Each engagement produces a playbook indexed to the regulator record and calibrated to your posture. The same artifact the agents run against is the methodology the firm publishes to the industry.

    • OSFI E-23 program build
    • EU AI Act high-risk readiness
    • ISO 42001 operating cadence
  4. Integrations

    Fit inside the workflow you already run.

    The layer reaches into the systems your teams already use: the GRC tool, the ticket queue, the document store. You do not adopt a new surface. The work shows up where the work already happens.

    • GRC and risk register connectors
    • Document stores and SSO
    • Ticketing and workflow tools
  5. Evidence & provenance

    Traceable to the source, and to the decision.

    Every answer is traceable on two sides. Back to the regulatory instrument it came from, and back to the agent decision that produced it. Defensible to a board above, defensible to a supervisor below.

    • Citation lineage to primary source
    • Agent decision trail, per step
    • Exportable evidence packs
  6. Public API

    The same engine, inside your stack.

    Teams building internal controls can reach the substrate directly. The API exposes what the firm uses: the record, the retrieval, the cited reasoning. For clients who want to ground their own systems in it.

    • OpenAPI 3.1 reference
    • Partner-managed keys
    • Same provenance, same corpus
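
    For teams grounding internal controls in the substrate, the interaction reduces to: send a question, receive an answer with its citations attached, and keep both together for the evidence trail. A minimal sketch of the client-side handling is below; the payload shape and field names ("answer", "citations", "source") are illustrative assumptions for this sketch, not the published API schema.

    ```python
    import json

    # Hypothetical cited-answer handling. The payload shape below is an
    # illustrative assumption for this sketch, not the published schema.

    def parse_cited_answer(payload: str) -> tuple[str, list[str]]:
        """Return the answer text and its primary-source citations."""
        body = json.loads(payload)
        answer = body["answer"]
        citations = [c["source"] for c in body.get("citations", [])]
        return answer, citations

    # Example payload of the assumed shape:
    sample = json.dumps({
        "answer": "Guideline E-23 is effective May 2027.",
        "citations": [{"source": "OSFI Guideline E-23, s. 1"}],
    })

    answer, sources = parse_cited_answer(sample)
    ```

    Keeping the citation list alongside the answer, rather than discarding it after display, is what lets a downstream system export the same provenance the firm's own evidence packs carry.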

Six capabilities, one authoritative record. Indexed from Justice Canada, OSFI, the OSC, Parliament, the Gazettes, and the supervisors that sit beside them.

Explore the platform · Request a walkthrough
What we believe

A thesis for compliance in the AI era.

Compliance is being rewritten. Quietly and from below.

Every regulator of consequence has now published AI-specific expectations, and every one of them is proportional, principles-based, and unfinished. Programs that survive the next decade will not be the ones with the most software. They will be the ones whose compliance functions learned to operate AI themselves.

Credibility is earned at the desk, not in a demo.

The firms doing the real work in this space publish openly, partner deeply, and sit beside clients long enough to be accountable for outcomes. We are building RegCore along those lines. Our playbooks are public. Our engagements are named in the room. Our agents are instruments of the method, not the product of it.

“Institutions hire compliance firms to be accountable for judgements they cannot outsource. Technology is useful when it sharpens that judgement, and dangerous when it pretends to replace it.”

A principle of the firm
What leading teams are preparing for

The obligations on the desk this year.

Six topics dominate the boardroom conversations we are in. Our published work on each is where we begin most engagements.

Canada · Effective May 2027

OSFI E-23: What the guideline asks you to prove

The 17-field Appendix A inventory is the visible artifact. The harder work sits in Principles 1 through 6: evidence of independence, objectivity, and proportional control.

Continue reading →
European Union · High-risk live Aug 2026

EU AI Act Article 15: Accuracy, robustness, cybersecurity

The technical file required by Annex IV is not a document exercise. It is a standing assurance program, and most first movers underestimate it by a factor of four.

Continue reading →
International · Certifiable standard

ISO 42001: Why certification is the easy part

The management system clauses are well understood. The AI-specific Annex A controls are where most pilot programs surface gaps they did not know they had.

Continue reading →
Quebec · In force

Law 25 and automated decisions

ADM transparency obligations sit upstream of model design. By the time a notification is drafted, the decision about whether the system can be deployed has already been made.

Continue reading →
Canada · 2026 amendments

FINTRAC 2026: AI in the AML stack

Risk-based compliance regimes absorbed model risk quietly. The new amendments are the first time the obligation is explicit, and the first time model documentation will be supervised.

Continue reading →
United States · Foundational

NIST AI RMF: Beyond the four functions

Govern, Map, Measure, Manage is not the whole framework. The Generative Profile added in July 2024 is where serious programs are now doing their mapping work.

Continue reading →
Engagement shapes

How engagements begin, and how they run.

Program design

Six to twelve weeks. Target operating model, policy architecture, initial inventory and tiering, role design. Ends with a program your board and your supervisor can both read.

Ongoing advisory

Retainer. A named partner and a working cadence with your first line. Regulatory change detection, validation review, and the supervisor conversations you want someone senior alongside.

Regulator preparation

Focused. Six to ten weeks ahead of a supervisory examination, thematic review, or internal audit. Evidence build, narrative, and rehearsal.

Questions we hear

What buyers ask in the first conversation.

What kind of firm is RegCore.AI?

A compliance firm built for the AI era. We combine senior advisory with proprietary playbooks and digital compliance officers — AI agents that extend our clients' teams within a governed methodology. We work alongside clients, not over the wall.

Who do you typically work with?

Federally regulated financial institutions, insurers, investment dealers, and regulated AI platforms — the teams whose programs will be examined. Engagements range from initial AI governance program design to ongoing advisory and regulator preparation.

How are digital compliance officers different from a product?

They are advisory tooling, not software we hand over and walk away from. Each deployment is scoped to the client's program, governed by our methodology, and operated alongside the client's compliance team. They extend reach; they do not replace judgement.

Which regulators and frameworks do you cover?

Canadian: OSFI, FINTRAC, CIRO, OSC, OPC, AMF. International: the EU AI Act, NIST AI RMF, ISO 42001, Quebec Law 25, GDPR, DORA, US federal banking supervision, UK PRA. Frontier topics are covered in our published research. The library is the floor, not the ceiling.

How do engagements typically begin?

A 45-minute conversation with a partner. We use it to understand the program state, the regulatory exposure, and the time horizon. Most engagements begin within three weeks of that call.

Do you publish your methodology?

Yes. Our playbooks are public. The methodology is our product. The engagement is how we deliver it. We believe institutions should be able to read our thinking before they decide to work with us.

If your program is about to be examined,
or about to be built — let’s talk.

A 45-minute conversation with a partner. No slides. No pitch. We use it to understand your program, your exposure, and your time horizon, and to be honest about whether we are the right firm for it.