Data Governance
Lineage, provenance, consent and quality — the substrate every AI control eventually leans on.
Make your data estate legible to AI and privacy regimes in the same artifact. Lineage, training-data provenance, retrieval manifests, consent and purpose ledgers — indexed to the obligations that actually bind the work.
Outcomes we deliver
Each outcome is a signed, dated artifact your regulator, your auditor and your board can read — and that your practitioners can keep working with long after we walk away.
Compliance agents in this pillar
Each agent is bounded, instrumented and auditable. Our specialists direct, review and sign off; the agents do the mechanical work at many times the pace of traditional firms.
Builds end-to-end lineage for training, fine-tuning and retrieval corpora — sources, transformations, consent basis, downstream models and decisions.
Maintains consent and purpose ledgers aligned to PIPEDA, Quebec Law 25 and GDPR — with ADM disclosure text, withdrawal handling and purpose-limitation records.
Produces retrieval-augmented generation manifests a regulator can read — corpus provenance, index governance, grounding evaluation, change log.
Operates data-quality controls mapped to model risk and the NIST AI RMF Measure function — completeness, accuracy, drift, representativeness.
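To make "a manifest a regulator can read" concrete, here is a minimal sketch in Python. Every class and field name is illustrative, not our production schema: a retrieval manifest reduced to corpus provenance, consent basis and a change log, serialisable to a single reviewable JSON artifact.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class CorpusSource:
    # One source feeding a retrieval index: where it came from,
    # the consent/lawful basis it carries, and when provenance was last verified.
    source_id: str
    origin: str          # e.g. internal system, licensed feed, public web
    consent_basis: str   # e.g. "contract", "legitimate interest", "consent"
    last_verified: str   # ISO date of the most recent provenance check

@dataclass
class RetrievalManifest:
    # Minimal shape of a reviewable RAG manifest: corpus provenance,
    # plus a dated change log a reviewer can follow end to end.
    index_name: str
    sources: list
    change_log: list = field(default_factory=list)

    def record_change(self, date_str: str, description: str) -> None:
        # Append a dated entry so index changes stay auditable.
        self.change_log.append({"date": date_str, "change": description})

    def to_json(self) -> str:
        # One signed, dated artifact; asdict() recurses into nested sources.
        return json.dumps(asdict(self), indent=2)

manifest = RetrievalManifest(
    index_name="claims-handbook-v3",
    sources=[CorpusSource("src-001", "internal policy wiki", "contract", "2025-06-01")],
)
manifest.record_change("2025-06-15", "Re-indexed after policy update; grounding eval re-run")
print(manifest.to_json())
```

The point of the shape, not the fields: provenance, consent basis and change history live in one record, so the same artifact answers both the privacy reviewer and the AI supervisor.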
Frameworks we cover in this pillar
One control library, mapped clause by clause across the regimes below. Answer many supervisors with one artifact set.
Personal Information Protection and Electronic Documents Act
In force
Federal private-sector privacy law. Meaningful consent, accountability, access rights.
Open framework →
Act to modernize legislative provisions as regards the protection of personal information
In force since September 22, 2023
ADM transparency, PIAs, cross-border transfer rules. Penalties up to $25M CAD or 4% of global revenue.
Open framework →
General Data Protection Regulation
In force
Lawful basis, ADM rights, DPIA triggers, cross-border SCCs for AI data pipelines.
Open framework →
Privacy Information Management System
Published August 2025
Extends ISO 27001 with privacy-specific controls. The certifiable layer for privacy-by-design in AI systems.
Open framework →
Information Security Management System
2022 revision
The ISMS baseline every regulated AI deployment is expected to sit on top of.
Open framework →
Regulation (EU) 2024/1689
High-risk regime live August 2, 2026
Risk-tiered obligations, Article 15 accuracy/robustness/cybersecurity, Annex IV technical file, GPAI model rules.
Open framework →
Recommended playbooks
Each playbook walks from first discovery through artifact. Phases, controls, evidence. Agents assist the mechanical steps; specialists own the sign-off.
EU AI Act High-Risk System Playbook
Classify use cases against Annex III, build the Article 9 risk management system, and compile the Annex IV technical file your conformity assessment will depend on.
Read the playbook →
ISO/IEC 42001 · Controls
ISO/IEC 42001 AIMS Stand-Up Playbook
Build a certifiable AI Management System: scope, policy, objectives, risk, controls, audit. Mapped to your portfolio.
Read the playbook →
Quebec Law 25 · Privacy
Quebec Law 25 PIA Playbook
Privacy Impact Assessments, ADM disclosures, cross-border transfer assessments — produced against the clause and the regulator guidance.
Read the playbook →
Cross-framework · Controls
Agentic AI Governance Playbook
Multi-step autonomous agents, tool-calling chains, and the oversight these systems demand. Agent cards, action budgets, kill switches.
Read the playbook →
Cross-framework · Documentation
RAG Assurance Playbook
Retrieval-augmented generation has its own attack surface — source provenance, index drift, poisoning risk. Control it.
Read the playbook →
Cross-framework · Vendor
Foundation Model Due Diligence Playbook
Bringing a general-purpose model into scope, whether Claude, GPT, Gemini, Llama or a sovereign model — the diligence a regulated deployer is now expected to perform.
Read the playbook →
Cross-framework · Monitoring
Continuous Control Monitoring Playbook
Drift, performance, outcome and complaint monitoring in one pipeline — outputs a supervisor can act on.
Read the playbook →
DORA · Controls
DORA for AI Systems Playbook
ICT risk management and incident reporting where AI is in the critical path — for EU-facing financial entities.
Read the playbook →
Stand up data governance on an artifact your regulator will read.
Tell us where your portfolio sits today. We will map the frameworks, deploy the compliance agents, and put our specialists beside your second line.