In 2024, the industry converged on observability — monitoring AI models for drift, bias, and performance degradation. Dashboards proliferated. Alerts fired. Nothing actually stopped. In 2026, regulators and boards are asking the next-order question: how fast can you stop the model? Intervenability — not observability — is the 2026 maturity step. This brief explains what changes, what the architecture looks like, and how Canadian FSIs should build it now.
In 2024, the shared assumption was that comprehensive monitoring would prevent incidents. Boards funded observability budgets. MLOps platforms shipped drift detectors. Vendor questionnaires asked “do you monitor your models?”
In 2026, material AI incidents have demonstrated that observing a problem is not the same as stopping one. Supervisory expectations are catching up asymmetrically across jurisdictions:

- OSFI E-23 (enforceable May 1, 2027) articulates six principles on model risk governance that cover intervention by implication across the model lifecycle, and OSFI's FIFAI workshop signals extend the same posture to AI/ML systems.
- The EU AI Act's Article 14 is the clearest statement in force: human oversight must "prevent or minimise" risks, including by allowing a human to intervene or interrupt operation.
- NIST AI RMF's MANAGE function names response, recovery, and decommissioning as first-class activities.
- On the US side, SR 11-7 remains the operative model-risk baseline; it predates generative and agentic AI, so its treatment of non-deterministic systems is interpretive rather than explicit.

Canadian institutions operating across the border cannot count on a US standard alone to govern GenAI and agentic systems; intervenability evidence is what closes the gap.
The practical implication: a 2LOD team that can see a model misbehaving but cannot halt its decision commit is a policy without an enforcement mechanism. Regulators now ask the enforcement question.
Intervenability is the architectural property by which humans can stop, override, or degrade an AI system's decision commit — before material harm — within a defined SLA. It is a superset of HITL gating. Where a HITL gate requires a human to approve every decision, intervenability encompasses a broader set of interventions: circuit breakers, graceful degradation, deterministic fallback, kill switches, and per-decision override.
Speculation flag: “intervenability” is practitioner vocabulary. No regulator has published a standalone intervenability standard as of April 2026; this framing is a RegCore.AI inference on the supervisory trajectory.
A dashboard shows model drift. The on-call team is paged. By the time a human decides to halt the model, thousands of decisions have already committed. The governance question is not "did we see it" but "how fast did we act".
A drift metric crosses a threshold. The team asks: is this real or noise? Debating in a meeting while the model continues is not intervention.
Without a pending-approval state, "halting the model" only stops future inferences. The decisions already committed (credit denied, fraud flagged, trade executed) stand. Regulators ask about material decisions, not future inferences.
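To make the commit-semantics gap concrete, here is a minimal sketch of a pending-approval gate, assuming a hypothetical decision pipeline; the names (`Decision`, `PendingApprovalGate`) are illustrative, not drawn from any framework. The point is that a halt freezes every pending decision rather than merely stopping future inference.

```python
from dataclasses import dataclass
from enum import Enum, auto


class State(Enum):
    PENDING = auto()    # inference done, decision NOT yet committed
    COMMITTED = auto()  # decision has taken effect (e.g., credit denied)
    HALTED = auto()     # frozen by an intervention before commit


@dataclass
class Decision:
    decision_id: str
    payload: dict
    state: State = State.PENDING


class PendingApprovalGate:
    """Hypothetical HITL gate: nothing takes effect until a human approves."""

    def __init__(self) -> None:
        self._pending = {}
        self._halted = False

    def submit(self, decision: Decision) -> None:
        # Model output lands here; it has NOT committed yet.
        self._pending[decision.decision_id] = decision

    def approve(self, decision_id: str, approver: str) -> Decision:
        if self._halted:
            raise RuntimeError("gate halted: no commits permitted")
        decision = self._pending.pop(decision_id)
        decision.state = State.COMMITTED
        # An audit trail (who committed what, and when) would be logged here.
        return decision

    def halt(self) -> int:
        # The intervention: freezes every pending decision, not just future inference.
        self._halted = True
        for decision in self._pending.values():
            decision.state = State.HALTED
        return len(self._pending)
```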
Who authorizes a kill switch? What is their SLA? Does the authorization pathway require five approvers or one? Observability does not answer these questions.
Intervenability is a set of architectural patterns, not a single component. Each pattern addresses a specific intervention class. A mature Canadian FSI AI deployment implements the patterns proportional to the materiality of the decision.
| Pattern | What it provides | Where it lives |
|---|---|---|
| Pending-approval HITL gate | Human review before commit on material decisions | Agent Card + application layer |
| Confidence-threshold gate | Low-confidence decisions routed to HITL or deferred | Inference pipeline |
| Circuit breaker | System-wide halt when error rate or drift metric exceeds threshold | Monitoring service with authority to disable routing |
| Deterministic fallback | On AI failure, route to rule-based or statistical baseline | Application layer with version control |
| Kill-switch runbook | Named human authority + runbook + logged execution | Operations + incident response |
| Override API | Authorized role can substitute decision per-case with audit trail | Case management layer |
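As one illustration of how two rows of this table compose, the sketch below pairs a circuit breaker that trips on a rolling error rate with a deterministic fallback route. All names, thresholds, and the stubbed model and baseline calls are assumptions, not a reference implementation.

```python
import time


def ai_model_decision(application: dict) -> dict:
    # Placeholder for the model call.
    return {"decision": "approve", "route": "model"}


def rule_based_baseline(application: dict) -> dict:
    # Placeholder for the version-controlled, rule-based baseline.
    return {"decision": "refer", "route": "fallback"}


class CircuitBreaker:
    """Illustrative breaker: opens when the rolling error rate exceeds a threshold."""

    def __init__(self, error_threshold: float = 0.05, window: int = 200) -> None:
        self.error_threshold = error_threshold
        self.window = window
        self.outcomes = []      # rolling record: True = errored decision
        self.tripped_at = None  # timestamp when the breaker opened

    def record(self, error: bool) -> None:
        self.outcomes.append(error)
        self.outcomes = self.outcomes[-self.window:]
        if len(self.outcomes) == self.window and self.tripped_at is None:
            if sum(self.outcomes) / self.window > self.error_threshold:
                self.tripped_at = time.time()  # the timestamp is SLA evidence

    @property
    def open(self) -> bool:
        return self.tripped_at is not None


def adjudicate(application: dict, breaker: CircuitBreaker) -> dict:
    # Route around the model when the breaker is open: deterministic fallback.
    if breaker.open:
        return rule_based_baseline(application)
    return ai_model_decision(application)
```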
No regulator has published a standalone “intervenability standard” as of April 2026. The term is practitioner vocabulary, not statutory language. Expect the concept to appear in guidance over 2026–2027.
The substance shows up across frameworks today. OSFI E-23's model risk principles cover intervention by implication. The EU AI Act's Article 14 on human oversight covers it explicitly. NIST AI RMF's MANAGE function includes intervention. SR 11-7's effective challenge standard implies it. Organizations building intervenability architectures now are ahead of regulator-specific expectations.
Four phases, sequenced by materiality and decision class. Each phase produces an artifact your 2LOD team can review and your regulator can read.

1. **Inventory intervention surfaces.** Enumerate every material AI decision class in AIRSA. For each, answer: what is the intervention pattern? Who authorizes it? What is the SLA?
2. **Close the commit-semantics gap.** On material decision classes, implement the pending-approval architecture sketched above, so that halting the model also freezes uncommitted decisions.
3. **Build circuit breakers and deterministic fallback.** For high-availability systems, circuit breakers prevent cascading failures and deterministic fallback preserves operational continuity.
4. **Rehearse the kill switch.** Run quarterly drills on kill-switch execution. Document the SLA, authority chain, and post-drill retrospective; a drill-logging sketch follows this list.
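A minimal sketch of the phase-4 drill artifact, assuming a runbook in which a named authority executes the kill switch and the elapsed time is checked against the declared SLA; the fields are illustrative.

```python
import time
from dataclasses import dataclass


@dataclass
class KillSwitchDrill:
    system: str
    authority: str       # the named human authorized to execute
    sla_seconds: float   # the declared intervention SLA

    def run(self, execute_kill_switch) -> dict:
        started = time.monotonic()
        execute_kill_switch()  # the runbook step under test
        elapsed = time.monotonic() - started
        # This record is the artifact: 2LOD reviews it, the regulator can read it.
        return {
            "system": self.system,
            "authority": self.authority,
            "sla_seconds": self.sla_seconds,
            "measured_seconds": round(elapsed, 3),
            "within_sla": elapsed <= self.sla_seconds,
        }


# Example: record = KillSwitchDrill("credit-adjudicator", "VP, Model Risk",
#                                   sla_seconds=60.0).run(lambda: None)
```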
**How is intervenability different from HITL?** HITL is one form of intervenability. Intervenability encompasses HITL plus circuit breakers, graceful degradation, kill switches, and per-decision override. The intervention you need depends on decision class and materiality.
**Does every system need every pattern?** No. Scope intervention patterns to decision materiality. A low-materiality internal agent may only require override; a high-materiality credit adjudicator requires HITL gating, a circuit breaker, and deterministic fallback.
**What intervention SLA should we set?** It is set by the institution, calibrated to the decision cycle. Real-time trading requires sub-second intervention; credit adjudication can tolerate minutes; quarterly reporting can tolerate hours. The SLA must be named, measured, and rehearsed.
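One way to make "named, measured, and rehearsed" concrete is a small SLA register per decision class; the classes and thresholds below are illustrative assumptions, not prescribed values.

```python
from datetime import timedelta

# Illustrative register: SLAs are calibrated to decision cycle, not one-size-fits-all.
INTERVENTION_SLAS = {
    "real_time_trading": timedelta(seconds=1),
    "credit_adjudication": timedelta(minutes=5),
    "quarterly_reporting": timedelta(hours=4),
}


def within_sla(decision_class: str, measured: timedelta) -> bool:
    # Compare a measured intervention time (e.g., from a drill) to the register.
    return measured <= INTERVENTION_SLAS[decision_class]
```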
**What about third-party foundation models?** Foundation-model vendors are a dependency, not a control point. If the vendor is unavailable, your intervention options are limited to deterministic fallback or halt. Design B-10 contracts assuming provider-side intervention is not in your control.
**Are boards asking about this?** Increasingly. Audit committees are asking the "how fast can you stop it" question. Boards that fund AI deployments without funding intervention architecture are carrying the exposure. Article 14 of the EU AI Act provides useful framing.
We design the intervention architecture per decision class, implement the HITL commit-semantics pattern, build the circuit-breaker monitoring, and rehearse kill-switch drills with your 2LOD team. See the AI Governance arm.
Observability is 2024. Intervenability is 2026. We build the HITL gates, circuit breakers, deterministic fallback, and kill-switch runbooks that satisfy OSFI, US federal banking supervisors, and the EU AI Office.