Insight · AI liability · April 2026

AI agent liability and the rise of warranty underwriting.

A market is forming to underwrite AI agents. It does not read like a traditional professional-indemnity line, and it does not behave like a cyber line. What it looks like, in practice, is a pre-bind evidence review — the same evidence a supervisor now asks for at examination. We at RegCore.AI read the panels, the questionnaires and the exclusions, and describe the artifact set a regulated deployer has to produce to bind affirmative coverage.

Published 23 April 2026 · RegCore.AI

The line is being written where the old one retreats.

For most of the last decade, AI exposure sat silently inside errors-and-omissions, directors-and-officers and cyber policies that had been written before anyone priced an autonomous agent. That assumption no longer holds. Since 2024, primary carriers have been amending wordings to narrow or exclude unaffirmed AI loss. The capacity that remains takes a different form: affirmative AI coverage and AI performance warranty, written by a small group of carriers and reinsurers with a clear view of the loss shape.

The firm sees three anchors in the public market. Munich Re extended its AI performance guarantee to cover generative-AI scenarios in 2023 and 2024, the first large reinsurer to treat GenAI as a pricable failure mode rather than a research problem. Chaucer, a Lloyd’s syndicate, has positioned itself as the lead market for affirmative AI third-party liability through a Toronto-domiciled managing general agent, with Swiss Re, Greenlight Re and AXIS Capital on panel. The methodology the panel now applies was structured in Lloyd’s Lab Cohort 13 in 2024. These are the outlines of a real line: capacity disclosed, triggers defined, pre-bind procedure codified.

What the underwriter asks for, and why it looks familiar.

The pre-bind questionnaire is the interesting document. It reads, almost line for line, like the evidence file a regulator expects at examination. Five categories recur across every panel we have seen.

- The model-performance package: validation report, benchmarks, sensitivity analysis, outcomes analysis against a stated population.
- The privacy package: processing record, residency posture, consent propagation, cross-border transfer file.
- The regulatory-violation package: an obligation map against the supervisory perimeter the deployer actually operates under (OSFI E-23, PIPEDA, Quebec Law 25, the EU AI Act, PLD 2024/2853, the US MRM transition).
- The model-output-liability package: agent cards, content-grounding controls, adverse-action documentation, the human-in-the-loop commit-gate log.
- The trade-secret and IP package: indemnification posture for training-data provenance and copyright exposure.
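The five packages above amount to a checkable manifest. A minimal sketch of that idea follows; the package names mirror the article, but the artifact identifiers, the `EVIDENCE_PACK` structure and the `missing_artifacts` helper are illustrative assumptions, not any carrier's actual schema.

```python
# Hypothetical pre-bind evidence manifest: the five recurring packages,
# each mapped to the artifacts a questionnaire typically asks for.
EVIDENCE_PACK = {
    "model_performance": ["validation_report", "benchmarks",
                          "sensitivity_analysis", "outcomes_analysis"],
    "privacy": ["processing_record", "residency_posture",
                "consent_propagation", "cross_border_transfer_file"],
    "regulatory_violation": ["obligation_map"],
    "model_output_liability": ["agent_cards", "grounding_controls",
                               "adverse_action_docs", "hitl_commit_gate_log"],
    "trade_secret_ip": ["training_data_provenance", "copyright_indemnity"],
}

def missing_artifacts(submitted: dict) -> dict:
    """Return, per package, the required artifacts absent from a submission."""
    return {
        pkg: [a for a in required if a not in submitted.get(pkg, [])]
        for pkg, required in EVIDENCE_PACK.items()
        if any(a not in submitted.get(pkg, []) for a in required)
    }
```

A gap report like this is the pre-bind analogue of an examination finding: an empty result means every category the panel asks about has something behind it.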

The overlap with a supervisory evidence file is not accidental. A carrier writing AI risk at material limits needs to know the same things OSFI needs to know at an E-23 Appendix A review. Can the deployer prove the system did what it was supposed to do? Can it prove the escalation fired? Can it prove the human signed? The failure modes that lose a carrier money are the failure modes that lose the deployer its authorisation. The convergence is structural, not tactical.

Three loss patterns the market is pricing.

The loss scenarios underwriters are actively pricing read as an operational tour of agentic AI. The first is escalation failure — an agent with authority to resolve a material customer, counterparty or compliance matter that fails to route to a human when the conditions warrant. The loss is the downstream outcome the escalation would have prevented. The evidence ask is the commit-gate log that shows the escalation rule exists, fires, and is audited. The second is the misstatement pattern — an agent binds the operator to a term, rate or clause the operator did not intend, and the operator becomes liable under agent-speech doctrine. The evidence ask is the grounding layer and the Agent Card that define what the agent was permitted to say. The third is the ingest pattern — a credit, KYC or claims agent acts on erroneous, stale or adversarial input. The evidence ask is lineage, data-quality attestation, and drift-monitoring thresholds with signed reviewer rationale.

Silent AI is ending. Affirmative AI has an evidence bar.

The practical effect of this reshaping is that AI risk is moving off legacy policies and onto purpose-built instruments. At renewal, boards are finding that exposure they assumed was covered is not, and that the carriers prepared to write it expect a documentation standard that looks like a regulator's. The market is consolidating on a simple rule: no portable evidence pack, no bindable risk. The deployer that can hand the underwriter a signed, dated, versioned artifact set, and can reproduce it at the next renewal cycle without re-engineering the programme, is the deployer that insures at scale. The rest carry a higher retention or accept the exclusion.
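What "signed, dated, versioned" means in practice can be made concrete with a content-addressed manifest: hash each artifact, version the set, and the next renewal can verify nothing drifted. This is a sketch under our own assumptions; the `seal_pack` function and its field names are illustrative, not a market-standard format.

```python
import hashlib
import json
from datetime import date

def seal_pack(artifacts: dict[str, bytes], version: str) -> dict:
    """Produce a dated, versioned manifest over a set of evidence artifacts.

    Each artifact gets a SHA-256 digest; a digest over the sorted digest map
    identifies the pack as a whole, independent of insertion order.
    """
    digests = {name: hashlib.sha256(blob).hexdigest()
               for name, blob in artifacts.items()}
    manifest = {
        "version": version,
        "dated": date.today().isoformat(),
        "artifacts": digests,
    }
    manifest["pack_digest"] = hashlib.sha256(
        json.dumps(digests, sort_keys=True).encode()).hexdigest()
    return manifest
```

Because the pack digest is deterministic, handing the underwriter the same artifact set at the next renewal is a hash comparison rather than a re-engineering exercise; any changed artifact changes the digest and flags the delta.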

Where the operating posture we build plugs in.

The evidence engine we at RegCore.AI operate produces the artifact set the questionnaires ask for, in a form that also satisfies the supervisory perimeter the deployer lives under. Model cards, agent cards, commit-gate logs, HITL architecture, lineage and monitoring — written once, read many. The AI governance pillar stands up the management system and the control library; the platform operates the agents that produce the evidence and keep it current. The consequence is a single programme that answers the supervisor at examination and the underwriter at pre-bind. We think that is the shape regulated deployers should be building toward — not because insurance is the point, but because the two reviewers are converging on the same artifact, and producing it twice is an avoidable cost.

Bind the risk. Pass the examination. One artifact set.

The firm produces the evidence regulated deployers need — in a form the underwriter can read at pre-bind and the supervisor can read at examination. Talk to us about what your portfolio needs before the next renewal window.