Insight · Product liability · April 2026

When AI is a product: PLD 2024/2853 and the shifting burden of proof.

The revised Product Liability Directive brings software — AI systems included — inside the product-liability perimeter, with transposition into Member-State law due by 9 December 2026. The firm reads the disclosure obligations, the rebuttable presumptions and the evidentiary asymmetries the directive creates, and describes the artifact set a regulated deployer has to be able to produce to defend a claim.

Published 23 April 2026 · RegCore.AI

Software is now a product. AI is software.

Directive (EU) 2024/2853 does three things at once. It replaces the 1985 product-liability framework with a text that explicitly names software — and, by extension, AI systems — as products inside the scope of the directive. It reframes a manufacturer’s obligation to include an ongoing duty of care for substantially modified products, a category an AI system that learns in production tends to fall into. And it broadens the definition of damage to cover psychological harm and the loss or corruption of data, heads of loss that map directly onto the failure modes of generative and agentic systems. Taken together, the directive moves AI from a place where liability was argued in contract and negligence to a place where it is argued under a regime that presumes defect in specified conditions.

Member States must transpose the directive into domestic law by 9 December 2026. The deployers exposed are not only EU-established firms. A Canadian, US or UK vendor that places an AI product on the EU market, or puts one into service there, falls within the regime by virtue of that act of supply, wherever the vendor is established. The directive’s extraterritorial reach is one of the features buyers and sellers of AI are only now pricing into procurement.

The disclosure obligation is the operative lever.

The single provision regulated deployers should read first is the disclosure duty. Where a claimant has plausibly shown that damage was caused by a defective product, a national court may order the manufacturer to disclose relevant technical documentation. The scope is what the directive calls the evidence necessary and proportionate to establish the claim — training data descriptions, validation records, risk-management documentation, post-market monitoring evidence, incident logs. For a traditional manufactured good, that ask is bounded. For an AI system, it is the entire governance file.

The firm reads the disclosure obligation as the directive’s operative lever. A deployer that can produce the file the court will ask for is in a very different position from a deployer that cannot. A refusal to disclose — or an inability to — triggers a presumption of defectiveness. That is the shift in the burden of proof at the heart of the directive, and it is why the quality of the evidence pack is now a litigation variable, not a compliance one.

Rebuttable presumptions and the technical-complexity trigger.

The directive introduces a set of rebuttable presumptions that bite where a claimant faces evidentiary asymmetry. Where the product is of a complexity that makes the causal chain from defect to damage excessively difficult to prove, a national court may presume defectiveness, a causal link, or both, once the claimant has shown the product is likely to be defective or that the damage is of a kind typically consistent with a defect. AI systems — which are opaque by nature to most end users — are prime candidates for the presumption to apply.

The manufacturer can rebut the presumption, but only with evidence. In practice the rebuttal calls for the same artifact set the directive’s disclosure clause will compel — training-data documentation, validation under conditions of intended use, post-market monitoring signals, prompt and retrieval records where applicable, an incident and change log. The presumption therefore converts an absence of documentation into a presumed defect. Deployers that have treated governance documentation as a regulatory cost are about to find it is a litigation asset.

Component liability cascades down the supply chain.

The directive’s component-liability rules are the piece that affects the AI supply chain. Where an AI system is integrated into a downstream product — an AI component inside a medical device, a GPAI model inside an insurance workflow, a retrieval stack inside a customer-service platform — the component supplier can be held jointly liable with the final manufacturer. The contracting posture this forces is familiar from the OSFI B-10 cascade and the EU AI Act’s Annex IV obligations: flow-down clauses, evidence-production covenants, incident-reporting covenants, audit rights. The procurement files of the 2026–2028 cycle will look very different from those of 2024–2025.

The operating posture that answers the directive.

The artifact set the directive presumes against is specific. Training-data lineage and provenance documentation. A risk-management file. Validation records in conditions of intended use. A post-market monitoring plan with outcomes data. An incident register that is current. A technical-documentation package that a national court will accept as evidence of due care. The firm reads this as the same file the EU AI Act’s Annex IV expects, with an added evidentiary standard because the reviewer is a court rather than a regulator.
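As an illustration only (the field names below are hypothetical; the directive prescribes no schema), the artifact set above can be treated as a checklist whose gaps are visible long before a court compels disclosure:

```python
from dataclasses import dataclass

# Hypothetical sketch of the governance file as a checklist.
# Field names are illustrative, not drawn from the directive.
@dataclass
class GovernanceFile:
    training_data_lineage: bool = False      # lineage and provenance documentation
    risk_management_file: bool = False
    validation_records: bool = False         # conditions of intended use
    post_market_monitoring: bool = False     # plan plus outcomes data
    incident_register_current: bool = False
    technical_documentation: bool = False    # Annex IV-style package

    def gaps(self) -> list[str]:
        """Artifacts a court could compel that are not yet on file."""
        return [name for name, present in vars(self).items() if not present]

pack = GovernanceFile(training_data_lineage=True, validation_records=True)
print(pack.gaps())  # four artifacts still missing
```

On this sketch, an empty `gaps()` list is the operating posture the section describes: every artifact current before any claim arrives, rather than assembled after one.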

The compliance intelligence layer the firm maintains — control mappings indexed to primary source, evidence and provenance carried through every engagement, and digital compliance officers that sit inside client workflow — is built to produce that file as a by-product of normal operation, not as a one-off remediation project. The PLD reference tracks the clauses and transposition state. Our AI-enabled compliance implementation practice ships the controls that produce the evidence — the artifacts a regulator reads at examination and a court reads at disclosure are the same artifacts. We read the directive less as a liability shock than as a clarifying event: it settles the question of what a regulated deployer has to be able to produce on demand, and names the consequence of not being able to.

Be ready to disclose. Be ready to rebut.

The firm stands up the governance file the directive presumes against — lineage, validation, monitoring, incident records, technical documentation. Written once; readable by a regulator, a court and a downstream customer.