BRUSSELS — The European Parliament voted 423–124 on Wednesday to adopt the AI Liability Directive, landmark legislation that for the first time creates a harmonised legal framework across EU member states for seeking compensation for harm caused by artificial intelligence systems.

The directive reverses the standard burden of proof in cases involving high-risk AI systems — such as those used in recruitment, credit scoring, medical diagnosis, and law enforcement — requiring developers and deployers to prove their system did not cause the alleged harm, rather than requiring victims to demonstrate that it did.

What This Means

Legal experts said the practical effect of the directive would be to make AI litigation viable for individuals who currently face near-impossible evidentiary hurdles when trying to prove algorithmic harm. "Right now, if an AI system denies you a loan or flags you as a fraud risk, you have essentially no way to challenge that decision in court because you cannot get access to the model," said Dr Claudia Becker of the Max Planck Institute. "This directive changes that fundamentally."

Technology companies, led by industry association DigitalEurope, had lobbied intensively against the directive, arguing it would chill investment in European AI development. The final text includes exemptions for small and medium enterprises and research institutions, and a three-year implementation grace period.