Guest Writer

Nils Lölfing, Bird & Bird LLP

Does the European Union Commission’s Proposal on AI Liability Act as a Game Changer for Fault-Based Liability Regimes in the EU?

By Nils Lölfing


Abstract: In this article, the author discusses increasing risks that artificial intelligence system providers, developers, and users will face from a liability directive proposed by the European Union Commission.

The AI Liability Directive proposed by the European Union Commission puts additional liability risks on providers, developers, and users of artificial intelligence (AI) systems, and specifically of high-risk AI systems. If enacted, it could become a game changer for fault-based liability regimes in the European Union, as it introduces a presumption of causality to prove fault and a right of access to evidence from companies and suppliers regarding high-risk AI systems. Both will help victims enforce non-contractual civil law claims for damages caused by an AI system.

This article discusses what the proposal entails and how it increases the liability risk exposure of actors in the AI systems supply chain.

Background

On September 28, 2022, the EU Commission published its proposal for a Directive to establish new fault-based liability rules for AI systems (AI Liability Directive), along with a reform of the existing rules on the strict liability of manufacturers for defective products. This article focuses on the draft AI Liability Directive, which complements the AI Act by facilitating fault-based civil liability claims for damages, something the AI Act, as a specific product safety regulation, does not offer.

On June 30, 2021, the EU Commission published an inception impact assessment road map on adapting civil liability rules to the digital age, in particular considering AI (based on the EU Commission’s White Paper on AI of February 19, 2020). The AI liability proposal is part of the EU Commission’s approach to developing an ecosystem of trust for AI, together with the proposed AI Act and the revised Product Safety and Machinery Directive.

The proposal addresses the peculiarities of AI, such as autonomous behavior and limited predictability, that complicate the application of fault-based liability rules. According to the EU Commission, these peculiarities create legal uncertainty for businesses and make it difficult for consumers and other injured parties to receive compensation. Indeed, in a representative survey from 2021, liability ranked among the top three barriers to the use of AI by European companies that plan to adopt AI but have not yet done so.

These new requirements, such as the presumption of causality shifting the burden of proof, have the potential to fundamentally change the EU’s liability regime and will increase the liability risk exposure of businesses involved in manufacturing, distributing, or using AI.

What Is It All About and Why Is It a Potential Game Changer?

The AI Liability Directive proposal intends to enable consumers and businesses injured by AI-based products like robots, drones, or smart-home systems to claim compensation more easily by way of non-contractual civil law claims for damages caused by such AI systems. The proposal generally covers any type of AI system (although, like the AI Act, it seems predominantly intended to cover high-risk AI) and obliges providers, developers, and users of AI systems to compensate any type of damage covered by national law (life, health, property, privacy, discrimination, etc.) for any type of victim (individuals, companies, organizations, etc.). Liability still requires an error by someone within the supply chain, such as a provider, developer, or user of an AI system, that caused the damage. Because of the peculiarities of AI systems mentioned above, it will typically be difficult to prove a wrongful action or omission by a provider, developer, or user of an AI system.

Therefore, the AI Liability Directive proposal introduces two groundbreaking changes that would modify fault-based liability rules as we currently know them across most of the European Union:

• Presumption of causality to prove fault. The proposed AI Liability Directive establishes a rebuttable presumption of causality to help claimants demonstrate a causal link between a failure of an AI system (e.g., in the form of flawed output) and the damage caused to the claimant as the individual or business using the AI system. For example, where certain obligations under the AI Act are not complied with, fault of the relevant person that developed, provided, or used the AI system will be presumed. The presumption applies only if it is reasonably likely, from the circumstances in which the damage occurred, that the fault influenced the output produced by the AI system, or the AI system’s failure to produce an output, that gave rise to the damage. A court may also presume fault where the defendant fails to comply with a court order for the disclosure or preservation of evidence (detailed in the next point). The presumption of causality generally applies to all AI systems, but for non-high-risk AI systems it applies only where a court determines that it is excessively difficult for the claimant to prove the causal link. If the presumption is triggered, the burden is on the defendant to show that its system did not cause the harm suffered.
• Right to access evidence from companies and suppliers regarding high-risk AI. When claiming damages from a provider, developer, or user of a high-risk AI system, claimants may ask the court to order the disclosure of relevant evidence about the specific high-risk AI systems suspected of having caused damage. For this to happen, the claimant must make its claim plausible by showing the court that the damage was potentially caused by a high-risk AI system. The right to access evidence will make claims easier to prove and allow non-responsible actors in the supply chain to be identified much faster. However, commercially sensitive information (such as trade secrets) remains protected. The access right does not extend to AI systems that are not considered high-risk under the AI Act.

What Are the Resulting Risks for Providers, Developers, and Users of AI Systems and How to Protect Against Them?

With the presumption of causality and the right to access evidence, the proposed AI Liability Directive significantly helps victims who have suffered damage through AI systems, specifically with regard to high-risk AI systems.

Risks for providers, developers, and users of AI systems, specifically high-risk ones, are not negligible in this regard. Claims brought under the AI Liability Directive can be very broad and far-reaching, as they include any type of damage covered by national law and therefore typically also non-material damages, such as for discrimination or potentially even privacy harms resulting from, for example, ad targeting. With the prospect of mass claims, providers, developers, and users of AI systems may face significant obstacles in the future.

If the proposed AI Liability Directive is enacted, it will be much more difficult for providers, developers, and users of AI to adequately protect themselves against damage claims arising from acts or omissions of their AI systems. Nevertheless, providers, developers, and users of AI systems should develop strategies to rebut the presumption of causality by showing that a fault of their specific AI system could not have caused the damage. In addition, strategies for protecting their information from disclosure to claimants are sensible to mitigate disproportionate liability risk exposure.

Outlook

Developers of high-risk AI systems in particular will face additional burdens going forward. Not only will they have to comply with the complementary future AI Act, which is likely to impose a number of onerous obligations before their AI systems can be placed on the EU market; under the AI Liability Directive, they will also have to find strategies to defend themselves against potential claims, adding another layer of AI-related legal burdens on top of the AI Act.

However, there is still time for providers, developers, and users of AI systems to influence the AI Liability Directive proposal. The European Parliament and the Council will soon start discussing and negotiating the Commission’s proposal, and even that may not be the end of the road. For now, the EU Commission has refrained from proposing a strict liability regime for AI systems, although the public consultations highlighted a preference for such a regime (whether with or without mandatory insurance) among respondents.

However, the EU Commission has also indicated that if AI systems could affect the public at large, for example by putting important legal rights such as the rights to life, health, and property at risk, a strict liability regime will be reconsidered. To monitor developments, the EU Commission has put in place a program to obtain information on incidents involving AI systems.

With this information, the EU Commission intends to assess whether additional measures are needed, such as introducing a strict liability regime and/or mandatory insurance. This space must be watched closely!
