EUROPEAN COMMISSION
Brussels, 28.9.2022
COM(2022) 496 final
2022/0303(COD)
Proposal for a
DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on adapting non-contractual civil liability rules to artificial intelligence
(AI Liability Directive)
(Text with EEA relevance)
{SEC(2022) 344 final} - {SWD(2022) 318 final} - {SWD(2022) 319 final} - {SWD(2022) 320 final}
EXPLANATORY MEMORANDUM
1.CONTEXT OF THE PROPOSAL
·Reasons for and objectives of the proposal
This explanatory memorandum accompanies the proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI). In a representative survey from 2020, liability ranked among the top three barriers to the use of AI by European companies. It was cited as the most relevant external obstacle (43%) for companies that plan to adopt AI but have not yet done so.
In her Political Guidelines, Commission President Ursula von der Leyen laid out a coordinated European approach on AI. In its White Paper on AI of 19 February 2020, the Commission undertook to promote the uptake of AI and to address the risks associated with some of its uses by fostering excellence and trust. In the Report on AI Liability accompanying the White Paper, the Commission identified the specific challenges posed by AI to existing liability rules. In its conclusions on shaping Europe’s digital future of 9 June 2020, the Council welcomed the consultation on the policy proposals in the White Paper on AI and called on the Commission to put forward concrete proposals. On 20 October 2020, the European Parliament adopted a legislative own-initiative resolution under Article 225 of the Treaty on the Functioning of the European Union (TFEU) requesting the Commission to adopt a proposal for a civil liability regime for AI based on Article 114 TFEU.
Current national liability rules, in particular those based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, victims need to prove a wrongful act or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings compared to cases not involving AI. Victims may therefore be deterred from claiming compensation altogether. These concerns were also reiterated by the European Parliament (EP) in its resolution of 3 May 2022 on artificial intelligence in a digital age.
If a victim brings a claim, national courts, faced with the specific characteristics of AI, may adapt the way in which they apply existing rules on an ad hoc basis to come to a just result for the victim. This will cause legal uncertainty. Businesses will find it difficult to predict how the existing liability rules will be applied, and thus to assess and insure against their liability exposure. The effect will be magnified for businesses trading across borders, as the uncertainty will cover different jurisdictions. It will particularly affect small and medium-sized enterprises (SMEs), which cannot rely on in-house legal expertise or capital reserves.
National AI strategies show that several Member States are considering, or even concretely planning, legislative action on civil liability for AI. Therefore, it is expected that, if the EU does not act, Member States will adapt their national liability rules to the challenges of AI. This will result in further fragmentation and increased costs for businesses active throughout the EU.
The open public consultation informing the Impact Assessment of this proposal confirmed the problems explained above. In the opinion of the public, the ‘black box’ effect can make it difficult for the victim to prove fault and causality, and there may be uncertainty as to how the courts will interpret and apply existing national liability rules in cases involving AI. Furthermore, the consultation showed a public concern as to how legislative action on adapting liability rules initiated by individual Member States, and the ensuing fragmentation, would affect the costs for companies, especially SMEs, preventing the uptake of AI Union-wide.
Thus, the objective of this proposal is to promote the rollout of trustworthy AI to harvest its full benefits for the internal market. It does so by ensuring that victims of damage caused by AI obtain protection equivalent to that of victims of damage caused by products in general. It also reduces the legal uncertainty faced by businesses developing or using AI regarding their possible exposure to liability, and prevents the emergence of fragmented AI-specific adaptations of national civil liability rules.
·Consistency with existing policy provisions in the policy area
This proposal is part of a package of measures to support the roll-out of AI in Europe by fostering excellence and trust. This package comprises three complementary work streams:
–a legislative proposal laying down horizontal rules on artificial intelligence systems (AI Act);
–a revision of sectoral and horizontal product safety rules;
–EU rules to address liability issues related to AI systems.
In the AI Act proposal, the Commission has proposed rules that seek to reduce risks for safety and protect fundamental rights. Safety and liability are two sides of the same coin: they apply at different moments and reinforce each other. While rules to ensure safety and protect fundamental rights will reduce risks, they do not eliminate those risks entirely. Where such a risk materialises, damage may still occur. In such instances, the liability rules of this proposal will apply.
Effective liability rules also provide an economic incentive to comply with safety rules and therefore contribute to preventing the occurrence of damage. In addition, this proposal contributes to the enforcement of the requirements for high-risk AI systems imposed by the AI Act, because the failure to comply with those requirements constitutes an important element triggering the alleviation of the burden of proof. This proposal is also consistent with the proposed general and sectoral product safety rules applicable to AI-enabled machinery products and radio equipment.
The Commission takes a holistic approach to liability in its AI policy, proposing adaptations to the producer’s liability for defective products under the Product Liability Directive as well as the targeted harmonisation under this proposal. These two policy initiatives are closely linked and form a package, as claims falling within their scope deal with different types of liability. The Product Liability Directive covers the producer’s no-fault liability for defective products, leading to compensation for certain types of damages, mainly suffered by individuals. This proposal covers national liability claims mainly based on the fault of any person, with a view to compensating any type of damage and any type of victim. They complement one another to form an overall effective civil liability system.
Together these rules will promote trust in AI (and other digital technologies) by ensuring that victims are effectively compensated if damage occurs despite the preventive requirements of the AI Act and other safety rules.
·Consistency with other Union policies
The proposal is coherent with the Union’s overall digital strategy as it contributes to promoting technology that works for people, one of the three main pillars of the policy orientation and objectives announced in the Communication ‘Shaping Europe's digital future’.
In this context, this proposal aims to build trust in AI and to increase its uptake. It will achieve synergies with, and is complementary to, the [Cyber Resilience Act], which also aims to increase trust in products with digital elements by reducing cyber vulnerabilities and to better protect business and consumer users.
This proposal does not affect the rules set by [the Digital Services Act (DSA)], which provide for a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision making by online platforms, including its exemption of liability for providers of intermediary services.
In addition, by promoting the roll-out of AI, this proposal is linked to the initiatives under the EU strategy for data. It also strengthens the Union’s role to help shape global norms and standards and promote trustworthy AI that is consistent with Union values and interests.
The proposal also has indirect links with the European Green Deal. In particular, digital technologies, including AI, are a critical enabler for attaining the sustainability goals of the Green Deal in many different sectors (including healthcare, transport, environment and farming).
·Main economic, social and environmental impacts
The Directive will contribute to the rollout of AI. The conditions for the roll-out and development of AI-technologies in the internal market can be significantly improved by preventing fragmentation and increasing legal certainty through harmonised measures at EU level, compared to possible adaptations of liability rules at national level. The economic study underpinning the Impact Assessment of this proposal concluded – as a conservative estimate – that targeted harmonisation measures on civil liability for AI would have a positive impact of 5 to 7 % on the production value of relevant cross-border trade as compared to the baseline scenario. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. This would lower stakeholders’ legal information/representation, internal risk management and compliance costs, facilitate financial planning as well as risk estimates for insurance purposes, and enable companies – in particular SMEs – to explore new markets across borders. Based on the overall value of the EU AI market affected by the liability-related problems addressed by this Directive, it is estimated that the latter will generate an additional market value between ca. EUR 500mln and ca. EUR 1.1bln.
In terms of social impacts, the Directive will increase societal trust in AI-technologies and access to an effective justice system. It will contribute to an efficient civil liability regime, adapted to the specificities of AI, where justified claims for compensation of damage are successful. Increasing societal trust would also benefit all companies in the AI-value chain, because strengthening citizens’ confidence will contribute to a faster uptake of AI. Due to the incentivising effect of liability rules, preventing liability gaps would also indirectly benefit all citizens through an increased level of protection of health and safety (Article 114(3) TFEU) and the obviation of sources of health risks (Article 168(1) TFEU).
As regards environmental impacts, the Directive is also expected to contribute to achieving the related Sustainable Development Goals (SDGs) and targets. The uptake of AI applications is beneficial for the environment. For instance, AI systems used in process optimisation make processes less wasteful (e.g. by reducing the amount of fertilizers and pesticides needed, decreasing the water consumption at equal output, etc.). The Directive would also impact positively on SDGs because effective legislation on transparency, accountability and fundamental rights will direct AI’s potential to benefit individuals and society towards achieving the SDGs.
2.LEGAL BASIS, SUBSIDIARITY AND PROPORTIONALITY
·Legal basis
The legal basis for the proposal is Article 114 TFEU, which provides for the adoption of measures to ensure the establishment and functioning of the internal market.
The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.
The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. This uncertainty concerns particularly Member States where businesses will export to or operate their products and services. In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.
In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. For example, AI strategies adopted in Czechia, Italy, Malta, Poland and Portugal mention initiatives to clarify liability. Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation.
Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.
·Subsidiarity
The objectives of this proposal cannot be adequately achieved at national level because emerging divergent national rules would increase legal uncertainty and fragmentation, creating obstacles to the rollout of AI-enabled products and services across the internal market. Legal uncertainty would particularly affect companies active across borders, by imposing the need for additional legal information/representation, risk management costs and foregone revenue. At the same time, differing national rules on compensation claims for damage caused by AI would increase transaction costs for businesses, especially for cross-border trade, entailing significant internal market barriers. Further, legal uncertainty and fragmentation disproportionately affect start-ups and SMEs, which account for most companies and the major share of investments in the relevant markets.
In the absence of EU harmonised rules for compensating damage caused by AI systems, providers, operators and users of AI systems on the one hand and injured persons on the other hand would be faced with 27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States.
Harmonised measures at EU level would significantly improve conditions for the rollout and development of AI-technologies in the internal market by preventing fragmentation and increasing legal certainty. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. Moreover, only EU action can consistently achieve the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI across the internal market. This would ensure a consistent (minimum) level of protection for all victims (individuals and companies) and consistent incentives to prevent damage and ensure accountability.
·Proportionality
The proposal is based on a staged approach. In the first stage, the objectives are achieved with a minimally invasive approach; the second stage involves re-assessing the need for more stringent or extensive measures.
The first stage is limited to the burden-of-proof measures to address the AI-specific problems identified. It builds on the substantive conditions of liability currently existing in national rules, such as causality or fault, but focuses on targeted proof-related measures, ensuring that victims have the same level of protection as in cases not involving AI systems. Moreover, from the various tools available in national law for easing the burden of proof, this proposal has chosen to use rebuttable presumptions as the least interventionist tool. Such presumptions are commonly found in national liability systems, and they balance the interests of claimants and defendants. At the same time they are designed to incentivise compliance with existing duties of care set at Union or national level. The proposal does not lead to a reversal of the burden of proof, to avoid exposing providers, operators and users of AI systems to higher liability risks, which may hamper innovation and reduce the uptake of AI-enabled products and services.
The second stage included in the proposal ensures that, when the effect of the first stage is assessed in terms of victim protection and the uptake of AI, future technological, regulatory and jurisprudential developments will be taken into account in re-assessing the need to harmonise other elements of claims for compensation or other tools related to liability claims, including for situations where strict liability would be more appropriate, as requested by the European Parliament. Such an assessment would also likely consider whether such harmonisation would need to be coupled with mandatory insurance to ensure effectiveness.
·Choice of instrument
A directive is the most suitable instrument for this proposal, as it provides the desired harmonisation effect and legal certainty, while also providing the flexibility to enable Member States to embed the harmonised measures without friction into their national liability regimes.
A mandatory instrument would prevent protection gaps stemming from partial or no implementation. While a non-binding instrument would be less intrusive, it is unlikely to address the identified problems in an effective manner. The implementation rate of non-binding instruments is difficult to predict and there is insufficient indication that the persuasive effect of a recommendation would be strong enough to produce consistent adaptation of national laws.
This effect is even more unlikely for measures concerning private law, of which non-contractual liability rules form part. This area is characterised by long-standing legal traditions, which makes Member States reluctant to pursue coordinated reform unless driven by the clear prospect of internal market benefits under a binding EU instrument or by the need to adapt to new technologies in the digital economy.
The existing significant divergences between Member States’ liability frameworks are another reason why a recommendation is unlikely to be implemented in a consistent manner.
3.RESULTS OF EX POST EVALUATIONS, STAKEHOLDER CONSULTATIONS AND IMPACT ASSESSMENTS
·Stakeholder consultations
An extensive consultation strategy was implemented to ensure a wide participation of stakeholders throughout the policy cycle of this proposal. The consultation strategy was based on both public and several targeted consultations (webinars, bilateral discussions with companies and various organisations).
After the initial questions on liability which were part of the public consultation on the White Paper on AI and the Commission report on safety and liability, a dedicated online public consultation was open from 18 October 2021 to 10 January 2022 to gather views from a wide variety of stakeholders, including consumers, civil society organisations, industry associations, businesses, including SMEs, and public authorities. After analysing all the responses received, the Commission published a summary outcome and the individual responses on its website.
In total, 233 responses were received from respondents from 21 Member States, as well as from third countries. Overall, the majority of stakeholders confirmed the problems with burden of proof, legal uncertainty and fragmentation and supported action at EU level.
EU citizens, consumer organisations and academic institutions overwhelmingly confirmed the need for EU action to ease victims’ problems with the burden of proof. Businesses, while recognising the negative effects of the uncertainty around the application of liability rules, were more cautious and asked for targeted measures to avoid limiting innovation.
A similar picture appeared regarding the policy options. EU citizens, consumer organisations and academic institutions strongly supported measures on the burden of proof and harmonising no-fault liability (referred to as ‘strict liability’) coupled with mandatory insurance. Businesses were more divided on the policy options, with differences depending in part on their size. Strict liability was considered disproportionate by the majority of business respondents. Harmonisation of the easing of the burden of proof gained more support, particularly among SMEs. However, businesses cautioned against a complete shift of the burden of proof.
Therefore, the preferred policy option was developed and refined in light of feedback received from stakeholders throughout the impact assessment process to strike a balance between the needs expressed and concerns raised by all relevant stakeholder groups.
·Collection and use of expertise
The proposal builds on 4 years of analysis and close involvement of stakeholders, including academics, businesses, consumer associations, Member States and citizens. The preparatory work started in 2018 with the setting up of the Expert Group on Liability and New Technologies (New Technologies Formation). The Expert Group produced a Report in November 2019 that assessed the challenges some characteristics of AI pose to national civil liability rules.
The input from the Expert Group report was complemented by three additional external studies:
–a comparative law study based on a comparative legal analysis of European tort laws focused on key AI-related issues;
–a behavioural economics study on the impacts of targeted adaptations of the liability regime on consumers’ decision making, in particular their trust and willingness to take up AI-enabled products and services;
–an economic study covering a number of issues: the challenges faced by victims of AI applications compared to victims of non-AI devices when trying to obtain compensation for their loss; whether and to what extent businesses are uncertain about the application of current liability rules to their operations involving AI, and whether the impact of legal uncertainty can hamper investment in AI; whether further fragmentation of national liability laws would reduce the effectiveness of the internal market for AI applications and services, and whether and to what extent harmonising certain aspects of national civil liability via EU legislation would reduce these problems and facilitate the overall uptake of AI technology by EU companies.
·Impact assessment
In line with its “Better Regulation” policy, the Commission conducted an impact assessment for this proposal, which was examined by the Commission’s Regulatory Scrutiny Board. The meeting of the Regulatory Scrutiny Board on 6 April 2022 led to a positive opinion with comments.
Three policy options were assessed:
Policy option 1: three measures to ease the burden of proof for victims trying to prove their liability claim.
Policy option 2: the measures under option 1 + harmonising strict liability rules for AI use cases with a particular risk profile, coupled with mandatory insurance.
Policy option 3: a staged approach consisting of:
–a first stage: the measures under option 1;
–a second stage: a review mechanism to re-assess, in particular, the need for harmonising strict liability for AI use cases with a particular risk profile (possibly coupled with mandatory insurance).
The policy options were compared by way of a multi-criteria analysis taking into account their effectiveness, efficiency, coherence and proportionality. The results of the multi-criteria and sensitivity analysis show that policy option 3, easing the burden of proof for AI-related claims + targeted review regarding strict liability, possibly coupled with mandatory insurance, ranks highest and is therefore the preferred policy choice for this proposal.
The preferred policy option would ensure that victims of AI-enabled products and services (natural persons, businesses and any other public or private entities) are no less protected than victims of traditional technologies. It would increase the level of trust in AI and promote its uptake.
Furthermore, it would reduce legal uncertainty and prevent fragmentation, thus helping companies, and most of all SMEs, that want to realise the full potential of the EU single market by rolling out AI-enabled products and services cross-border. The preferred policy option also creates better conditions for insurers to offer coverage of AI-related activities, which is crucial for businesses, especially SMEs, to manage their risks. Specifically, it is estimated that the preferred policy option would generate an increased AI market value in the EU-27 of between ca. EUR 500mln and ca. EUR 1.1bln in 2025.
·Fundamental rights
One of the most important functions of civil liability rules is to ensure that victims of damage can claim compensation. By guaranteeing effective compensation, these rules contribute to the protection of the right to an effective remedy and a fair trial (Article 47 of the EU Charter of Fundamental Rights, referred to below as 'the Charter') while also giving potentially liable persons an incentive to prevent damage, in order to avoid liability.
With this proposal, the Commission aims to ensure that victims of damage caused by AI have an equivalent level of protection under civil liability rules as victims of damage caused without the involvement of AI. The proposal will enable effective private enforcement of fundamental rights and preserve the right to an effective remedy where AI-specific risks have materialised. In particular, the proposal will help protect fundamental rights, such as the right to life (Article 2 of the Charter), the right to the physical and mental integrity (Article 3), and the right to property (Article 17). In addition, depending on each Member State’s civil law system and traditions, victims will be able to claim compensation for damage to other legal interests, such as violations of personal dignity (Articles 1 and 4 of the Charter), respect for private and family life (Article 7), the right to equality (Article 20) and non-discrimination (Article 21).
In addition, this proposal complements other strands in the Commission’s AI policy based on preventive regulatory and supervisory requirements aimed directly at avoiding fundamental rights breaches (such as discrimination). These are the AI Act, the General Data Protection Regulation, the Digital Services Act and EU law on non-discrimination and equal treatment. At the same time, this proposal does not create or harmonise the duties of care or the liability of various entities whose activity is regulated under those legal acts and, therefore, does not create new liability claims or affect the exemptions from liability under those other legal acts. This proposal only introduces alleviations of the burden of proof for the victims of damage caused by AI systems in claims that can be based on national law or on these other EU laws. By complementing these other strands, this proposal protects the victim's right to compensation under private law, including compensation for fundamental rights breaches.
4.BUDGETARY IMPLICATIONS
This proposal will not have implications for the budget of the European Union.
5.OTHER ELEMENTS
·Implementation plans and monitoring, evaluation, monitoring programme and targeted review
This proposal puts forward a staged approach. To ensure that sufficient evidence is available for the targeted review in the second stage, the Commission will draw up a monitoring plan, detailing how and how often data and other necessary evidence will be collected.
The monitoring mechanism could cover the following types of data and evidence:
–reporting and information sharing by Member States regarding the application of measures to ease the burden of proof in national judicial or out-of-court settlement procedures;
–information collected by the Commission or market surveillance authorities under the AI Act (in particular Article 62) or other relevant instruments;
–information and analyses supporting the evaluation of the AI Act and the reports to be prepared by the Commission on implementation of that Act;
–information and analyses supporting the assessment of relevant future policy measures under the ‘old approach’ safety legislation to ensure that products placed on the Union market meet high health, safety and environmental requirements;
–information and analyses supporting the Commission’s report on the application of the Motor Insurance Directive to technological developments (in particular autonomous and semi-autonomous vehicles) pursuant to its Article 28c(2)(a).
·Detailed explanation of the specific provisions in the proposal
1.Subject matter and scope (Article 1)
The purpose of this Directive is to improve the functioning of the internal market by laying down uniform requirements for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems. It follows up on the European Parliament’s Resolution 2020/2014(INL) and adapts private law to the needs of the transition to the digital economy.
The choice of suitable legal tools is limited, given the nature of the burden-of-proof issue and the specific characteristics of AI that pose a problem for existing liability rules. In this respect, this Directive eases the burden of proof in a very targeted and proportionate manner through the use of disclosure and rebuttable presumptions. It gives those seeking compensation for damage the possibility to obtain information on high-risk AI systems that is to be recorded or documented pursuant to the AI Act. In addition, the rebuttable presumptions will give those seeking compensation for damage caused by AI systems a more reasonable burden of proof and a chance to succeed with justified liability claims.
Such tools are not new; they can be found in national legislative systems. Hence, these national tools constitute helpful reference points on how to address the issues raised by AI for existing liability rules in a way which interferes as little as possible with the different national legal regimes.
In addition, when asked about more far-reaching changes such as a reversal of the burden of proof or an irrebuttable presumption, businesses provided negative feedback in consultations. Targeted measures to ease the burden of proof in form of rebuttable presumptions were chosen as pragmatic and appropriate ways to help victims meet their burden of proof in the most targeted and proportionate manner possible.
Article 1 indicates the subject matter and scope of this Directive: it applies to non-contractual civil law claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes, that is, regimes that provide for a statutory responsibility to compensate for damage caused intentionally or by a negligent act or omission. The measures provided in this Directive can fit without friction into existing civil liability systems, since they reflect an approach that does not touch on the definition of fundamental concepts like ‘fault’ or ‘damage’, given that the meaning of those concepts varies considerably across the Member States. Thus, beyond the presumptions it establishes, this Directive does not affect Union or national rules determining, for instance, which party has the burden of proof, what degree of certainty is required as regards the standard of proof, or how fault is defined.
In addition, this Directive does not affect existing rules regulating the conditions of liability in the transport sector and those set by the Digital Services Act.
While this Directive does not apply with respect to criminal liability, it may be applicable with respect to state liability. State authorities are also covered by the provisions of the AI Act as subjects of the obligations prescribed therein.
This Directive does not apply retroactively, but only to claims for compensation of damages that occur as from the date of its transposition.
The proposal for this Directive has been adopted together with the proposal for a revision of the Product Liability Directive 85/374/EEC, in a package aiming to adapt liability rules to the digital age and AI, ensuring the necessary alignment between these two complementary legal instruments.
2.Definitions (Article 2)
The definitions in Article 2 follow those of the AI Act to ensure consistency.
Article 2(6)(b) provides that claims for damages can be brought not only by the injured person but also by persons that have succeeded in or have been subrogated into the injured person’s rights. Subrogation is the assumption by a third party (such as an insurance company) of another party’s legal right to collect a debt or damages. Thus one person is entitled to enforce the rights of another for their own benefit. Subrogation would also cover heirs of a deceased victim.
In addition, Article 2(6)(c) provides that an action for damages can also be brought by someone acting on behalf of one or more injured parties, in accordance with Union or national law. This provision aims to give more possibilities to persons injured by an AI system to have their claims assessed by a court, even in cases where individual actions may seem too costly or too cumbersome to bring, or where joint actions may entail a benefit of scale. To enable victims of damage caused by AI systems to enforce their rights in relation to this Directive through representative actions, Article 6 amends Annex I to Directive (EU) 2020/1828.
3.Disclosure of evidence (Article 3)
This Directive aims to provide persons seeking compensation for damage caused by high-risk AI systems with effective means to identify potentially liable persons and relevant evidence for a claim. At the same time, such means serve to exclude falsely identified potential defendants, saving time and costs for the parties involved and reducing the case load for courts.
In this respect, Article 3(1) of the Directive provides that a court may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. Requests for evidence are addressed to the provider of an AI system, a person who is subject to the provider’s obligations laid down by Article 24 or Article 28(1) of the AI Act, or a user pursuant to the AI Act. The requests should be supported by facts and evidence sufficient to establish the plausibility of the contemplated claim for damages, and the requested evidence should be at the addressees’ disposal. Requests cannot be addressed to parties that bear no obligations under the AI Act and therefore have no access to the evidence.
According to Article 3(2), the claimant can request the disclosure of evidence from providers or users that are not defendants only if all proportionate attempts to gather the evidence from the defendant have been unsuccessful.
In order for the judicial means to be effective, Article 3(3) of the Directive provides that a court may also order the preservation of such evidence.
As provided in Article 3(4), first subparagraph, the court may order such disclosure only to the extent necessary to sustain the claim, given that the information could be critical evidence to the injured person’s claim in the case of damage that involves AI systems.
By limiting the obligation to disclose or preserve evidence to what is necessary and proportionate, Article 3(4), first subparagraph, aims to ensure proportionality in disclosing evidence, i.e. to limit the disclosure to the necessary minimum and prevent blanket requests.
The second and third subparagraphs of Article 3(4) further aim to strike a balance between the claimant’s rights and the need to ensure that such disclosure would be subject to safeguards to protect the legitimate interests of all parties concerned, such as trade secrets or confidential information.
In the same context, the fourth subparagraph of Article 3(4) aims to ensure that procedural remedies against the order of disclosure or preservation are at the disposal of the person subject to it.
Article 3(5) introduces a presumption of non-compliance with a duty of care. This is a procedural tool, relevant only in cases where it is the actual defendant in a claim for damages who bears the consequences of not complying with a request to disclose or preserve evidence. The defendant will have the right to rebut that presumption. The measure set out in this paragraph aims to promote disclosure but also to expedite court proceedings.
4.Presumption of causal link in the case of fault (Article 4)
With respect to damage caused by AI systems, this Directive aims to provide an effective basis for claiming compensation in connection with fault consisting in the failure to comply with a duty of care under Union or national law.
It can be challenging for claimants to establish a causal link between such non-compliance and the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the relevant damage. Therefore, a targeted rebuttable presumption of causality has been laid down in Article 4 (1) regarding this causal link. Such presumption is the least burdensome measure to address the need for fair compensation of the victim.
The fault of the defendant has to be proven by the claimant according to the applicable Union or national rules. Such fault can be established, for example, for non-compliance with a duty of care pursuant to the AI Act or pursuant to other rules set at Union level, such as those regulating the use of automated monitoring and decision-making for platform work or those regulating the operation of unmanned aircraft. Such fault can also be presumed by the court on the basis of a non-compliance with a court order for disclosure or preservation of evidence under Article 3(5). Still, it is only appropriate to introduce a presumption of causality when it can be considered likely that the given fault has influenced the relevant AI system output or lack thereof, which can be assessed on the basis of the overall circumstances of the case. At the same time, the claimant still has to prove that the AI system (i.e. its output or failure to produce one) gave rise to the damage.
Paragraphs (2) and (3) differentiate between, on the one hand, claims brought against the provider of a high-risk AI system or against a person subject to the provider’s obligations under the AI Act and, on the other hand, claims brought against the user of such systems. In this respect, the Directive follows the respective provisions and relevant conditions of the AI Act. In the case of claims based on Article 4(2), the defendants’ compliance with the obligations listed in that paragraph has to be assessed also in the light of the risk management system and its results, i.e. risk management measures, under the AI Act.
In case of high-risk AI systems as defined by the AI Act, Article 4(4) establishes an exception from the presumption of causality, where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. This possibility can incentivise defendants to comply with their disclosure obligations, with measures set by the AI Act to ensure a high level of transparency of the AI or with documenting and recording requirements.
In the case of non-high risk AI systems, Article 4(5) establishes a condition for the applicability of the presumption of causality, whereby the latter is subject to the court determining that it is excessively difficult for the claimant to prove the causal link. Such difficulties are to be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the ability of the claimant to prove the causal link between the fault of the defendant and the AI output.
In cases where the defendant uses the AI system in the course of a personal non-professional activity, Article 4(6) provides that the presumption of causality should only apply if the defendant has materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so. This condition is justified by the need to balance the interests of injured persons and non-professional users, by exempting from the application of the presumption of causality the cases in which non-professional users do not add risk through their behaviour.
Finally, Article 4(7) provides that the defendant has the right to rebut the causality presumption based on Article 4(1).
Such effective civil liability rules have the additional advantage that they give all those involved in activities related to AI systems an additional incentive to respect their obligations regarding their expected conduct.
5.Evaluation and targeted review (Article 5)
Various national legal systems provide for different strict liability regimes. Elements for such a regime at Union level were also suggested by the European Parliament in its own-initiative resolution of 20 October 2020, consisting of a limited strict liability regime for certain AI-enabled technologies and a facilitated burden of proof under fault-based liability rules. The public consultations also highlighted a preference for such a regime among respondents (except for non-SME businesses), whether or not coupled with mandatory insurance.
However, the proposal takes into account the differences between national legal traditions and the fact that the kind of products and services equipped with AI systems that could affect the public at large and put at risk important legal rights, such as the right to life, health and property, and therefore could be subject to a strict liability regime, are not yet widely available on the market.
A monitoring programme is put in place to provide the Commission with information on incidents involving AI systems. The targeted review will assess whether additional measures would be needed, such as introducing a strict liability regime and/or mandatory insurance.
6.Transposition (Article 7)
When notifying the Commission of national transposition measures to comply with this Directive, Member States should also provide explanatory documents which give sufficiently clear and precise information and state, for each provision of this Directive, the national provision(s) ensuring its transposition. This is necessary to enable the Commission to identify, for each provision of the Directive requiring transposition, the relevant part of national transposition measures creating the corresponding legal obligation in the national legal order, whatever the form chosen by the Member States.
2022/0303 (COD)
Proposal for a
DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on adapting non-contractual civil liability rules to artificial intelligence
(AI Liability Directive)
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee,
Having regard to the opinion of the Committee of the Regions,
Acting in accordance with the ordinary legislative procedure,
Whereas:
(1)Artificial Intelligence (‘AI’) is a set of enabling technologies which can contribute to a wide array of benefits across the entire spectrum of the economy and society. It has a large potential for technological progress and allows new business models in many sectors of the digital economy.
(2)At the same time, depending on the circumstances of its specific application and use, AI can generate risks and harm interests and rights that are protected by Union or national law. For instance, the use of AI can adversely affect a number of fundamental rights, including life, physical integrity and in respect to non-discrimination and equal treatment. Regulation (EU) …/… of the European Parliament and of the Council [the AI Act] provides for requirements intended to reduce risks to safety and fundamental rights, while other Union law instruments regulate general and sectoral product safety rules applicable also to AI-enabled machinery products and radio equipment. While such requirements intended to reduce risks to safety and fundamental rights are meant to prevent, monitor and address risks and thus address societal concerns, they do not provide individual relief to those that have suffered damage caused by AI. Existing requirements provide in particular for authorisations, checks, monitoring and administrative sanctions in relation to AI systems in order to prevent damage. They do not provide for compensation of the injured person for damage caused by an output or the failure to produce an output by an AI system.
(3)When an injured person seeks compensation for damage suffered, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (‘fault’) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage. However, when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof. In particular, it may be excessively difficult to prove that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake.
(4)In such cases, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services.
(5)To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems. Such adaptations should contribute to societal and consumer trust and thereby promote the roll-out of AI. Such adaptations should also maintain trust in the judicial system, by ensuring that victims of damage caused with the involvement of AI have the same effective compensation as victims of damage caused by other technologies.
(6)Interested stakeholders – injured persons suffering damage, potentially liable persons, insurers – face legal uncertainty as to how national courts, when confronted with the specific challenges of AI, might apply the existing liability rules in individual cases in order to achieve just results. In the absence of Union action, at least some Member States are likely to adapt their civil liability rules to address compensation gaps and legal uncertainty linked to the specific characteristics of certain AI systems. This would create legal fragmentation and internal market barriers for businesses that develop or provide innovative AI-enabled products or services. Small and medium-sized enterprises would be particularly affected.
(7)The purpose of this Directive is to contribute to the proper functioning of the internal market by harmonising certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system. This objective cannot be sufficiently achieved by the Member States because the relevant internal market obstacles are linked to the risk of unilateral and fragmented regulatory measures at national level. Given the digital nature of the products and services falling within the scope of this Directive, that risk is particularly relevant in a cross-border context.
(8)The objective of ensuring legal certainty and preventing compensation gaps in cases where AI systems are involved can thus be better achieved at Union level. Therefore, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Directive does not go beyond what is necessary in order to achieve that objective.
(9)It is therefore necessary to harmonise in a targeted manner specific aspects of fault-based liability rules at Union level. Such harmonisation should increase legal certainty and create a level playing field for AI systems, thereby improving the functioning of the internal market as regards the production and dissemination of AI-enabled products and services.
(10)To ensure proportionality, it is appropriate to harmonise in a targeted manner only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems. This Directive should not harmonise general aspects of civil liability which are regulated in different ways by national civil liability rules, such as the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability over multiple tortfeasors, contributory conduct, the calculation of damages or limitation periods.
(11)The laws of the Member States concerning the liability of producers for damage caused by the defectiveness of their products are already harmonised at Union level by Council Directive 85/374/EEC. Those laws do not, however, affect Member States’ rules of contractual or non-contractual liability, such as warranty, fault or strict liability, based on grounds other than the defect of the product. At the same time, the revision of Council Directive 85/374/EEC seeks to clarify and ensure that an injured person can claim compensation for damage caused by defective AI-enabled products; it should therefore be clarified that the provisions of this Directive do not affect any rights which an injured person may have under national rules implementing Directive 85/374/EEC. In addition, in the field of transport, Union law regulating the liability of transport operators should remain unaffected by this Directive.
(12)[The Digital Services Act (DSA)] fully harmonises the rules applicable to providers of intermediary services in the internal market, covering the societal risks stemming from the services offered by those providers, including as regards the AI systems they use. This Directive does not affect the provisions of [the Digital Services Act (DSA)] that provide a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision-making by hosting service providers, including the exemption from liability for the dissemination of illegal content uploaded by recipients of their services where the conditions of that Regulation are met.
(13)Other than in respect of the presumptions it lays down, this Directive does not harmonise national laws regarding which party has the burden of proof or which degree of certainty is required as regards the standard of proof.
(14)This Directive should follow a minimum harmonisation approach. Such an approach allows claimants in cases of damage caused by AI systems to invoke more favourable rules of national law. Thus, national laws could, for example, maintain reversals of the burden of proof under national fault-based regimes, or national no-fault liability (referred to as ‘strict liability’) regimes of which there are already a large variety in national laws, possibly applying to damage caused by AI systems.
(15)Consistency with [the AI Act] should also be ensured. It is therefore appropriate for this Directive to use the same definitions in respect of AI systems, providers and users. In addition, this Directive should only cover claims for damages when the damage is caused by an output or the failure to produce an output by an AI system through the fault of a person, for example the provider or the user under [the AI Act]. There is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor. In the latter case, it is possible to trace back the damage to a human act or omission, as the AI system output is not interposed between the human act or omission and the damage, so that establishing causality is no more difficult than in situations where an AI system is not involved.
(16)Access to information about specific high-risk AI systems that are suspected of having caused damage is an important factor to ascertain whether to claim compensation and to substantiate claims for compensation. Moreover, for high-risk AI systems, [the AI Act] provides for specific documentation, information and logging requirements, but does not provide a right to the injured person to access that information. It is therefore appropriate to lay down rules on the disclosure of relevant evidence by those that have it at their disposal, for the purposes of establishing liability. This should also provide an additional incentive to comply with the relevant requirements laid down in [the AI Act] to document or record the relevant information.
(17)The large number of people usually involved in the design, development, deployment and operation of high-risk AI systems makes it difficult for injured persons to identify the person potentially liable for damage caused and to prove the conditions for a claim for damages. To allow injured persons to ascertain whether a claim for damages is well-founded, it is appropriate to grant potential claimants a right to request a court to order the disclosure of relevant evidence before submitting a claim for damages. Such disclosure should only be ordered where the potential claimant presents facts and information sufficient to support the plausibility of a claim for damages and has made a prior request, which was refused, to the provider, the person subject to the obligations of a provider or the user to disclose such evidence at their disposal about specific high-risk AI systems that are suspected of having caused damage. Ordering disclosure should lead to a reduction of unnecessary litigation and avoid costs for the possible litigants caused by claims which are unjustified or likely to be unsuccessful. The refusal of the provider, the person subject to the obligations of a provider or the user, prior to the request to the court, to disclose evidence should not trigger the presumption of non-compliance with relevant duties of care by the person who refuses such disclosure.
(18)The limitation of disclosure of evidence as regards high-risk AI systems is consistent with [the AI Act], which provides certain specific documentation, record keeping and information obligations for operators involved in the design, development and deployment of high-risk AI systems. Such consistency also ensures the necessary proportionality by avoiding that operators of AI systems posing lower or no risk would be expected to document information to a level similar to that required for high-risk AI systems under [the AI Act].
(19)National courts should be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to [the AI Act], be they providers, persons under the same obligations as providers, or users of an AI system, either as defendants or third parties to the claim. There could be situations where the evidence relevant for the case is held by entities that would not be parties to the claim for damages but which are under an obligation to document or record such evidence pursuant to [the AI Act]. It is thus necessary to provide for the conditions under which such third parties to the claim can be ordered to disclose the relevant evidence.
(20)To maintain the balance between the interests of the parties involved in the claim for damages and of third parties concerned, the courts should order the disclosure of evidence only where this is necessary and proportionate for supporting the claim or potential claim for damages. In this respect, disclosure should only concern evidence that is necessary for a decision on the respective claim for damages, for example only the parts of the relevant records or data sets required to prove non-compliance with a requirement laid down by [the AI Act]. To ensure the proportionality of such disclosure or preservation measures, national courts should have effective means to safeguard the legitimate interests of all parties involved, for instance the protection of trade secrets within the meaning of Directive (EU) 2016/943 of the European Parliament and of the Council and of confidential information, such as information related to public or national security. In respect of trade secrets or alleged trade secrets which the court has identified as confidential within the meaning of Directive (EU) 2016/943, national courts should be empowered to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, while achieving a fair and proportionate balance between the trade-secret holder's interest in maintaining secrecy and the interest of the injured person. This should include measures to restrict access to documents containing trade secrets and access to hearings or documents and transcripts thereof to a limited number of people. When deciding on such measures, national courts should take into account the need to ensure the right to an effective remedy and to a fair trial, the legitimate interests of the parties and, where appropriate, of third parties, and any potential harm to either party or, where appropriate, to third parties, resulting from the granting or rejection of such measures. Moreover, to ensure a proportionate application of a disclosure measure towards third parties in claims for damages, the national courts should order disclosure from third parties only if the evidence cannot be obtained from the defendant.
(21)While national courts have the means of enforcing their orders for disclosure through various measures, any such enforcement measures could delay claims for damages and thus potentially create additional expenses for the litigants. For injured persons, such delays and additional expenses may make their recourse to an effective judicial remedy more difficult. Therefore, where a defendant in a claim for damages fails to disclose evidence at its disposal ordered by a court, it is appropriate to lay down a presumption of non-compliance with those duties of care which that evidence was intended to prove. This rebuttable presumption will reduce the duration of litigation and facilitate more efficient court proceedings. The defendant should be able to rebut that presumption by submitting evidence to the contrary.
(22)In order to address the difficulties of proving that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake, it is appropriate to provide, under certain conditions, for a presumption of causality. While in a fault-based claim the claimant usually has to prove the damage, the human act or omission constituting fault of the defendant and the causality link between the two, this Directive does not harmonise the conditions under which national courts establish fault. They remain governed by the applicable national law and, where harmonised, by applicable Union law. Similarly, this Directive does not harmonise the conditions related to the damage, for instance what damages are compensable, which are also regulated by applicable national and Union law. For the presumption of causality under this Directive to apply, the fault of the defendant should be established as a human act or omission which does not meet a duty of care under Union law or national law that is directly intended to protect against the damage that occurred. Thus, this presumption can apply, for example, in a claim for damages for physical injury when the court establishes the fault of the defendant for non-complying with the instructions of use which are meant to prevent harm to natural persons. Non-compliance with duties of care that were not directly intended to protect against the damage that occurred does not lead to the application of the presumption; for example, a provider’s failure to file required documentation with competent authorities would not lead to the application of the presumption in claims for damages due to physical injury. It should also be necessary to establish that it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output. Finally, the claimant should still be required to prove that the output or failure to produce an output gave rise to the damage.
(23)Such a fault can be established in respect of non-compliance with Union rules which specifically regulate high-risk AI systems, like the requirements introduced for certain high-risk AI systems by [the AI Act], requirements which may be introduced by future sectoral legislation for other high-risk AI systems according to [Article 2(2) of the AI Act], or duties of care which are linked to certain activities and which are applicable irrespective of whether AI is used for that activity. At the same time, this Directive neither creates nor harmonises the requirements or the liability of entities whose activity is regulated under those legal acts, and therefore does not create new liability claims. Establishing a breach of such a requirement that amounts to fault will be done according to the provisions of those applicable rules of Union law, since this Directive neither introduces new requirements nor affects existing requirements. For example, the exemption of liability for providers of intermediary services and the due diligence obligations to which they are subject pursuant to [the Digital Services Act] are not affected by this Directive. Similarly, the compliance with requirements imposed on online platforms to avoid unauthorised communication to the public of copyright protected works is to be established under Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and other relevant Union copyright law.
(24)In areas not harmonised by Union law, national law continues to apply and fault is established under the applicable national law. All national liability regimes have duties of care, taking as a standard of conduct different expressions of the principle of how a reasonable person should act, which also ensure the safe operation of AI systems in order to prevent damage to recognised legal interests. Such duties of care could for instance require users of AI systems to choose for certain tasks a particular AI system with concrete characteristics or to exclude certain segments of a population from being exposed to a particular AI system. National law can also introduce specific obligations meant to prevent risks for certain activities, which are applicable irrespective of whether AI is used for that activity, for example traffic rules, or obligations specifically designed for AI systems, such as additional national requirements for users of high-risk AI systems pursuant to Article 29(2) of [the AI Act]. This Directive neither introduces such requirements nor affects the conditions for establishing fault in case of breach of such requirements.
(25)Even when fault consisting of a non-compliance with a duty of care directly intended to protect against the damage that occurred is established, not every fault should lead to the application of the rebuttable presumption linking it to the output of the AI. Such a presumption should only apply when it can be considered reasonably likely, from the circumstances in which the damage occurred, that such fault has influenced the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the damage. It can be for example considered reasonably likely that the fault has influenced the output or failure to produce an output, when that fault consists in breaching a duty of care in respect of limiting the perimeter of operation of the AI system and the damage occurred outside the perimeter of operation. On the contrary, a breach of a requirement to file certain documents or to register with a given authority, even though this might be foreseen for that particular activity or even be applicable expressly to the operation of an AI system, could not be considered as reasonably likely to have influenced the output produced by the AI system or the failure of the AI system to produce an output.
(26)This Directive covers the fault constituting non-compliance with certain listed requirements laid down in Chapters 2 and 3 of [the AI Act] for providers and users of high-risk AI systems, the non-compliance with which can lead, under certain conditions, to a presumption of causality. The AI Act provides for full harmonisation of requirements for AI systems, unless otherwise explicitly laid down therein. It harmonises the specific requirements for high-risk AI systems. Hence, for the purposes of claims for damages in which a presumption of causality according to this Directive is applied, the potential fault of providers or persons subject to the obligations of a provider pursuant to [the AI Act] is established only through a non-compliance with such requirements. Given that in practice it may be difficult for the claimant to prove such non-compliance when the defendant is a provider of the AI system, and in full consistency with the logic of [the AI Act], this Directive should also provide that the steps undertaken by the provider within the risk management system and the results of the risk management system, i.e. the decision to adopt or not to adopt certain risk management measures, should be taken into account in the determination of whether the provider has complied with the relevant requirements under the AI Act referred to in this Directive. The risk management system put in place by the provider pursuant to [the AI Act] is a continuous iterative process run throughout the lifecycle of the high-risk AI system, whereby the provider ensures compliance with mandatory requirements meant to mitigate risks and can, therefore, be a useful element for the purpose of the assessment of this compliance. This Directive also covers the cases of users’ fault, when this fault consists in non-compliance with certain specific requirements set by [the AI Act]. In addition, the fault of users of high-risk AI systems may be established following non-compliance with other duties of care laid down in Union or national law, in light of Article 29 (2) of [the AI Act].
(27)While the specific characteristics of certain AI systems, like autonomy and opacity, could make it excessively difficult for the claimant to meet the burden of proof, there could be situations where such difficulties do not exist because there could be sufficient evidence and expertise available to the claimant to prove the causal link. This could be the case, for example, in respect of high-risk AI systems where the claimant could reasonably access sufficient evidence and expertise through documentation and logging requirements pursuant to [the AI Act]. In such situations, the court should not apply the presumption.
(28)The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant. For example, such difficulties could be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the ability of the claimant to prove the causal link between the fault of the defendant and the AI output. A national court should apply the presumption where the claimant is in an excessively difficult position to prove causation, since the claimant would otherwise be required to explain how the AI system was led by the human act or omission that constitutes fault to produce the output or the failure to produce an output which gave rise to the damage. However, the claimant should neither be required to explain the characteristics of the AI system concerned nor how these characteristics make it harder to establish the causal link.
(29)The application of the presumption of causality is meant to ensure for the injured person a similar level of protection as for situations where AI is not involved and where causality may therefore be easier to prove. Nevertheless, alleviating the burden of proving causation is not always appropriate under this Directive where the defendant is not a professional user but rather a person using the AI system for their private activities. In such circumstances, in order to balance interests between the injured person and the non-professional user, it needs to be taken into account whether such non-professional users can add to the risk of an AI system causing damage through their behaviour. If the provider of an AI system has complied with all its obligations and, in consequence, that system was deemed sufficiently safe to be put on the market for a given use by non-professional users and it is then used for that task, a presumption of causality should not apply for the simple launch of the operation of such a system by such non-professional users. A non-professional user who buys an AI system and simply launches it according to its purpose, without interfering materially with the conditions of operations, should not be covered by the causality presumption laid down by this Directive. However, if a national court determines that a non-professional user materially interfered with the conditions of operation of an AI system or was required and able to determine the conditions of operation of the AI system and failed to do so, then the presumption of causality should apply, where all the other conditions are fulfilled. This could be the case, for example, when the non-professional user does not comply with the instructions of use or with other applicable duties of care when choosing the area of operation or when setting performance conditions of the AI system. This is without prejudice to the fact that the provider should determine the intended purpose of an AI system, including the specific context and conditions of use, and eliminate or minimise the risks of that system as appropriate at the time of the design and development, taking into account the knowledge and expertise of the intended user.
(30)Since this Directive introduces a rebuttable presumption, the defendant should be able to rebut it, in particular by showing that its fault could not have caused the damage.
(31)It is necessary to provide for a review of this Directive [five years] after the end of the transposition period. In particular, that review should examine whether there is a need to create no-fault liability rules for claims against the operator, in so far as these are not already covered by other Union liability rules, in particular Directive 85/374/EEC, combined with a mandatory insurance for the operation of certain AI systems, as suggested by the European Parliament. In accordance with the principle of proportionality, it is appropriate to assess such a need in the light of relevant technological and regulatory developments in the coming years, taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs. Such a review should consider, among other things, risks involving damage to important legal values like life, health and property of unwitting third parties through the operation of AI-enabled products or services. That review should also analyse the effectiveness of the measures provided for in this Directive in dealing with such risks, as well as the development of appropriate solutions by the insurance market. To ensure the availability of the information necessary to conduct such a review, it is necessary to collect data and other necessary evidence covering the relevant matters.
(32)Given the need to make adaptations to national civil liability and procedural rules to foster the rolling-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance and consumer trust in AI technology and the justice system, it is appropriate to set a deadline of not later than [two years after the entry into force] of this Directive for Member States to adopt the necessary transposition measures.
(33)In accordance with the Joint Political Declaration of 28 September 2011 of Member States and the Commission on explanatory documents, Member States have undertaken to accompany, in justified cases, the notification of their transposition measures with one or more documents explaining the relationship between the components of a directive and the corresponding parts of national transposition instruments. With regard to this Directive, the legislator considers the transmission of such documents to be justified,
HAVE ADOPTED THIS DIRECTIVE:
Article 1
Subject matter and scope
1.This Directive lays down common rules on:
(a)the disclosure of evidence on high-risk artificial intelligence (AI) systems to enable a claimant to substantiate a non-contractual fault-based civil law claim for damages;
(b)the burden of proof in the case of non-contractual fault-based civil law claims brought before national courts for damages caused by an AI system.
2.This Directive applies to non-contractual fault-based civil law claims for damages, in cases where the damage caused by an AI system occurs after [the end of the transposition period].
This Directive does not apply to criminal liability.
3.This Directive shall not affect:
(a)rules of Union law regulating conditions of liability in the field of transport;
(b)any rights which an injured person may have under national rules implementing Directive 85/374/EEC;
(c)the exemptions from liability and the due diligence obligations as laid down in [the Digital Services Act] and
(d)national rules determining which party has the burden of proof, which degree of certainty is required as regards the standard of proof, or how fault is defined, other than in respect of what is provided for in Articles 3 and 4.
4.Member States may adopt or maintain national rules that are more favourable for claimants to substantiate a non-contractual civil law claim for damages caused by an AI system, provided such rules are compatible with Union law.
Article 2
Definitions
For the purposes of this Directive, the following definitions shall apply:
(1)‘AI system’ means an AI system as defined in [Article 3 (1) of the AI Act];
(2)‘high-risk AI system’ means an AI system referred to in [Article 6 of the AI Act];
(3)‘provider’ means a provider as defined in [Article 3 (2) of the AI Act];
(4)‘user’ means a user as defined in [Article 3 (4) of the AI Act];
(5)‘claim for damages’ means a non-contractual fault-based civil law claim for compensation of the damage caused by an output of an AI system or the failure of such a system to produce an output where such an output should have been produced;
(6)‘claimant’ means a person bringing a claim for damages that:
(a)has been injured by an output of an AI system or by the failure of such a system to produce an output where such an output should have been produced;
(b) has succeeded to or has been subrogated to the right of an injured person by virtue of law or contract; or
(c)is acting on behalf of one or more injured persons, in accordance with Union or national law.
(7)‘potential claimant’ means a natural or legal person who is considering but has not yet brought a claim for damages;
(8)‘defendant’ means the person against whom a claim for damages is brought;
(9)‘duty of care’ means a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognised at national or Union law level, including life, physical integrity, property and the protection of fundamental rights.
Article 3
Disclosure of evidence and rebuttable presumption of non-compliance
1.Member States shall ensure that national courts are empowered, either upon the request of a potential claimant who has previously asked a provider, a person subject to the obligations of a provider pursuant to [Article 24 or Article 28(1) of the AI Act] or a user to disclose relevant evidence at its disposal about a specific high-risk AI system that is suspected of having caused damage but was refused, or upon the request of a claimant, to order the disclosure of such evidence from those persons.
In support of that request, the potential claimant must present facts and evidence sufficient to support the plausibility of a claim for damages.
2.In the context of a claim for damages, the national court shall only order the disclosure of the evidence by one of the persons listed in paragraph 1, if the claimant has undertaken all proportionate attempts at gathering the relevant evidence from the defendant.
3.Member States shall ensure that national courts, upon the request of a claimant, are empowered to order specific measures to preserve the evidence mentioned in paragraph 1.
4.National courts shall limit the disclosure of evidence to that which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages.
In determining whether an order for the disclosure or preservation of evidence is proportionate, national courts shall consider the legitimate interests of all parties, including third parties concerned, in particular in relation to the protection of trade secrets within the meaning of Article 2(1) of Directive (EU) 2016/943 and of confidential information, such as information related to public or national security.
Member States shall ensure that, where the disclosure of a trade secret or alleged trade secret which the court has identified as confidential within the meaning of Article 9(1) of Directive (EU) 2016/943 is ordered, national courts are empowered, upon a duly reasoned request of a party or on their own initiative, to take specific measures necessary to preserve confidentiality when that evidence is used or referred to in legal proceedings.
Member States shall also ensure that the person ordered to disclose or to preserve the evidence mentioned in paragraphs 1 or 2 has appropriate procedural remedies in response to such orders.
5.Where a defendant fails to comply with an order by a national court in a claim for damages to disclose or to preserve evidence at its disposal pursuant to paragraphs 1 or 2, a national court shall presume the defendant’s non-compliance with a relevant duty of care, in particular in the circumstances referred to in Article 4(2) or (3), that the evidence requested was intended to prove for the purposes of the relevant claim for damages.
The defendant shall have the right to rebut that presumption.
Article 4
Rebuttable presumption of a causal link in the case of fault
1.Subject to the requirements laid down in this Article, national courts shall presume, for the purposes of applying liability rules to a claim for damages, the causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output, where all of the following conditions are met:
(a) the claimant has demonstrated or the court has presumed pursuant to Article 3(5), the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred;
(b)it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output;
(c)the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
2.In the case of a claim for damages against a provider of a high-risk AI system subject to the requirements laid down in chapters 2 and 3 of Title III of [the AI Act] or a person subject to the provider’s obligations pursuant to [Article 24 or Article 28(1) of the AI Act], the condition of paragraph 1 letter (a) shall be met only where the claimant has demonstrated that the provider or, where relevant, the person subject to the provider’s obligations, failed to comply with any of the following requirements laid down in those chapters, taking into account the steps undertaken in and the results of the risk management system pursuant to [Article 9 and Article 16 point (a) of the AI Act]:
(a) the AI system is a system which makes use of techniques involving the training of models with data and which was not developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in [Article 10(2) to (4) of the AI Act];
(b) the AI system was not designed and developed in a way that meets the transparency requirements laid down in [Article 13 of the AI Act];
(c)the AI system was not designed and developed in a way that allows for an effective oversight by natural persons during the period in which the AI system is in use pursuant to [Article 14 of the AI Act];
(d)the AI system was not designed and developed so as to achieve, in the light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity pursuant to [Article 15 and Article 16, point (a), of the AI Act]; or
(e)the necessary corrective actions were not immediately taken to bring the AI system in conformity with the obligations laid down in [Title III, Chapter 2 of the AI Act] or to withdraw or recall the system, as appropriate, pursuant to [Article 16, point (g), and Article 21 of the AI Act].
3.In the case of a claim for damages against a user of a high-risk AI system subject to the requirements laid down in chapters 2 and 3 of Title III of [the AI Act], the condition of paragraph 1 letter (a) shall be met where the claimant proves that the user:
(a)did not comply with its obligations to use or monitor the AI system in accordance with the accompanying instructions of use or, where appropriate, suspend or interrupt its use pursuant to [Article 29 of the AI Act]; or
(b)exposed the AI system to input data under its control which is not relevant in view of the system’s intended purpose pursuant to [Article 29(3) of the AI Act].
4.In the case of a claim for damages concerning a high-risk AI system, a national court shall not apply the presumption laid down in paragraph 1 where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link mentioned in paragraph 1.
5.In the case of a claim for damages concerning an AI system that is not a high-risk AI system, the presumption laid down in paragraph 1 shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link mentioned in paragraph 1.
6.In the case of a claim for damages against a defendant who used the AI system in the course of a personal, non-professional activity, the presumption laid down in paragraph 1 shall apply only where the defendant materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
7.The defendant shall have the right to rebut the presumption laid down in paragraph 1.
Article 5
Evaluation and targeted review
1.By [DATE five years after the end of the transposition period], the Commission shall review the application of this Directive and present a report to the European Parliament, to the Council and to the European Economic and Social Committee, accompanied, where appropriate, by a legislative proposal.
2.The report shall examine the effects of Articles 3 and 4 on achieving the objectives pursued by this Directive. In particular, it should evaluate the appropriateness of no-fault liability rules for claims against the operators of certain AI systems, in so far as these are not already covered by other Union liability rules, and the need for insurance coverage, while taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs.
3.The Commission shall establish a monitoring programme for preparing the report pursuant to paragraphs 1 and 2, setting out how and at what intervals the data and other necessary evidence will be collected. The programme shall specify the action to be taken by the Commission and by the Member States in collecting and analysing the data and other evidence. For the purposes of that programme, Member States shall communicate the relevant data and evidence to the Commission, by [31 December of the second full year following the end of the transposition period] and by the end of each subsequent year.
Article 6
Amendment to Directive (EU) 2020/1828
In Annex I to Directive (EU) 2020/1828, the following point (67) is added:
"(67) Directive (EU) …/… of the European Parliament and of the Council of … on adapting non contractual civil liability rules to artificial intelligence (AI Liability Directive) (OJ L …, …, p. …).".
Article 7
Transposition
1.Member States shall bring into force the laws, regulations and administrative provisions necessary to comply with this Directive by [two years after the entry into force] at the latest. They shall forthwith communicate to the Commission the text of those provisions.
When Member States adopt those provisions, they shall contain a reference to this Directive or be accompanied by such a reference on the occasion of their official publication. Member States shall determine how such reference is to be made.
2.Member States shall communicate to the Commission the text of the main provisions of national law which they adopt in the field covered by this Directive.
Article 8
Entry into force
This Directive shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
Article 9
Addressees
This Directive is addressed to the Member States.
Done at Brussels,
For the European Parliament
The President
For the Council
The President