Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough

European Journal of Risk Regulation (IF 1.8, Q1, LAW) · Pub Date: 2023-08-18 · DOI: 10.1017/err.2023.57
Henry Fraser, José-Miguel Bello y Villarino
{"title":"欧洲拟议人工智能法案中的可接受风险:决定多少风险管理足够的合理性和其他原则","authors":"Henry Fraser, José-Miguel Bello y Villarino","doi":"10.1017/err.2023.57","DOIUrl":null,"url":null,"abstract":"Abstract This paper critically evaluates the European Commission’s proposed AI Act’s approach to risk management and risk acceptability for high-risk artificial intelligence systems that pose risks to fundamental rights and safety. The Act aims to promote “trustworthy” AI with a proportionate regulatory burden. Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated “as far as possible”, having regard for the “state of the art”. This criterion, especially if interpreted narrowly, is unworkable and promotes neither proportionate regulatory burden nor trustworthiness. By contrast, the Parliament’s most recent draft amendments to the risk management provisions introduce “reasonableness” and cost–benefit analyses and are more transparent regarding the value-laden and contextual nature of risk acceptability judgments. This paper argues that the Parliament’s approach is more workable and better balances the goals of proportionality and trustworthiness. It explains what reasonableness in risk acceptability judgments would entail, drawing on principles from negligence law and European medical devices regulation. It also contends that the approach to risk acceptability judgments needs a firm foundation of civic legitimacy, including detailed guidance or involvement from regulators and meaningful input from affected stakeholders.","PeriodicalId":46207,"journal":{"name":"European Journal of Risk Regulation","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough\",\"authors\":\"Henry Fraser, José-Miguel Bello y Villarino\",\"doi\":\"10.1017/err.2023.57\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract This paper critically evaluates the European Commission’s proposed AI Act’s approach to risk management and risk acceptability for high-risk artificial intelligence systems that pose risks to fundamental rights and safety. The Act aims to promote “trustworthy” AI with a proportionate regulatory burden. Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated “as far as possible”, having regard for the “state of the art”. This criterion, especially if interpreted narrowly, is unworkable and promotes neither proportionate regulatory burden nor trustworthiness. By contrast, the Parliament’s most recent draft amendments to the risk management provisions introduce “reasonableness” and cost–benefit analyses and are more transparent regarding the value-laden and contextual nature of risk acceptability judgments. This paper argues that the Parliament’s approach is more workable and better balances the goals of proportionality and trustworthiness. It explains what reasonableness in risk acceptability judgments would entail, drawing on principles from negligence law and European medical devices regulation. 
It also contends that the approach to risk acceptability judgments needs a firm foundation of civic legitimacy, including detailed guidance or involvement from regulators and meaningful input from affected stakeholders.\",\"PeriodicalId\":46207,\"journal\":{\"name\":\"European Journal of Risk Regulation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Risk Regulation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1017/err.2023.57\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Risk Regulation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/err.2023.57","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

This paper critically evaluates the European Commission’s proposed AI Act’s approach to risk management and risk acceptability for high-risk artificial intelligence systems that pose risks to fundamental rights and safety. The Act aims to promote “trustworthy” AI with a proportionate regulatory burden. Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated “as far as possible”, having regard for the “state of the art”. This criterion, especially if interpreted narrowly, is unworkable and promotes neither proportionate regulatory burden nor trustworthiness. By contrast, the Parliament’s most recent draft amendments to the risk management provisions introduce “reasonableness” and cost–benefit analyses and are more transparent regarding the value-laden and contextual nature of risk acceptability judgments. This paper argues that the Parliament’s approach is more workable and better balances the goals of proportionality and trustworthiness. It explains what reasonableness in risk acceptability judgments would entail, drawing on principles from negligence law and European medical devices regulation. It also contends that the approach to risk acceptability judgments needs a firm foundation of civic legitimacy, including detailed guidance or involvement from regulators and meaningful input from affected stakeholders.
Source journal
CiteScore: 6.10
Self-citation rate: 0.00%
Articles published: 34
About the journal: European Journal of Risk Regulation is an interdisciplinary forum bringing together legal practitioners, academics, risk analysts and policymakers in a dialogue on how risks to individuals’ health, safety and the environment are regulated across policy domains globally. The journal’s wide scope encourages exploration of public health, safety and environmental aspects of pharmaceuticals, food and other consumer products alongside a wider interpretation of risk, which includes financial regulation, technology-related risks, natural disasters and terrorism.
Latest articles in this journal
Management and Enforcement Theories for Compliance with the Rule of Law
A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
Standards for Including Scientific Evidence in Restrictions on Freedom of Movement: The Case of EU Covid Certificates Scheme
Collaborative Governance Structures for Interoperability in the EU’s new data acts
Dangerous Legacy of Food Contact Materials on the EU Market: Recall of Products Containing PFAS