Artificial intelligence co-regulation? The role of standards in the EU AI Act

Marta Cantero Gamito, Christopher T Marsden
{"title":"人工智能共同监管?标准在欧盟人工智能法案中的作用","authors":"Marta Cantero Gamito, Christopher T Marsden","doi":"10.1093/ijlit/eaae011","DOIUrl":null,"url":null,"abstract":"This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. It engages with the foundation of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, posing pressing questions about the potential risks this entails to the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations. The article scrutinizes how under the AI Act technical standards will define AI risks and mitigation measures and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risks in different high-risk contexts. More specifically, the article examines the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation. Consequently, in the article we underscore the imperative for continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments. Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging for approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we also draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and culture. In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.","PeriodicalId":44278,"journal":{"name":"International Journal of Law and Information Technology","volume":"14 1","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence co-regulation? The role of standards in the EU AI Act\",\"authors\":\"Marta Cantero Gamito, Christopher T Marsden\",\"doi\":\"10.1093/ijlit/eaae011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. 
It engages with the foundation of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, posing pressing questions about the potential risks this entails to the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations. The article scrutinizes how under the AI Act technical standards will define AI risks and mitigation measures and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risks in different high-risk contexts. More specifically, the article examines the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation. Consequently, in the article we underscore the imperative for continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments. Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging for approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we also draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and culture. 
In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.\",\"PeriodicalId\":44278,\"journal\":{\"name\":\"International Journal of Law and Information Technology\",\"volume\":\"14 1\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Law and Information Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/ijlit/eaae011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Law and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/ijlit/eaae011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. It engages with the foundation of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, posing pressing questions about the potential risks this entails to the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations. The article scrutinizes how under the AI Act technical standards will define AI risks and mitigation measures and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risks in different high-risk contexts. More specifically, the article examines the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation. Consequently, in the article we underscore the imperative for continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments. Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging for approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we also draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and culture. In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.
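To make concrete what the abstract problematizes, the following is a minimal, purely hypothetical sketch of what "standardizing thresholds of acceptable residual risk" could reduce to in practice. Nothing here comes from the AI Act or any harmonized standard; the contexts, numeric ceilings, and names (Context, ACCEPTABLE_RESIDUAL_RISK, RiskAssessment) are invented for illustration. The sketch also shows why the article treats the task as value-laden: each numeric ceiling below encodes a normative judgment about fundamental rights that, under the Act, a standards body rather than a legislature would be making.

```python
from dataclasses import dataclass
from enum import Enum


class Context(Enum):
    # Hypothetical high-risk contexts, loosely inspired by Annex III
    # of the AI Act; the Act itself sets no numeric thresholds.
    BIOMETRIC_ID = "biometric identification"
    CREDIT_SCORING = "credit scoring"
    MEDICAL_TRIAGE = "medical triage"


# Invented acceptable residual-risk ceilings per context. In reality,
# choosing these numbers is exactly the contested, value-laden exercise
# the article questions delegating to technical standard-setters.
ACCEPTABLE_RESIDUAL_RISK = {
    Context.BIOMETRIC_ID: 0.01,
    Context.CREDIT_SCORING: 0.05,
    Context.MEDICAL_TRIAGE: 0.001,
}


@dataclass
class RiskAssessment:
    context: Context
    initial_risk: float       # estimated probability of a rights-impacting failure
    mitigation_factor: float  # fraction of initial risk removed by mitigations (0..1)

    @property
    def residual_risk(self) -> float:
        # Residual risk is what remains after mitigation measures are applied.
        return self.initial_risk * (1.0 - self.mitigation_factor)

    def is_acceptable(self) -> bool:
        # Compare residual risk against the context-specific ceiling.
        return self.residual_risk <= ACCEPTABLE_RESIDUAL_RISK[self.context]


if __name__ == "__main__":
    assessment = RiskAssessment(
        Context.CREDIT_SCORING, initial_risk=0.2, mitigation_factor=0.8
    )
    print(f"Residual risk: {assessment.residual_risk:.3f}, "
          f"acceptable: {assessment.is_acceptable()}")
```

Running the example deems a credit-scoring system with 20% initial risk and 80% mitigation acceptable (residual 0.04, below the invented 0.05 ceiling), illustrating how a single multiplication and comparison could end up standing in for the nuanced, context-specific, stakeholder-inclusive judgment the article argues is actually required.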
Source journal
CiteScore: 2.10
Self-citation rate: 0.00%
Annual publications: 15
Journal description: The International Journal of Law and Information Technology provides cutting-edge and comprehensive analysis of Information Technology, Communications and Cyberspace law as well as the issues arising from applying Information and Communications Technologies (ICT) to legal practice. International in scope, this journal has become essential for legal and computing professionals and legal scholars of the law related to IT.
Latest articles in this journal
- Digital identity: an approach to its nature, concept, and functionalities
- Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive
- Quantum-safe global encryption policy
- Video-sharing-platforms and Brussels Ia regulation: navigating contractual jurisdictional challenges
- Artificial intelligence co-regulation? The role of standards in the EU AI Act