The EU Regulatory approach(es) to AI liability, and its Application to the financial services market

Computer Law & Security Review | IF 3.3 | JCR Q1 (LAW) | CAS Tier 3 (Sociology) | Pub Date: 2024-05-31 | DOI: 10.1016/j.clsr.2024.105984
Maria Lillà Montagnani , Marie-Claire Najjar , Antonio Davola
Citations: 0

Abstract

The continued progress of Artificial Intelligence (AI) can benefit different aspects of society and various fields of the economy, yet it poses crucial risks both to those who offer such technologies and to those who use them. These risks are amplified by the unpredictability of developments in AI technology (such as the increased autonomy of self-learning systems), which makes it even more difficult to build a comprehensive legal framework accounting for all potential legal and ethical issues arising from the use of AI. Enforcement authorities therefore face increasing difficulties in checking compliance with applicable legislation and in assessing liability, due to the specific features of AI – namely complexity, opacity, autonomy, unpredictability, openness, data-drivenness, and vulnerability. These problems are particularly significant in areas such as financial markets, where the consequences of malfunctioning AI systems are likely to have a major impact both on the protection of individuals and on overall market stability. This scenario challenges policymaking in an increasingly digital and global context, in which it becomes difficult for regulators to predict and address the impact of AI systems on the economy and society, and to ensure that such systems are human-centric, ethical, explainable, sustainable and respectful of fundamental rights and values. The European Union has been dedicating increased attention to closing the gap between the existing legal framework and AI. Some of the legislative proposals under consideration call for preventive legislation and introduce obligations on different actors – such as the AI Act – while others have a compensatory scope and seek to build a liability framework – such as the proposed AI Liability Directive and the revised Product Liability Directive. At the same time, cross-sectoral regulations must coexist with sector-specific initiatives and the rules they establish.
The present paper starts by assessing the fit of the existing European liability regime(s) with the constantly evolving AI landscape, identifying the normative foundations on which a liability regime for such technology should be built. It then addresses the proposed additions and revisions to the legislation, focusing on how they seek to govern AI systems, with particular attention to their implications for highly regulated complex systems such as financial markets. Finally, it considers potential additional measures that could continue to strike a balance between the interests of all parties, namely by seeking to reduce the inherent risks that accompany the use of AI and to leverage its major benefits for our society and economy.

Source journal
CiteScore: 5.60
Self-citation rate: 10.30%
Articles published: 81
Review time: 67 days
Journal description: CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides a regular update on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It is looking for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.