The AI Liability Puzzle and a Fund-Based Work-Around

Olivia J. Erdélyi, Gábor Erdélyi
{"title":"人工智能责任难题和基于基金的解决方案","authors":"Olivia J. Erd'elyi, G'abor Erd'elyi","doi":"10.1145/3375627.3375806","DOIUrl":null,"url":null,"abstract":"Certainty around the regulatory environment is crucial to facilitate responsible AI innovation and its social acceptance. However, the existing legal liability system is inapt to assign responsibility where a potentially harmful conduct and/or the harm itself are unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in the investment and use of AI incalculable, creating an environment that is not conducive to innovation and may deprive society of some benefits AI could provide. To tackle this problem, we propose to draw insights from financial regulatory best-practices and establish a system of AI guarantee schemes. We envisage the system to form part of the broader market-structuring regulatory framework, with the primary function to provide a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose at least partial industry-funding, with funding arrangements depending on whether it would pursue other potential policy goals.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"54 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"The AI Liability Puzzle and a Fund-Based Work-Around\",\"authors\":\"Olivia J. Erd'elyi, G'abor Erd'elyi\",\"doi\":\"10.1145/3375627.3375806\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Certainty around the regulatory environment is crucial to facilitate responsible AI innovation and its social acceptance. However, the existing legal liability system is inapt to assign responsibility where a potentially harmful conduct and/or the harm itself are unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in the investment and use of AI incalculable, creating an environment that is not conducive to innovation and may deprive society of some benefits AI could provide. To tackle this problem, we propose to draw insights from financial regulatory best-practices and establish a system of AI guarantee schemes. We envisage the system to form part of the broader market-structuring regulatory framework, with the primary function to provide a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. 
We propose at least partial industry-funding, with funding arrangements depending on whether it would pursue other potential policy goals.\",\"PeriodicalId\":93612,\"journal\":{\"name\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"volume\":\"54 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3375627.3375806\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Certainty around the regulatory environment is crucial to facilitate responsible AI innovation and its social acceptance. However, the existing legal liability system is ill-suited to assigning responsibility where a potentially harmful conduct and/or the harm itself is unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in the investment in and use of AI incalculable, creating an environment that is not conducive to innovation and may deprive society of some of the benefits AI could provide. To tackle this problem, we propose to draw insights from financial regulatory best practices and establish a system of AI guarantee schemes. We envisage the system forming part of the broader market-structuring regulatory framework, with the primary function of providing a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose at least partial industry funding, with funding arrangements depending on whether the scheme would also pursue other potential policy goals.