Find the Gap: AI, Responsible Agency and Vulnerability

Impact Factor: 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Minds and Machines · Published: 2024-06-05 · DOI: 10.1007/s11023-024-09674-0
Shannon Vallor, Tillmann Vierkant
Citations: 0

Abstract

The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there nevertheless is a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.

Source journal

Minds and Machines (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.60
Self-citation rate: 2.70%
Articles per year: 30
Review time: >12 weeks
About the journal: Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science. Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal presents special editions dedicated to specific topics, invites critical responses to previously published works, and features review essays addressing current problem scenarios. By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.
Latest articles from this journal

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality
fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement
Artificial Intelligence for the Internal Democracy of Political Parties
A Causal Analysis of Harm