Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics.

IF 1.5 | CAS Region 4 (Medicine) | JCR Q3 (Health Care Sciences & Services)
Cambridge Quarterly of Healthcare Ethics
Pub Date: 2024-07-01 | Epub Date: 2023-01-09 | DOI: 10.1017/S0963180122000688
Charles Rathkopf, Bert Heinrichs
{"title":"Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics.","authors":"Charles Rathkopf, Bert Heinrichs","doi":"10.1017/S0963180122000688","DOIUrl":null,"url":null,"abstract":"<p><p>Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called <i>trustworthy AI.</i> In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing the ethics of AI in terms of trustworthiness, we reinforce unjustified anthropocentric assumptions that stand in the way of clear analysis. Furthermore, even if we insist on a purely epistemic interpretation of the concept, according to which trustworthiness just means measurable reliability, it turns out that the analysis will, nevertheless, suffer from a subtle form of anthropocentrism. The paper goes on to develop the concept of strange error, which serves both to sharpen the initial diagnosis of the inadequacy of trustworthy AI and to articulate the novel epistemological situation created by the use of AI. The paper concludes with a discussion of how strange error puts pressure on standard practices of assessing moral culpability, particularly in the context of medicine.</p>","PeriodicalId":55300,"journal":{"name":"Cambridge Quarterly of Healthcare Ethics","volume":" ","pages":"333-345"},"PeriodicalIF":1.5000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cambridge Quarterly of Healthcare Ethics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1017/S0963180122000688","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/9 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing the ethics of AI in terms of trustworthiness, we reinforce unjustified anthropocentric assumptions that stand in the way of clear analysis. Furthermore, even if we insist on a purely epistemic interpretation of the concept, according to which trustworthiness just means measurable reliability, it turns out that the analysis will, nevertheless, suffer from a subtle form of anthropocentrism. The paper goes on to develop the concept of strange error, which serves both to sharpen the initial diagnosis of the inadequacy of trustworthy AI and to articulate the novel epistemological situation created by the use of AI. The paper concludes with a discussion of how strange error puts pressure on standard practices of assessing moral culpability, particularly in the context of medicine.

Source Journal Metrics
CiteScore: 2.90
Self-citation rate: 11.10%
Articles published: 127
Review time: >12 weeks
About the Journal: The Cambridge Quarterly of Healthcare Ethics is designed to address the challenges of biology, medicine and healthcare and to meet the needs of professionals serving on healthcare ethics committees in hospitals, nursing homes, hospices and rehabilitation centres. The aim of the journal is to serve as the international forum for the wide range of serious and urgent issues faced by members of healthcare ethics committees, physicians, nurses, social workers, clergy, lawyers and community representatives.