AI and XAI second opinion: the danger of false confirmation in human-AI collaboration.

IF 3.3 | CAS Region 2 (Philosophy) | Q1 (ETHICS) | Journal of Medical Ethics | Pub Date: 2024-07-29 | DOI: 10.1136/jme-2024-110074
Rikard Rosenbacke, Åsa Melhus, Martin McKee, David Stuckler
{"title":"人工智能和 XAI 第二意见:人类与人工智能合作中错误确认的危险。","authors":"Rikard Rosenbacke, Åsa Melhus, Martin McKee, David Stuckler","doi":"10.1136/jme-2024-110074","DOIUrl":null,"url":null,"abstract":"<p><p>Can AI substitute a human physician's second opinion? Recently the <i>Journal of Medical Ethics</i> published two contrasting views: Kempt and Nagel advocate for using artificial intelligence (AI) for a second opinion except when its conclusions significantly diverge from the initial physician's while Jongsma and Sand argue for a second human opinion irrespective of AI's concurrence or dissent. The crux of this debate hinges on the prevalence and impact of 'false confirmation'-a scenario where AI erroneously validates an incorrect human decision. These errors seem exceedingly difficult to detect, reminiscent of heuristics akin to confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician-AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI; employing nudges to increase awareness of false confirmations and critically engaging with XAI explanations. This approach underscores the necessity for a cautious, evidence-based methodology when integrating AI into clinical decision-making.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI and XAI second opinion: the danger of false confirmation in human-AI collaboration.\",\"authors\":\"Rikard Rosenbacke, Åsa Melhus, Martin McKee, David Stuckler\",\"doi\":\"10.1136/jme-2024-110074\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Can AI substitute a human physician's second opinion? Recently the <i>Journal of Medical Ethics</i> published two contrasting views: Kempt and Nagel advocate for using artificial intelligence (AI) for a second opinion except when its conclusions significantly diverge from the initial physician's while Jongsma and Sand argue for a second human opinion irrespective of AI's concurrence or dissent. The crux of this debate hinges on the prevalence and impact of 'false confirmation'-a scenario where AI erroneously validates an incorrect human decision. These errors seem exceedingly difficult to detect, reminiscent of heuristics akin to confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician-AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. 
We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI; employing nudges to increase awareness of false confirmations and critically engaging with XAI explanations. This approach underscores the necessity for a cautious, evidence-based methodology when integrating AI into clinical decision-making.</p>\",\"PeriodicalId\":16317,\"journal\":{\"name\":\"Journal of Medical Ethics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1136/jme-2024-110074\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1136/jme-2024-110074","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract


Can AI substitute for a human physician's second opinion? Recently the Journal of Medical Ethics published two contrasting views: Kempt and Nagel advocate using artificial intelligence (AI) for a second opinion except when its conclusions diverge significantly from the initial physician's, while Jongsma and Sand argue for a second human opinion irrespective of the AI's concurrence or dissent. The crux of this debate hinges on the prevalence and impact of 'false confirmation': a scenario in which AI erroneously validates an incorrect human decision. Such errors seem exceedingly difficult to detect, recalling heuristics akin to confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why an AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician-AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI, to employ nudges that increase awareness of false confirmations, and to engage critically with XAI explanations. This approach underscores the necessity of a cautious, evidence-based methodology when integrating AI into clinical decision-making.
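The simulations behind the abstract's figures are not reproduced on this page, but the mechanism of false confirmation can be illustrated with a minimal Monte Carlo sketch. The sketch below assumes a binary diagnosis (so two readers agree exactly when both are right or both are wrong); all names and parameter values (physician accuracy p_md, AI accuracy p_ai, the anchoring probability anchor) are illustrative assumptions, not figures from the paper.

```python
import random

def simulate(n=100_000, p_md=0.85, p_ai=0.90, anchor=0.5, seed=0):
    """Monte Carlo sketch of false confirmation in physician-AI collaboration.

    p_md   -- probability the physician's initial call is correct (assumed)
    p_ai   -- probability an *independent* AI call is correct (assumed)
    anchor -- probability the AI simply mirrors the physician's call,
              the correlated-error channel that produces false confirmation
    """
    rng = random.Random(seed)
    false_confirmations = 0  # AI agrees with an incorrect physician call
    final_correct = 0

    for _ in range(n):
        md_correct = rng.random() < p_md

        if rng.random() < anchor:
            ai_agrees = True  # AI echoes the physician, right or wrong
        else:
            ai_correct = rng.random() < p_ai
            # Binary diagnosis: independent readers agree iff both are
            # correct or both made the same (only possible) wrong call.
            ai_agrees = (ai_correct == md_correct)

        if ai_agrees and not md_correct:
            false_confirmations += 1

        # Naive decision rule: agreement -> keep the physician's call;
        # disagreement -> escalate, optimistically assumed to end correctly.
        if ai_agrees:
            final_correct += md_correct
        else:
            final_correct += 1

    return false_confirmations / n, final_correct / n

if __name__ == "__main__":
    fc_rate, accuracy = simulate()
    print(f"false-confirmation rate: {fc_rate:.1%}")
    print(f"final diagnostic accuracy: {accuracy:.1%}")
```

Even under the optimistic assumption that every disagreement is resolved correctly on escalation, every false confirmation becomes an undetected error, so final accuracy is capped at one minus the false-confirmation rate. That is the sense in which such errors are 'exceedingly difficult to detect': they never trigger the disagreement signal that would prompt review.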

Source journal: Journal of Medical Ethics (Medicine: Medical Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Annual publications: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.
Latest articles in this journal:
Strengthening harm-theoretic pro-life views.
Wish to die trying to live: unwise or incapacitous? The case of University Hospitals Birmingham NHS Foundation Trust versus 'ST'.
Pregnant women are often not listened to, but pathologising pregnancy isn't the solution.
How ectogestation can impact the gestational versus moral parenthood debate.
If not a right to children because of gestation, then not a duty towards them either.