On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.

BMC Medical Informatics and Decision Making · Impact Factor 3.8 · Q2 (Medical Informatics) · CAS Tier 3 (Medicine) · Published: 2025-03-05 · DOI: 10.1186/s12911-025-02891-2
Justin Blackman, Richard Veerapen
{"title":"On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments.","authors":"Justin Blackman, Richard Veerapen","doi":"10.1186/s12911-025-02891-2","DOIUrl":null,"url":null,"abstract":"<p><p>The necessity for explainability of artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize discourse on common recurring themes and subsequently critically analyze and respond to it. While the use of autonomous black box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools, diagnosis of idiopathy, and diagnoses by exclusion, to analyze implications on patient autonomy and informed consent. Applying a novel approach using comparisons with clinical practice guidelines, we contest the claim that lack of explainability compromises clinician due diligence and undermines epistemological responsibility. We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with the exclusive use of clinical artificial intelligence.</p>","PeriodicalId":9340,"journal":{"name":"BMC Medical Informatics and Decision Making","volume":"25 1","pages":"111"},"PeriodicalIF":3.8000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881432/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Informatics and Decision Making","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12911-025-02891-2","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
引用次数: 0

Abstract

The necessity for explainability of artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize discourse on common recurring themes and subsequently critically analyze and respond to it. While the use of autonomous black box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools, diagnosis of idiopathy, and diagnoses by exclusion, to analyze implications on patient autonomy and informed consent. Applying a novel approach using comparisons with clinical practice guidelines, we contest the claim that lack of explainability compromises clinician due diligence and undermines epistemological responsibility. We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with the exclusive use of clinical artificial intelligence.

Source journal
CiteScore: 7.20 · Self-citation rate: 5.70% · Annual article output: 297 · Review time: 1 month
About the journal: BMC Medical Informatics and Decision Making is an open access journal publishing original peer-reviewed research on the design, development, implementation, use, and evaluation of health information technologies and decision-making for human health.