Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm”

Intelligent medicine · Pub Date: 2024-02-01 · DOI: 10.1016/j.imed.2023.08.001
IF 4.4 · Q1 (Computer Science, Interdisciplinary Applications)
Hanhui Xu, Kyle Michael James Shuttleworth
{"title":"Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm”","authors":"Hanhui Xu ,&nbsp;Kyle Michael James Shuttleworth","doi":"10.1016/j.imed.2023.08.001","DOIUrl":null,"url":null,"abstract":"<div><p>One concern about the application of medical artificial intelligence (AI) regards the “black box” feature which can only be viewed in terms of its inputs and outputs, with no way to understand the AI's algorithm. This is problematic because patients, physicians, and even designers, do not understand why or how a treatment recommendation is produced by AI technologies. One view claims that the worry about black-box medicine is unreasonable because AI systems outperform human doctors in identifying the disease. Furthermore, under the medical AI-physician-patient model, the physician can undertake the responsibility of interpreting the medical AI's diagnosis. In this study, we focus on the potential harm caused by the unexplainability feature of medical AI and try to show that such possible harm is underestimated. We will seek to contribute to the literature from three aspects. First, we appealed to a thought experiment to show that although the medical AI systems perform better on accuracy, the harm caused by medical AI's misdiagnoses may be more serious than that caused by human doctors’ misdiagnoses in some cases. Second, in patient-centered medicine, physicians were obligated to provide adequate information to their patients in medical decision-making. However, the unexplainability feature of medical AI systems would limit the patient's autonomy. Last, we tried to illustrate the psychological and financial burdens that may be caused by the unexplainablity feature of medical AI systems, which seems to be ignored by the previous ethical discussions.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"4 1","pages":"Pages 52-57"},"PeriodicalIF":4.4000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102623000578/pdfft?md5=2e773b43b24965c9b85b33a8a1a2ab14&pid=1-s2.0-S2667102623000578-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667102623000578","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

One concern about the application of medical artificial intelligence (AI) is the "black box" feature: such systems can be viewed only in terms of their inputs and outputs, with no way to understand the underlying algorithm. This is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies. One view holds that the worry about black-box medicine is unreasonable because AI systems outperform human doctors in identifying disease; furthermore, under the medical AI-physician-patient model, the physician can undertake the responsibility of interpreting the medical AI's diagnosis. In this study, we focus on the potential harm caused by the unexplainability of medical AI and argue that this possible harm is underestimated. We seek to contribute to the literature in three respects. First, we appeal to a thought experiment to show that although medical AI systems perform better on accuracy, the harm caused by a medical AI's misdiagnoses may in some cases be more serious than that caused by a human doctor's misdiagnoses. Second, in patient-centered medicine, physicians are obligated to provide adequate information to their patients in medical decision-making, and the unexplainability of medical AI systems would limit the patient's autonomy. Last, we illustrate the psychological and financial burdens that the unexplainability of medical AI systems may cause, which previous ethical discussions seem to have overlooked.
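To make the black-box property concrete, the following is a minimal, hypothetical sketch in Python (assuming scikit-learn; the data, model, and all names are illustrative stand-ins, not the systems discussed in the paper). The interface exposes only inputs and outputs, which is exactly the situation the abstract describes.

```python
# Minimal sketch of the "black box" property (illustrative only):
# the caller sees inputs and outputs, but no clinically meaningful
# explanation of how the output was produced.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for patient features and diagnostic labels (hypothetical data).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The interface available to physician and patient: features in, label out.
prediction = model.predict(X[:1])         # e.g., array([0]) or array([1])
probability = model.predict_proba(X[:1])  # class probabilities, not reasons

# What is missing is an answer to "why": the output aggregates votes from
# hundreds of deep decision trees, none of which maps onto a rule a
# physician could relay to a patient when explaining the recommendation.
print(prediction, probability)
```

Even the designer can inspect each tree individually, yet the aggregate decision resists the kind of explanation that the paper's patient-autonomy argument requires a physician to provide.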

Source journal

Intelligent medicine
Subject areas: Surgery, Radiology and Imaging; Artificial Intelligence; Biomedical Engineering
CiteScore: 5.20
Self-citation rate: 0.00%
Articles published: 19
Latest articles from this journal

- Impact of data balancing a multiclass dataset before the creation of association rules to study bacterial vaginosis
- Neuropsychological detection and prediction using machine learning algorithms: a comprehensive review
- Improved neurological diagnoses and treatment strategies via automated human brain tissue segmentation from clinical magnetic resonance imaging
- Increasing the accuracy and reproducibility of positron emission tomography radiomics for predicting pelvic lymph node metastasis in patients with cervical cancer using 3D local binary pattern-based texture features
- A clinical decision support system using rough set theory and machine learning for disease prediction