Title: Black-box assisted medical decisions: AI power vs. ethical physician care
Author: Berman Chan
Journal: Medicine, Health Care and Philosophy (Q1, Ethics)
DOI: 10.1007/s11019-023-10153-z
Published: 2023-09-01 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425517/pdf/
Citations: 1
Abstract
I raise an ethical problem with physicians using "black box" medical AI algorithms, arguing that their use would compromise proper patient care. Even if AI results are reliable, my contention is that without being able to explain medical decisions to patients, physicians' use of black box AIs would erode the effective and respectful care they provide to patients. In addition, I argue that physicians should use black box AIs only for patients in dire straits, or when physicians use the AI as a "co-pilot" (analogous to a spellchecker) and can independently confirm its accuracy. Lastly, my argument is further sharpened by giving careful attention to Alex John London's objection that physicians already sometimes prescribe treatments, such as lithium drugs, even though neither researchers nor doctors can explain why the treatment works.
Journal overview:
Medicine, Health Care and Philosophy: A European Journal is the official journal of the European Society for Philosophy of Medicine and Health Care. It provides a forum for the international exchange of research data, theories, reports and opinions in bioethics and the philosophy of medicine. The journal promotes interdisciplinary studies and stimulates philosophical analysis centered on a common object of reflection: health care, the human effort to deal with disease, illness and death, as well as health, well-being and life. Particular attention is paid to developing contributions from all European countries, and to making accessible scientific work and reports on the practice of health care ethics from all nations, cultures and language areas in Europe.