Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen?

Pub Date: 2023-04-03 · DOI: 10.1007/s00481-023-00761-x
Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch, Cristian Timmermann
{"title":"Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen?","authors":"Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch, Cristian Timmermann","doi":"10.1007/s00481-023-00761-x","DOIUrl":null,"url":null,"abstract":"Abstract Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI? Arguments We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this then allows to conclude the level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example. Conclusion We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00481-023-00761-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Definition of the problem: The umbrella term "explicability" refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?

Arguments: We proceed in five steps. First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.

Conclusion: We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
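To make the "explainability" end of this spectrum concrete, the sketch below shows one common post-hoc technique for image classifiers: a gradient-based saliency map that highlights which input pixels most influenced a prediction. This is only an illustration of the kind of technical measure the abstract alludes to, not the authors' implementation; the model, dummy input, and two-class setup are placeholder assumptions.

```python
# Minimal sketch of a gradient-based saliency map (post-hoc explainability).
# Illustrative assumptions: an untrained ResNet-18 stands in for a trained
# diagnostic model, and a random tensor stands in for a radiological image.
import torch
import torchvision.models as models

# Stand-in binary classifier (e.g., "finding" vs. "no finding").
model = models.resnet18(weights=None, num_classes=2)
model.eval()

# Dummy "radiograph": one 3-channel 224x224 image; gradients flow to pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Saliency: absolute pixel gradients, reduced over color channels, yielding a
# heat map of the pixels with the greatest influence on the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print("Predicted class:", predicted_class)
print("Saliency map shape:", tuple(saliency.shape))  # (224, 224)
```

Such a heat map can be overlaid on the original image for a clinician; whether this suffices for informed consent depends on the level of explicability the paper argues is normatively required in the given context.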