Causality and scientific explanation of artificial intelligence systems in biomedicine.

IF 2.9 | CAS Q4 (Medicine) | JCR Q2 (PHYSIOLOGY) | Pflugers Archiv: European Journal of Physiology | Pub Date: 2024-10-29 | DOI: 10.1007/s00424-024-03033-9
Florian Boge, Axel Mosig
{"title":"生物医学中人工智能系统的因果关系和科学解释。","authors":"Florian Boge, Axel Mosig","doi":"10.1007/s00424-024-03033-9","DOIUrl":null,"url":null,"abstract":"<p><p>With rapid advances of deep neural networks over the past decade, artificial intelligence (AI) systems are now commonplace in many applications in biomedicine. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite their commonly high predictive accuracy, the trustworthiness of AI systems needs to be questioned when it comes to decision-making that affects the well-being of patients or the fairness towards patients or other stakeholders affected by AI-based decisions. To address this, the field of explainable artificial intelligence, or XAI for short, has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations of artificial intelligence (AI) establish the trustworthiness of AI-based decisions, it remains unclear what traits of explanations cause them to foster trustworthiness. Building on historical cases of scientific explanation in medicine, we here propagate our perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of being scientific explanations. To further undermine our approach, we discuss its relation to the concepts of causality and randomized intervention. In our perspective, we combine aspects from the three disciplines of biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of artificial intelligence relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent cases of AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.</p>","PeriodicalId":19954,"journal":{"name":"Pflugers Archiv : European journal of physiology","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Causality and scientific explanation of artificial intelligence systems in biomedicine.\",\"authors\":\"Florian Boge, Axel Mosig\",\"doi\":\"10.1007/s00424-024-03033-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>With rapid advances of deep neural networks over the past decade, artificial intelligence (AI) systems are now commonplace in many applications in biomedicine. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite their commonly high predictive accuracy, the trustworthiness of AI systems needs to be questioned when it comes to decision-making that affects the well-being of patients or the fairness towards patients or other stakeholders affected by AI-based decisions. To address this, the field of explainable artificial intelligence, or XAI for short, has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations of artificial intelligence (AI) establish the trustworthiness of AI-based decisions, it remains unclear what traits of explanations cause them to foster trustworthiness. 
Building on historical cases of scientific explanation in medicine, we here propagate our perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of being scientific explanations. To further undermine our approach, we discuss its relation to the concepts of causality and randomized intervention. In our perspective, we combine aspects from the three disciplines of biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of artificial intelligence relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent cases of AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.</p>\",\"PeriodicalId\":19954,\"journal\":{\"name\":\"Pflugers Archiv : European journal of physiology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pflugers Archiv : European journal of physiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s00424-024-03033-9\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PHYSIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pflugers Archiv : European journal of physiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00424-024-03033-9","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PHYSIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


With the rapid advance of deep neural networks over the past decade, artificial intelligence (AI) systems have become commonplace in many biomedical applications. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite this accuracy, the trustworthiness of AI systems must be questioned when their decisions affect the well-being of patients, or the fairness with which patients and other affected stakeholders are treated. To address this, the field of explainable artificial intelligence (XAI) has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations establish the trustworthiness of AI-based decisions, it remains unclear which traits of explanations actually foster that trustworthiness. Building on historical cases of scientific explanation in medicine, we put forward the perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of scientific explanations. To further underpin our approach, we discuss its relation to the concepts of causality and randomized intervention. Our perspective combines aspects of three disciplines: biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of AI relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.
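
Because the abstract ties explanation to intervention, a minimal sketch may help make that link concrete. The code below is purely illustrative and not from the paper: it implements occlusion-based attribution, a common model-agnostic XAI technique that "explains" a prediction by intervening on the input (masking one patch at a time) and measuring the change in the model's output; toy_predict is an assumed stand-in for a trained model.

    import numpy as np

    def occlusion_map(predict, image, patch=8, baseline=0.0):
        # For every patch, mask it with the baseline value and record how much
        # the predicted score drops; large drops mark regions the model relies on.
        h, w = image.shape
        base_score = predict(image)
        attr = np.zeros((h // patch, w // patch))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = baseline  # the "intervention"
                attr[i // patch, j // patch] = base_score - predict(occluded)
        return attr

    # Toy stand-in for a trained classifier: it scores an image by the mean
    # intensity of its centre region, so centre patches should dominate the map.
    def toy_predict(img):
        return float(img[12:20, 12:20].mean())

    rng = np.random.default_rng(0)
    heatmap = occlusion_map(toy_predict, rng.random((32, 32)))
    print(heatmap.round(3))  # largest values appear for the central patches

Note that input occlusion of this kind is only a crude, input-level analogue of the randomized interventions the authors discuss: it probes what the model responds to, not whether the highlighted feature is causally relevant to the underlying biology.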

Source journal
CiteScore: 8.80
Self-citation rate: 2.20%
Articles published: 121
Review time: 4-8 weeks
Journal introduction: Pflügers Archiv European Journal of Physiology publishes those results of original research that are seen as advancing the physiological sciences, especially those providing mechanistic insights into physiological functions at the molecular and cellular level, and clearly conveying a physiological message. Submissions are encouraged that deal with the evaluation of molecular and cellular mechanisms of disease, ideally resulting in translational research. Purely descriptive papers covering applied physiology, as well as purely clinical papers, will be excluded. Papers on methodological topics will be considered if they contribute to the development of novel tools for further investigation of (patho)physiological mechanisms.
Latest articles in this journal
Obituary for Prof. Stephen (Ben) Walsh, Professor of Nephrology at University College London.
The role of non-coding RNAs in neuropathic pain.
The emerging roles of necroptosis in skeletal muscle health and disease.
Neuroprotective actions of norepinephrine in neurological diseases.
BK channels promote action potential repolarization in skeletal muscle but contribute little to myotonia.