Explainability and causability in digital pathology

IF 3.4 · CAS Tier 2 (Medicine) · Q1 (Pathology) · Journal of Pathology Clinical Research · Pub Date: 2023-04-12 · DOI: 10.1002/cjp2.322
Markus Plass, Michaela Kargl, Tim-Rasmus Kiehl, Peter Regitnig, Christian Geißler, Theodore Evans, Norman Zerbe, Rita Carvalho, Andreas Holzinger, Heimo Müller
{"title":"数字病理学的可解释性和因果性","authors":"Markus Plass,&nbsp;Michaela Kargl,&nbsp;Tim-Rasmus Kiehl,&nbsp;Peter Regitnig,&nbsp;Christian Geißler,&nbsp;Theodore Evans,&nbsp;Norman Zerbe,&nbsp;Rita Carvalho,&nbsp;Andreas Holzinger,&nbsp;Heimo Müller","doi":"10.1002/cjp2.322","DOIUrl":null,"url":null,"abstract":"<p>The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what-if’-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.</p>","PeriodicalId":48612,"journal":{"name":"Journal of Pathology Clinical Research","volume":"9 4","pages":"251-260"},"PeriodicalIF":3.4000,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://pathsocjournals.onlinelibrary.wiley.com/doi/epdf/10.1002/cjp2.322","citationCount":"4","resultStr":"{\"title\":\"Explainability and causability in digital pathology\",\"authors\":\"Markus Plass,&nbsp;Michaela Kargl,&nbsp;Tim-Rasmus Kiehl,&nbsp;Peter Regitnig,&nbsp;Christian Geißler,&nbsp;Theodore Evans,&nbsp;Norman Zerbe,&nbsp;Rita Carvalho,&nbsp;Andreas Holzinger,&nbsp;Heimo Müller\",\"doi\":\"10.1002/cjp2.322\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. 
This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what-if’-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.</p>\",\"PeriodicalId\":48612,\"journal\":{\"name\":\"Journal of Pathology Clinical Research\",\"volume\":\"9 4\",\"pages\":\"251-260\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2023-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://pathsocjournals.onlinelibrary.wiley.com/doi/epdf/10.1002/cjp2.322\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Pathology Clinical Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cjp2.322\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pathology Clinical Research","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cjp2.322","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PATHOLOGY","Score":null,"Total":0}
Citations: 4

Abstract


The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what-if’-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.
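As an illustration of what such an XAI method can look like in practice, the sketch below computes an occlusion-based saliency map for a hypothetical whole-slide-image tile classifier: it masks one region of the input tile at a time and records how much the predicted class probability drops, a simple, model-agnostic form of the 'what-if' question ('what if this region were not there?'). This example is not taken from the paper; the model, tile, target class, and patch sizes are illustrative placeholders.

```python
# Minimal sketch (not from the paper): occlusion-based saliency for an image
# tile classifier, using PyTorch. The model and inputs are placeholders.
import torch

def occlusion_saliency(model, tile, target_class, patch=32, stride=32, fill=0.0):
    """tile: (1, 3, H, W) float tensor; returns a coarse relevance map."""
    model.eval()
    with torch.no_grad():
        # Probability of the target class on the unmodified tile.
        base = torch.softmax(model(tile), dim=1)[0, target_class].item()
        _, _, h, w = tile.shape
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heatmap = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                y, x = i * stride, j * stride
                # Mask one square region and re-run the forward pass.
                occluded = tile.clone()
                occluded[:, :, y:y + patch, x:x + patch] = fill
                prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heatmap[i, j] = base - prob  # large drop -> region mattered
    return heatmap
```

Occlusion is used here purely for its transparency: it needs no access to model internals, at the cost of many forward passes per tile. The resulting heat map is the kind of raw explanation that, as the authors argue, still needs an explanation interface before it supports genuine causal understanding (causability) for the pathologist.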

Source journal

Journal of Pathology Clinical Research (Medicine: Pathology and Forensic Medicine)
CiteScore: 7.40
Self-citation rate: 2.40%
Articles published: 47
Review turnaround: 20 weeks
Journal description: The Journal of Pathology: Clinical Research and The Journal of Pathology serve as translational bridges between basic biomedical science and clinical medicine with particular emphasis on, but not restricted to, tissue-based studies. The focus of The Journal of Pathology: Clinical Research is the publication of studies that illuminate the clinical relevance of research in the broad area of the study of disease. Appropriately powered and validated studies with novel diagnostic, prognostic and predictive significance, and biomarker discovery and validation, will be welcomed. Studies with a predominantly mechanistic basis will be more appropriate for the companion Journal of Pathology.
Latest articles in this journal

High chromosomal instability is associated with higher 10-year risks of recurrence for hormone receptor-positive, human epidermal growth factor receptor 2-negative breast cancer patients: clinical evidence from a large-scale, multiple-site, retrospective study
Large multimodal model-based standardisation of pathology reports with confidence and its prognostic significance
Clinicopathological and epigenetic differences between primary neuroendocrine tumors and neuroendocrine metastases in the ovary
Large language models as a diagnostic support tool in neuropathology
Homologous recombination deficiency score is an independent prognostic factor in esophageal squamous cell carcinoma