Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images

A. Sadafi, Oleksandra Adonkina, Ashkan Khakzar, P. Lienemann, Rudolf Matthias Hehr, D. Rueckert, N. Navab, C. Marr
{"title":"生物医学单细胞图像中多实例学习模型的像素级解释","authors":"A. Sadafi, Oleksandra Adonkina, Ashkan Khakzar, P. Lienemann, Rudolf Matthias Hehr, D. Rueckert, N. Navab, C. Marr","doi":"10.48550/arXiv.2303.08632","DOIUrl":null,"url":null,"abstract":"Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability, however for many clinical applications a deeper, pixel-level explanation is desirable, but missing so far. In this work, we investigate the use of four attribution methods to explain a multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations on for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100 000 single cell images and observe how each attribution method performs on the multiple instance learning architecture focusing on different properties of the white blood single cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"23 1","pages":"170-182"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images\",\"authors\":\"A. Sadafi, Oleksandra Adonkina, Ashkan Khakzar, P. Lienemann, Rudolf Matthias Hehr, D. Rueckert, N. Navab, C. Marr\",\"doi\":\"10.48550/arXiv.2303.08632\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability, however for many clinical applications a deeper, pixel-level explanation is desirable, but missing so far. In this work, we investigate the use of four attribution methods to explain a multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations on for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100 000 single cell images and observe how each attribution method performs on the multiple instance learning architecture focusing on different properties of the white blood single cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. 
Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.\",\"PeriodicalId\":73379,\"journal\":{\"name\":\"Information processing in medical imaging : proceedings of the ... conference\",\"volume\":\"23 1\",\"pages\":\"170-182\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information processing in medical imaging : proceedings of the ... conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2303.08632\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information processing in medical imaging : proceedings of the ... conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2303.08632","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable but has so far been missing. In this work, we investigate the use of four attribution methods to explain a multiple instance learning model: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100,000 single-cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of the single white blood cells. Additionally, we compare the attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
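To connect the abstract's terms to an implementation, the following is a minimal PyTorch sketch of multiple instance learning with gated attention pooling, in the style of Ilse et al. (2018). The class name, layer sizes, and MLP encoder are illustrative placeholders, not the authors' actual architecture, which operates on single-cell image crops with a CNN backbone.

```python
# Minimal sketch of attention-based MIL pooling (Ilse et al., 2018 style).
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        # Stand-in instance encoder; LazyLinear infers the input size.
        self.encoder = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        # Gated attention: one scalar score per instance.
        self.attn_V = nn.Linear(feat_dim, attn_dim)
        self.attn_U = nn.Linear(feat_dim, attn_dim)
        self.attn_w = nn.Linear(attn_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                      # bag: (n_instances, in_dim)
        h = self.encoder(bag)                    # (n, feat_dim)
        gate = torch.tanh(self.attn_V(h)) * torch.sigmoid(self.attn_U(h))
        a = torch.softmax(self.attn_w(gate), dim=0)   # (n, 1), sums to 1
        z = (a * h).sum(dim=0)                   # attention-pooled bag vector
        return self.classifier(z), a             # bag logits + instance weights

# Usage: one "bag" of 30 cell feature vectors, binary diagnosis head.
model = AttentionMIL()
logits, attn = model(torch.randn(30, 256))
```

The returned attention weights `a` are the instance-level explanation the abstract refers to: they indicate which single cells in a patient's bag drove the diagnosis, but say nothing about which pixels within a cell mattered.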
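The pixel-level attribution methods studied in the paper go one step deeper than the attention weights. As one example of the four, here is a hedged GradCAM sketch applied to such a MIL model, assuming the instance encoder is a CNN whose chosen convolutional layer processes all instances of the bag as one batch; the function signature and names are hypothetical, not the paper's code.

```python
# Hedged GradCAM sketch for a MIL model with a CNN instance encoder.
# Assumes model(bag) returns (logits, attention) as in the sketch above.
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, bag, instance_idx, class_idx):
    acts, grads = [], []
    fh = conv_layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out))
    bh = conv_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.append(gout[0]))
    logits, _ = model(bag)
    model.zero_grad()
    logits[class_idx].backward()          # gradient of the diagnosis logit
    fh.remove(); bh.remove()

    A = acts[0][instance_idx]             # activations for one cell, (C, H, W)
    G = grads[0][instance_idx]            # gradients for one cell,   (C, H, W)
    weights = G.mean(dim=(1, 2))          # global-average-pooled channel weights
    cam = F.relu((weights[:, None, None] * A).sum(dim=0))   # (H, W)
    return cam / (cam.max() + 1e-8)       # normalized relevance map
```

Upsampling `cam` to the cell crop's resolution yields the kind of heatmap that can be compared against an expert's annotations; LRP, IBA, and InputIBA produce analogous pixel-level maps by different mechanisms.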