Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models.

Zixuan Liu, Ehsan Adeli, Kilian M Pohl, Qingyu Zhao
{"title":"超越显著性地图:训练深度模型,解读深度模型。","authors":"Zixuan Liu, Ehsan Adeli, Kilian M Pohl, Qingyu Zhao","doi":"10.1007/978-3-030-78191-0_6","DOIUrl":null,"url":null,"abstract":"<p><p>Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on <i>saliency maps</i> to quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective as they often do not inform the specific type of morphological changes linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks to inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we propose to couple the two simulators into a unified model based on <i>conditional convolution</i>. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of Alzheimer's disease and alcohol dependence. Compared to the saliency maps generated by baseline approaches, our simulations and visualizations based on the Jacobian determinants of the warping field reveal meaningful and understandable patterns related to the diseases.</p>","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":" ","pages":"71-82"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8451265/pdf/nihms-1738816.pdf","citationCount":"0","resultStr":"{\"title\":\"Going Beyond Saliency Maps: Training Deep Models to Interpret Deep Models.\",\"authors\":\"Zixuan Liu, Ehsan Adeli, Kilian M Pohl, Qingyu Zhao\",\"doi\":\"10.1007/978-3-030-78191-0_6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on <i>saliency maps</i> to quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective as they often do not inform the specific type of morphological changes linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks to inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we propose to couple the two simulators into a unified model based on <i>conditional convolution</i>. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of Alzheimer's disease and alcohol dependence. 
Compared to the saliency maps generated by baseline approaches, our simulations and visualizations based on the Jacobian determinants of the warping field reveal meaningful and understandable patterns related to the diseases.</p>\",\"PeriodicalId\":73379,\"journal\":{\"name\":\"Information processing in medical imaging : proceedings of the ... conference\",\"volume\":\" \",\"pages\":\"71-82\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8451265/pdf/nihms-1738816.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information processing in medical imaging : proceedings of the ... conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-78191-0_6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2021/6/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information processing in medical imaging : proceedings of the ... conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-78191-0_6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/6/14 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies. To interpret the decision process of a trained classifier, existing techniques typically rely on saliency maps to quantify the voxel-wise or feature-level importance for classification through partial derivatives. Despite providing some level of localization, these maps are not human-understandable from the neuroscience perspective as they often do not inform the specific type of morphological changes linked to the brain disorder. Inspired by the image-to-image translation scheme, we propose to train simulator networks to inject (or remove) patterns of the disease into a given MRI based on a warping operation, such that the classifier increases (or decreases) its confidence in labeling the simulated MRI as diseased. To increase the robustness of training, we propose to couple the two simulators into a unified model based on conditional convolution. We applied our approach to interpreting classifiers trained on a synthetic dataset and two neuroimaging datasets to visualize the effect of Alzheimer's disease and alcohol dependence. Compared to the saliency maps generated by baseline approaches, our simulations and visualizations based on the Jacobian determinants of the warping field reveal meaningful and understandable patterns related to the diseases.
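
The visualizations mentioned above summarize the simulator's warping field through its voxel-wise Jacobian determinant, where values above 1 indicate simulated local expansion and values below 1 indicate simulated shrinkage (e.g., atrophy). Below is a minimal sketch of that computation for a 3D displacement field using NumPy finite differences; it is not the authors' released code, and the function name and array layout are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation): summarize a
# warping field by its voxel-wise Jacobian determinant for visualization.
import numpy as np

def jacobian_determinant(displacement: np.ndarray) -> np.ndarray:
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    displacement: array of shape (3, D, H, W) holding the x/y/z displacement
    (in voxels) that a simulator would apply to warp the input MRI.
    Returns an array of shape (D, H, W).
    """
    # The warp is phi(x) = x + u(x), so its Jacobian is I + du/dx.
    grads = [np.gradient(displacement[c], axis=(0, 1, 2)) for c in range(3)]
    # Assemble the 3x3 Jacobian at every voxel: J[..., i, j] = d(phi_i)/d(x_j).
    J = np.empty(displacement.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Usage: a zero displacement field leaves the image unchanged, so the
# determinant is 1 everywhere (no simulated expansion or atrophy).
u = np.zeros((3, 16, 16, 16))
assert np.allclose(jacobian_determinant(u), 1.0)
```

Determinant maps computed this way can be overlaid on the input MRI to show where, and in which direction, the simulator deformed tissue to change the classifier's decision.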
