RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging.

Ajay Jaiswal, Kumar Ashutosh, Justin F Rousseau, Yifan Peng, Zhangyang Wang, Ying Ding
{"title":"RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging.","authors":"Ajay Jaiswal,&nbsp;Kumar Ashutosh,&nbsp;Justin F Rousseau,&nbsp;Yifan Peng,&nbsp;Zhangyang Wang,&nbsp;Ying Ding","doi":"10.1109/icdm54844.2022.00118","DOIUrl":null,"url":null,"abstract":"<p><p>AI-powered Medical Imaging has recently achieved enormous attention due to its ability to provide fast-paced healthcare diagnoses. However, it usually suffers from a lack of high-quality datasets due to high annotation cost, inter-observer variability, human annotator error, and errors in computer-generated labels. Deep learning models trained on noisy labelled datasets are sensitive to the noise type and lead to less generalization on the unseen samples. To address this challenge, we propose a Robust Stochastic Knowledge Distillation (RoS-KD) framework which mimics the notion of learning a topic from multiple sources to ensure deterrence in learning noisy information. More specifically, RoS-KD learns a <i>smooth, well-informed, and robust student manifold</i> by distilling knowledge from multiple teachers trained on <i>overlapping subsets</i> of training data. Our extensive experiments on popular medical imaging classification tasks (cardiopulmonary disease and lesion classification) using real-world datasets, show the performance benefit of RoS-KD, its ability to distill knowledge from many popular large networks (ResNet-50, DenseNet-121, MobileNet-V2) in a comparatively small network, and its robustness to adversarial attacks (PGD, FSGM). More specifically, RoS-KD achieves > 2% and > 4% improvement on F1-score for lesion classification and cardiopulmonary disease classification tasks, respectively, when the underlying student is ResNet-18 against recent competitive knowledge distillation baseline. 
Additionally, on cardiopulmonary disease classification task, RoS-KD outperforms most of the SOTA baselines by ~1% gain in AUC score.</p>","PeriodicalId":74565,"journal":{"name":"Proceedings. IEEE International Conference on Data Mining","volume":"2022 ","pages":"981-986"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10082964/pdf/nihms-1888486.pdf","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE International Conference on Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icdm54844.2022.00118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/2/1 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

AI-powered medical imaging has recently attracted enormous attention due to its ability to provide fast-paced healthcare diagnoses. However, it usually suffers from a lack of high-quality datasets owing to high annotation cost, inter-observer variability, human annotator error, and errors in computer-generated labels. Deep learning models trained on noisily labelled datasets are sensitive to the noise type and generalize poorly to unseen samples. To address this challenge, we propose a Robust Stochastic Knowledge Distillation (RoS-KD) framework which mimics the notion of learning a topic from multiple sources to deter the learning of noisy information. More specifically, RoS-KD learns a smooth, well-informed, and robust student manifold by distilling knowledge from multiple teachers trained on overlapping subsets of the training data. Our extensive experiments on popular medical imaging classification tasks (cardiopulmonary disease and lesion classification) using real-world datasets show the performance benefit of RoS-KD, its ability to distill knowledge from several popular large networks (ResNet-50, DenseNet-121, MobileNet-V2) into a comparatively small network, and its robustness to adversarial attacks (PGD, FGSM). More specifically, RoS-KD achieves >2% and >4% improvements in F1-score on the lesion classification and cardiopulmonary disease classification tasks, respectively, when the underlying student is ResNet-18, compared with a recent competitive knowledge distillation baseline. Additionally, on the cardiopulmonary disease classification task, RoS-KD outperforms most of the SOTA baselines by a ~1% gain in AUC score.
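The core multi-teacher idea in the abstract — distilling a student against several teachers trained on overlapping data subsets — can be sketched very loosely as below. The function names and the simple rule of averaging the teachers' temperature-softened outputs are illustrative assumptions for a minimal sketch, not the paper's exact stochastic formulation:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    m = max(z / T for z in logits)  # subtract max for numerical stability
    exps = [math.exp(z / T - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits_list, T=4.0):
    """KL(avg_teacher || student) on temperature-softened distributions.

    Averaging several teachers' softened predictions stands in here for
    the multi-teacher target; RoS-KD's actual scheme (stochastic updates
    over teachers trained on overlapping subsets) is more involved.
    """
    teacher_probs = [softmax(t, T) for t in teacher_logits_list]
    k = len(teacher_probs)
    avg_t = [sum(p[i] for p in teacher_probs) / k
             for i in range(len(student_logits))]
    s = softmax(student_logits, T)
    # KL divergence from the averaged teacher target to the student.
    return sum(pt * math.log(pt / ps) for pt, ps in zip(avg_t, s) if pt > 0)
```

When the student's softened distribution matches the averaged teacher target, the loss is zero; any disagreement yields a positive KL penalty that pulls the student toward the consensus of the teachers.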
