Learnable Context in Multiple Instance Learning for Whole Slide Image Classification and Segmentation.

Yu-Yuan Huang, Wei-Ta Chu
{"title":"Learnable Context in Multiple Instance Learning for Whole Slide Image Classification and Segmentation.","authors":"Yu-Yuan Huang, Wei-Ta Chu","doi":"10.1007/s10278-024-01302-8","DOIUrl":null,"url":null,"abstract":"<p><p>Multiple instance learning (MIL) has become a cornerstone in whole slide image (WSI) analysis. In this paradigm, a WSI is conceptualized as a bag of instances. Instance features are extracted by a feature extractor, and then a feature aggregator fuses these instance features into a bag representation. In this paper, we advocate that both feature extraction and aggregation can be enhanced by considering the context or correlation between instances. We learn contextual features between instances, and then fuse contextual features with instance features to enhance instance representations. For feature aggregation, we observe performance instability particularly when disease-positive instances are only a minor fraction of the WSI. We introduce a self-attention mechanism to discover correlation among instances and foster more effective bag representations. Through comprehensive testing, we have demonstrated that the proposed method outperforms existing WSI classification methods by 1 to 4% classification accuracy, based on the Camelyon16 and the TCGA-NSCLC datasets. The proposed method also outperforms the most recent weakly supervised WSI segmentation method by 0.6 in terms of the Dice coefficient, based on the Camelyon16 dataset.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2322-2336"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343447/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-024-01302-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/4 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multiple instance learning (MIL) has become a cornerstone in whole slide image (WSI) analysis. In this paradigm, a WSI is conceptualized as a bag of instances. Instance features are extracted by a feature extractor, and then a feature aggregator fuses these instance features into a bag representation. In this paper, we advocate that both feature extraction and aggregation can be enhanced by considering the context or correlation between instances. We learn contextual features between instances, and then fuse contextual features with instance features to enhance instance representations. For feature aggregation, we observe performance instability particularly when disease-positive instances are only a minor fraction of the WSI. We introduce a self-attention mechanism to discover correlation among instances and foster more effective bag representations. Through comprehensive testing, we have demonstrated that the proposed method outperforms existing WSI classification methods by 1 to 4% classification accuracy, based on the Camelyon16 and the TCGA-NSCLC datasets. The proposed method also outperforms the most recent weakly supervised WSI segmentation method by 0.6 in terms of the Dice coefficient, based on the Camelyon16 dataset.
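The pipeline the abstract describes can be illustrated with a minimal PyTorch sketch. This is not the authors' architecture: the module name ContextualMILAggregator, the feature dimensions, the choice of nn.MultiheadAttention for inter-instance context, and the attention-pooling head are all illustrative assumptions, intended only to show how contextual features can be fused with instance features and how self-attention-style aggregation produces a bag representation.

```python
# Minimal sketch (assumed, not the paper's exact model) of context-aware MIL aggregation:
# instance features attend to one another to capture inter-instance context, the context
# is fused back into each instance feature, and attention pooling forms the bag representation.
import torch
import torch.nn as nn

class ContextualMILAggregator(nn.Module):
    def __init__(self, feat_dim=512, num_heads=4, num_classes=2):
        super().__init__()
        # Self-attention over instances models correlation (context) between patches.
        self.self_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Fuse contextual features with the original instance features.
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Attention pooling: score each instance's contribution to the bag.
        self.score = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, instances):
        # instances: (batch, num_instances, feat_dim), e.g. patch embeddings of one WSI
        context, _ = self.self_attn(instances, instances, instances)
        fused = torch.tanh(self.fuse(torch.cat([instances, context], dim=-1)))
        weights = torch.softmax(self.score(fused), dim=1)   # (batch, num_instances, 1)
        bag = (weights * fused).sum(dim=1)                   # (batch, feat_dim)
        return self.classifier(bag), weights.squeeze(-1)

# Toy usage: one bag of 100 instance features
bag_logits, instance_weights = ContextualMILAggregator()(torch.randn(1, 100, 512))
```

Under these assumptions, the per-instance weights returned by the aggregator could be mapped back to patch locations to yield a coarse localization map, which is the general route by which attention-based MIL models support weakly supervised WSI segmentation.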
