SegAnyPath: A Foundation Model for Multi-Resolution, Stain-Variant, and Multi-Task Pathology Image Segmentation

Chong Wang, Yajie Wan, Shuxin Li, Kaili Qu, Xuezhi Zhou, Junjun He, Jing Ke, Yi Yu, Tianyun Wang, Yiqing Shen
DOI: 10.1109/TMI.2024.3501352
IEEE Transactions on Medical Imaging, vol. 44, no. 10, pp. 3924–3937
Published: 2024-11-18 (Journal Article) · Citations: 0
Code: https://github.com/wagnchogn/SegAnyPath

Abstract

Foundation models like the Segment Anything Model (SAM) have shown promising performance in general image segmentation tasks. However, their effectiveness is limited when applied to pathology images due to the inherent multi-scale structural complexity and staining heterogeneity. To address these challenges, we introduce SegAnyPath, a foundational model specifically designed for pathology image segmentation. SegAnyPath is trained on an extensive public pathology dataset comprising over 1.5 million images and 3.5 million masks. We propose a multi-scale proxy task to handle the diverse resolutions in pathology images, complementing the reconstruction objective in the supervised learning stage. To enhance segmentation performance across stain variations, we introduce a novel self-distillation scheme based on stain augmentations. Furthermore, we propose an innovative task-guided Mixture of Experts (MoE) architecture in the decoder of SegAnyPath for efficient management of distinct pathology segmentation tasks, including cell, tissue, and tumor segmentation. Experimental results demonstrate SegAnyPath’s zero-shot generalization capability, achieving a Dice score of 0.6797 across multiple datasets and organs while maintaining consistent performance across varying staining styles and resolutions. In comparison, the fine-tuned SAM achieves a Dice score of only 0.5258 on the same external test sets, indicating a substantial 29.27% improvement by SegAnyPath. SegAnyPath has the potential to advance the field of pathology analysis and improve diagnostic accuracy in clinical settings. The code is available at https://github.com/wagnchogn/SegAnyPath.
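The reported numbers can be checked directly: the Dice score is defined as 2|A∩B| / (|A| + |B|) over predicted and ground-truth foreground regions, and the 29.27% figure is the relative gain of SegAnyPath's 0.6797 over fine-tuned SAM's 0.5258. The sketch below is illustrative only; the toy masks are made up, and only the two aggregate scores come from the abstract.

```python
def dice(pred: set, gt: set) -> float:
    """Dice coefficient = 2|A ∩ B| / (|A| + |B|) over foreground pixel coordinates."""
    if not pred and not gt:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & gt) / (len(pred) + len(gt))

# Toy binary masks as sets of (row, col) foreground pixels (made-up data).
pred = {(0, 0), (0, 1), (1, 1)}
gt = {(0, 1), (1, 1), (1, 0)}
print(round(dice(pred, gt), 4))  # 2*2 / (3+3) -> 0.6667

# Relative improvement reported in the abstract.
seganypath_dice, sam_dice = 0.6797, 0.5258
rel_gain = (seganypath_dice - sam_dice) / sam_dice
print(f"{rel_gain:.2%}")  # -> 29.27%
```

So the 29.27% is a relative improvement over the fine-tuned SAM baseline, not an absolute Dice difference (which would be about 15.4 points).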