Cervical OCT image classification using contrastive masked autoencoders with Swin Transformer.

Computerized Medical Imaging and Graphics | Impact Factor 5.4 | Q1 (Engineering, Biomedical) | Tier 2 (Medicine) | Published: 2024-11-19 | DOI: 10.1016/j.compmedimag.2024.102469
Qingbin Wang, Yuxuan Xiong, Hanfeng Zhu, Xuefeng Mu, Yan Zhang, Yutao Ma
{"title":"Cervical OCT image classification using contrastive masked autoencoders with Swin Transformer.","authors":"Qingbin Wang, Yuxuan Xiong, Hanfeng Zhu, Xuefeng Mu, Yan Zhang, Yutao Ma","doi":"10.1016/j.compmedimag.2024.102469","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and objective: </strong>Cervical cancer poses a major health threat to women globally. Optical coherence tomography (OCT) imaging has recently shown promise for non-invasive cervical lesion diagnosis. However, obtaining high-quality labeled cervical OCT images is challenging and time-consuming as they must correspond precisely with pathological results. The scarcity of such high-quality labeled data hinders the application of supervised deep-learning models in practical clinical settings. This study addresses the above challenge by proposing CMSwin, a novel self-supervised learning (SSL) framework combining masked image modeling (MIM) with contrastive learning based on the Swin-Transformer architecture to utilize abundant unlabeled cervical OCT images.</p><p><strong>Methods: </strong>In this contrastive-MIM framework, mixed image encoding is combined with a latent contextual regressor to solve the inconsistency problem between pre-training and fine-tuning and separate the encoder's feature extraction task from the decoder's reconstruction task, allowing the encoder to extract better image representations. Besides, contrastive losses at the patch and image levels are elaborately designed to leverage massive unlabeled data.</p><p><strong>Results: </strong>We validated the superiority of CMSwin over the state-of-the-art SSL approaches with five-fold cross-validation on an OCT image dataset containing 1,452 patients from a multi-center clinical study in China, plus two external validation sets from top-ranked Chinese hospitals: the Huaxi dataset from the West China Hospital of Sichuan University and the Xiangya dataset from the Xiangya Second Hospital of Central South University. A human-machine comparison experiment on the Huaxi and Xiangya datasets for volume-level binary classification also indicates that CMSwin can match or exceed the average level of four skilled medical experts, especially in identifying high-risk cervical lesions.</p><p><strong>Conclusion: </strong>Our work has great potential to assist gynecologists in intelligently interpreting cervical OCT images in clinical settings. Additionally, the integrated GradCAM module of CMSwin enables cervical lesion visualization and interpretation, providing good interpretability for gynecologists to diagnose cervical diseases efficiently.</p>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"102469"},"PeriodicalIF":5.4000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.compmedimag.2024.102469","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Background and objective: Cervical cancer poses a major health threat to women globally. Optical coherence tomography (OCT) imaging has recently shown promise for non-invasive cervical lesion diagnosis. However, obtaining high-quality labeled cervical OCT images is challenging and time-consuming as they must correspond precisely with pathological results. The scarcity of such high-quality labeled data hinders the application of supervised deep-learning models in practical clinical settings. This study addresses the above challenge by proposing CMSwin, a novel self-supervised learning (SSL) framework combining masked image modeling (MIM) with contrastive learning based on the Swin-Transformer architecture to utilize abundant unlabeled cervical OCT images.

Methods: In this contrastive-MIM framework, mixed image encoding is combined with a latent contextual regressor to solve the inconsistency problem between pre-training and fine-tuning and to separate the encoder's feature extraction task from the decoder's reconstruction task, allowing the encoder to extract better image representations. In addition, contrastive losses at the patch and image levels are carefully designed to leverage massive amounts of unlabeled data.
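To make this kind of training objective concrete, the sketch below combines masked-patch reconstruction with patch-level and image-level contrastive terms in PyTorch. It is a minimal illustration, not the authors' CMSwin implementation: the PatchEncoder stand-in, the toy decoder, and all names (cmswin_style_loss, info_nce, mask_ratio) are hypothetical, and a real setup would use a Swin-Transformer encoder together with the paper's mixed image encoding and latent contextual regressor.

```python
# Minimal sketch (NOT the authors' code) of a combined contrastive + masked-image-modeling loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEncoder(nn.Module):
    """Toy stand-in for the Swin encoder: embeds 16x16 patches with a strided convolution."""
    def __init__(self, patch=16, dim=128):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                                   # x: (B, 3, H, W)
        return self.proj(x).flatten(2).transpose(1, 2)      # (B, N, dim) patch tokens


def info_nce(z1, z2, tau=0.2):
    """Image-level contrastive loss between two views (SimCLR-style InfoNCE)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                               # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def cmswin_style_loss(encoder, decoder, view1, view2, mask_ratio=0.6):
    """Masked-patch reconstruction plus patch- and image-level contrastive terms."""
    t1, t2 = encoder(view1), encoder(view2)                  # (B, N, D) token sequences
    B, N, _ = t1.shape

    # Randomly mask a subset of view-1 patch tokens and reconstruct them with the decoder.
    mask = torch.rand(B, N, device=t1.device) < mask_ratio   # (B, N) boolean mask
    recon = decoder(t1.masked_fill(mask.unsqueeze(-1), 0.0)) # (B, N, D)
    rec_loss = F.mse_loss(recon[mask], t1.detach()[mask])

    # Patch-level contrast: pull reconstructed tokens toward the clean tokens of view 2.
    patch_loss = 1 - F.cosine_similarity(recon[mask], t2.detach()[mask], dim=-1).mean()

    # Image-level contrast between the pooled representations of the two views.
    img_loss = info_nce(t1.mean(dim=1), t2.mean(dim=1))

    return rec_loss + patch_loss + img_loss


if __name__ == "__main__":
    enc = PatchEncoder()
    dec = nn.Sequential(nn.Linear(128, 128), nn.GELU(), nn.Linear(128, 128))
    # Placeholders for two augmented views of the same batch of OCT images.
    v1, v2 = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
    loss = cmswin_style_loss(enc, dec, v1, v2)
    loss.backward()
    print(float(loss))
```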

Results: We validated the superiority of CMSwin over state-of-the-art SSL approaches with five-fold cross-validation on an OCT image dataset covering 1,452 patients from a multi-center clinical study in China, plus two external validation sets from top-ranked Chinese hospitals: the Huaxi dataset from the West China Hospital of Sichuan University and the Xiangya dataset from the Xiangya Second Hospital of Central South University. A human-machine comparison experiment on the Huaxi and Xiangya datasets for volume-level binary classification also indicates that CMSwin can match or exceed the average performance of four skilled medical experts, especially in identifying high-risk cervical lesions.
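For readers reproducing this kind of evaluation protocol, the short sketch below shows patient-level five-fold cross-validation, which keeps all images from a given patient in a single fold to avoid leakage. It is an assumption-laden illustration rather than the study's actual evaluation code; the arrays (patient_ids, labels, features) are synthetic placeholders.

```python
# Sketch of patient-level five-fold cross-validation (not the study's evaluation code).
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images = 5000
patient_ids = rng.integers(0, 1452, size=n_images)   # 1,452 patients, as in the study
labels = rng.integers(0, 2, size=n_images)           # placeholder binary risk labels
features = rng.normal(size=(n_images, 16))           # placeholder image features

cv = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(cv.split(features, labels, groups=patient_ids)):
    # Verify that no patient appears in both the training and test split of a fold.
    overlap = set(patient_ids[train_idx]) & set(patient_ids[test_idx])
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}, shared patients={len(overlap)}")
```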

Conclusion: Our work has great potential to assist gynecologists in intelligently interpreting cervical OCT images in clinical settings. Additionally, the integrated GradCAM module of CMSwin enables cervical lesion visualization and interpretation, giving gynecologists the interpretability needed to diagnose cervical diseases efficiently.
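As an illustration of the kind of saliency visualization a GradCAM module produces, the sketch below implements a generic Grad-CAM pass in PyTorch using hooks. It is not the CMSwin module itself: the ResNet-18 backbone, the chosen target layer, and the random input image are placeholders for demonstration only.

```python
# Generic Grad-CAM sketch (illustrative, not the CMSwin module).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18  # stand-in backbone for demonstration


def grad_cam(model, target_layer, image, class_idx=None):
    """Return an (H, W) heatmap highlighting regions that drive the predicted class."""
    feats, grads = {}, {}

    def fwd_hook(_module, _inputs, output):
        feats["value"] = output
        # Capture the gradient flowing back into this feature map during backward().
        output.register_hook(lambda g: grads.update(value=g))

    handle = target_layer.register_forward_hook(fwd_hook)
    logits = model(image)                                   # (1, num_classes)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    handle.remove()

    # Channel weights = spatially averaged gradients; CAM = weighted sum of feature maps.
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * feats["value"]).sum(dim=1))      # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam.detach()


if __name__ == "__main__":
    model = resnet18(weights=None).eval()
    img = torch.randn(1, 3, 224, 224)                        # placeholder input image
    heatmap = grad_cam(model, model.layer4[-1], img)
    print(heatmap.shape)                                     # torch.Size([224, 224])
```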

Source journal metrics: CiteScore 10.70; self-citation rate 3.50%; articles published 71; review time 26 days.
About the journal: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. The journal publishes articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.
Latest articles in this journal:
Single color digital H&E staining with In-and-Out Net.
Cervical OCT image classification using contrastive masked autoencoders with Swin Transformer.
Circumpapillary OCT-based multi-sector analysis of retinal layer thickness in patients with glaucoma and high myopia.
Dual attention model with reinforcement learning for classification of histology whole-slide images.
CIS-UNet: Multi-class segmentation of the aorta in computed tomography angiography via context-aware shifted window self-attention.