Multi-view cross-consistency and multi-scale cross-layer contrastive learning for semi-supervised medical image segmentation

IF 7.5 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Zone 1 (Computer Science) · Expert Systems with Applications, Volume 277, Article 127223 · Pub Date: 2025-06-05 (Epub: 2025-03-17) · DOI: 10.1016/j.eswa.2025.127223
Xunhang Cao , Xiaoxin Guo , Guangqi Yang , Qi Chen , Hongliang Dong

Abstract

In semi-supervised learning (SSL), consistency learning and contrastive learning are key strategies for improving model performance: the former encourages the model to maintain consistent predictions across different views and perturbations, whereas the latter helps the model learn more discriminative feature representations. By combining their complementary characteristics, the model's performance can be further enhanced. In this paper, a novel semi-supervised medical image segmentation model is proposed, based on multi-view cross-consistency and multi-scale cross-layer contrastive learning (MCMCCL). The former reduces inconsistencies in predictions across different views and explores a broader perturbation space, while the latter enhances the richness and diversity of the extracted features; their complementary characteristics are crucial for improving segmentation performance. A novel image-level consistency based on cross bidirectional copy-paste (CBCP) is proposed to enhance the model's ability to capture the overall distribution of unlabeled data. A multi-scale cross-layer contrastive learning (MCCL) strategy is proposed to allow the model to learn meaningful feature representations across multi-scale feature maps without relying on negative samples. A CNN and a Transformer are jointly trained via cross-teaching to seamlessly integrate both strategies and further enhance performance. Experiments conducted on three publicly available medical image datasets, ACDC, PROMISE12, and LiTS, confirm the effectiveness of the model. In particular, on the PROMISE12 dataset with 20% labeled samples, the model achieves a Dice score only 0.47 lower than the fully supervised approach, significantly narrowing the gap between semi-supervised and fully supervised learning. Our code is available at https://github.com/caoxh23/MCMCCL.
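To make the two core ingredients of the abstract concrete, the sketch below illustrates (a) bidirectional copy-paste mixing between a labeled and an unlabeled image with a single shared mask, in the spirit of the CBCP image-level consistency, and (b) a negative-sample-free cosine consistency loss between two feature maps, in the spirit of MCCL. This is a minimal NumPy sketch under assumed shapes and a hypothetical centered-rectangle mask; the paper's actual masking scheme, projection heads, and multi-scale pairing may differ — see the authors' repository for the real implementation.

```python
import numpy as np

def center_mask(h, w, ratio=0.5):
    """Binary mask that is 1 inside a centered rectangle covering `ratio` of each side.
    (Hypothetical choice of mask; the paper may sample masks differently.)"""
    m = np.zeros((h, w), dtype=np.float32)
    ph, pw = int(h * ratio), int(w * ratio)
    y0, x0 = (h - ph) // 2, (w - pw) // 2
    m[y0:y0 + ph, x0:x0 + pw] = 1.0
    return m

def bidirectional_copy_paste(x_lab, x_unl, ratio=0.5):
    """Mix a labeled and an unlabeled image in both directions with one shared mask.

    `inward` pastes the unlabeled patch into the labeled background; `outward`
    does the reverse. The same mask must be applied to the corresponding
    ground-truth / pseudo-label maps to build the mixed supervision targets.
    """
    h, w = x_lab.shape[-2:]
    m = center_mask(h, w, ratio)
    inward = x_unl * m + x_lab * (1.0 - m)
    outward = x_lab * m + x_unl * (1.0 - m)
    return inward, outward, m

def negative_free_consistency(f_a, f_b, eps=1e-8):
    """Cosine-distance loss between two feature tensors, using no negative pairs.

    Flattens everything after the batch dimension, L2-normalizes, and returns
    the mean of 1 - cos(f_a, f_b) over the batch.
    """
    a = f_a.reshape(f_a.shape[0], -1)
    b = f_b.reshape(f_b.shape[0], -1)
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))
```

In a training loop, the two mixed images would be fed to the CNN and Transformer branches, with each branch's predictions supervised by the mask-mixed labels and the other branch's pseudo-labels (cross-teaching), while the consistency loss ties together feature maps drawn from different decoder layers or scales.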
Source journal: Expert Systems with Applications (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 13.80
Self-citation rate: 10.60%
Articles per year: 2045
Review time: 8.7 months
About the journal: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.