Multi-view cross-consistency and multi-scale cross-layer contrastive learning for semi-supervised medical image segmentation

Expert Systems with Applications · IF 7.5 · JCR Q1 (Computer Science, Artificial Intelligence) · Region 1 (Computer Science) · Pub Date: 2025-03-17 · DOI: 10.1016/j.eswa.2025.127223
Xunhang Cao, Xiaoxin Guo, Guangqi Yang, Qi Chen, Hongliang Dong
{"title":"Multi-view cross-consistency and multi-scale cross-layer contrastive learning for semi-supervised medical image segmentation","authors":"Xunhang Cao ,&nbsp;Xiaoxin Guo ,&nbsp;Guangqi Yang ,&nbsp;Qi Chen ,&nbsp;Hongliang Dong","doi":"10.1016/j.eswa.2025.127223","DOIUrl":null,"url":null,"abstract":"<div><div>In semi-supervised learning (SSL), consistency learning and contrastive learning are key strategies for improving model performance, where the former encourages the model to maintain consistent predictions across different views and perturbations, whereas the latter helps the model learn more discriminative feature representations. By combining the complementary characteristics of consistency learning and contrastive learning, the model’s performance can be further enhanced. In this paper, a novel semi-supervised medical image segmentation model is proposed, based on multi-view cross-consistency and multi-scale cross-layer contrastive learning (MCMCCL). The former can reduce inconsistencies in predictions across different views and explore a broader perturbation space, while the latter can enhance the richness and diversity of extracted features. Their complementary characteristics is crucial for enhancing the segmentation performance. The novel image-level consistency based on cross bidirectional copy-paste (CBCP) is proposed to enhance the model’s ability to capture the overall distribution of unlabeled data. The multi-scale cross-layer contrastive learning (MCCL) strategy is proposed to allow the model to learn meaningful feature representations without relying on negative samples across multi-scale feature maps. The CNN and Transformer models are jointly trained using cross-teaching to seamlessly integrate both strategies and further enhance the model’s performance. The experiments are conducted on three publicly available medical image datasets, ACDC, PROMISE12, and LiTS, and the results confirm the effectiveness of our model. In particular, the experiment conducted on the PROMISE12 dataset with 20% labeled samples achieves a Dice score only 0.47 lower than the fully supervised approach, significantly narrowing the gap between semi-supervised and fully supervised learning. Our code is available at <span><span>https://github.com/caoxh23/MCMCCL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"277 ","pages":"Article 127223"},"PeriodicalIF":7.5000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425008450","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In semi-supervised learning (SSL), consistency learning and contrastive learning are key strategies for improving model performance: the former encourages the model to maintain consistent predictions across different views and perturbations, whereas the latter helps the model learn more discriminative feature representations. Combining the complementary characteristics of consistency learning and contrastive learning can further enhance model performance. In this paper, a novel semi-supervised medical image segmentation model is proposed, based on multi-view cross-consistency and multi-scale cross-layer contrastive learning (MCMCCL). The former reduces inconsistencies in predictions across different views and explores a broader perturbation space, while the latter enhances the richness and diversity of the extracted features. This complementarity is crucial for improving segmentation performance. A novel image-level consistency scheme based on cross bidirectional copy-paste (CBCP) is proposed to enhance the model's ability to capture the overall distribution of unlabeled data. A multi-scale cross-layer contrastive learning (MCCL) strategy is proposed that allows the model to learn meaningful feature representations across multi-scale feature maps without relying on negative samples. The CNN and Transformer models are jointly trained using cross-teaching to seamlessly integrate both strategies and further enhance performance. Experiments are conducted on three publicly available medical image datasets, ACDC, PROMISE12, and LiTS, and the results confirm the effectiveness of our model. In particular, on the PROMISE12 dataset with 20% labeled samples, the model achieves a Dice score only 0.47 lower than the fully supervised approach, significantly narrowing the gap between semi-supervised and fully supervised learning. Our code is available at https://github.com/caoxh23/MCMCCL.
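
The abstract names three mechanisms: image-level mixing of labeled and unlabeled data via cross bidirectional copy-paste, a contrastive term over multi-scale features that needs no negative samples, and cross-teaching between a CNN branch and a Transformer branch. The sketch below is a minimal PyTorch illustration of these ideas, not the authors' MCMCCL implementation (which is in the linked repository); the mask ratio, tensor shapes, and all function names are assumptions introduced here for illustration.

```python
# Illustrative sketch only; the paper's CBCP/MCCL details may differ.
import torch
import torch.nn.functional as F


def bidirectional_copy_paste(x_l, x_u, mask_ratio=0.5):
    """Paste a rectangular region of the unlabeled image into the labeled one
    and vice versa, producing two mixed views (a common copy-paste scheme)."""
    _, _, h, w = x_l.shape
    mh, mw = int(h * mask_ratio), int(w * mask_ratio)
    top = torch.randint(0, h - mh + 1, (1,)).item()
    left = torch.randint(0, w - mw + 1, (1,)).item()
    mask = torch.zeros(1, 1, h, w, device=x_l.device)
    mask[..., top:top + mh, left:left + mw] = 1.0
    x_l2u = x_l * (1 - mask) + x_u * mask   # unlabeled patch pasted into labeled image
    x_u2l = x_u * (1 - mask) + x_l * mask   # labeled patch pasted into unlabeled image
    return x_l2u, x_u2l, mask


def negative_free_contrastive(feats_a, feats_b):
    """Negative-sample-free contrastive loss: maximize cosine similarity between
    the two branches' features at every scale (BYOL/SimSiam-style). Here features
    are flattened globally for simplicity; the paper's MCCL may project them."""
    loss = 0.0
    for fa, fb in zip(feats_a, feats_b):
        fa = F.normalize(fa.flatten(1), dim=1)
        fb = F.normalize(fb.flatten(1), dim=1)
        loss = loss + (2 - 2 * (fa * fb).sum(dim=1)).mean()
    return loss / len(feats_a)


def cross_teaching(logits_cnn, logits_trans):
    """Cross-teaching: each branch is supervised by the other's hard pseudo-labels."""
    pl_cnn = logits_cnn.argmax(dim=1).detach()
    pl_trans = logits_trans.argmax(dim=1).detach()
    return (F.cross_entropy(logits_cnn, pl_trans)
            + F.cross_entropy(logits_trans, pl_cnn))


if __name__ == "__main__":
    x_l = torch.rand(2, 1, 64, 64)   # labeled batch
    x_u = torch.rand(2, 1, 64, 64)   # unlabeled batch
    x_l2u, x_u2l, _ = bidirectional_copy_paste(x_l, x_u)

    # Dummy multi-scale features and logits standing in for the two networks.
    feats_cnn = [torch.rand(2, 32, s, s) for s in (32, 16, 8)]
    feats_trans = [torch.rand(2, 32, s, s) for s in (32, 16, 8)]
    logits_cnn = torch.rand(2, 4, 64, 64)
    logits_trans = torch.rand(2, 4, 64, 64)

    print(negative_free_contrastive(feats_cnn, feats_trans).item())
    print(cross_teaching(logits_cnn, logits_trans).item())
```

In a full training loop, the mixed views from the copy-paste step would be fed to both branches, with the contrastive and cross-teaching terms added to the supervised segmentation loss on labeled data; the weighting of these terms is not specified in the abstract.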
Source journal

Expert Systems with Applications (Engineering Technology / Engineering: Electrical & Electronic)

CiteScore: 13.80
Self-citation rate: 10.60%
Articles per year: 2045
Review time: 8.7 months

Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.