Multi-view cross-consistency and multi-scale cross-layer contrastive learning for semi-supervised medical image segmentation
Xunhang Cao, Xiaoxin Guo, Guangqi Yang, Qi Chen, Hongliang Dong
Expert Systems with Applications, Volume 277, Article 127223 (published 2025-03-17). DOI: 10.1016/j.eswa.2025.127223
Citations: 0
Abstract
In semi-supervised learning (SSL), consistency learning and contrastive learning are key strategies for improving model performance: the former encourages the model to maintain consistent predictions across different views and perturbations, whereas the latter helps the model learn more discriminative feature representations. Combining their complementary characteristics can further enhance the model's performance. In this paper, a novel semi-supervised medical image segmentation model is proposed, based on multi-view cross-consistency and multi-scale cross-layer contrastive learning (MCMCCL). The former reduces inconsistencies in predictions across different views and explores a broader perturbation space, while the latter enhances the richness and diversity of extracted features. Their complementarity is crucial for improving segmentation performance. A novel image-level consistency strategy based on cross bidirectional copy-paste (CBCP) is proposed to enhance the model's ability to capture the overall distribution of unlabeled data. A multi-scale cross-layer contrastive learning (MCCL) strategy is proposed to allow the model to learn meaningful feature representations across multi-scale feature maps without relying on negative samples. The CNN and Transformer models are jointly trained via cross-teaching to seamlessly integrate both strategies and further enhance performance. Experiments are conducted on three publicly available medical image datasets, ACDC, PROMISE12, and LiTS, and the results confirm the effectiveness of our model. In particular, on the PROMISE12 dataset with 20% labeled samples, the model achieves a Dice score only 0.47 lower than the fully supervised approach, significantly narrowing the gap between semi-supervised and fully supervised learning. Our code is available at https://github.com/caoxh23/MCMCCL.
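To illustrate the copy-paste idea underlying CBCP, below is a minimal PyTorch sketch of generic bidirectional copy-paste mixing between a labeled and an unlabeled image. The function name, the rectangular mask, and the `mask_ratio` parameter are assumptions for illustration only; the paper's "cross" variant pairs views and branches in a specific way defined in the original work and repository.

```python
import torch

def bidirectional_copy_paste(labeled_img, unlabeled_img, mask_ratio=0.5):
    """A sketch of generic bidirectional copy-paste mixing (illustrative;
    the paper's CBCP crossing scheme may differ in detail).

    A rectangular region is cut from one image and pasted into the other,
    in both directions. Supervision for the mixed images can then be
    composed from ground-truth labels and pseudo-labels using the same mask.
    """
    _, _, H, W = labeled_img.shape
    # Binary mask: 1 inside the pasted rectangle, 0 elsewhere.
    mask = torch.zeros(1, 1, H, W, device=labeled_img.device)
    h, w = int(H * mask_ratio), int(W * mask_ratio)
    y0 = torch.randint(0, H - h + 1, (1,)).item()
    x0 = torch.randint(0, W - w + 1, (1,)).item()
    mask[..., y0:y0 + h, x0:x0 + w] = 1.0

    # Unlabeled content pasted onto the labeled image, and vice versa.
    mixed_l2u = unlabeled_img * mask + labeled_img * (1 - mask)
    mixed_u2l = labeled_img * mask + unlabeled_img * (1 - mask)
    return mixed_l2u, mixed_u2l, mask
```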
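Likewise, a contrastive objective that needs no negative samples can be sketched as a cosine-alignment loss applied at several feature scales. This is only a hedged approximation of MCCL: the helper name, the global-average pooling, and the per-scale pairing are illustrative assumptions, not the authors' exact cross-layer formulation.

```python
import torch
import torch.nn.functional as F

def negative_free_multiscale_loss(feats_a, feats_b):
    """A sketch of a negative-sample-free alignment loss over multi-scale
    feature maps (illustrative; MCCL's cross-layer pairing is defined in
    the original paper).

    feats_a, feats_b: lists of feature maps [B, C_i, H_i, W_i] from two
    branches or views, one entry per scale with matching channel counts.
    Each pair is pooled to a vector and aligned by cosine similarity,
    so no negative pairs are required.
    """
    loss = 0.0
    for fa, fb in zip(feats_a, feats_b):
        za = F.normalize(fa.mean(dim=(2, 3)), dim=1)  # global-average-pool + L2 norm
        zb = F.normalize(fb.mean(dim=(2, 3)), dim=1)
        loss = loss + (1.0 - (za * zb).sum(dim=1)).mean()  # 1 - cosine similarity
    return loss / len(feats_a)
```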
About the journal
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.