ASCL: Accelerating semi-supervised learning via contrastive learning
Haixiong Liu, Zuoyong Li, Jiawei Wu, Kun Zeng, Rong Hu, Wei Zeng
Concurrency and Computation: Practice and Experience, 36(28), published 2024-10-08. DOI: 10.1002/cpe.8293
Citations: 0
Abstract
Semi-supervised learning (SSL), which leverages both labeled and unlabeled data to improve model performance, is widely used in machine learning. SSL aims to optimize class mutual information, but because labels are scarce, noisy pseudo-labels introduce false class information. Consequently, SSL algorithms often require substantial training time to iteratively refine pseudo-labels and improve performance. To tackle this challenge, we propose a novel plug-and-play method named Accelerating semi-supervised learning via contrastive learning (ASCL). The method combines contrastive learning with uncertainty-based selection to improve performance and accelerate the convergence of SSL algorithms. Contrastive learning initially emphasizes the mutual information between samples to reduce dependence on pseudo-labels, and then gradually shifts to maximizing the mutual information between classes, in line with the objective of semi-supervised learning. Uncertainty-based selection provides a robust mechanism for acquiring pseudo-labels. Together, the contrastive learning module and the uncertainty-based selection module form a virtuous cycle that improves the performance of the proposed model. Extensive experiments demonstrate that ASCL outperforms state-of-the-art methods in both convergence efficiency and accuracy. In the scenario where only one label is assigned per class on the CIFAR-10 dataset, applying ASCL to Pseudo-Label, UDA (unsupervised data augmentation for consistency training), and FixMatch yields substantial gains in classification accuracy of 16.32%, 6.9%, and 24.43%, respectively, over the original methods. Moreover, the required training time is reduced by almost 50%.
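The abstract describes two ingredients: uncertainty-based selection of pseudo-labels and a contrastive term whose emphasis shifts from sample-level to class-level mutual information as training progresses. The following is a minimal sketch of how such a combination could be wired together in PyTorch; it is not the authors' implementation, and the function names, confidence threshold, and linear schedule are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's code): confidence-thresholded
# pseudo-labels plus a contrastive loss whose weight shifts from an
# instance-level term to a class-level (pseudo-label) term over training.
import torch
import torch.nn.functional as F


def select_confident_pseudo_labels(logits, threshold=0.95):
    """Keep predictions whose max softmax probability exceeds a threshold
    (a simple uncertainty proxy); return pseudo-labels and a boolean mask."""
    probs = logits.softmax(dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    mask = confidence >= threshold
    return pseudo_labels, mask


def info_nce(z_a, z_b, temperature=0.1):
    """Instance-level contrastive (InfoNCE) loss between two augmented views."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # pairwise similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


def unlabeled_loss(logits_weak, logits_strong, z_weak, z_strong,
                   progress, threshold=0.95):
    """progress in [0, 1]: early training leans on the instance-level
    contrastive term, later training on the class-level pseudo-label term."""
    pseudo_labels, mask = select_confident_pseudo_labels(logits_weak, threshold)
    if mask.any():
        class_loss = F.cross_entropy(logits_strong[mask], pseudo_labels[mask])
    else:
        class_loss = logits_strong.sum() * 0.0    # no confident samples yet
    instance_loss = info_nce(z_weak, z_strong)
    return (1.0 - progress) * instance_loss + progress * class_loss
```

Under this reading, the two modules reinforce each other: early instance-level contrast shapes representations before pseudo-labels can be trusted, and the confidence mask then admits more reliable pseudo-labels as training advances.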
Journal Introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.