{"title":"Progressive Learning Strategy for Few-Shot Class-Incremental Learning","authors":"Kai Hu;Yunjiang Wang;Yuan Zhang;Xieping Gao","doi":"10.1109/TCYB.2025.3525724","DOIUrl":null,"url":null,"abstract":"The goal of few-shot class incremental learning (FSCIL) is to learn new concepts from a limited number of novel samples while preserving the knowledge of previously learned classes. The mainstream FSCIL framework begins with training in the base session, after which the feature extractor is frozen to accommodate novel classes. We observed that traditional base-session training approaches often lead to overfitting on challenging samples, which can lead to reduced robustness in the decision boundaries and exacerbate the forgetting phenomenon when introducing incremental data. To address this issue, we proposed the progressive learning strategy (PGLS). First, inspired by curriculum learning, we developed a covariance noise perturbation approach based on the statistical information as a difficulty measure for assessing sample robustness. We then reweighted the samples based on their robustness, initially concentrating on enhancing model stability by prioritizing robust samples and subsequently leveraging weakly robust samples to improve generalization. Second, we predefined forward compatibility for various virtual class augmentation models. Within base class training, we employed a curriculum learning strategy that progressively introduced fewer to more virtual classes in order to mitigate any adverse effects on model performance. This strategy enhances the adaptability of base classes to novel ones and alleviates forgetting problems. Finally, extensive experiments conducted on the CUB200, CIFAR100, and miniImageNet datasets demonstrate the significant advantages of our proposed method over state-of-the-art models.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 3","pages":"1210-1223"},"PeriodicalIF":9.4000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10849630/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
The goal of few-shot class-incremental learning (FSCIL) is to learn new concepts from a limited number of novel samples while preserving the knowledge of previously learned classes. The mainstream FSCIL framework begins with training in the base session, after which the feature extractor is frozen to accommodate novel classes. We observe that traditional base-session training approaches often overfit on challenging samples, which reduces the robustness of the decision boundaries and exacerbates forgetting when incremental data are introduced. To address this issue, we propose a progressive learning strategy (PGLS). First, inspired by curriculum learning, we develop a covariance noise perturbation approach based on class statistics as a difficulty measure for assessing sample robustness. We then reweight the samples according to their robustness, initially prioritizing robust samples to enhance model stability and subsequently leveraging weakly robust samples to improve generalization. Second, we predefine forward compatibility for the model through virtual class augmentation. Within base-class training, we employ a curriculum learning strategy that progressively introduces virtual classes, from few to many, to mitigate any adverse effects on model performance. This strategy enhances the adaptability of base classes to novel ones and alleviates forgetting. Finally, extensive experiments on the CUB200, CIFAR100, and miniImageNet datasets demonstrate the significant advantages of our proposed method over state-of-the-art models.
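The abstract describes two curriculum-style components: a covariance-noise robustness score that drives sample reweighting, and a schedule that gradually introduces virtual classes for forward compatibility. The sketch below illustrates one way such a pipeline could look in PyTorch. It is an assumption for illustration only: the perturbation-consistency score, the linear weighting and virtual-class schedules, and names such as `robustness_scores` and `curriculum_weights` are not taken from the paper.

```python
# Illustrative sketch only; not the paper's exact PGLS recipe.
import torch
import torch.nn.functional as F


def class_covariances(features, labels, num_classes, eps=1e-4):
    """Per-class feature covariance estimated from base-session features."""
    covs = []
    for c in range(num_classes):
        fc = features[labels == c]
        fc = fc - fc.mean(dim=0, keepdim=True)
        cov = fc.t() @ fc / max(len(fc) - 1, 1)
        covs.append(cov + eps * torch.eye(fc.size(1), device=features.device))
    return torch.stack(covs)  # (C, D, D)


def robustness_scores(model_head, features, labels, covs, n_draws=8, scale=0.1):
    """Fraction of covariance-noise perturbations that leave the prediction unchanged."""
    dist = torch.distributions.MultivariateNormal(
        torch.zeros(features.size(1), device=features.device),
        covariance_matrix=covs[labels] * scale,
    )
    keep = torch.zeros(len(features), device=features.device)
    with torch.no_grad():
        for _ in range(n_draws):
            noisy = features + dist.sample()
            keep += (model_head(noisy).argmax(dim=1) == labels).float()
    return keep / n_draws  # in [0, 1]; high = robust, low = weakly robust


def curriculum_weights(scores, epoch, total_epochs):
    """Early epochs emphasize robust samples; later epochs shift toward weakly robust ones."""
    t = epoch / max(total_epochs - 1, 1)  # 0 -> 1 over training
    return (1.0 - t) * scores + t * (1.0 - scores)


def weighted_ce(logits, labels, weights):
    """Cross-entropy with per-sample curriculum weights."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()


def num_virtual_classes(epoch, total_epochs, max_virtual):
    """Curriculum over forward compatibility: ramp the virtual-class count from few to many."""
    return int(round(max_virtual * epoch / max(total_epochs - 1, 1)))
```

In a base-session training loop, such scores would typically be refreshed with the current feature extractor every few epochs, and the weighted loss on real classes would be combined with a loss over the currently active virtual classes; how the virtual classes themselves are constructed is left open here.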
Journal overview:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.