{"title":"Bilinear-experts network with self-adaptive sampler for long-tailed visual recognition","authors":"Qin Wang , Sam Kwong , Xizhao Wang","doi":"10.1016/j.neucom.2025.129832","DOIUrl":null,"url":null,"abstract":"<div><div>Long-tail distributed data hinders the practical application of state-of-the-art deep models in computer vision. Consequently, exclusive methodologies for handling the long-tailed problem are proposed, focusing on different hierarchies. For embedding hierarchy, existing works manually augment the diversity of tail-class features for specific datasets. However, prior knowledge about datasets is not always available for practical use, which brings unsatisfactory generalization ability in human fine-turned augmentation under such circumstances. To figure out this problem, we introduce a novel model named Bilinear-Experts Network (BENet) with Self-Adaptive Sampler (SAS). This model leverages model-driven perturbations to tail-class embeddings while preserving generalization capability on head classes through a designed bilinear experts system. The designed perturbations adaptively augment tail-class space and shift the class boundary away from the tail-class centers. Moreover, we find that SAS automatically assigns more significant perturbations to specific tail classes with relatively fewer training samples, which indicates SAS is capable of filtering tail classes with lower quality and enhancing them. Also, experiments conducted across various long-tailed benchmarks validate the comparable performance of the proposed BENet.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129832"},"PeriodicalIF":5.5000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225005041","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Long-tail distributed data hinders the practical application of state-of-the-art deep models in computer vision. Consequently, dedicated methodologies for handling the long-tailed problem have been proposed, targeting different hierarchies. At the embedding hierarchy, existing works manually augment the diversity of tail-class features for specific datasets. However, prior knowledge about a dataset is not always available in practice, and such hand-tuned augmentation then generalizes poorly. To address this problem, we introduce a novel model named Bilinear-Experts Network (BENet) with a Self-Adaptive Sampler (SAS). The model applies model-driven perturbations to tail-class embeddings while preserving generalization on head classes through a designed bilinear experts system. The designed perturbations adaptively enlarge the tail-class space and push the class boundary away from the tail-class centers. Moreover, we find that SAS automatically assigns larger perturbations to tail classes with relatively fewer training samples, indicating that SAS can identify lower-quality tail classes and enhance them. Experiments conducted across various long-tailed benchmarks validate the competitive performance of the proposed BENet.
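The abstract does not give implementation details, but the core idea it attributes to SAS — perturbing tail-class embeddings more strongly the fewer training samples a class has — can be illustrated with a minimal sketch. The snippet below is hypothetical and is not the authors' code: it stands in for the paper's model-driven perturbation with simple inverse-frequency-scaled Gaussian noise, and the names and parameters (SelfAdaptiveSampler, base_scale) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SelfAdaptiveSampler(nn.Module):
    """Hypothetical sketch of the idea in the abstract: rarer classes
    receive larger embedding perturbations; head classes are barely moved."""

    def __init__(self, class_counts, base_scale=0.1):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float32)
        # Inverse-frequency weighting: the rarest class gets the full base_scale,
        # more frequent classes get proportionally smaller perturbation budgets.
        weights = counts.max() / counts
        self.register_buffer("scales", base_scale * weights / weights.max())

    def forward(self, embeddings, labels):
        # Gaussian noise scaled per class to diversify tail-class embeddings.
        noise = torch.randn_like(embeddings)
        per_sample_scale = self.scales[labels].unsqueeze(1)
        return embeddings + per_sample_scale * noise

# Example usage with an assumed 3-class long-tailed count vector:
sampler = SelfAdaptiveSampler(class_counts=[5000, 500, 50])
feats = torch.randn(8, 128)
labels = torch.randint(0, 3, (8,))
perturbed = sampler(feats, labels)
```

In the paper the perturbation is learned jointly with the bilinear experts rather than fixed by class frequency; the sketch only conveys the qualitative behavior described in the abstract.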
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.