{"title":"BRAVE:一种具有样本关注度的级联生成模型,用于对少量图像进行稳健分类","authors":"","doi":"10.1016/j.neucom.2024.128585","DOIUrl":null,"url":null,"abstract":"<div><p>Few-shot learning (FSL) confronts notable challenges due to the disparity between training and testing categories, leading to channel bias in neural networks and hindering accurate feature discernment. To address this, we introduce Biased-Reduction Attentive Network (BRAVE), an innovative model that incorporates a refined Vector Quantized Variational Autoencoder (VQ-VAE) backbone, enhanced with our Diverse Quantization (DQ) Module, for unbiased, fine-grained feature creation. Alongside, our Sample Attention (SA) Module is utilized for extracting discriminative features from these unbiased, fine-grained features. The DQ Module in BRAVE strategically integrates prior distribution regularization and stochastic masking with Gumbel sampling for balanced and diverse codebook engagement, while the SA Module leverages inter-sample dynamics for identifying critical features. This synergy effectively counters channel bias and boosts classification accuracy in FSL setups, surpassing current leading methods. Our approach represents a practical balance between preserving detailed features through the decoder and ensuring classification effectiveness, marking a significant advance in FSL. BRAVE’s implementation is accessible for community use and further exploration. 
Code and models available at <span><span>https://github.com/ApocalypsezZ/BRAVE</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"BRAVE: A cascaded generative model with sample attention for robust few shot image classification\",\"authors\":\"\",\"doi\":\"10.1016/j.neucom.2024.128585\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Few-shot learning (FSL) confronts notable challenges due to the disparity between training and testing categories, leading to channel bias in neural networks and hindering accurate feature discernment. To address this, we introduce Biased-Reduction Attentive Network (BRAVE), an innovative model that incorporates a refined Vector Quantized Variational Autoencoder (VQ-VAE) backbone, enhanced with our Diverse Quantization (DQ) Module, for unbiased, fine-grained feature creation. Alongside, our Sample Attention (SA) Module is utilized for extracting discriminative features from these unbiased, fine-grained features. The DQ Module in BRAVE strategically integrates prior distribution regularization and stochastic masking with Gumbel sampling for balanced and diverse codebook engagement, while the SA Module leverages inter-sample dynamics for identifying critical features. This synergy effectively counters channel bias and boosts classification accuracy in FSL setups, surpassing current leading methods. Our approach represents a practical balance between preserving detailed features through the decoder and ensuring classification effectiveness, marking a significant advance in FSL. BRAVE’s implementation is accessible for community use and further exploration. 
Code and models available at <span><span>https://github.com/ApocalypsezZ/BRAVE</span><svg><path></path></svg></span>.</p></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224013560\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224013560","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
BRAVE: A cascaded generative model with sample attention for robust few shot image classification
Few-shot learning (FSL) confronts notable challenges due to the disparity between training and testing categories, which leads to channel bias in neural networks and hinders accurate feature discernment. To address this, we introduce the Biased-Reduction Attentive Network (BRAVE), a model that combines a refined Vector Quantized Variational Autoencoder (VQ-VAE) backbone, enhanced with our Diverse Quantization (DQ) Module, for unbiased, fine-grained feature creation. In addition, our Sample Attention (SA) Module extracts discriminative features from these unbiased, fine-grained representations. The DQ Module integrates prior distribution regularization and stochastic masking with Gumbel sampling for balanced and diverse codebook engagement, while the SA Module leverages inter-sample dynamics to identify critical features. This synergy effectively counters channel bias and boosts classification accuracy in FSL setups, surpassing current leading methods. Our approach strikes a practical balance between preserving detailed features through the decoder and ensuring classification effectiveness, marking a significant advance in FSL. BRAVE’s implementation is available for community use and further exploration. Code and models available at https://github.com/ApocalypsezZ/BRAVE.
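To make the DQ Module's codebook-selection idea concrete, the sketch below illustrates Gumbel sampling with stochastic masking in plain numpy. This is a hypothetical simplification, not the authors' implementation: the function name, the use of negative squared distances as logits, and the `mask_prob` parameter are all assumptions for illustration. The point is how Gumbel noise makes code selection stochastic (so soft assignments spread over the codebook) and how randomly masking codes forces engagement of otherwise under-used entries.

```python
import numpy as np

def gumbel_quantize(z, codebook, tau=1.0, mask_prob=0.1, seed=None):
    """Illustrative Gumbel-based codebook selection with stochastic masking.

    z        : (d,) encoder feature vector.
    codebook : (K, d) array of code vectors.
    tau      : temperature; lower values make selection nearly deterministic.
    mask_prob: probability of temporarily excluding each code (assumed knob,
               standing in for the paper's stochastic masking).
    Returns the sampled code index and the corresponding code vector.
    """
    rng = np.random.default_rng(seed)
    K = codebook.shape[0]
    # Closer codes get higher logits (negative squared Euclidean distance).
    logits = -np.sum((codebook - z) ** 2, axis=1)
    # Stochastic masking: exclude a random subset of codes this step.
    mask = rng.random(K) < mask_prob
    if mask.all():                      # always keep at least one code usable
        mask[rng.integers(K)] = False
    logits = np.where(mask, -np.inf, logits)
    # Gumbel-max trick: argmax(logits/tau + Gumbel noise) draws a sample
    # from the softmax distribution over the unmasked codes.
    u = rng.random(K)
    gumbel = -np.log(-np.log(u + 1e-12) + 1e-12)
    idx = int(np.argmax(logits / tau + gumbel))
    return idx, codebook[idx]
```

With a low temperature and no masking the selection collapses to nearest-neighbor quantization, while higher temperatures and masking spread usage across the codebook, which is the "balanced and diverse codebook engagement" the abstract describes.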
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.