Title: Cross-attention based dual-similarity network for few-shot learning
Authors: Chan Sim, Gyeonghwan Kim
DOI: 10.1016/j.patrec.2024.08.019
Journal: Pattern Recognition Letters, Volume 186, Pages 1-6 (Q2, Computer Science, Artificial Intelligence)
Published: 2024-08-28
URL: https://www.sciencedirect.com/science/article/pii/S0167865524002514
Citations: 0
Abstract
Few-shot classification is the challenging task of recognizing unseen classes from limited data. Following the success of the Vision Transformer across large-scale image recognition domains, recent few-shot classification methods have adopted transformer-style architectures. However, most of them focus only on cross-attention between support and query sets, considering mainly channel-similarity. To address this issue, we introduce the dual-similarity network (DSN), in which attention maps for the same target within a class are made identical. With this network, we seek effective training through the integration of channel-similarity and map-similarity. Our method, while focused on N-way K-shot scenarios, also demonstrates strong performance in 1-shot settings through augmentation. The experimental results verify the effectiveness of DSN on widely used benchmark datasets.
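To make the cross-attention setting concrete, the following is a minimal NumPy sketch of computing an attention map between flattened support and query feature maps and deriving a channel-wise (cosine) similarity from the attended features. This is an illustrative assumption about the general mechanism the abstract refers to, not the authors' DSN implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_similarity(support, query):
    """Illustrative cross-attention between two feature maps.

    support: (hw_s, c) support-image features, flattened over spatial positions.
    query:   (hw_q, c) query-image features.
    Returns (attn, channel_sim):
      attn        -- (hw_q, hw_s) attention map (rows sum to 1),
      channel_sim -- scalar cosine similarity between the attended support
                     features and the query features (a channel-similarity).
    """
    c = support.shape[1]
    # Scaled dot-product attention of query positions over support positions.
    attn = softmax(query @ support.T / np.sqrt(c), axis=-1)
    # Re-express each query position as a mixture of support features.
    attended = attn @ support
    # Cosine similarity between attended and original query features.
    num = float((attended * query).sum())
    den = float(np.linalg.norm(attended) * np.linalg.norm(query))
    return attn, num / den
```

In a method such as the one described, a map-similarity term would additionally constrain the attention maps themselves (e.g., encouraging maps for the same target within a class to agree), whereas the scalar above captures only the channel-similarity side.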
Journal introduction:
Pattern Recognition Letters aims at the rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.