Pub Date: 2024-09-17 | DOI: 10.1109/TIP.2024.3458858
Xi Yang;Dechen Kong;Ren Lin;Nannan Wang;Xinbo Gao
Most few-shot learning methods employ either adaptive approaches or parameter amortization techniques. However, their reliance on pre-trained models presents a significant vulnerability: when an attacker’s trigger activates a hidden backdoor, it may cause images to be misclassified, profoundly degrading the model’s performance. In our research, we explore adaptive defenses against backdoor attacks for few-shot learning. We introduce a specialized stochastic process, tailored to task characteristics, that safeguards the classification model against attack-induced incorrect feature extraction. Because this process operates during forward propagation, we term it an “in-process defense.” Our method employs an adaptive strategy that generates task-level representations, enables rapid adaptation to pre-trained models, and effectively counters backdoor attacks in few-shot classification scenarios. We apply latent stochastic processes to approximate task distributions and derive task-level representations from the support set. This task-level representation guides feature extraction, causing backdoor triggers to mismatch their intended features and forming the foundation of our parameter defense strategy. Benchmark tests on Meta-Dataset reveal that our approach not only withstands backdoor attacks but also shows improved adaptation on few-shot classification tasks.
Title: Adapting Few-Shot Classification via In-Process Defense. IEEE Transactions on Image Processing, vol. 33, pp. 5232-5245.
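The abstract describes deriving a task-level representation from the support set via a latent stochastic process, then using it to guide feature extraction. A minimal sketch of that idea, assuming a FiLM-style feature modulation and the reparameterization trick for the latent task variable (all class and parameter names here are hypothetical, not from the paper):

```python
import torch
import torch.nn as nn

class TaskRepresentation(nn.Module):
    """Hypothetical sketch: amortize a latent task variable from the support set
    and use it to modulate backbone features."""
    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)
        self.to_scale = nn.Linear(latent_dim, feat_dim)

    def forward(self, support_feats: torch.Tensor) -> torch.Tensor:
        # Pool support-set features into a task summary, then sample a latent
        # task vector (reparameterization trick) approximating the task distribution.
        summary = support_feats.mean(dim=0)
        mu, logvar = self.to_mu(summary), self.to_logvar(summary)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Task-conditioned per-channel scaling of features; the stochasticity
        # perturbs any fixed trigger-to-feature mapping a backdoor relies on.
        return torch.sigmoid(self.to_scale(z))

support_feats = torch.randn(5, 64)            # 5 support examples, 64-dim features
scale = TaskRepresentation(64, 16)(support_feats)
query_feats = torch.randn(10, 64) * scale     # modulate query features per task
```

The key design point suggested by the abstract is that the modulation is sampled per task rather than fixed, so a trigger pattern baked into a pre-trained backbone no longer maps deterministically to the attacker's target features.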
Pub Date: 2024-09-17 | DOI: 10.1109/TIP.2024.3458854
Bo Dang;Yansheng Li;Yongjun Zhang;Jiayi Ma
Semi-supervised semantic segmentation focuses on exploiting a small amount of labeled data together with a large amount of unlabeled data, which is more in line with the demands of real-world image understanding applications. However, it is still hindered by the inability to fully and effectively leverage unlabeled images. In this paper, we reveal that cross-window consistency (CWC) helps comprehensively extract auxiliary supervision from unlabeled data. We further propose a novel CWC-driven progressive learning framework that optimizes the deep network by mining weak-to-strong constraints from massive unlabeled data. More specifically, this paper presents a biased cross-window consistency (BCC) loss with an importance factor, which helps the deep network explicitly constrain confidence maps from overlapping regions in different windows to maintain semantic consistency with larger contexts. In addition, we propose a dynamic pseudo-label memory bank (DPM) that provides high-consistency, high-reliability pseudo-labels to further optimize the network. Extensive experiments on three representative datasets covering urban views, medical scenarios, and satellite scenes show consistent performance gains, demonstrating the superiority of our framework. Our code is released at https://jack-bo1220.github.io/project/CWC.html
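The abstract's central mechanism is a consistency loss over the overlapping region of two sliding windows, with an importance factor that biases the penalty toward certain pixels. A minimal sketch under stated assumptions: the function name, the symmetric-KL formulation, and the confidence-gap weighting are illustrative guesses, not the paper's actual BCC loss.

```python
import torch

def biased_cwc_loss(p_a: torch.Tensor, p_b: torch.Tensor, bias: float = 2.0) -> torch.Tensor:
    """Hypothetical sketch of a biased cross-window consistency loss.

    p_a, p_b: softmax confidence maps over the overlapping region of two
    windows, each of shape (C, H, W)."""
    conf_a = p_a.max(dim=0).values
    conf_b = p_b.max(dim=0).values
    # Importance factor: up-weight pixels where the two windows' confidences
    # diverge, so consistency pressure focuses on contested regions.
    w = 1.0 + (bias - 1.0) * (conf_a - conf_b).abs()
    # Symmetric KL divergence between the two per-pixel class distributions.
    log_a = p_a.clamp_min(1e-8).log()
    log_b = p_b.clamp_min(1e-8).log()
    kl_ab = (p_a * (log_a - log_b)).sum(dim=0)
    kl_ba = (p_b * (log_b - log_a)).sum(dim=0)
    return (w * 0.5 * (kl_ab + kl_ba)).mean()

# Toy usage: 19 classes (Cityscapes-style), a 32x32 overlap region.
pa = torch.softmax(torch.randn(19, 32, 32), dim=0)
pb = torch.softmax(torch.randn(19, 32, 32), dim=0)
loss = biased_cwc_loss(pa, pb)
```

In a full pipeline this term would be added to the supervised loss on labeled images, while the dynamic pseudo-label memory bank filters which unlabeled predictions are retained as training targets.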