Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

Complex & Intelligent Systems · IF 5.0 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2025-01-07 · DOI: 10.1007/s40747-024-01704-9
Yinting Wu, Pai Peng, Bo Cai, Le Li
Citations: 0

Abstract

Adversarial training methods commonly generate initial perturbations that are independent across epochs and obtain subsequent adversarial training samples without selection. Consequently, such methods may fail to probe the vicinity of the original samples thoroughly and may lead to unnecessary or even detrimental training. In this work, a simple yet effective training framework, called Batch-in-Batch (BB), is proposed to refine adversarial training from these two perspectives. The framework jointly generates m sets of initial perturbations for each original sample, seeking to provide high-quality adversarial samples by fully exploring the vicinity. It then incorporates a sample selection procedure to prioritize training on the higher-quality adversarial samples. Through extensive experiments on three benchmark datasets with two network architectures in both single-step (Noise-Fast Gradient Sign Method, N-FGSM) and multi-step (Projected Gradient Descent, PGD) scenarios, models trained within the BB framework consistently demonstrate superior adversarial accuracy across various adversarial settings, notably achieving an improvement of more than 13% on the SVHN dataset with an attack radius of 8/255 compared to N-FGSM. The analysis further demonstrates the efficiency and mechanisms of the proposed initial perturbation design and sample selection strategies. Finally, results concerning training time indicate that the BB framework is computationally efficient, even with a relatively large m.
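The core idea described in the abstract — draw m random initial perturbations per sample, craft one adversarial candidate from each start, then train only on the hardest candidate — can be sketched in NumPy. This is a rough illustration under stated assumptions, not the authors' implementation: it uses a linear softmax classifier (so the input gradient is analytic), a single FGSM-style step as in the paper's single-step scenario, and loss value as the selection criterion; all function and variable names here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def batch_in_batch_select(W, b, x, y, eps, m=4, rng=None):
    """Sketch of a Batch-in-Batch-style step for a linear softmax model.

    1) Draw m random initial perturbations per sample (N-FGSM-style noise).
    2) Take one signed-gradient step from each random start.
    3) Per original sample, keep the candidate with the highest loss.
    """
    rng = np.random.default_rng(rng)
    B, d = x.shape
    # Replicate each sample m times: the "batch in batch".
    x_rep = np.repeat(x, m, axis=0)                  # (B*m, d)
    y_rep = np.repeat(y, m)                          # (B*m,)
    delta = rng.uniform(-eps, eps, size=x_rep.shape)

    # Analytic input gradient of cross-entropy for logits = x @ W + b:
    # d(loss)/d(logits) = p - onehot(y), so d(loss)/d(x) = (p - onehot) @ W.T
    p = softmax((x_rep + delta) @ W + b)
    p[np.arange(B * m), y_rep] -= 1.0
    grad_x = p @ W.T

    # One FGSM-style step from each random start.
    x_adv = x_rep + delta + eps * np.sign(grad_x)

    # Sample selection: evaluate each candidate's loss, keep the hardest.
    p_adv = softmax(x_adv @ W + b)
    losses = -np.log(p_adv[np.arange(B * m), y_rep] + 1e-12)
    pick = losses.reshape(B, m).argmax(axis=1)       # hardest of m per sample
    sel = np.arange(B) * m + pick
    return x_adv[sel]                                # (B, d) selected batch
```

The selected batch then replaces the plain adversarial batch in the usual training loop; because candidate generation is a single vectorized pass over B*m rows, the extra cost grows only linearly in m, consistent with the abstract's claim that the framework stays efficient for relatively large m.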

Source journal

Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)

CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297

Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.

Latest articles in this journal

- Manet: motion-aware network for video action recognition
- A low-carbon scheduling method based on improved ant colony algorithm for underground electric transportation vehicles
- Vehicle positioning systems in tunnel environments: a review
- A survey of security threats in federated learning
- Barriers and enhance strategies for green supply chain management using continuous linear diophantine neural networks