Adversarial Example Soups: Improving Transferability and Stealthiness for Free

IEEE Transactions on Information Forensics and Security · Impact Factor 8.0 · JCR Q1 (Computer Science, Theory & Methods) · CAS Region 1 (Computer Science) · Published 2025-01-30 · DOI: 10.1109/TIFS.2025.3536611
Bo Yang, Hengwei Zhang, Jindong Wang, Yulong Yang, Chenhao Lin, Chao Shen, Zhengyu Zhao
Volume 20, pp. 1882-1894.
Citation count: 0

Abstract

Transferable adversarial examples pose practical security risks because they can mislead a target model without any knowledge of its internals. A conventional recipe for maximizing transferability is to keep only the optimal adversarial example among all those obtained in the optimization pipeline. In this paper, for the first time, we revisit this convention and demonstrate that the discarded, sub-optimal adversarial examples can be reused to boost transferability. Specifically, we propose "Adversarial Example Soups" (AES), with AES-tune for averaging adversarial examples discarded during hyperparameter tuning and AES-rand for averaging those produced during stability testing. AES is inspired by "model soups", which averages the weights of multiple fine-tuned models for improved accuracy without increasing inference time. Extensive experiments validate the general effectiveness of AES, boosting 10 state-of-the-art transfer attacks and their combinations by up to 13% against 10 diverse (defensive) target models. We also show that AES generalizes to other settings, e.g., directly averaging multiple in-the-wild adversarial examples yields comparable success. A promising byproduct of AES is the improved stealthiness of the adversarial examples, since the perturbation variance is naturally reduced.
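The core "soup" operation described above can be illustrated with a short sketch: average several adversarial examples crafted for the same clean input, then project the result back into the perturbation budget. This is an illustrative NumPy reconstruction under common L-infinity attack conventions (pixel values in [0, 1], budget eps = 16/255), not the authors' released implementation; the function name `aes_average` and the toy data are hypothetical.

```python
import numpy as np

def aes_average(x, adv_examples, eps=16 / 255):
    """Average several adversarial examples of the same clean input x,
    then keep the result inside the L-infinity eps-ball around x and
    inside the valid pixel range [0, 1]."""
    soup = np.mean(np.stack(adv_examples), axis=0)   # the "soup": a plain average
    delta = np.clip(soup - x, -eps, eps)             # enforce the perturbation budget
    return np.clip(x + delta, 0.0, 1.0)              # enforce valid pixel values

# Toy usage: three hypothetical runs (e.g., different hyperparameter settings)
# that each produced a slightly different adversarial example.
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32)).astype(np.float32)
advs = [np.clip(x + rng.uniform(-16 / 255, 16 / 255, x.shape), 0.0, 1.0)
        for _ in range(3)]
x_soup = aes_average(x, advs)
```

Note that because averaging independent perturbations shrinks their variance, the averaged example tends to look smoother than any individual one, which is consistent with the stealthiness byproduct mentioned in the abstract.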
Source journal

IEEE Transactions on Information Forensics and Security (Engineering Technology - Engineering: Electrical & Electronic)

CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
About the journal: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.