Improving the transferability of adversarial examples with path tuning

Applied Intelligence · IF 3.4 · JCR Q2 (Computer Science, Artificial Intelligence) · CAS Tier 2 (Computer Science) · Pub Date: 2024-09-11 · DOI: 10.1007/s10489-024-05820-4
Tianyu Li, Xiaoyu Li, Wuping Ke, Xuwei Tian, Desheng Zheng, Chao Lu
{"title":"Improving the transferability of adversarial examples with path tuning","authors":"Tianyu Li,&nbsp;Xiaoyu Li,&nbsp;Wuping Ke,&nbsp;Xuwei Tian,&nbsp;Desheng Zheng,&nbsp;Chao Lu","doi":"10.1007/s10489-024-05820-4","DOIUrl":null,"url":null,"abstract":"<p>Adversarial attacks pose a significant threat to real-world applications based on deep neural networks (DNNs), especially in security-critical applications. Research has shown that adversarial examples (AEs) generated on a surrogate model can also succeed on a target model, which is known as transferability. Feature-level transfer-based attacks improve the transferability of AEs by disrupting intermediate features. They target the intermediate layer of the model and use feature importance metrics to find these features. However, current methods overfit feature importance metrics to surrogate models, which results in poor sharing of the importance metrics across models and insufficient destruction of deep features. This work demonstrates the trade-off between feature importance metrics and feature corruption generalization, and categorizes feature destructive causes of misclassification. This work proposes a generative framework named PTNAA to guide the destruction of deep features across models, thus improving the transferability of AEs. Specifically, the method introduces path methods into integrated gradients. It selects path functions using only a priori knowledge and approximates neuron attribution using nonuniform sampling. In addition, it measures neurons based on the attribution results and performs feature-level attacks to remove inherent features of the image. Extensive experiments demonstrate the effectiveness of the proposed method. 
The code is available at https://github.com/lounwb/PTNAA.</p>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"54 23","pages":"12194 - 12214"},"PeriodicalIF":3.4000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-05820-4","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Adversarial attacks pose a significant threat to real-world applications built on deep neural networks (DNNs), especially in security-critical settings. Research has shown that adversarial examples (AEs) generated on a surrogate model can also succeed on a target model, a property known as transferability. Feature-level transfer-based attacks improve the transferability of AEs by disrupting intermediate features: they target an intermediate layer of the model and use feature importance metrics to identify which features to corrupt. However, current methods overfit their importance metrics to the surrogate model, so the metrics transfer poorly across models and deep features are insufficiently destroyed. This work demonstrates the trade-off between feature importance metrics and the generalization of feature corruption, and categorizes the ways in which feature destruction causes misclassification. It proposes a generative framework named PTNAA that guides the destruction of deep features across models, thereby improving the transferability of AEs. Specifically, the method introduces path methods into integrated gradients: it selects path functions using only a priori knowledge and approximates neuron attribution with nonuniform sampling. It then scores neurons based on the attribution results and performs feature-level attacks to remove the inherent features of the image. Extensive experiments demonstrate the effectiveness of the proposed method. The code is available at https://github.com/lounwb/PTNAA.
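The core idea the abstract describes, path-based neuron attribution, builds on integrated gradients: attributions are accumulated along a path from a baseline to the input, and nonuniform sampling of that path concentrates gradient evaluations where they matter most. The sketch below is only a generic illustration of nonuniform-sampling integrated gradients on a toy differentiable function; PTNAA's actual path functions, surrogate models, and attack step are not reproduced here, and the function `f`, its analytic gradient, and the power-law schedule `gamma` are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an intermediate-layer neuron's activation as a
# function of the input; its analytic gradient replaces backprop.
def f(x):
    return float(np.sum(np.tanh(x) ** 2))

def grad_f(x):
    t = np.tanh(x)
    return 2.0 * t * (1.0 - t ** 2)  # d/dx tanh(x)^2

def path_attribution(x, baseline, n_steps=64, gamma=2.0):
    """Integrated gradients along the straight-line path, approximated
    with a nonuniform midpoint rule: interval edges alpha_i = (i/n)**gamma
    concentrate samples near the baseline when gamma > 1."""
    edges = (np.arange(n_steps + 1) / n_steps) ** gamma
    mids = (edges[:-1] + edges[1:]) / 2.0   # midpoint of each subinterval
    widths = np.diff(edges)                 # nonuniform step widths
    total = np.zeros_like(x)
    for a, w in zip(mids, widths):
        total += w * grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total

x = np.array([0.5, -1.0, 2.0])
baseline = np.zeros_like(x)
attr = path_attribution(x, baseline)
# Completeness sanity check: attributions should sum to f(x) - f(baseline).
print(attr.sum(), f(x) - f(baseline))
```

The completeness property (attributions summing to the change in output along the path) is what makes such scores usable as a per-neuron importance measure; in a feature-level attack these scores would then rank which intermediate activations to suppress.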

Graphical Abstract

Source journal: Applied Intelligence (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 6.60 · Self-citation rate: 20.80% · Articles published: 1361 · Review time: 5.9 months
About the journal: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.