Bag of tricks for backdoor learning

Wireless Networks · IF 2.1 · CAS Tier 4 (Computer Science) · JCR Q3 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-04-05 · DOI: 10.1007/s11276-024-03724-2
Ruitao Hou, Anli Yan, Hongyang Yan, Teng Huang
{"title":"后门学习的窍门袋","authors":"Ruitao Hou, Anli Yan, Hongyang Yan, Teng Huang","doi":"10.1007/s11276-024-03724-2","DOIUrl":null,"url":null,"abstract":"<p>Deep learning models are vulnerable to backdoor attacks, where an adversary aims to fool the model via data poisoning, such that the victim models perform well on clean samples but behave wrongly on poisoned samples. While researchers have studied backdoor attacks in depth, they have focused on specific attack and defense methods, neglecting the impacts of basic training tricks on the effect of backdoor attacks. Analyzing these influencing factors helps facilitate secure deep learning systems and explore novel defense perspectives. To this end, we provide comprehensive evaluations using a weak clean-label backdoor attack on CIFAR10, focusing on the impacts of a wide range of neglected training tricks on backdoor attacks. Specifically, we concentrate on ten perspectives, e.g., batch size, data augmentation, warmup, and mixup, etc. The results demonstrate that backdoor attacks are sensitive to some training tricks, and optimizing the basic training tricks can significantly improve the effect of backdoor attacks. For example, appropriate warmup settings can enhance the effect of backdoor attacks by 22% and 6% for the two different trigger patterns, respectively. These facts further reveal the vulnerability of deep learning models to backdoor attacks.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"32 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bag of tricks for backdoor learning\",\"authors\":\"Ruitao Hou, Anli Yan, Hongyang Yan, Teng Huang\",\"doi\":\"10.1007/s11276-024-03724-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Deep learning models are vulnerable to backdoor attacks, where an adversary aims to fool the model via data poisoning, such that the victim models perform well on clean samples but behave wrongly on poisoned samples. While researchers have studied backdoor attacks in depth, they have focused on specific attack and defense methods, neglecting the impacts of basic training tricks on the effect of backdoor attacks. Analyzing these influencing factors helps facilitate secure deep learning systems and explore novel defense perspectives. To this end, we provide comprehensive evaluations using a weak clean-label backdoor attack on CIFAR10, focusing on the impacts of a wide range of neglected training tricks on backdoor attacks. Specifically, we concentrate on ten perspectives, e.g., batch size, data augmentation, warmup, and mixup, etc. The results demonstrate that backdoor attacks are sensitive to some training tricks, and optimizing the basic training tricks can significantly improve the effect of backdoor attacks. For example, appropriate warmup settings can enhance the effect of backdoor attacks by 22% and 6% for the two different trigger patterns, respectively. 
These facts further reveal the vulnerability of deep learning models to backdoor attacks.</p>\",\"PeriodicalId\":23750,\"journal\":{\"name\":\"Wireless Networks\",\"volume\":\"32 1\",\"pages\":\"\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Wireless Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11276-024-03724-2\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wireless Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11276-024-03724-2","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning models are vulnerable to backdoor attacks, in which an adversary fools the model via data poisoning so that the victim model performs well on clean samples but misbehaves on poisoned samples. While researchers have studied backdoor attacks in depth, they have focused on specific attack and defense methods, neglecting the impact of basic training tricks on the effectiveness of backdoor attacks. Analyzing these influencing factors helps in building secure deep learning systems and in exploring novel defense perspectives. To this end, we provide a comprehensive evaluation using a weak clean-label backdoor attack on CIFAR10, focusing on the impact of a wide range of neglected training tricks on backdoor attacks. Specifically, we concentrate on ten perspectives, including batch size, data augmentation, warmup, and mixup. The results demonstrate that backdoor attacks are sensitive to some training tricks, and that tuning the basic training tricks can significantly improve the effectiveness of backdoor attacks: for example, appropriate warmup settings enhance the attack by 22% and 6% for the two trigger patterns, respectively. These facts further reveal the vulnerability of deep learning models to backdoor attacks.

Source journal: Wireless Networks (Engineering & Technology: Telecommunications)
CiteScore: 7.70
Self-citation rate: 3.30%
Annual publications: 314
Average review time: 5.5 months
About the journal: The wireless communication revolution is bringing fundamental changes to data networking and telecommunication, and is making integrated networks a reality. By freeing the user from the cord, personal communications networks, wireless LANs, mobile radio networks, and cellular systems harbor the promise of fully distributed mobile computing and communications, any time, anywhere. Focusing on the networking and user aspects of the field, Wireless Networks provides a global forum for contributions of archival value documenting these fast-growing areas of interest. The journal publishes refereed articles dealing with research, experience, and management issues of wireless networks. Its aim is to allow the reader to benefit from the experience, problems, and solutions described.