Bag of tricks for backdoor learning
Ruitao Hou, Anli Yan, Hongyang Yan, Teng Huang
Wireless Networks, published 2024-04-05
DOI: 10.1007/s11276-024-03724-2
Deep learning models are vulnerable to backdoor attacks, in which an adversary poisons the training data so that the victim model performs well on clean samples but behaves wrongly on samples containing the trigger. Although researchers have studied backdoor attacks in depth, prior work has focused on specific attack and defense methods, neglecting how basic training tricks affect the effectiveness of backdoor attacks. Analyzing these influencing factors helps build secure deep learning systems and suggests novel defense perspectives. To this end, we provide comprehensive evaluations using a weak clean-label backdoor attack on CIFAR10, focusing on the impact of a wide range of commonly neglected training tricks on backdoor attacks. Specifically, we examine ten aspects, including batch size, data augmentation, warmup, and mixup. The results demonstrate that backdoor attacks are sensitive to some of these training tricks, and tuning the basic training setup can significantly strengthen an attack: for example, appropriate warmup settings improve attack effectiveness by 22% and 6% for the two different trigger patterns, respectively. These findings further reveal the vulnerability of deep learning models to backdoor attacks.
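To make the setting concrete, below is a minimal illustrative sketch (not the paper's code) of the two ingredients the abstract names: clean-label patch-trigger poisoning and a linear learning-rate warmup. The 3x3 white corner patch, the poisoning rate, and the warmup length are all assumptions for illustration; the paper's actual trigger patterns and hyperparameters may differ.

```python
import numpy as np

def apply_patch_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner.

    `image` is an HxWxC float array in [0, 1]. A white corner patch is a
    common illustrative trigger; real attacks use various patterns.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = patch_value
    return poisoned

def poison_clean_label(images, labels, target_class=0, rate=0.1, rng=None):
    """Clean-label poisoning: stamp the trigger onto a fraction of the
    target class's own images, leaving every label unchanged."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * len(idx))), replace=False)
    poisoned = images.copy()
    for i in chosen:
        poisoned[i] = apply_patch_trigger(images[i])
    return poisoned, chosen

def warmup_lr(step, base_lr=0.1, warmup_steps=500):
    """Linear warmup, one of the training tricks the paper studies:
    ramp the learning rate from ~0 to base_lr over warmup_steps."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

In a clean-label attack the poisoned images keep their correct labels, which is what makes the attack "weak" but stealthy; the paper's finding is that seemingly unrelated knobs such as the warmup schedule above can meaningfully change how well such a backdoor is learned.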
Journal introduction:
The wireless communication revolution is bringing fundamental changes to data networking and telecommunications, and is making integrated networks a reality. By freeing the user from the cord, personal communication networks, wireless LANs, mobile radio networks, and cellular systems hold the promise of fully distributed mobile computing and communications, any time, anywhere.
Focusing on the networking and user aspects of the field, Wireless Networks provides a global forum for contributions of archival value documenting these fast-growing areas of interest. The journal publishes refereed articles dealing with research, experience, and management issues of wireless networks. Its aim is to allow readers to benefit from the experience, problems, and solutions described.