A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models

Chao Han, Linyuan Wang, Dongyang Li, Weijia Cui, Bin Yan

Mobile Networks and Applications, published 2024-05-13. DOI: 10.1007/s11036-024-02333-9
Abstract
In the rapidly evolving landscape of wireless communication systems, the vulnerability of automatic modulation classification (AMC) models to adversarial attacks presents a significant security challenge. This study introduces a pruning and training methodology tailored to the nuances of signal processing in these systems. Leveraging a pruning method based on channel activation contributions, our approach optimizes the model's potential for adversarial training, enhancing its ability to gain robustness against attacks. In addition, we construct a resilient training method based on a composite strategy that integrates balanced adversarial training, soft target regularization, and gradient masking. This combination broadens the model's uncertainty space and obfuscates its gradients, strengthening its defenses against a wide spectrum of adversarial tactics. The training regimen is carefully tuned to retain sensitivity to adversarial inputs while maintaining accuracy on clean data. Comprehensive evaluations on the RML2016.10A dataset demonstrate the effectiveness of our method in defending against both gradient-based and optimization-based attacks in the wireless communication domain. This research offers practical approaches to improving the security and performance of AMC models against the complex and evolving threats in modern wireless communication environments.
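The two main ingredients described above can be pictured with short PyTorch sketches. These are illustrative only and not the authors' implementation: the scoring rule (mean absolute activation over a calibration batch), the pruning ratio, the PGD parameters, the equal clean/adversarial weighting, and the use of label smoothing as a stand-in for soft target regularization are all assumptions, and the gradient-masking component is omitted.

```python
import torch
import torch.nn as nn

def channel_scores(feats: torch.Tensor) -> torch.Tensor:
    """Mean absolute activation per channel over a calibration batch.

    feats: (batch, channels, length) activations of a Conv1d layer on I/Q signals.
    """
    return feats.abs().mean(dim=(0, 2))

def prune_by_activation(conv: nn.Conv1d, feats: torch.Tensor, ratio: float = 0.3) -> torch.Tensor:
    """Zero the weights of the lowest-contributing output channels.

    Returns the pruned channel indices so that downstream layers can be adjusted.
    The ratio is a hypothetical hyperparameter, not taken from the paper.
    """
    scores = channel_scores(feats)
    k = int(ratio * scores.numel())
    if k == 0:
        return torch.empty(0, dtype=torch.long)
    drop = torch.topk(scores, k, largest=False).indices
    with torch.no_grad():
        conv.weight[drop] = 0.0
        if conv.bias is not None:
            conv.bias[drop] = 0.0
    return drop
```

A pruned model of this kind can then be trained with a composite objective that mixes clean and adversarially perturbed signals, sketched below under the same caveats.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=7):
    """L-infinity PGD on the input signal tensor x (assumed not to require grad).

    eps and alpha are in the signal's amplitude units; values here are illustrative.
    """
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).detach()
    return x_adv

def composite_loss(model, x, y, smoothing=0.1, adv_weight=0.5):
    """Equal-weight mix of clean and adversarial cross-entropy with softened targets.

    Label smoothing approximates soft target regularization; adv_weight = 0.5
    approximates the balanced clean/adversarial split.
    """
    x_adv = pgd_attack(model, x, y)
    ce = lambda logits: F.cross_entropy(logits, y, label_smoothing=smoothing)
    return (1 - adv_weight) * ce(model(x)) + adv_weight * ce(model(x_adv))
```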