Toward Robust Networks against Adversarial Attacks for Radio Signal Modulation Classification
B. Manoj, P. M. Santos, Meysam Sadeghi, E. Larsson
2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC), July 4, 2022
DOI: 10.1109/spawc51304.2022.9833926
Citations: 2
Abstract
Deep learning (DL) is a powerful technique for many real-time applications, but it is vulnerable to adversarial attacks. Herein, we consider DL-based modulation classification, with the objective of creating DL models that are robust against such attacks. Specifically, we introduce three defense techniques: i) randomized smoothing, ii) hybrid projected gradient descent adversarial training, and iii) fast adversarial training, and evaluate them under both white-box (WB) and black-box (BB) attacks. We show that the proposed fast adversarial training is more robust and computationally efficient than the other techniques, and can create models that are extremely robust to practical (BB) attacks.
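To illustrate the general idea behind fast adversarial training (not the paper's exact method or model), the sketch below applies the common recipe of a random start inside the ε-ball followed by a single FGSM step, then an ordinary gradient update on the perturbed input. It uses a toy logistic-regression classifier in NumPy as a stand-in for a DL modulation classifier; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a DL modulation classifier: binary logistic regression
# on small feature vectors. Everything here is an illustrative assumption.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the INPUT x
    # (used to build the adversarial perturbation).
    return (sigmoid(w @ x) - y) * w

def loss_grad_w(w, x, y):
    # Gradient of the loss w.r.t. the model parameters w.
    return (sigmoid(w @ x) - y) * x

def fast_adversarial_training(X, y, eps=0.1, lr=0.5, epochs=100):
    """Fast adversarial training sketch: random start in the eps-ball,
    one FGSM step (single gradient sign step), then a standard SGD
    update on the perturbed example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yi in zip(X, y):
            # Random initialization inside the L-inf eps-ball.
            delta = rng.uniform(-eps, eps, size=x.shape)
            # Single FGSM step, projected back into the eps-ball.
            delta = np.clip(
                delta + eps * np.sign(loss_grad_x(w, x + delta, yi)),
                -eps, eps,
            )
            # Ordinary gradient step on the adversarial example.
            w -= lr * loss_grad_w(w, x + delta, yi)
    return w
```

Because only one gradient-sign step is taken per example (instead of the many iterations of PGD-based adversarial training), each training step costs roughly one extra forward/backward pass, which is the source of the computational-efficiency claim.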