{"title":"Targeted Adversarial Examples Against RF Deep Classifiers","authors":"S. Kokalj-Filipovic, Rob Miller, Joshua Morman","doi":"10.1145/3324921.3328792","DOIUrl":null,"url":null,"abstract":"Adversarial examples (AdExs) in machine learning for classification of radio frequency (RF) signals can be created in a targeted manner such that they go beyond general misclassification and result in the detection of a specific targeted class. Moreover, these drastic, targeted misclassifications can be achieved with minimal waveform perturbations, resulting in catastrophic impact to deep learning based spectrum sensing applications (e.g. WiFi is mistaken for Bluetooth). This work addresses targeted deep learning AdExs, specifically those obtained using the Carlini-Wagner algorithm, and analyzes previously introduced defense mechanisms that performed successfully against non-targeted FGSM-based attacks. To analyze the effects of the Carlini-Wagner attack, and the defense mechanisms, we trained neural networks on two datasets. The first dataset is a subset of the DeepSig dataset, comprised of three synthetic modulations BPSK, QPSK, 8-PSK, which we use to train a simple network for Modulation Recognition. The second dataset contains real-world, well-labeled, curated data from the 2.4 GHz Industrial, Scientific and Medical (ISM) band, that we use to train a network for wireless technology (protocol) classification using three classes: WiFi 802.11n, Bluetooth (BT) and ZigBee. We show that for attacks of limited intensity the impact of the attack in terms of percentage of misclassifications is similar for both datasets, and that the proposed defense is effective in both cases. Finally, we use our ISM data to show that the targeted attack is effective against the deep learning classifier but not against a classical demodulator.","PeriodicalId":435733,"journal":{"name":"Proceedings of the ACM Workshop on Wireless Security and Machine Learning","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Workshop on Wireless Security and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3324921.3328792","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 40
Abstract
Adversarial examples (AdExs) in machine learning for classification of radio frequency (RF) signals can be created in a targeted manner, such that they go beyond general misclassification and result in the detection of a specific targeted class. Moreover, these drastic, targeted misclassifications can be achieved with minimal waveform perturbations, resulting in catastrophic impact on deep-learning-based spectrum sensing applications (e.g., WiFi is mistaken for Bluetooth). This work addresses targeted deep learning AdExs, specifically those obtained using the Carlini-Wagner algorithm, and analyzes previously introduced defense mechanisms that performed successfully against non-targeted FGSM-based attacks. To analyze the effects of the Carlini-Wagner attack and the defense mechanisms, we trained neural networks on two datasets. The first dataset is a subset of the DeepSig dataset, comprising three synthetic modulations (BPSK, QPSK, and 8-PSK), which we use to train a simple network for modulation recognition. The second dataset contains real-world, well-labeled, curated data from the 2.4 GHz Industrial, Scientific and Medical (ISM) band, which we use to train a network for wireless technology (protocol) classification over three classes: WiFi 802.11n, Bluetooth (BT), and ZigBee. We show that, for attacks of limited intensity, the impact of the attack in terms of the percentage of misclassifications is similar for both datasets, and that the proposed defense is effective in both cases. Finally, we use our ISM data to show that the targeted attack is effective against the deep learning classifier but not against a classical demodulator.
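To make the targeted attack concrete, below is a minimal sketch of a Carlini-Wagner-style targeted L2 attack in PyTorch. It is not the paper's implementation: the toy `IQClassifier`, the 2 x N real-tensor I/Q input layout, `NUM_CLASSES`, `WAVEFORM_LEN`, and all hyperparameters are illustrative assumptions. The full Carlini-Wagner method additionally uses a change of variables for box constraints and a binary search over the trade-off constant c; both are omitted here for brevity.

```python
# Sketch: targeted Carlini-Wagner-style L2 attack on an RF classifier.
# Assumes a PyTorch model mapping I/Q waveforms (2 x N real tensors) to
# class logits. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 3      # e.g. WiFi 802.11n, Bluetooth, ZigBee
WAVEFORM_LEN = 128   # I/Q samples per example (assumption)

class IQClassifier(nn.Module):
    """Toy stand-in for the deep RF classifier under attack."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * WAVEFORM_LEN, NUM_CLASSES))
    def forward(self, x):
        return self.net(x)

def cw_targeted(model, x, target, c=1.0, kappa=0.0, steps=200, lr=1e-2):
    """Minimize ||delta||_2^2 + c * f(x + delta), where f pushes the
    target-class logit above every other logit by margin kappa."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        target_logit = logits[:, target]
        # Largest non-target logit: mask out the target class.
        others = logits.clone()
        others[:, target] = -float("inf")
        other_logit = others.max(dim=1).values
        f = torch.clamp(other_logit - target_logit, min=-kappa)
        loss = (delta ** 2).sum() + c * f.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()

model = IQClassifier().eval()
x = torch.randn(1, 2, WAVEFORM_LEN)      # stand-in I/Q waveform
x_adv = cw_targeted(model, x, target=1)  # push prediction toward class 1
print(model(x).argmax(1), model(x_adv).argmax(1))
```

One design note: unlike images, RF waveforms have no natural [0, 1] box constraint, so this sketch optimizes the perturbation directly rather than through the tanh reparameterization Carlini and Wagner use for pixel data; the squared-L2 penalty alone keeps the waveform perturbation small.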