Generalized wireless adversarial deep learning
Francesco Restuccia, Salvatore D’Oro, Amani Al-Shawabka, Bruno Costa Rendon, K. Chowdhury, Stratis Ioannidis, T. Melodia
{"title":"广义无线对抗性深度学习","authors":"Francesco Restuccia, Salvatore D’oro, Amani Al-Shawabka, Bruno Costa Rendon, K. Chowdhury, Stratis Ioannidis, T. Melodia","doi":"10.1145/3395352.3402625","DOIUrl":null,"url":null,"abstract":"Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Although we have recently seen many advances in this field, extensive work in computer vision has demonstrated that an adversary can \"crack\" a classifier by designing inputs that \"steer\" the classifier away from the ground truth. This paper advances the state of the art by proposing a generalized analysis and evaluation of adversarial machine learning (AML) attacks to deep learning systems in the wireless domain. We postulate a series of adversarial attacks, and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP) where we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We extensively evaluate the performance of our attacks on a state-of-the-art 1,000-device radio fingerprinting dataset, and a 24-class modulation dataset. Results show that our algorithms can decrease the classifiers' accuracy up to 3x while keeping the waveform distortion to a minimum.","PeriodicalId":370816,"journal":{"name":"Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning","volume":"29 11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Generalized wireless adversarial deep learning\",\"authors\":\"Francesco Restuccia, Salvatore D’oro, Amani Al-Shawabka, Bruno Costa Rendon, K. Chowdhury, Stratis Ioannidis, T. Melodia\",\"doi\":\"10.1145/3395352.3402625\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Although we have recently seen many advances in this field, extensive work in computer vision has demonstrated that an adversary can \\\"crack\\\" a classifier by designing inputs that \\\"steer\\\" the classifier away from the ground truth. This paper advances the state of the art by proposing a generalized analysis and evaluation of adversarial machine learning (AML) attacks to deep learning systems in the wireless domain. We postulate a series of adversarial attacks, and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP) where we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We extensively evaluate the performance of our attacks on a state-of-the-art 1,000-device radio fingerprinting dataset, and a 24-class modulation dataset. 
Results show that our algorithms can decrease the classifiers' accuracy up to 3x while keeping the waveform distortion to a minimum.\",\"PeriodicalId\":370816,\"journal\":{\"name\":\"Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning\",\"volume\":\"29 11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3395352.3402625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3395352.3402625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Although we have recently seen many advances in this field, extensive work in computer vision has demonstrated that an adversary can "crack" a classifier by designing inputs that "steer" the classifier away from the ground truth. This paper advances the state of the art by proposing a generalized analysis and evaluation of adversarial machine learning (AML) attacks on deep learning systems in the wireless domain. We postulate a series of adversarial attacks and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP), in which we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We extensively evaluate the performance of our attacks on a state-of-the-art 1,000-device radio fingerprinting dataset and a 24-class modulation dataset. Results show that our algorithms can decrease the classifiers' accuracy by up to 3x while keeping the waveform distortion to a minimum.
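To make the threat model concrete, the sketch below shows a generic FGSM-style gradient perturbation of I/Q samples fed to a small modulation classifier, with a scalar channel gain standing in for the channel/waveform interaction the abstract refers to. The network architecture, `eps`, and `channel_gain` are illustrative assumptions for this sketch, not the paper's GWAP formulation or attack algorithms.

```python
# Minimal sketch: one-step gradient (FGSM-style) perturbation of an I/Q waveform
# aimed at a toy modulation classifier. Architecture, eps, and the scalar channel
# gain are assumptions for illustration only, not the paper's GWAP method.
import torch
import torch.nn as nn


class IQClassifier(nn.Module):
    """Toy 1-D CNN over (I, Q) channels; stands in for the target classifier."""

    def __init__(self, num_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 2, seq_len)
        return self.head(self.features(x).squeeze(-1))


def fgsm_waveform_attack(model, x, y, eps=0.01, channel_gain=1.0):
    """Craft a small perturbation of the transmitted samples that increases the
    classifier's loss; the scalar channel_gain is a crude stand-in for the
    channel acting on the adversarial waveform before classification."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(channel_gain * x_adv)          # classifier sees channel output
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()
    # Step in the sign of the gradient with small amplitude, so the waveform
    # distortion stays low while the prediction is pushed off the ground truth.
    return (x + eps * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = IQClassifier()
    x = torch.randn(8, 2, 128)             # batch of I/Q frames
    y = torch.randint(0, 24, (8,))         # ground-truth modulation labels
    x_adv = fgsm_waveform_attack(model, x, y, eps=0.01)
    clean_acc = (model(x).argmax(1) == y).float().mean()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

With a trained classifier, the adversarial accuracy drops relative to the clean accuracy even for small `eps`; the paper's contribution is to generalize this kind of attack and account for the wireless channel's effect on the crafted waveform.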