On the Limitations of Targeted Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition

Samuel Bair, Matthew DelVecchio, Bryse Flowers, Alan J. Michaels, W. Headley
{"title":"针对深度学习调制识别的针对性对抗性规避攻击的局限性","authors":"Samuel Bair, Matthew DelVecchio, Bryse Flowers, Alan J. Michaels, W. Headley","doi":"10.1145/3324921.3328785","DOIUrl":null,"url":null,"abstract":"Wireless communications has greatly benefited in recent years from advances in machine learning. A new subfield, commonly termed Radio Frequency Machine Learning (RFML), has emerged that has demonstrated the application of Deep Neural Networks to multiple spectrum sensing tasks such as modulation recognition and specific emitter identification. Yet, recent research in the RF domain has shown that these models are vulnerable to over-the-air adversarial evasion attacks, which seek to cause minimum harm to the underlying transmission to a cooperative receiver, while greatly lowering the performance of spectrum sensing tasks by an eavesdropper. While prior work has focused on untargeted evasion, which simply degrades classification accuracy, this paper focuses on targeted evasion attacks, which aim to masquerade as a specific signal of interest. The current work examines how a Convolutional Neural Network (CNN) based Automatic Modulation Classification (AMC) model breaks down in the presence of an adversary with direct access to its inputs. Specifically, the current work uses the adversarial perturbation power needed to change the classification from a specific source modulation to a specific target modulation as a proxy for the model's estimation of their similarity and compares this with the known hierarchy of these human engineered modulations. The findings conclude that the reference model breaks down in an intuitive way, which can have implications on progress towards hardening RFML models.","PeriodicalId":435733,"journal":{"name":"Proceedings of the ACM Workshop on Wireless Security and Machine Learning","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":"{\"title\":\"On the Limitations of Targeted Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition\",\"authors\":\"Samuel Bair, Matthew DelVecchio, Bryse Flowers, Alan J. Michaels, W. Headley\",\"doi\":\"10.1145/3324921.3328785\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Wireless communications has greatly benefited in recent years from advances in machine learning. A new subfield, commonly termed Radio Frequency Machine Learning (RFML), has emerged that has demonstrated the application of Deep Neural Networks to multiple spectrum sensing tasks such as modulation recognition and specific emitter identification. Yet, recent research in the RF domain has shown that these models are vulnerable to over-the-air adversarial evasion attacks, which seek to cause minimum harm to the underlying transmission to a cooperative receiver, while greatly lowering the performance of spectrum sensing tasks by an eavesdropper. While prior work has focused on untargeted evasion, which simply degrades classification accuracy, this paper focuses on targeted evasion attacks, which aim to masquerade as a specific signal of interest. The current work examines how a Convolutional Neural Network (CNN) based Automatic Modulation Classification (AMC) model breaks down in the presence of an adversary with direct access to its inputs. 
Specifically, the current work uses the adversarial perturbation power needed to change the classification from a specific source modulation to a specific target modulation as a proxy for the model's estimation of their similarity and compares this with the known hierarchy of these human engineered modulations. The findings conclude that the reference model breaks down in an intuitive way, which can have implications on progress towards hardening RFML models.\",\"PeriodicalId\":435733,\"journal\":{\"name\":\"Proceedings of the ACM Workshop on Wireless Security and Machine Learning\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"39\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM Workshop on Wireless Security and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3324921.3328785\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Workshop on Wireless Security and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3324921.3328785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 39

Abstract

Wireless communications has greatly benefited in recent years from advances in machine learning. A new subfield, commonly termed Radio Frequency Machine Learning (RFML), has emerged and has demonstrated the application of Deep Neural Networks to multiple spectrum sensing tasks such as modulation recognition and specific emitter identification. Yet, recent research in the RF domain has shown that these models are vulnerable to over-the-air adversarial evasion attacks, which seek to cause minimal harm to the underlying transmission to a cooperative receiver while greatly lowering the performance of an eavesdropper's spectrum sensing tasks. While prior work has focused on untargeted evasion, which simply degrades classification accuracy, this paper focuses on targeted evasion attacks, which aim to masquerade as a specific signal of interest. The current work examines how a Convolutional Neural Network (CNN) based Automatic Modulation Classification (AMC) model breaks down in the presence of an adversary with direct access to its inputs. Specifically, the current work uses the adversarial perturbation power needed to change the classification from a specific source modulation to a specific target modulation as a proxy for the model's estimate of their similarity, and compares this with the known hierarchy of these human-engineered modulations. The findings conclude that the reference model breaks down in an intuitive way, which can have implications for progress towards hardening RFML models.
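To make the perturbation-power proxy concrete, below is a minimal, hypothetical sketch of how such a measurement could be set up. It is not the authors' code: the TinyAMC network, the single-direction targeted FGSM-style attack, and the power sweep are all illustrative assumptions standing in for the paper's CNN-based AMC model and its attack formulation.

```python
# Hypothetical sketch (not the authors' code): a targeted, FGSM-style attack
# on a stand-in CNN modulation classifier with direct access to its inputs.
# The lowest perturbation-to-signal power ratio that flips the prediction to
# a chosen target class serves as a proxy for modulation similarity.
import torch
import torch.nn as nn

class TinyAMC(nn.Module):
    """Stand-in CNN over raw IQ samples shaped (batch, 2, n_samples)."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.classifier = nn.Linear(32 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def targeted_power_db(model: nn.Module, x: torch.Tensor, target: torch.Tensor,
                      lo_db: float = -40.0, hi_db: float = 0.0, steps: int = 81):
    """Return the lowest perturbation-to-signal power ratio (dB) at which a
    targeted FGSM-style perturbation makes `model` output `target`, or None."""
    model.eval()
    x_req = x.clone().requires_grad_(True)
    # Targeted attack: step *down* the loss gradient w.r.t. the target label.
    nn.functional.cross_entropy(model(x_req), target).backward()
    direction = -x_req.grad.sign()
    signal_power = x.pow(2).mean()
    with torch.no_grad():
        for power_db in torch.linspace(lo_db, hi_db, steps):
            # Scale so that E[|perturbation|^2] / E[|signal|^2] = 10^(dB/10).
            scale = torch.sqrt(signal_power * 10.0 ** (power_db / 10.0))
            x_adv = x + scale * direction
            if model(x_adv).argmax(dim=1).item() == target.item():
                return power_db.item()
    return None  # target class unreachable within this power budget

# Usage: sweep target classes for one (placeholder) captured source signal.
model = TinyAMC()
x = torch.randn(1, 2, 128)  # placeholder IQ capture of the source modulation
for tgt in range(8):
    p = targeted_power_db(model, x, torch.tensor([tgt]))
    print(f"target {tgt}: {'unreached' if p is None else f'{p:.1f} dB'}")
```

Sweeping every (source, target) modulation pair this way would yield a matrix of minimum perturbation powers; the paper's central comparison is between such measurements and the known similarity hierarchy of the human-engineered modulations.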