
Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning: Latest Publications

Retracted on July 26, 2022: Open set recognition through unsupervised and class-distance learning
Pub Date : 2020-07-13 DOI: 10.1145/3395352.3402901
Andrew Draganov, Carter Brown, Enrico Mattei, Cass Dalton, Jaspreet Ranjit
This article has been retracted from the ACM Digital Library because of author misrepresentation. The ACM published paper used an earlier work written by Xudong Wang, Stella Yu, Long Lian, Andrew Draganov, Carter Brown, Enrico Mattei, Cass Dalton and Jaspreet Ranjit. Xudong Wang, Stella Yu and Long Lian were not included as authors on the ACM paper. As a result, ACM retracted the work from the Digital Library on July 26, 2022. The retracted work remains in the ACM Digital Library for archiving purposes only and should not be used for further research or citation purposes.
Citations: 3
Adversarial machine learning based partial-model attack in IoT
Pub Date : 2020-06-25 DOI: 10.1145/3395352.3402619
Zhengping Luo, Shangqing Zhao, Zhuo Lu, Y. Sagduyu, Jie Xu
As Internet of Things (IoT) has emerged as the next logical stage of the Internet, it has become imperative to understand the vulnerabilities of the IoT systems when supporting diverse applications. Because machine learning has been applied in many IoT systems, the security implications of machine learning need to be studied following an adversarial machine learning approach. In this paper, we propose an adversarial machine learning based partial-model attack in the data fusion/aggregation process of IoT by only controlling a small part of the sensing devices. Our numerical results demonstrate the feasibility of this attack to disrupt the decision making in data fusion with limited control of IoT devices, e.g., the attack success rate reaches 83% when the adversary tampers with only 8 out of 20 IoT devices. These results show that the machine learning engine of IoT system is highly vulnerable to attacks even when the adversary manipulates a small portion of IoT devices, and the outcome of these attacks severely disrupts IoT system operations.
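The attack described above can be illustrated with a toy majority-vote fusion simulation. This is a minimal sketch, not the paper's model: the 80% sensor accuracy, the fusion rule, and the report-flipping strategy are assumptions; only the 8-of-20 tampering ratio is taken from the abstract.

```python
import random

def fuse(reports):
    # Majority-vote data fusion: decide 1 when most sensors report 1.
    return int(sum(reports) > len(reports) / 2)

def partial_model_attack(reports, controlled):
    # The adversary flips only the reports of the sensors it controls.
    attacked = list(reports)
    for i in controlled:
        attacked[i] = 1 - attacked[i]
    return attacked

random.seed(0)
n_sensors, n_controlled, trials = 20, 8, 1000
flipped = 0
for _ in range(trials):
    # Honest sensors observe the true event (1) with 80% accuracy.
    reports = [1 if random.random() < 0.8 else 0 for _ in range(n_sensors)]
    controlled = random.sample(range(n_sensors), n_controlled)
    if fuse(partial_model_attack(reports, controlled)) != fuse(reports):
        flipped += 1
print(f"fusion decision disrupted in {flipped / trials:.0%} of trials")
```

Even this crude flipping rule disrupts a nontrivial fraction of fusion decisions while leaving 12 of 20 devices untouched, which is the qualitative point the paper makes with a learned partial model.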
Citations: 37
Over-the-air membership inference attacks as privacy threats for deep learning-based wireless signal classifiers
Pub Date : 2020-06-25 DOI: 10.1145/3395352.3404070
Yi Shi, Kemal Davaslioglu, Y. Sagduyu
This paper presents how to leak private information from a wireless signal classifier by launching an over-the-air membership inference attack (MIA). As machine learning (ML) algorithms are used to process wireless signals to make decisions such as PHY-layer authentication, the training data characteristics (e.g., device-level information) and the environment conditions (e.g., channel information) under which the data is collected may leak to the ML model. As a privacy threat, the adversary can use this leaked information to exploit vulnerabilities of the ML model following an adversarial ML approach. In this paper, the MIA is launched against a deep learning-based classifier that uses waveform, device, and channel characteristics (power and phase shifts) in the received signals for RF fingerprinting. By observing the spectrum, the adversary builds first a surrogate classifier and then an inference model to determine whether a signal of interest has been used in the training data of the receiver (e.g., a service provider). The signal of interest can then be associated with particular device and channel characteristics to launch subsequent attacks. The probability of attack success is high (more than 88% depending on waveform and channel conditions) in identifying signals of interest (and potentially the device and channel information) used to build a target classifier. These results show that wireless signal classifiers are vulnerable to privacy threats due to the over-the-air information leakage of their ML models.
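The surrogate-classifier step above can be sketched with a toy example: the adversary fits a simple model to observed signals and flags high-confidence inputs as likely training members. Everything here is hypothetical (2-D stand-in features, a nearest-centroid surrogate, Gaussian "channels"), not the paper's deep learning setup; it only illustrates that in-distribution training signals score systematically higher confidence than signals captured under different channel conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D stand-ins for RF features (e.g., power / phase-shift
# statistics): two device classes drawn as Gaussians.
train = {c: rng.normal(loc=c * 4.0, scale=1.0, size=(50, 2)) for c in (0, 1)}
centroids = {c: x.mean(axis=0) for c, x in train.items()}

def confidence(x):
    # Surrogate classifier confidence: softmax over negative centroid distances.
    d = np.array([np.linalg.norm(x - centroids[c]) for c in (0, 1)])
    e = np.exp(-d)
    return float(e.max() / e.sum())

# Signals used in training ("members") vs. signals of the same device
# observed under different channel conditions ("non-members").
members = train[0]
nonmembers = rng.normal(loc=[0.0, 6.0], scale=1.0, size=(50, 2))
member_conf = float(np.mean([confidence(x) for x in members]))
nonmember_conf = float(np.mean([confidence(x) for x in nonmembers]))
print(f"mean confidence: members {member_conf:.2f}, non-members {nonmember_conf:.2f}")
```

Thresholding this confidence gap is the essence of the membership inference decision; the paper's attack additionally has to build the surrogate purely from over-the-air observations.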
Citations: 29
Algorithm selection framework for cyber attack detection
Pub Date : 2020-05-28 DOI: 10.1145/3395352.3402623
Marc Chalé, Nathaniel D. Bastian, J. Weir
The number of cyber threats against both wired and wireless computer systems and other components of the Internet of Things continues to increase annually. In this work, an algorithm selection framework is employed on the NSL-KDD data set and a novel paradigm of machine learning taxonomy is presented. The framework uses a combination of user input and meta-features to select the best algorithm to detect cyber attacks on a network. Performance is compared between a rule-of-thumb strategy and a meta-learning strategy. The framework removes the conjecture of the common trial-and-error algorithm selection method. The framework recommends five algorithms from the taxonomy. Both strategies recommend a high-performing algorithm, though not the best performing. The work demonstrates the close connectedness between algorithm selection and the taxonomy on which it is premised.
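The rule-of-thumb strategy can be sketched as a lookup from dataset meta-features to a recommended algorithm. The meta-features and rules below are hypothetical placeholders, not the paper's learned meta-model or its NSL-KDD taxonomy; they only show the shape of the selection step.

```python
import math
from collections import Counter

def meta_features(X, y):
    # Simple dataset meta-features: size, dimensionality, class entropy.
    counts = Counter(y)
    n = len(y)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"n_samples": n, "n_features": len(X[0]), "class_entropy": entropy}

def recommend(mf):
    # Toy rule-of-thumb selector: map meta-feature regimes to one
    # algorithm from a small taxonomy (hypothetical rules).
    if mf["n_samples"] < 1000:
        return "k-nearest-neighbors"
    if mf["class_entropy"] < 0.5:
        return "decision tree"
    return "random forest"

# Tiny illustrative intrusion-detection dataset (made-up feature vectors).
X = [[0.1, 2.0, 3.1], [0.4, 1.8, 2.9], [0.2, 2.1, 3.0]]
y = ["normal", "attack", "normal"]
mf = meta_features(X, y)
print(mf, "->", recommend(mf))
```

A meta-learning strategy replaces the hand-written `recommend` rules with a model trained on (meta-features, best-algorithm) pairs from prior datasets.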
Citations: 5
Investigating a spectral deception loss metric for training machine learning-based evasion attacks
Pub Date : 2020-05-27 DOI: 10.1145/3395352.3402624
Matthew DelVecchio, Vanessa Arndorfer, W. Headley
Adversarial evasion attacks have been very successful in causing poor performance in a wide variety of machine learning applications. One such application is radio frequency spectrum sensing. While evasion attacks have proven particularly successful in this area, they have done so to the detriment of the signal's intended purpose. More specifically, for real-world applications of interest, the resulting perturbed signal that is transmitted to evade an eavesdropper must not deviate far from the original signal, lest the intended information be destroyed. Recent work by the authors and others has demonstrated an attack framework that allows for intelligent balancing between these conflicting goals of evasion and communication. However, while these methodologies consider creating adversarial signals that minimize communications degradation, they have been shown to do so at the expense of the spectral shape of the signal. This opens the adversarial signal up to defenses at the eavesdropper such as filtering, which could render the attack ineffective. To remedy this, this work introduces a new spectral deception loss metric that can be implemented during the training process to force the spectral shape to be more in line with the original signal. As an initial proof of concept, a variety of methods are presented that provide a starting point for this proposed loss. Through performance analysis, it is shown that these techniques are effective in controlling the shape of the adversarial signal.
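One plausible formulation of such a loss is the mean squared error between the magnitude spectra of the original and perturbed signals. This is an assumption for illustration (the paper investigates several candidate formulations); the waveform below is a made-up stand-in.

```python
import numpy as np

def spectral_deception_loss(original, perturbed):
    # Penalize deviation between the magnitude spectra of the original
    # and adversarially perturbed signals, so the perturbation stays
    # spectrally in line with the intended transmission.
    spec_o = np.abs(np.fft.fft(original))
    spec_p = np.abs(np.fft.fft(perturbed))
    return float(np.mean((spec_o - spec_p) ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.cos(2 * np.pi * 10 * t)  # stand-in for a transmitted waveform
small = signal + 0.01 * rng.standard_normal(256)  # mild perturbation
large = signal + 0.5 * rng.standard_normal(256)   # spectrally disruptive one
print(spectral_deception_loss(signal, small), spectral_deception_loss(signal, large))
```

Added to the adversarial objective with a weighting term, a loss of this form trades evasion strength against how detectable the perturbation is to a filtering eavesdropper.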
Citations: 9