Towards an Adversarial Machine Learning Framework in Cyber-Physical Systems

John Mulo, Pu Tian, Adamu Hussaini, Hengshuo Liang, Wei Yu
DOI: 10.1109/SERA57763.2023.10197774
Published in: 2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA)
Publication date: 2023-05-23
Citations: 0

Abstract

The application of machine learning (ML) in cyber-physical systems (CPS), such as the smart energy grid, has increased significantly. While ML technology can be integrated into CPS, the security risks of ML technology must be considered. In particular, adversarial examples are inputs to an ML model with intentionally crafted perturbations (noise) that can cause the model to make incorrect decisions. Perturbations are expected to be small or marginal, so that adversarial examples are invisible to humans yet significantly affect the output of ML models. In this paper, we design a taxonomy that frames the problem space for investigating adversarial example generation techniques, based on the state-of-the-art literature. We propose a three-dimensional framework whose dimensions are the adversarial attack scenario (i.e., black-box, white-box, and gray-box), the target type, and the adversarial example generation method (gradient-based, score-based, decision-based, transfer-based, and others). Based on the designed taxonomy, we systematically review existing research efforts on adversarial ML in representative CPS domains (i.e., transportation, healthcare, and energy). Furthermore, we provide a case study demonstrating the impact of adversarial example attacks on a smart energy CPS deployment. The results indicate that accuracy can decrease significantly, from 92.62% to 55.42%, with a 30% adversarial sample injection. Finally, we discuss potential countermeasures and future research directions for adversarial ML.
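To make the gradient-based category of the taxonomy concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), the canonical gradient-based generation technique, to a toy logistic-regression model. This is an illustrative assumption, not the model or attack used in the paper's case study: the weights, input, and epsilon are hypothetical, chosen only to show how a small, bounded perturbation flips a correct prediction.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the output layer of our toy model."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM for a logistic-regression model (hypothetical example).

    The gradient of the cross-entropy loss with respect to the input x
    is (sigmoid(w.x + b) - y) * w; stepping eps in its sign direction
    maximally increases the loss under an L-infinity budget of eps.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy setup: a clean input classified correctly as class 1.
w = np.array([2.0, -1.0])          # model weights (assumed)
b = 0.0
x = np.array([0.5, 0.2])           # w.x + b = 0.8 > 0  -> class 1
y = 1.0                            # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
# grad_x = (sigmoid(0.8) - 1) * w, whose sign is [-1, +1],
# so x_adv = [0.0, 0.7] and w.x_adv + b = -0.7 < 0 -> class 0.
```

Each coordinate of `x_adv` moves by at most `eps`, which is the sense in which the perturbation stays "small or marginal" while still flipping the model's decision; score-based and decision-based methods pursue the same goal without access to the gradient.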