Towards an Adversarial Machine Learning Framework in Cyber-Physical Systems
John Mulo, Pu Tian, Adamu Hussaini, Hengshuo Liang, Wei Yu
2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA), May 2023. DOI: 10.1109/SERA57763.2023.10197774
Abstract
Applications of machine learning (ML) in cyber-physical systems (CPS), such as the smart energy grid, have increased significantly. While ML technology can be integrated into CPS, its security risks must be considered. In particular, adversarial examples are inputs to an ML model with intentionally crafted perturbations (noise) that can cause the model to make incorrect decisions. The perturbations are designed to be small enough that the adversarial examples remain imperceptible to humans, yet they can significantly affect the output of ML models. In this paper, we design a taxonomy that maps the problem space of adversarial example generation techniques based on the state-of-the-art literature. We propose a three-dimensional framework whose dimensions are the adversarial attack scenario (i.e., black-box, white-box, and gray-box), the target type, and the adversarial example generation method (gradient-based, score-based, decision-based, transfer-based, and others). Based on this taxonomy, we systematically review existing research on adversarial ML in representative CPS domains (i.e., transportation, healthcare, and energy). Furthermore, we provide a case study demonstrating the impact of adversarial example attacks on a smart energy CPS deployment. The results indicate that accuracy can decrease significantly, from 92.62% to 55.42%, with a 30% adversarial sample injection. Finally, we discuss potential countermeasures and future research directions for adversarial ML.
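To make the gradient-based generation method concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy classifier, with adversarial samples injected into 30% of a test set. The model, synthetic data, epsilon value, and injection logic are illustrative assumptions and do not reproduce the paper's smart-grid experiment.

```python
# Minimal FGSM sketch (gradient-based adversarial example generation).
# The classifier, synthetic data, and epsilon are illustrative assumptions,
# not the paper's smart energy CPS setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n):
    """Synthetic stand-in for CPS sensor readings with a simple decision rule."""
    x = torch.randn(n, 20)
    y = (x[:, :10].sum(dim=1) > 0).long()
    return x, y

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Briefly fit the toy model so the adversarial accuracy drop is visible.
x_train, y_train = make_data(2000)
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

def fgsm(x, y, epsilon):
    """Fast Gradient Sign Method: step along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Inject adversarial samples into 30% of the test set, mirroring the
# injection-rate idea from the case study (numbers here are illustrative).
x_test, y_test = make_data(500)
x_adv = fgsm(x_test, y_test, epsilon=0.5)
mask = torch.rand(len(x_test)) < 0.3
x_mixed = torch.where(mask.unsqueeze(1), x_adv, x_test)

print(f"clean accuracy:            {accuracy(x_test, y_test):.4f}")
print(f"30% adversarial injection: {accuracy(x_mixed, y_test):.4f}")
```

Larger perturbation magnitudes or injection rates degrade accuracy further; quantifying this degradation on a deployed smart energy CPS is the focus of the paper's case study.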