Adaptive Normalized Attacks for Learning Adversarial Attacks and Defenses in Power Systems
Jiwei Tian, Tengyao Li, Fute Shang, Kunrui Cao, Jing Li, M. Ozay
2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm)
Published: 2019-10-01
DOI: 10.1109/SmartGridComm.2019.8909713
Citations: 9
Abstract
The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods face a serious threat from adversarial examples. To this end, we first propose a more accurate and computationally efficient method, called Adaptive Normalized Attack (ANA), to attack power systems by generating adversarial examples. We then adopt adversarial training to defend against such attacks. Experimental analyses demonstrate that our attack method requires less perturbation than the state-of-the-art FGSM (Fast Gradient Sign Method) and DeepFool, while achieving a higher misclassification rate when attacking the learning methods used in power systems. In addition, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples compared to state-of-the-art methods.
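For context, the FGSM baseline that the abstract compares against perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇ₓL). The sketch below illustrates this on a toy logistic classifier over hypothetical sensor measurements; the model, weights, and data are made up for illustration, and this is the well-known FGSM baseline, not the paper's ANA method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a binary logistic classifier: x_adv = x + eps * sign(dL/dx).

    For binary cross-entropy loss with p = sigmoid(w.x + b),
    the gradient of the loss w.r.t. the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)  # uniform per-feature perturbation

# Hypothetical 4-feature measurement vector and hand-picked weights.
w = np.array([0.8, -0.5, 0.3, 0.1])
b = -0.2
x = np.array([1.0, 2.0, -1.0, 0.5])
y = 1.0  # assumed true label

x_adv = fgsm_attack(x, y, w, b, eps=0.3)
print(x_adv)  # each feature shifted by exactly +/- eps
```

Note that FGSM moves every feature by the full ε regardless of its influence on the loss; the abstract's claim that ANA attains a higher misclassification rate with less perturbation suggests it adapts the perturbation rather than applying this uniform step.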