{"title":"Reducing Adversarial Vulnerability Using GANs","authors":"Ciprian-Alin Simion","doi":"10.1109/SYNASC57785.2022.00064","DOIUrl":null,"url":null,"abstract":"The Cyber-Threat industry is ever-growing and it is very likely that malware creators are using generative methods to create new malware as these algorithms prove to be very potent. As the majority of researchers in this field are focused on new methods to generate better adversarial examples (w.r.t. fidelity, variety or number) just a small portion of them are concerned with defense methods. This paper explores three methods of feature selection in the context of adversarial attacks. These methods aim to reduce the vulnerability of a Multi-Layer Perceptron to GAN-inflicted attacks by removing features based on rankings computed by type or by using LIME or F-Score. Even if no strong conclusion can be drawn, this paper stands as a Proof-of-Concept that because of good results in some cases, adversarial feature selection is a worthy exploration path.","PeriodicalId":446065,"journal":{"name":"2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SYNASC57785.2022.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The cyber-threat industry is ever-growing, and it is very likely that malware creators are using generative methods to produce new malware, as these algorithms have proven very potent. While the majority of researchers in this field focus on new methods for generating better adversarial examples (with respect to fidelity, variety, or number), only a small portion are concerned with defense methods. This paper explores three feature-selection methods in the context of adversarial attacks. These methods aim to reduce the vulnerability of a Multi-Layer Perceptron to GAN-driven attacks by removing features based on rankings computed by feature type, by LIME, or by F-score. Although no strong conclusion can be drawn, the good results obtained in some cases make this paper a proof of concept that adversarial feature selection is a path worth exploring.
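Of the three ranking criteria the abstract names, the F-score is the most self-contained to illustrate. The sketch below is a minimal, hedged illustration of F-score-based feature ranking for a binary (clean vs. adversarial/malicious) label, not the paper's actual implementation; function names, the toy data, and the `k` cutoff are all assumptions made for the example.

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score for a binary label vector y in {0, 1}.

    Between-class separation divided by within-class spread; higher
    scores indicate features that better separate the two classes.
    (Illustrative criterion; the paper's exact scoring may differ.)
    """
    pos, neg = X[y == 1], X[y == 0]
    mean_all = X.mean(axis=0)
    num = (pos.mean(axis=0) - mean_all) ** 2 + (neg.mean(axis=0) - mean_all) ** 2
    # Small epsilon guards against division by zero for constant features.
    den = pos.var(axis=0, ddof=1) + neg.var(axis=0, ddof=1) + 1e-12
    return num / den

def select_top_features(X, y, k):
    """Keep the k highest-scoring feature columns; returns (X_reduced, kept_idx)."""
    idx = np.argsort(f_score(X, y))[::-1][:k]
    return X[:, idx], idx

# Toy demo: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 2))
X[:, 0] += 3 * y  # shift class 1 along feature 0
X_red, kept = select_top_features(X, y, k=1)
```

In the defensive setting the abstract describes, the low-ranked (dropped) columns would then be excluded when training the MLP, on the premise that removing weakly discriminative features shrinks the surface a GAN can exploit.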