A Co-evolutionary Algorithm-Based Malware Adversarial Sample Generation Method
Fangwei Wang, Yuanyuan Lu, Qingru Li, Changguang Wang, Yonglei Bai
2022 IEEE Conference on Dependable and Secure Computing (DSC), published 2022-06-22
DOI: 10.1109/DSC54232.2022.9888884
Citations: 0
Abstract
Studying adversarial attacks on malicious code detection models helps identify and remedy the flaws of those models, strengthens their ability to detect adversarial attacks, and enhances the security of applications based on artificial intelligence (AI) algorithms. To address the problems of low efficiency, long generation time, and low evasion rates in adversarial sample generation, we propose a co-evolutionary algorithm-based adversarial sample generation method. We decompose the adversarial sample generation problem into three sub-problems: minimizing the number of modification actions, injecting less content, and being detected as benign by the target model. The latter two sub-problems, injecting less content and being detected as benign by the target model, are solved by minimizing a fitness function through the cooperation of two populations during coevolution; minimizing the number of actions is achieved by a selection operation in the evolutionary process. We perform attack experiments on static malware detection models and commercial detection engines. The experimental results show that the generated adversarial samples can raise the evasion rate against several detection engines while keeping the number of modification actions minimal and the amount of injected content small. On two static malware detection models, our approach achieves an evasion rate of more than 80% with fewer modification actions and less injected content. The evasion rate on three commercial detection engines reaches 58.9%. Uploading the generated adversarial samples to the VirusTotal platform evades an average of 54.0% of the anti-virus programs on the platform. Our approach is also compared with an adversarial attack approach based on an evolutionary algorithm, verifying the necessity of minimizing the number of modification actions and the amount of injected content in adversarial sample generation.
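To make the decomposition concrete, the loop below is a minimal toy sketch of cooperative coevolution in the spirit the abstract describes: two populations of modification-action sequences are evaluated jointly, the fitness combines a detection score with the amount of injected content, and a tie-break in selection favors individuals with fewer actions. All names (the action list, the `detection_score` stand-in, the fitness weights) are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical PE modification actions; each individual is a list of
# (action, payload_size) pairs applied to a malware sample.
ACTIONS = ["append_bytes", "add_section", "pad_overlay"]

def random_individual(max_actions=5):
    n = random.randint(1, max_actions)
    return [(random.choice(ACTIONS), random.randint(16, 256)) for _ in range(n)]

def injected_content(ind):
    # Sub-problem "inject less content": total bytes injected.
    return sum(size for _, size in ind)

def detection_score(joint):
    # Toy stand-in for the target model's maliciousness score in [0, 1];
    # a real attack would query the actual static detector here.
    return max(0.0, 1.0 - 0.002 * injected_content(joint))

def fitness(a, b):
    # The two populations cooperate: a joint solution concatenates one
    # individual from each. Lower fitness is better and jointly covers
    # the "evade detection" and "inject less content" sub-problems.
    joint = a + b
    return detection_score(joint) + 0.001 * injected_content(joint)

def mutate(ind):
    child = list(ind)
    if child and random.random() < 0.5:
        # Dropping actions keeps pressure toward shorter sequences.
        child.pop(random.randrange(len(child)))
    else:
        child.append((random.choice(ACTIONS), random.randint(16, 256)))
    return child or [(random.choice(ACTIONS), 16)]

def evolve(generations=30, pop_size=20):
    pop_a = [random_individual() for _ in range(pop_size)]
    pop_b = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate each population against the best partner so far;
        # the (fitness, length) key is the selection operation that
        # minimizes the number of modification actions as a tie-break.
        best_b = min(pop_b, key=lambda b: fitness(pop_a[0], b))
        pop_a.sort(key=lambda a: (fitness(a, best_b), len(a)))
        best_a = pop_a[0]
        pop_b.sort(key=lambda b: (fitness(best_a, b), len(b)))
        for pop in (pop_a, pop_b):
            survivors = pop[: pop_size // 2]
            pop[:] = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return pop_a[0], pop_b[0]
```

In a real attack the `detection_score` call would be replaced by queries to the target static detector, and the action payloads would be actual byte-level edits that preserve the sample's functionality.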