A Co-evolutionary Algorithm-Based Malware Adversarial Sample Generation Method

Fangwei Wang, Yuanyuan Lu, Qingru Li, Changguang Wang, Yonglei Bai
DOI: 10.1109/DSC54232.2022.9888884
Venue: 2022 IEEE Conference on Dependable and Secure Computing (DSC)
Published: 2022-06-22
Citations: 0

Abstract

Studying adversarial attacks on malicious code detection models helps identify and remedy flaws in those models, improves their ability to detect adversarial attacks, and enhances the security of applications built on AI (Artificial Intelligence) algorithms. To address the low efficiency, long generation time, and low evasion rate of existing adversarial sample generation methods, we propose a co-evolutionary algorithm-based adversarial sample generation method. We decompose the adversarial sample generation problem into three sub-problems: minimizing the number of modification actions, injecting less content, and being classified as benign by the target model. The latter two sub-problems are solved by minimizing a fitness function through the cooperation of two populations during co-evolution, while minimizing the number of actions is achieved by a selection operation in the evolutionary process. We perform attack experiments on static malware detection models and commercial detection engines. The results show that the generated adversarial samples improve the evasion rate against several detection engines while keeping the number of modification actions minimal and the amount of injected content small. On two static malware detection models, our approach achieves an evasion rate above 80% with fewer modification actions and less injected content. The evasion rate on three commercial detection engines reaches 58.9%. Uploading the generated adversarial samples to the VirusTotal platform evades, on average, 54.0% of the anti-virus programs on the platform. We also compare our approach with an evolutionary algorithm-based adversarial attack to verify the necessity of minimizing the number of modification actions and the amount of injected content during adversarial sample generation.
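The abstract describes the optimization only at a high level: two populations cooperate to minimize a fitness function covering detection score and injected content, while selection keeps the number of modification actions small. The sketch below is a minimal, hypothetical Python illustration of that co-evolutionary idea, not the authors' implementation: the action set, per-action byte costs, and the stand-in `detector_score` are all invented for the example, and the real method would operate on actual PE files and a trained detector.

```python
import random

random.seed(0)

# Hypothetical setup: an individual is a list of "modification actions",
# each injecting some number of bytes into a (simulated) PE file.
ACTIONS = list(range(8))                            # 8 stand-in action types
ACTION_BYTES = {a: 64 * (a + 1) for a in ACTIONS}   # bytes injected per action

def injected_bytes(individual):
    return sum(ACTION_BYTES[a] for a in individual)

def detector_score(individual):
    # Stand-in for the target model: more (and more varied) injected
    # content lowers the "malicious" score, mimicking evasion.
    return max(0.0, 1.0 - 0.08 * len(set(individual))
                        - 0.0002 * injected_bytes(individual))

def fitness(individual):
    # Lower is better: detection score plus a penalty on injected content.
    # (The paper assigns these objectives to two cooperating populations;
    # here they are combined into one function for brevity.)
    return detector_score(individual) + 0.0001 * injected_bytes(individual)

def mutate(individual):
    child = individual[:]
    if child and random.random() < 0.5:
        child.pop(random.randrange(len(child)))     # drop an action
    else:
        child.append(random.choice(ACTIONS))        # add an action
    return child

def evolve(generations=60, pop_size=20):
    # Two populations co-evolve: each generation they exchange their best
    # halves, and elitist selection prefers lower fitness, breaking ties
    # by fewer actions (the "minimize modification actions" sub-problem).
    pops = [[[random.choice(ACTIONS)] for _ in range(pop_size)]
            for _ in range(2)]
    for _ in range(generations):
        for i in (0, 1):
            offspring = [mutate(ind) for ind in pops[i]]
            merged = pops[i] + offspring + pops[1 - i][: pop_size // 2]
            merged.sort(key=lambda ind: (fitness(ind), len(ind)))
            pops[i] = merged[:pop_size]
    return min(pops[0] + pops[1], key=lambda ind: (fitness(ind), len(ind)))

best = evolve()
print(len(best), injected_bytes(best), round(detector_score(best), 3))
```

Under this toy fitness, adding content lowers the detection score until it bottoms out, after which the content penalty and the tie-break on length push the search back toward short, cheap action sequences, mirroring the trade-off the paper describes.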