From Image to Code: Executable Adversarial Examples of Android Applications

Shangyu Gu, Shaoyin Cheng, Weiming Zhang
Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence
DOI: 10.1145/3404555.3404574
Published: 2020-04-23
Citations: 9

Abstract

In recent years, machine learning has been widely used in malware analysis and has achieved unprecedented success. However, deep learning models have been found to be highly vulnerable to adversarial examples, which leaves machine learning-based malware analysis methods exposed to malware authors. Exploring attack algorithms can promote not only the development of more effective malware analysis methods but also the development of defense algorithms. Different machine learning models use different malware features as their classification basis, and accordingly different attack methods exist against them. For malware visualization methods, no correspondingly effective adversarial attack has yet appeared. Most existing adversarial examples for malware visualization are generated at the feature level and do not consider whether the generated examples can be executed and still perform their original functions. In this paper, we explore how to modify an Android executable file without affecting its original functions so that it becomes an adversarial example. We propose an executable adversarial example attack strategy against machine learning-based malware visualization analysis. Experimental results show that the executable adversarial examples we generated run normally on Android devices without affecting their original functions, and that they confuse the malware family classifier with a 93% success rate. We also explore possible defense methods and hope to contribute to building a more robust malware classification method.
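For context, malware visualization classifiers of the kind attacked here typically render an executable's raw bytes as a grayscale image before classification. The sketch below is an assumption about that pipeline, not the authors' exact implementation: bytes are read as unsigned 8-bit values and reshaped to a fixed-width image, so appending bytes that the runtime ignores changes the classifier's input without changing the program's behavior, which is the kind of function-preserving modification an executable adversarial example requires.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Render a byte string as a 2-D uint8 grayscale image of the given width."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = int(np.ceil(len(arr) / width))
    padded = np.zeros(height * width, dtype=np.uint8)  # zero-pad the final row
    padded[:len(arr)] = arr
    return padded.reshape(height, width)

# Hypothetical illustration: appending payload bytes to an unused region of
# the file alters the rendered image (extra rows) while the original bytes,
# and hence the program's behavior, are untouched.
original = bytes(range(256)) * 4          # stand-in for an APK's raw bytes
adversarial = original + b"\xff" * 512    # hypothetical appended perturbation

img_a = bytes_to_image(original)          # shape (4, 256)
img_b = bytes_to_image(adversarial)       # shape (6, 256); first 4 rows unchanged
```

Because the perturbation only appends rows, an image-based classifier sees a different input while any byte-offset-based execution of the original file is preserved, assuming the file format tolerates trailing data.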