A Comprehensive Analysis of Explainable AI for Malware Hunting

Pub Date: 2024-07-11 · DOI: 10.1145/3677374
Mohd Saqib, Samaneh Mahdavifar, Benjamin C. M. Fung, P. Charland

Abstract

In the past decade, the number of malware variants has increased rapidly. Many researchers have proposed detecting malware with intelligent techniques, such as Machine Learning (ML) and Deep Learning (DL), which achieve high accuracy and precision. These methods, however, are opaque in their decision-making process. Therefore, Artificial Intelligence (AI)-based models must be explainable, interpretable, and transparent in order to be reliable and trustworthy. In this survey, we review articles on Explainable AI (XAI) and their application to the domain of malware detection. The article provides a comprehensive examination of the various XAI algorithms employed in malware analysis. Moreover, we address the characteristics, challenges, and requirements of malware analysis that standard XAI methods cannot accommodate. We also discuss how, even though Explainable Malware Detection (EMD) models provide explainability, they can make an AI-based model more vulnerable to adversarial attacks. Finally, we propose a framework that assigns a level of explainability to each XAI malware analysis model, based on the security features involved in each method. In summary, the proposed project combines XAI and malware analysis, applying XAI models to scrutinize the opaque nature of AI systems and their applications to malware analysis.
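To make the post-hoc explainability idea concrete, here is a minimal sketch of a model-agnostic, occlusion-style attribution applied to a toy malware scoring function. This is an illustration only, not a method from the paper: the feature names, weights, and scoring function are all hypothetical, and real EMD systems would explain a trained ML/DL classifier rather than a hand-written linear score.

```python
# Hypothetical example: occlusion-based feature attribution for a
# toy "malware score". All features and weights are illustrative.

FEATURES = ["entropy", "num_imports", "uses_crypto_api", "packed"]
WEIGHTS = {"entropy": 0.8, "num_imports": 0.1,
           "uses_crypto_api": 0.5, "packed": 1.2}

def malware_score(sample: dict) -> float:
    """Stand-in black-box model: higher score = more likely malicious."""
    return sum(WEIGHTS[f] * sample.get(f, 0.0) for f in FEATURES)

def occlusion_attribution(sample: dict) -> dict:
    """Attribute the score to each feature by zeroing it out and
    measuring the resulting drop -- a simple model-agnostic XAI step."""
    base = malware_score(sample)
    attributions = {}
    for f in FEATURES:
        occluded = dict(sample, **{f: 0.0})  # occlude one feature
        attributions[f] = base - malware_score(occluded)
    return attributions

sample = {"entropy": 7.5, "num_imports": 3.0,
          "uses_crypto_api": 1.0, "packed": 1.0}
print(occlusion_attribution(sample))
```

The attribution map lets an analyst see which features drove the verdict (here, high section entropy dominates). The survey's adversarial-attack point also follows from this sketch: an attacker who can query such explanations learns exactly which features to perturb to flip the decision.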