Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: a review

IF 10.7 · CAS Zone 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Artificial Intelligence Review · Pub Date: 2024-09-18 · DOI: 10.1007/s10462-024-10890-4
Noor Hazlina Abdul Mutalib, Aznul Qalid Md Sabri, Ainuddin Wahid Abdul Wahab, Erma Rahayu Mohd Faizal Abdullah, Nouar AlDahoul
{"title":"Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: a review","authors":"Noor Hazlina Abdul Mutalib,&nbsp;Aznul Qalid Md Sabri,&nbsp;Ainuddin Wahid Abdul Wahab,&nbsp;Erma Rahayu Mohd Faizal Abdullah,&nbsp;Nouar AlDahoul","doi":"10.1007/s10462-024-10890-4","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased through sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attacks such as remote-to-local (R2L) and user-to-root (U2R) attacks. This paper addresses these challenges by providing a foundational discussion of APTs and the limitations of existing detection methods. It then pivots to explore the novel integration of deep learning techniques and Explainable Artificial Intelligence (XAI) to improve APT detection. This paper aims to fill the gaps in the current research by providing a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box models more transparent and interpretable. The objective is to demonstrate the necessity of explainability in APT detection and propose solutions that enhance the trustworthiness and effectiveness of these models. It offers a critical analysis of existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. This paper also suggests future research directions to combat evolving threats, paving the way for more effective and reliable cybersecurity solutions. Overall, this paper emphasizes the importance of explainability in enhancing the performance and trustworthiness of cybersecurity systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 11","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10890-4.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-024-10890-4","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased, relying on ever more sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attack classes such as remote-to-local (R2L) and user-to-root (U2R). This paper addresses these challenges by providing a foundational discussion of APTs and the limitations of existing detection methods. It then explores the integration of deep learning techniques with Explainable Artificial Intelligence (XAI) to improve APT detection. The paper aims to fill gaps in current research through a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box models more transparent and interpretable. The objective is to demonstrate the necessity of explainability in APT detection and to propose solutions that enhance the trustworthiness and effectiveness of these models. The review offers a critical analysis of existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. It also suggests future research directions for combating evolving threats, paving the way for more effective and reliable cybersecurity solutions. Overall, the paper emphasizes the importance of explainability in enhancing both the performance and the trustworthiness of cybersecurity systems.
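To make the role of these XAI methods concrete, the sketch below applies SHAP and LIME to a toy stand-in for an intrusion-detection classifier. Everything here is an illustrative assumption rather than code from the reviewed works: the synthetic flow features, the feature names, the label rule, and the random-forest model are all hypothetical placeholders for the deep models the review surveys.

```python
# Minimal sketch (illustrative, not from the paper): explaining why a
# stand-in detection model flags one network flow as malicious.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]

# Hypothetical flow-level features; a toy rule stands in for "malicious".
X = rng.random((500, 4))
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP, model-agnostic mode: treats the classifier as a black box and
# attributes its predicted probability to the individual input features.
explainer = shap.Explainer(model.predict_proba, X[:100],
                           feature_names=feature_names)
explanation = explainer(X[:1])

# Per-feature contribution to the "malicious" class (index 1) for one flow.
for name, contrib in zip(feature_names, explanation.values[0, :, 1]):
    print(f"SHAP  {name}: {contrib:+.3f}")

# LIME gives a complementary local explanation by fitting a simple
# surrogate model in the neighbourhood of the same flow.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "malicious"])
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=4)
for rule, weight in lime_exp.as_list():
    print(f"LIME  {rule}: {weight:+.3f}")
```

In a real pipeline the random forest would be replaced by one of the surveyed deep models, and the synthetic matrix by labelled traffic such as NSL-KDD flows, the benchmark that defines the R2L and U2R categories mentioned above.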


Source journal
Artificial Intelligence Review (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.