Adversarial robustness of deep reinforcement learning-based intrusion detection

IF 2.4 | CAS Zone 4 (Computer Science) | JCR Q3 (Computer Science, Information Systems) | International Journal of Information Security | Pub Date: 2024-08-29 | DOI: 10.1007/s10207-024-00903-2
Mohamed Amine Merzouk, Christopher Neal, Joséphine Delas, Reda Yaich, Nora Boulahia-Cuppens, Frédéric Cuppens
{"title":"基于深度强化学习的入侵检测的对抗鲁棒性","authors":"Mohamed Amine Merzouk, Christopher Neal, Joséphine Delas, Reda Yaich, Nora Boulahia-Cuppens, Frédéric Cuppens","doi":"10.1007/s10207-024-00903-2","DOIUrl":null,"url":null,"abstract":"<p>Machine learning techniques, including Deep Reinforcement Learning (DRL), enhance intrusion detection systems by adapting to new threats. However, DRL’s reliance on vulnerable deep neural networks leads to susceptibility to adversarial examples-perturbations designed to evade detection. While adversarial examples are well-studied in deep learning, their impact on DRL-based intrusion detection remains underexplored, particularly in critical domains. This article conducts a thorough analysis of DRL-based intrusion detection’s vulnerability to adversarial examples. It systematically evaluates key hyperparameters such as DRL algorithms, neural network depth, and width, impacting agents’ robustness. The study extends to black-box attacks, demonstrating adversarial transferability across DRL algorithms. Findings emphasize neural network architecture’s critical role in DRL agent robustness, addressing underfitting and overfitting challenges. Practical implications include insights for optimizing DRL-based intrusion detection agents to enhance performance and resilience. Experiments encompass multiple DRL algorithms tested on three datasets: NSL-KDD, UNSW-NB15, and CICIoV2024, against gradient-based adversarial attacks, with publicly available implementation code.\n</p>","PeriodicalId":50316,"journal":{"name":"International Journal of Information Security","volume":"19 1","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial robustness of deep reinforcement learning-based intrusion detection\",\"authors\":\"Mohamed Amine Merzouk, Christopher Neal, Joséphine Delas, Reda Yaich, Nora Boulahia-Cuppens, Frédéric Cuppens\",\"doi\":\"10.1007/s10207-024-00903-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Machine learning techniques, including Deep Reinforcement Learning (DRL), enhance intrusion detection systems by adapting to new threats. However, DRL’s reliance on vulnerable deep neural networks leads to susceptibility to adversarial examples-perturbations designed to evade detection. While adversarial examples are well-studied in deep learning, their impact on DRL-based intrusion detection remains underexplored, particularly in critical domains. This article conducts a thorough analysis of DRL-based intrusion detection’s vulnerability to adversarial examples. It systematically evaluates key hyperparameters such as DRL algorithms, neural network depth, and width, impacting agents’ robustness. The study extends to black-box attacks, demonstrating adversarial transferability across DRL algorithms. Findings emphasize neural network architecture’s critical role in DRL agent robustness, addressing underfitting and overfitting challenges. Practical implications include insights for optimizing DRL-based intrusion detection agents to enhance performance and resilience. 
Experiments encompass multiple DRL algorithms tested on three datasets: NSL-KDD, UNSW-NB15, and CICIoV2024, against gradient-based adversarial attacks, with publicly available implementation code.\\n</p>\",\"PeriodicalId\":50316,\"journal\":{\"name\":\"International Journal of Information Security\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10207-024-00903-2\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Security","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10207-024-00903-2","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Machine learning techniques, including Deep Reinforcement Learning (DRL), enhance intrusion detection systems by adapting to new threats. However, DRL's reliance on deep neural networks makes it susceptible to adversarial examples: perturbations designed to evade detection. While adversarial examples are well studied in deep learning, their impact on DRL-based intrusion detection remains underexplored, particularly in critical domains. This article conducts a thorough analysis of the vulnerability of DRL-based intrusion detection to adversarial examples. It systematically evaluates how key hyperparameters, such as the DRL algorithm and the depth and width of the neural network, affect agents' robustness. The study extends to black-box attacks, demonstrating adversarial transferability across DRL algorithms. The findings emphasize the critical role of neural network architecture in DRL agent robustness and address underfitting and overfitting challenges. Practical implications include insights for optimizing DRL-based intrusion detection agents to enhance their performance and resilience. The experiments cover multiple DRL algorithms evaluated against gradient-based adversarial attacks on three datasets (NSL-KDD, UNSW-NB15, and CICIoV2024), with publicly available implementation code.
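For context on the threat model: the gradient-based attacks referenced above typically follow the Fast Gradient Sign Method (FGSM) pattern, which perturbs an input in the direction of the sign of the loss gradient, subject to an L-infinity budget. The sketch below is a minimal illustration in PyTorch, not the authors' published implementation; the names q_net and benign_action, the treatment of Q-values as classification logits, and the min-max-normalized feature range are all assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def fgsm_evasion(q_net, x, eps, benign_action=0):
        """Craft an FGSM-style adversarial example against a DRL-based detector.

        q_net         -- the agent's Q-network, mapping flow features to action values
        x             -- features of a malicious flow, shape (batch, n_features)
        eps           -- L-infinity perturbation budget
        benign_action -- index of the "classify as benign" action (assumed 0)
        """
        x_adv = x.clone().detach().requires_grad_(True)
        q_values = q_net(x_adv)  # action values, here treated as logits
        # Targeted variant: push the agent toward the benign action.
        target = torch.full((x_adv.size(0),), benign_action, dtype=torch.long)
        loss = F.cross_entropy(q_values, target)
        loss.backward()
        # Step against the gradient to decrease the benign-action loss,
        # staying within the L-infinity ball of radius eps around x.
        x_adv = (x_adv - eps * x_adv.grad.sign()).detach()
        # Keep features in a plausible range (assumes min-max normalization).
        return x_adv.clamp(0.0, 1.0)

A black-box transferability check, in the spirit of the article's experiments, would craft x_adv against one agent and then measure how often it also evades an agent trained with a different DRL algorithm on the same dataset.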

Source journal
International Journal of Information Security (CAS category: Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 6.30
Self-citation rate: 3.10%
Articles per year: 52
Review time: 12 months
Journal overview: The International Journal of Information Security is an English language periodical on research in information security which offers prompt publication of important technical work, whether theoretical, applicable, or related to implementation. Coverage includes system security: intrusion detection, secure end systems, secure operating systems, database security, security infrastructures, security evaluation; network security: Internet security, firewalls, mobile security, security agents, protocols, anti-virus and anti-hacker measures; content protection: watermarking, software protection, tamper resistant software; applications: electronic commerce, government, health, telecommunications, mobility.
Latest articles in this journal
“Animation” URL in NFT marketplaces considered harmful for privacy
An overview of proposals towards the privacy-preserving publication of trajectory data
Enhancing privacy protections in national identification systems: an examination of stakeholders’ knowledge, attitudes, and practices of privacy by design
An enhanced and verifiable lightweight authentication protocol for securing the Internet of Medical Things (IoMT) based on CP-ABE encryption
Secure multi-party computation with legally-enforceable fairness