Assessing generalizability of Deep Reinforcement Learning algorithms for Automated Vulnerability Assessment and Penetration Testing

Array · Pub Date: 2024-09-27 · DOI: 10.1016/j.array.2024.100365
Impact Factor 2.3 · Q2, Computer Science, Theory & Methods
Andrea Venturi , Mauro Andreolini , Mirco Marchetti , Michele Colajanni
Array, Volume 24, Article 100365
Citations: 0

Abstract

Modern cybersecurity best practices and standards require continuous Vulnerability Assessment (VA) and Penetration Testing (PT). These activities are expensive in both human effort and time. The research community is trying to propose autonomous or semi-autonomous solutions based on Deep Reinforcement Learning (DRL) agents, but current proposals require further investigation. We observe that the related literature reports performance tests of the proposed agents against a limited subset of the hosts used to train the models, thus raising questions about their applicability in realistic scenarios. The main contribution of this paper is to fill this gap by investigating the generalization capabilities of existing DRL agents to extend their VAPT operations to hosts that were not used in the training phase. To this end, we define a novel VAPT environment through which we devise multiple evaluation scenarios. While evidencing the limited capabilities of shallow RL approaches, we consider three state-of-the-art deep RL agents, namely Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Advantage Actor–Critic (A2C), and use them as the basis for VAPT operations. The results show that the A2C-based agent outperforms the others because it is more adaptable to unknown hosts and converges faster. Our methodology can guide future researchers and practitioners in designing a new generation of semi-autonomous VAPT tools that are suitable for real-world contexts.
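The evaluation protocol the abstract describes — training agents on one set of hosts and measuring how they perform on hosts never seen during training — can be illustrated with a toy sketch. Everything below is a hypothetical simplification: the host configurations, service names, and the environment API are illustrative assumptions, not the authors' actual VAPT environment or implementation.

```python
# Toy sketch of a train/test host split for assessing generalization of a
# VAPT agent. A "host" is reduced to a single vulnerable service; the agent
# probes services and succeeds when it exploits the right one.

class VAPTEnv:
    """Minimal episodic environment: probe services on one host until the
    vulnerable service is exploited or the action budget runs out."""

    SERVICES = ["ssh", "http", "smb", "ftp"]

    def __init__(self, host):
        self.host = host          # e.g. {"vulnerable": "smb"}
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0                  # opaque initial observation

    def step(self, action):
        """action: index into SERVICES; returns (obs, reward, done)."""
        self.steps += 1
        if self.SERVICES[action] == self.host["vulnerable"]:
            return 1, 10.0, True  # successful exploit
        if self.steps >= 10:
            return 0, -1.0, True  # action budget exhausted
        return 0, -1.0, False     # failed probe, keep trying


def evaluate(policy, hosts):
    """Mean episode reward of `policy` over a set of hosts."""
    total = 0.0
    for host in hosts:
        env = VAPTEnv(host)
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
    return total / len(hosts)


train_hosts = [{"vulnerable": "smb"}, {"vulnerable": "http"}]
test_hosts = [{"vulnerable": "ftp"}]   # never seen during training

# A policy overfitted to the training hosts: it always attacks smb,
# so it scores well on the smb host but fails on every unseen host.
overfit_policy = lambda obs: VAPTEnv.SERVICES.index("smb")

print("train score:", evaluate(overfit_policy, train_hosts))  # 0.0
print("test score:", evaluate(overfit_policy, test_hosts))    # -10.0
```

The gap between the train and test scores is exactly the generalization question the paper raises: an agent evaluated only on (a subset of) its training hosts can look far more capable than it is against unknown hosts.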
Source journal: Array (Computer Science — General Computer Science)
CiteScore: 4.40 · Self-citation rate: 0.00% · Articles published: 93 · Review time: 45 days