
Latest Publications: Journal of Network and Computer Applications

HERALD: Hybrid Ensemble Approach for Robust Anomaly Detection in encrypted DNS traffic
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-28 | DOI: 10.1016/j.jnca.2025.104342
Umar Sa’ad, Demeke Shumeye Lakew, Nhu-Ngoc Dao, Sungrae Cho
The proliferation of encrypted Domain Name System (DNS) traffic through protocols like DNS over Hypertext Transfer Protocol Secure (DoH) presents significant privacy advantages but creates new challenges for anomaly detection. Traditional security mechanisms that rely on payload inspection become ineffective, necessitating advanced strategies capable of detecting threats in encrypted traffic. This study introduces the Hybrid Ensemble Approach for Robust Anomaly Detection (HERALD), a novel framework designed to detect anomalies in encrypted DNS traffic. HERALD combines unsupervised base detectors, including Isolation Forest (IF), One-Class Support Vector Machine (OCSVM), and Local Outlier Factor (LOF), with a supervised Random Forest meta-model, leveraging the strengths of both paradigms. Our comprehensive evaluation demonstrates HERALD’s exceptional performance, achieving 99.99 percent accuracy, precision, recall, and F1-score on the CIRA-CIC-DoHBrw-2020 dataset, while maintaining competitive computational efficiency with a 110 s training time and a 2.2 ms inference time. HERALD also demonstrates superior generalization capabilities on cross-dataset evaluations, exhibiting minimal performance degradation of only 2-4 percent when tested on previously unseen attack patterns, outperforming purely supervised models, which showed 5-8 percent degradation. The interpretability analysis, incorporating feature importance, accumulated local effects, and local interpretable model-agnostic explanations, provides insights into the relative contributions of each base detector, with OCSVM emerging as the most influential component, followed by IF and LOF. This study advances the field of network security by offering a robust, interpretable, and adaptable solution for detecting anomalies in encrypted DNS traffic that balances a high detection rate with a low false-positive rate.
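The stacking design described in this abstract — unsupervised anomaly scores feeding a supervised meta-model — can be sketched with standard scikit-learn components. The sketch below is illustrative only: the synthetic data, split, and hyperparameters are assumptions, not the authors' configuration, and HERALD's actual DoH feature pipeline is not reproduced.

```python
# Hybrid stacking sketch: IF/OCSVM/LOF scores become features for a Random
# Forest meta-model, mirroring the ensemble structure named above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # stand-in for extracted flow features
y = (rng.random(2000) < 0.1).astype(int)  # stand-in labels (1 = anomalous)
X[y == 1] += 3.0                          # shift "anomalies" so the demo is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Unsupervised base detectors, fitted on the training traffic.
iforest = IsolationForest(random_state=0).fit(X_tr)
ocsvm = OneClassSVM(nu=0.1).fit(X_tr)
lof = LocalOutlierFactor(novelty=True).fit(X_tr)  # novelty=True enables scoring new data

def meta_features(X_):
    # Each detector emits a continuous anomaly score; the three scores
    # form the meta-model's input features.
    return np.column_stack([
        iforest.score_samples(X_),
        ocsvm.score_samples(X_),
        lof.score_samples(X_),
    ])

meta = RandomForestClassifier(n_estimators=100, random_state=0)
meta.fit(meta_features(X_tr), y_tr)
print("meta-model accuracy:", meta.score(meta_features(X_te), y_te))
# Feature importances over the base scores echo the paper's interpretability
# angle (which detector contributes most), here ordered (IF, OCSVM, LOF).
print("base-score importances:", meta.feature_importances_)
```

A production version would score the base detectors on held-out folds before training the meta-model, to avoid the leakage this compact demo tolerates.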
Citations: 0
A robust fault-tolerant framework for VM failure prediction and efficient task scheduling in dynamic cloud environments
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-26 | DOI: 10.1016/j.jnca.2025.104340
S. Sheeja Rani, Oruba Alfawaz, Ahmed M. Khedr
Due to the dynamic nature of cloud computing, maintaining fault-tolerance is essential to ensure the reliability and performance of virtualized environments. Failures in Virtual Machines (VMs) disrupt the seamless operation of cloud-based services, making it vital to implement a strong failure prediction system. As a solution, this work proposes a Segmented Regressive Learning-based Multivariate Raindrop Optimized Lottery Scheduling (SRL-MROLS) for dynamic cloud environments. Initially, the VM failure prediction is carried out using a Segmented Regressive Q-learning algorithm, where a set of VMs is provided as input. Segmented regression analyzes the average failure rate of VMs, while a reward-based framework guides the decision-making process for accurate failure prediction. Once a failure is predicted, a relocation process is triggered, involving the migration of workloads or tasks from the failing VM to an alternate VM. Next, a Multivariate Elitism Raindrop Optimization approach is employed to identify the optimal VM for task migration. Finally, a Deadline-Aware Stochastic Prioritized Lottery Scheduling is employed for efficient allocation of tasks to the selected VMs, maintaining seamless operations even in the event of VM failures. This process significantly improves task scheduling by maximizing throughput and minimizing response time in cloud environments. Experimental results demonstrate the superior performance of SRL-MROLS across different metrics. Specifically, it achieves an average improvement of 6.4% in failure prediction accuracy, 27.4% in throughput, and a 13% reduction in response time. Additionally, it reduces failure prediction time by 15%, migration cost by 14.3%, and makespan by 15%, significantly outperforming conventional techniques.
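As a rough illustration of the segmented-regression step described above — detecting a change in a VM's failure-rate trend and using the post-break slope as a relocation trigger — the following sketch fits two linear pieces with a brute-force breakpoint search. The series, threshold, and trigger rule are invented for the example; the paper's Q-learning reward layer and migration optimizer are not reproduced.

```python
# Segmented (piecewise-linear) regression over a synthetic failure-rate series.
import numpy as np

t = np.arange(50, dtype=float)
rate = np.where(t < 30, 0.01 * t, 0.3 + 0.05 * (t - 30))  # trend steepens after t=30
rate += np.random.default_rng(1).normal(0, 0.02, t.size)

def fit_line(x, y):
    # Least-squares line fit; returns coefficients and sum of squared errors.
    A = np.column_stack([x, np.ones_like(x)])
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, (res[0] if res.size else 0.0)

best_bp, best_sse, best_coef2 = None, np.inf, None
for bp in range(5, 45):                 # keep at least 5 points per segment
    _, sse1 = fit_line(t[:bp], rate[:bp])
    coef2, sse2 = fit_line(t[bp:], rate[bp:])
    if sse1 + sse2 < best_sse:
        best_bp, best_sse, best_coef2 = bp, sse1 + sse2, coef2

slope, _ = best_coef2
print(f"breakpoint at t={best_bp}, post-break slope={slope:.3f}")
# A rising post-break slope above a tolerance would trigger proactive
# relocation of the VM's tasks in a scheme like the one described above.
if slope > 0.02:
    print("predicted failure trend -> trigger task relocation")
```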
Citations: 0
Securing edge-based smart city networks with software-defined networking and zero trust architecture
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-25 | DOI: 10.1016/j.jnca.2025.104341
Abeer Iftikhar, Faisal Bashir Hussain, Kashif Naseer Qureshi, Muhammad Shiraz, Mehdi Sookhak
Smart cities are rapidly evolving by adopting Internet of Things (IoT) devices, edge and cloud computing, and mobile connectivity. While these advancements enhance urban efficiency and connectivity, they also significantly increase the risk of cyber threats targeting critical infrastructure. Modern interdependent systems require flexible resilience, allowing them to adapt to changing conditions while maintaining core functions. Smart city networks, however, face unique security vulnerabilities due to their scale and heterogeneity, and traditional security models, tailored to fixed industry expectations and requirements, are generally too restrictive. With its "never trust, always verify" motto, the Zero Trust (ZT) security model starkly differs from traditional models. ZT builds on network design by mandating real-time identity verification, granting minimum access permissions, and enforcing the principle of least privilege. Software Defined Networking (SDN) goes one step further by offering central control over the network, policy-based autonomous application, and immediate response to anomalies. To address these challenges, our proposed Trust-based Resilient Edge Networks (TREN) framework integrates ZT principles to enhance smart city security. Under the umbrella of SDN controllers, SPP, the underpinning component of TREN, performs real-time trust analysis and autonomous policy enforcement, for instance, applying high-level threat defense mechanisms. TREN dynamically defends against advanced threats such as DDoS and Sybil attacks by isolating malicious nodes and adapting defense tactics based on real-time trust and traffic analysis. Trust analysis and policy control modules provide dynamic adaptive coverage, permitting effective proactive defense. Mininet-based simulations demonstrate TREN's efficacy, achieving 95% detection accuracy, a 20% latency reduction, and a 25% increase in data throughput compared to baseline models.
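To make the trust-analysis-plus-enforcement loop concrete, here is a generic zero-trust-style sketch: each node's trust is an exponential moving average of per-window behavior scores, and a node falling below a threshold is isolated via a stubbed controller action. The update rule, threshold, and install_drop_rule helper are assumptions for illustration, not TREN's actual SPP logic.

```python
# Trust tracking with threshold-based isolation, zero-trust style.
from dataclasses import dataclass, field

ALPHA, ISOLATE_BELOW = 0.3, 0.4   # EMA weight and isolation threshold (assumed)

@dataclass
class Node:
    trust: float = 0.8
    isolated: bool = False
    history: list = field(default_factory=list)

def install_drop_rule(node: Node) -> None:
    # Placeholder for a controller call that would push a drop/quarantine
    # flow rule to the data plane (hypothetical helper).
    print(f"isolating node, trust={node.trust:.2f}")

def observe(node: Node, behavior_score: float) -> None:
    # behavior_score in [0, 1]: e.g. fraction of well-formed, authenticated flows
    node.trust = (1 - ALPHA) * node.trust + ALPHA * behavior_score
    node.history.append(node.trust)
    if node.trust < ISOLATE_BELOW and not node.isolated:
        node.isolated = True
        install_drop_rule(node)

n = Node()
for score in [0.9, 0.8, 0.2, 0.1, 0.1]:   # node turns malicious mid-stream
    observe(n, score)
```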
Citations: 0
A profit-effective function service pricing approach for serverless edge computing function offloading
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-25 | DOI: 10.1016/j.jnca.2025.104338
Siyuan Liu, Li Pan, Shijun Liu
In recent years, edge computing services have continued to develop and have been better integrated with serverless computing, leading to the improvement of the performance and concurrent request handling capabilities of edge servers. Therefore, an increasing number of IoT devices are willing to pay a certain amount of service processing fees to offload some computing tasks to edge servers for execution, with the aim of meeting their latency requirements. However, the computing capacity and storage space of edge servers at a single base station are still limited. Therefore, base stations must decide which task images to cache for future execution and price these computing services to control the computing offloading of IoT devices, so as to maximize their expected profit under the constraints of limited computing capacity and memory space. In this paper, we stand from the perspective of base stations and formulate the caching and pricing of function images at a base station, as well as the function offloading process of IoT devices, as a Markov Decision Process (MDP). We adopt a Proximal Policy Optimization (PPO)-based function service pricing adjustment algorithm to optimize the profit of base stations. Finally, we evaluate our approach through simulation experiments and compare it with baseline methods. The results show that our approach can significantly improve base stations’ expected profit in various scenarios.
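The MDP formulation can be sketched as a tiny pricing environment: the state tracks demand, the action is a discrete price, and the reward is profit from served offloads minus resource cost. The demand-response curve, capacity, and costs below are invented, and a random policy stands in for the PPO policy network the paper trains.

```python
# Minimal pricing MDP from the base station's viewpoint.
import random

PRICES = [0.5, 1.0, 1.5, 2.0]     # discrete price levels (assumed)
CAPACITY, UNIT_COST = 50, 0.2     # concurrent-request capacity, cost per served task

class PricingEnv:
    def __init__(self):
        self.demand = 40.0

    def step(self, price: float):
        # Higher prices discourage offloading (a simple linear demand response).
        requests = max(0.0, self.demand * (1.5 - 0.5 * price))
        served = min(requests, CAPACITY)           # capacity-limited service
        reward = served * (price - UNIT_COST)      # profit for this step
        # Demand drifts randomly between steps, within loose bounds.
        self.demand = max(5.0, min(80.0, self.demand + random.uniform(-5, 5)))
        return self.demand, reward

env = PricingEnv()
total = 0.0
for _ in range(100):
    price = random.choice(PRICES)   # PPO's policy network would choose this
    _, r = env.step(price)
    total += r
print(f"profit under a random policy: {total:.1f}")
```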
Citations: 0
Elastic RAN slicing technology with multi-timescale SLA assurances for heterogeneous services provision in 6G
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-24 | DOI: 10.1016/j.jnca.2025.104330
Yamin Shen, Ping Wang, Chiou-Jye Huang, Shenxu Kuang, Song Li, Zihan Li
Digital transformation brings diverse applications with varying Quality of Service (QoS) and isolation requirements. Network slicing, a key 5G technology expected to persist into 6G, aims to meet these heterogeneous requirements. However, because services contend for scarce resources, especially under multi-timescale Service Level Agreement (SLA) requirements covering both QoS and isolation, implementing slicing in the Radio Access Network (RAN) domain is a significant challenge. This paper therefore formulates the radio resource allocation problem posed by the coexistence of eMBB (Enhanced Mobile Broadband) and multiple URLLC (Ultra-Reliable and Low-Latency Communications) services with varying delay requirements as a multi-timescale optimization problem. A novel MPC (Model Predictive Control)-based RAN slicing resource allocation model called MPC-RSS is then proposed. Specifically, MPC-RSS ensures elastic QoS through a delay-tracking mechanism and far-sighted scheduling schemes, while maintaining elastic isolation by introducing logical and physical isolation constraint terms. Compared with existing state-of-the-art approaches, simulation results show that MPC-RSS achieves better and more elastic SLA performance. Our proposal offers 6G RAN a way to empower vertical industries in achieving digital upgrades.
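A toy receding-horizon allocation conveys the core MPC idea: over a short prediction horizon, split a fixed budget of resource blocks so that predicted URLLC demand is always covered (a hard, isolation-like constraint) while eMBB shortfall is minimized. The forecasts and budget are invented, and this small LP is far simpler than MPC-RSS's actual model; it assumes the cvxpy package is available.

```python
# Receding-horizon slice allocation as a linear program.
import cvxpy as cp
import numpy as np

H, BUDGET = 5, 100                            # horizon steps, resource blocks per step
urllc_pred = np.array([30, 45, 40, 55, 35])   # predicted URLLC demand (invented)
embb_pred = np.array([70, 60, 80, 65, 75])    # predicted eMBB demand (invented)

u = cp.Variable(H, nonneg=True)   # blocks allocated to the URLLC slice per step
e = cp.Variable(H, nonneg=True)   # blocks allocated to the eMBB slice per step

constraints = [
    u + e <= BUDGET,      # shared-budget coupling between slices
    u >= urllc_pred,      # URLLC demand must be fully covered at every step
]
shortfall = cp.sum(cp.pos(embb_pred - e))     # unmet eMBB demand over the horizon
cp.Problem(cp.Minimize(shortfall), constraints).solve()

print("URLLC allocation:", np.round(u.value, 1))
print("eMBB allocation: ", np.round(e.value, 1))
# In receding-horizon fashion, only the first step's allocation would be
# applied before re-solving with updated forecasts.
```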
Citations: 0
Lightweight verifiable privacy-preserving federated learning
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-24 | DOI: 10.1016/j.jnca.2025.104335
Li Zhang, Bing Tang, Jianbo Xu
Federated learning (FL) has garnered considerable attention owing to its capability of accomplishing model training through sharing local models without accessing training datasets. Nevertheless, it has been demonstrated that the shared models still carry sensitive information about the training data. Moreover, malicious aggregation servers may return manipulated global models. While the verification problem in FL has been explored in existing schemes, most of them employ bilinear pairing operations and homomorphic hash computations whose cost depends on the model’s dimension, leading to substantial computational overhead. Additionally, some schemes require multiple parties to collectively manage one or more sets of confidential keys for privacy preservation and validation, which renders them vulnerable to collusion attacks between certain clients and servers. Consequently, we propose a privacy-preserving federated learning mechanism under a dual-server architecture. This mechanism adopts a coding matrix computation-based approach to ensure the privacy of local models on the client side and achieves the aggregation of local models through collaboration between two servers on the server side. To verify the correctness of the aggregated model, a Model Verification Code (MVC) mechanism is designed. By effectively combining the MVC mechanism with the coded matrix computation, clients need not hold identical sets of confidential keys during the privacy preservation and verification process, while the security requirements under a malicious server threat are still fulfilled. The computational overhead of this mechanism remains low since it avoids complex cryptographic primitives. We perform extensive experiments on real datasets, and the results further demonstrate that the proposed scheme exhibits lightweight characteristics while ensuring the validity and usability of the model.
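The dual-server privacy idea can be illustrated with generic additive secret sharing: each client splits its update into two shares, one per server, so neither server alone sees the update, yet the partial sums combine into the true aggregate. This is a stand-in for intuition only — it is not the paper's coding-matrix construction, and the MVC verification step is omitted.

```python
# Two-server additive-share aggregation: no single server sees a raw update.
import numpy as np

rng = np.random.default_rng(42)
DIM, CLIENTS = 10, 5
updates = [rng.normal(size=DIM) for _ in range(CLIENTS)]  # local model updates

shares_a, shares_b = [], []
for w in updates:
    mask = rng.normal(size=DIM)     # one-time random mask per client
    shares_a.append(w - mask)       # server A receives only a masked update
    shares_b.append(mask)           # server B receives only the mask

agg_a = np.sum(shares_a, axis=0)    # each server sums its shares locally
agg_b = np.sum(shares_b, axis=0)
aggregate = agg_a + agg_b           # combining partial sums yields the true aggregate

assert np.allclose(aggregate, np.sum(updates, axis=0))
print("aggregate recovered without either server seeing raw updates")
```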
Citations: 0
3D UAV path planning based on an improved TD3 deep reinforcement learning algorithm for data collection in an urban environment
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-23 | DOI: 10.1016/j.jnca.2025.104336
Mohammad Nazemi Jenabi, Hadi Asharioun, Mahdi Pourgholi
With the rapid growth in the number of users and services in communication networks, unmanned aerial vehicles (UAVs) are expected to play a significant role in future wireless communication systems. One of the key applications of UAVs is data collection in Internet of Things (IoT) networks. This paper addresses a three-dimensional (3D) UAV path planning optimization problem aimed at minimizing the completion time of data collection in urban environments, taking into account real-world constraints such as frequent communication link blockages between UAVs and sensors caused by buildings. To tackle this challenge, we propose an improved Deep Reinforcement Learning (DRL) algorithm, referred to as the Dropout-Based Prioritized TD3 Algorithm (DPTD3). This method integrates the TD3 algorithm with the Prioritized Experience Replay Buffer (PER) strategy and introduces a new Actor network architecture incorporating the Dropout technique. Simulation results demonstrate that the proposed 3D UAV path planning approach reduces both data collection time and UAV energy consumption compared to a two-dimensional (2D) path planning method. Furthermore, the results indicate that during training, the DPTD3 algorithm outperforms other state-of-the-art DRL approaches in terms of both stability and performance.
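A minimal PyTorch sketch of an actor network with Dropout, in the spirit of DPTD3's modified Actor, looks as follows; the layer sizes, dropout rate, and tanh action scaling are illustrative choices rather than the paper's exact architecture, and the TD3 critics, PER buffer, and training loop are omitted.

```python
# Actor network with Dropout for a TD3-style agent.
import torch
import torch.nn as nn

class DropoutActor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, max_action: float, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Tanh bounds each action dimension; scaling maps it onto the UAV's
        # physical control range (e.g. 3D velocity limits).
        return self.max_action * self.net(state)

actor = DropoutActor(state_dim=12, action_dim=3, max_action=1.0)
state = torch.randn(4, 12)   # batch of UAV observations (dimensions assumed)
actor.train()                # dropout active during training
print(actor(state).shape)    # torch.Size([4, 3])
actor.eval()                 # dropout disabled for deployment
```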
Citations: 0
Secure event-triggered control for vehicle platooning against dual deception attacks
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-19 | DOI: 10.1016/j.jnca.2025.104323
Ali Nikoutadbir, Sajjad Torabi, Sadegh Bolouki
This paper addresses the challenge of achieving secure consensus in a vehicular platoon under dual deception attacks using an event-triggered control approach. The platoon consists of a leader and multiple follower vehicles that intermittently exchange position and velocity information to maintain stability. The study focuses on two types of deception attacks: gain modification attacks, where controller gains are manipulated, and false data injection attacks, which compromise sensor and control data integrity to destabilize the platoon. The research analyzes the duration, frequency, and impact of these attacks on system stability. To address these challenges, a robust event-triggered control scheme is proposed to ensure secure consensus despite the attacks. Sufficient consensus conditions are derived for both distributed static and dynamic event-triggered control schemes, considering constraints on attack duration and frequency. The influence of system matrices and triggering parameters on attack resilience is also analyzed. Additionally, a topology-switching scheme is introduced as a mitigation strategy when attack conditions exceed tolerable limits. The effectiveness of the proposed methodology is validated through simulations across various case studies, demonstrating its ability to maintain platoon stability under dual deception attacks.
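The event-triggered mechanism can be illustrated with a toy follower-tracking loop: the follower broadcasts its state only when it drifts from the last transmitted sample by more than a threshold, and the controller acts on the held sample between events. The gains, threshold, and simplified dynamics are assumptions; the paper's deception-attack model and topology-switching logic are omitted.

```python
# Event-triggered tracking of a constant-speed leader by one follower.
import numpy as np

DT, STEPS, THRESHOLD = 0.1, 200, 0.15
kp, kv = 1.2, 1.8                      # position / velocity feedback gains (assumed)

leader_v = 20.0                        # leader cruises at 20 m/s
pos, vel = -5.0, 18.0                  # follower starts behind and slower; pos = gap error
last_sent = np.array([pos, vel])       # last broadcast sample held by the controller
events = 0

for _ in range(STEPS):
    state = np.array([pos, vel])
    # Event condition: transmit only when the state drifts far enough
    # from the last broadcast sample.
    if np.linalg.norm(state - last_sent) > THRESHOLD:
        last_sent = state.copy()
        events += 1
    err_p = 0.0 - last_sent[0]          # desired gap error of 0
    err_v = leader_v - last_sent[1]
    accel = kp * err_p + kv * err_v     # control uses the held sample, not fresh state
    vel += accel * DT
    pos += (vel - leader_v) * DT        # gap error evolves relative to the leader

print(f"{events} transmissions over {STEPS} steps; final gap error {pos:.3f} m")
```

The point of the trigger is visible in the transmission count: far fewer broadcasts than time steps, at the cost of a small residual tracking error bounded by the threshold.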
Citations: 0
Autoformer-based mobility- and handoff-aware prediction for QoE enhancement in adaptive video streaming in 4G/5G networks
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-18 | DOI: 10.1016/j.jnca.2025.104324
Maram Helmy, Mohamed S. Hassan, Mahmoud H. Ismail, Usman Tariq
Traditional Adaptive Bitrate (ABR) algorithms in Dynamic Adaptive Streaming over HTTP (DASH) rely on basic throughput estimation techniques that often struggle to adapt quickly to network fluctuations. As users move across different transportation modes or switch from one access point to another (e.g., from Wi-Fi to cellular networks or between 4G/5G cells), available bandwidth can vary sharply, causing interruptions and abrupt quality shifts that impair the ability of conventional ABR algorithms to provide seamless playback and maintain a high quality of experience (QoE). To address these issues, this paper introduces a novel and comprehensive framework that significantly enhances the adaptability and intelligence of ABR algorithms. The proposed solution integrates three key components: a transformer-based throughput prediction model, a Mobility-Aware Throughput Prediction engine (MATH-P), and a Handoff-Aware Throughput Prediction engine (HATH-P). The transformer-based model outperforms state-of-the-art approaches in predicting throughput for both 4G and 5G networks, leveraging its ability to capture complex temporal patterns and long-term dependencies. The MATH-P engine adapts throughput predictions to varying mobility scenarios, while the HATH-P engine manages seamless transitions by accurately predicting 4G/5G handoff events and selecting the appropriate throughput prediction model. The proposed engines were integrated into existing ABR algorithms, replacing their traditional throughput estimation techniques. Experimental results demonstrate that the MATH-P and HATH-P engines significantly improve video streaming performance, reducing stall durations, enhancing video quality, and ensuring smoother playback.
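The handoff-aware selection step can be sketched as routing each prediction through a per-RAT model chosen by a predicted handoff flag. Trivial moving averages stand in for the paper's Autoformer predictors, and the function names and handoff flag below are assumptions for illustration.

```python
# Handoff-aware model selection: one throughput predictor per radio access
# technology, with the prediction routed by the expected next cell type.
from collections import deque
from statistics import mean

class MovingAverageModel:
    # A trivial predictor standing in for a trained Autoformer model.
    def __init__(self, window: int = 5):
        self.hist = deque(maxlen=window)

    def update(self, mbps: float) -> None:
        self.hist.append(mbps)

    def predict(self) -> float:
        return mean(self.hist) if self.hist else 0.0

model_4g, model_5g = MovingAverageModel(), MovingAverageModel()

def predict_next_throughput(samples, will_be_5g: bool) -> float:
    # will_be_5g would come from a handoff classifier (HATH-P's role);
    # here it is passed in directly as an assumed input.
    for is_5g, mbps in samples:
        (model_5g if is_5g else model_4g).update(mbps)
    return (model_5g if will_be_5g else model_4g).predict()

trace = [(False, 35.0), (False, 32.0), (True, 180.0), (True, 210.0)]
print(f"predicted next-step throughput: {predict_next_throughput(trace, True):.0f} Mb/s")
```

The design choice mirrors the abstract: rather than one model averaging over both radio access technologies, the handoff predictor decides which specialized model the ABR layer should trust for the next segment.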
Citations: 0
PRAETOR: Packet flow graph and dynamic spatio-temporal graph neural network-based flow table overflow attack detection method
IF 8.0 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104333
Kaixi Wang, Yunhe Cui, Guowei Shen, Chun Guo, Yi Chen, Qing Qian
The flow table overflow attack on SDN switches is among the most destructive attacks in SDN. By exhausting the computing and storage resources of SDN switches, it severely disrupts the normal communication functions of SDN networks. Graph neural networks are now being employed to detect flow table overflow attacks in SDN. When a flow graph is constructed, flow-level features are commonly used as node attributes to represent the characteristics of flow table overflow attacks. However, a graph relying solely on these nodes and attributes may not capture all the nuances of the attack. Additionally, a GNN model may struggle to capture how graph information evolves across flow graphs over time, which decreases the detection accuracy achievable from packet flow graphs. To address these issues, we introduce PRAETOR, a detection method for flow table overflow attacks that leverages a packet flow graph and a dynamic spatio-temporal graph neural network. In particular, PRAETOR introduces the PaFlo-Graph algorithm and the EGST model. The PaFlo-Graph algorithm generates a packet flow graph for each flow, utilizing packet-level information to construct a more detailed graph that better reflects the characteristics of flow table overflow attacks. The EGST model is a dynamic spatio-temporal graph convolutional network designed to detect flow table overflow attacks by analyzing packet flow graphs. Experiments were conducted under two network topologies, where we used tcpreplay to replay packets from the bigFlow dataset to simulate SDN network flows, and sFlow to sample packet features. Based on the sampled data, two datasets were constructed, each containing 1,760 network flows, with eight key features extracted per packet. The evaluation metrics include TPR, TNR, accuracy, precision, recall, F1-score, confusion matrices, ROC curves, and PR curves. Experimental results show that the proposed PaFlo-Graph algorithm generates more detailed flow graphs than KNN and CRAM, yielding average improvements of 6.49% in accuracy and 8.7% in precision. Furthermore, the overall detection framework, PRAETOR, achieves detection accuracies of 99.66% and 99.44% on Topo1 and Topo2, respectively, with precision of 99.32% and 99.72% and F1-scores of 99.57% and 100%, indicating superior detection performance compared to other methods.
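As a small illustration of per-flow packet-graph construction in the spirit of PaFlo-Graph, the sketch below turns one flow's packets into graph nodes carrying packet-level features, chained by send order with networkx. The feature set and edge rule are simplified assumptions; the paper's construction is more detailed.

```python
# Build a per-flow packet graph: nodes = packets, edges = temporal order.
import networkx as nx

# (timestamp, size_bytes, direction) tuples for one flow, e.g. from sFlow samples
packets = [(0.00, 120, "c2s"), (0.02, 1500, "s2c"), (0.03, 60, "c2s"), (0.09, 1500, "s2c")]

g = nx.DiGraph()
for i, (ts, size, direction) in enumerate(packets):
    g.add_node(i, size=size, direction=direction,
               iat=ts - packets[i - 1][0] if i else 0.0)  # inter-arrival time feature
    if i:
        g.add_edge(i - 1, i)   # edge following packet send order

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
print("node 1 features:", g.nodes[1])
# Such per-flow graphs would then be batched over time and fed to a
# spatio-temporal GNN (the role EGST plays) for overflow-attack classification.
```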
Citations: 0