
Computer Networks: Latest Publications

Automating bit-level field localization with hybrid neural network
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-23 | DOI: 10.1016/j.comnet.2026.112041
Tao Huang, Yansong Gao, Yifeng Zheng, Boyu Kuang, Zhidan Yuan, Anmin Fu
Protocol Reverse Engineering (PRE), which can decipher the format specifications of unknown protocols, lays the groundwork for numerous security analysis applications. Network trace-based PRE has emerged as the dominant technology given its ease of implementation. However, its current identification precision is primarily limited to byte-level granularity. While a few advanced methods can achieve precise identification of fine-grained bit-level fields within given bytes, their target byte localization relies heavily on subjective prior domain knowledge and tedious manual labor, significantly restricting their generalizability and adoption. To address these limitations, we propose BitFiL, an automated bit-level field localization method. BitFiL features a hybrid neural network architecture carefully designed to capture both intra-byte temporal features and inter-byte contextual structural features from known protocol bytes, enabling automated bit-level field localization and consequent field count identification for unknown protocol bytes. Experimental results demonstrate that BitFiL delivers accurate localization performance for bit-level fields in byte-oriented protocols, with robustness against variations in training-validation protocol combinations and training protocol set sizes. Although limited diversity in bit-level field samples may affect the identification accuracy of field counts, the overall prediction deviations remain relatively small, showcasing high accuracy, convergence, and stability.
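As a concrete illustration of the hybrid idea sketched in the abstract, below is a minimal, hypothetical PyTorch model (an assumption, not the authors' BitFiL architecture): a GRU captures intra-byte bit ordering, a 1-D convolution supplies inter-byte context, and a linear head scores each bit as a possible field boundary. All layer choices and sizes are illustrative.

```python
# Hypothetical sketch, not the authors' BitFiL code: a GRU reads the 8 bits of
# each byte (intra-byte temporal features), a 1-D convolution mixes neighboring
# byte representations (inter-byte context), and a linear head scores each bit
# as a potential field boundary.
import torch
import torch.nn as nn

class HybridBitLocalizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.bit_gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.byte_ctx = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, bits):                      # bits: (batch, n_bytes, 8) of 0/1
        b, n, _ = bits.shape
        out, _ = self.bit_gru(bits.reshape(b * n, 8, 1).float())
        intra = out.reshape(b, n, 8, -1)          # per-bit temporal features
        byte_repr = intra[:, :, -1, :]            # last hidden state summarizes the byte
        ctx = self.byte_ctx(byte_repr.transpose(1, 2)).transpose(1, 2)
        ctx = ctx.unsqueeze(2).expand(-1, -1, 8, -1)
        return self.head(torch.cat([intra, ctx], dim=-1)).squeeze(-1)

model = HybridBitLocalizer()
sample = torch.randint(0, 2, (2, 16, 8))          # 2 messages, 16 bytes each
print(model(sample).shape)                        # torch.Size([2, 16, 8]) boundary scores
```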
Citations: 0
Joint communication and sensing optimization for LEO-Multi-UAV SAGIN: Task offloading, resource allocation and UAV trajectory
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-23 | DOI: 10.1016/j.comnet.2026.112050
Pengya Duan, Wei Huang, Yang Yang, Guiyan Liu, Fei Wang, Yan Wu, Xiongyu Zhong
Space-Air-Ground Integrated Network (SAGIN) is a key architecture for achieving wide-area sensing and communication services. However, the connection between Low Earth Orbit (LEO) satellites and ground devices is constrained by satellite mobility and service angles. Unmanned Aerial Vehicles (UAVs), acting as relay and sensing nodes, can effectively bridge this gap. Nevertheless, given their limited onboard resources, the coupling between UAV trajectory planning and communication and sensing performance, especially in scenarios where multi-UAV collaboration extends LEO service coverage, has not been fully investigated. To address these challenges, this paper proposes an integrated sensing and computation offloading architecture for SAGIN, where UAVs perform multi-target sensing while cooperating with LEO satellites to provide communication and computational services. We formulate a joint optimization problem that encompasses user offloading decisions, communication-sensing time allocation, UAV trajectory planning, and computing resource allocation, aiming to minimize long-term service latency. This problem is modeled as a mixed-integer nonlinear program (MINLP). To solve it efficiently, we develop a low-complexity Lyapunov-Benders Optimization (LBO) algorithm based on Lyapunov optimization and generalized Benders decomposition, which decomposes the long-term problem into tractable single-slot subproblems. Simulation results validate that the proposed method outperforms existing benchmarks in service latency, demonstrating its effectiveness in dynamic SAGIN environments.
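The Lyapunov half of the LBO algorithm can be illustrated with a generic drift-plus-penalty loop. The sketch below uses assumed per-slot latency and energy costs and a single long-term energy budget; it is not the paper's formulation, only the standard pattern of turning a long-term constraint into per-slot subproblems via a virtual queue.

```python
def lyapunov_schedule(actions, latency, energy, energy_budget, slots=1000, V=10.0):
    """Drift-plus-penalty: per slot, minimize V*latency + Q*energy, then update Q."""
    Q = 0.0                                   # virtual queue for the average-energy constraint
    total_latency = 0.0
    for _ in range(slots):
        a = min(actions, key=lambda act: V * latency[act] + Q * energy[act])
        total_latency += latency[a]
        Q = max(Q + energy[a] - energy_budget, 0.0)   # queue grows when the budget is exceeded
    return total_latency / slots, Q

# toy example with two made-up offloading choices
actions = ["local", "offload"]
latency = {"local": 0.8, "offload": 0.3}
energy = {"local": 0.2, "offload": 0.6}
print(lyapunov_schedule(actions, latency, energy, energy_budget=0.4))
```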
Citations: 0
FedHome: A federated learning framework for smart home device classification and attack detection by broadband service providers
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-22 | DOI: 10.1016/j.comnet.2026.112040
Md Mizanur Rahman, Faycal Bouhafs, Sayed Amir Hoseini, Frank den Hartog
The rise of the Internet of Things (IoT) has led to the integration of various devices into smart homes, significantly increasing the complexity and vulnerability of home networks. Consequent network performance issues often lead to complaints directed at Broadband Service Providers (BSPs), which may arise from either legitimate usage or malicious cyber attacks. BSPs, however, lack visibility into client-side networks, which is partly due to privacy concerns. This makes it hard to identify the true cause of performance problems. While previous research has tackled these challenges using Machine Learning (ML) techniques, few studies have approached the problem from the perspective of BSPs. They need a solution that is scalable, accurate, and privacy-preserving. Existing centralized ML models fail to generalize across these heterogeneous environments and provide low accuracy. We address this gap by introducing a novel Federated Learning (FL) framework for smart home device classification and attack detection. The proposed approach offers a privacy-preserving, scalable framework that can achieve accuracies of more than 80%. This framework can be installed inside the existing resource-constrained home gateways, making it suitable for large-scale deployment by BSPs.
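The abstract does not detail the aggregation step, so the sketch below shows the standard FedAvg-style rule often used in such frameworks (weighted averaging by local sample count). It is illustrative only and not the FedHome implementation.

```python
# Minimal FedAvg-style aggregation sketch: each gateway trains locally and the
# server averages parameter vectors weighted by local sample counts.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average per-client parameter vectors, weighted by local sample counts."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)              # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# toy round: three home gateways with different amounts of local traffic data
np.random.seed(0)
clients = [np.random.randn(4) for _ in range(3)]
print(fed_avg(clients, client_sizes=[120, 300, 80]))
```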
Citations: 0
ROAR: A resource-optimized adaptive routing protocol for underwater acoustic communication networks
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-22 | DOI: 10.1016/j.comnet.2026.112055
Jinghua He, Jie Tian, Zhanqing Pu, Yunan Zhu, Wei Wang, Haining Huang
Underwater acoustic communication networks (UACNs) face significant challenges such as limited bandwidth, high attenuation, long latency, and time-varying channels. Most existing routing protocols rely on static resource configurations, which limits their performance in dynamic and resource-constrained underwater environments. To address these issues, this paper proposes a Resource-Optimized Adaptive Routing (ROAR) protocol that integrates dynamic relay selection with cross-layer resource optimization. ROAR improves forwarding efficiency by selecting appropriate forwarding nodes while excluding those with low residual energy. The optimal relay node is selected based on both residual energy and proximity to the destination. ROAR also formulates resource allocation as a multi-objective optimization problem, jointly considering transmission mode, subcarrier spacing, guard interval, and transmission power. The optimization aims to minimize energy consumption, reduce end-to-end delay, and improve bandwidth utilization. This problem is solved in real time using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) combined with the ideal point method, which dynamically adapts the resource configuration to current network conditions at runtime. Simulation results indicate that ROAR outperforms Q-Learning-Based Energy-Efficient and Lifetime-Extended Adaptive Routing (QELAR), Reinforcement-Learning-Based Routing for Congestion Avoidance (RCAR), and Q-Learning-Based Hierarchical Routing Protocol (QHRP) in terms of average hop count, average end-to-end delay, and packet delivery ratio (PDR), highlighting its effectiveness in resource-constrained UACNs.
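For readers unfamiliar with the ideal-point step mentioned above, here is a toy sketch of selecting one configuration from a set of candidates scored on several minimized objectives; the candidate values are invented and the code is not the paper's NSGA-II pipeline.

```python
import numpy as np

def dominates(a, b):
    """a dominates b when a is no worse on every objective and better on at least one."""
    return bool(np.all(a <= b) and np.any(a < b))   # all objectives are minimized

def ideal_point_choice(objectives):
    objs = np.asarray(objectives, dtype=float)
    front = [i for i in range(len(objs))
             if not any(dominates(objs[j], objs[i]) for j in range(len(objs)) if j != i)]
    ideal = objs[front].min(axis=0)                 # best achievable value per objective
    dists = np.linalg.norm(objs[front] - ideal, axis=1)
    return front[int(np.argmin(dists))]

# columns: (energy, delay, negated bandwidth utilization), all to be minimized
candidates = [(2.0, 0.5, -0.6), (1.5, 0.7, -0.5), (2.5, 0.4, -0.7)]
print(ideal_point_choice(candidates))               # index of the selected configuration
```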
Citations: 0
Incentive mechanism design in blockchain-based hierarchical federated learning over edge clouds
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-22 | DOI: 10.1016/j.comnet.2026.112039
Xuanzhang Liu, Jiyao Liu, Xinliang Wei, Yu Wang
Federated learning (FL) is a promising distributed AI paradigm for protecting user privacy by training models on local devices (such as IoT devices). However, FL systems face challenges like high communication overhead and non-transparent model aggregation. To address these issues, integrating blockchain technology into hierarchical federated learning (HFL) to construct a decentralized, low-latency, and transparent learning framework over a cloud-edge-client architecture has gained attention. To ensure participant engagement from edge servers and clients, this paper explores incentive mechanism design in a blockchain-based HFL system using a semi-asynchronous aggregation model. We model the resource pricing among clients, edge servers, and task publishers at the cloud as a three-stage Stackelberg game, proving the existence of a Nash equilibrium in which each participant can maximize its own utility. An iterative algorithm based on the alternating direction method of multipliers and backward induction is then proposed to optimize strategies. Extensive simulations verify the algorithm's rapid convergence and demonstrate that our proposed mechanism consistently outperforms baseline strategies across various scenarios in terms of participant utilities. Our approach also achieves up to 7% higher model accuracy than baseline methods, confirming its practical effectiveness.
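Backward induction in a Stackelberg game can be shown with a deliberately tiny two-level example. The utility functions and constants below are assumptions for illustration, not the paper's three-stage model: followers best-respond to an announced unit price, and the leader searches over prices given those responses.

```python
def follower_best_response(p, c):
    """Client effort maximizing p*x - c*x**2 for announced unit price p."""
    return p / (2.0 * c)

def leader_utility(p, costs, value=1.0):
    """Task publisher's payoff given that every client best-responds."""
    total_effort = sum(follower_best_response(p, c) for c in costs)
    return (value - p) * total_effort

costs = [0.5, 0.8, 1.2]                             # heterogeneous client cost factors
prices = [i / 100.0 for i in range(1, 100)]
best_p = max(prices, key=lambda p: leader_utility(p, costs))
print(best_p, [round(follower_best_response(best_p, c), 3) for c in costs])
```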
Citations: 0
Poseidon: Intelligent proactive defense against DDoS attacks in edge clouds
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-21 | DOI: 10.1016/j.comnet.2026.112025
Shen Dong, Guozhen Cheng, Wenyan Liu
With the rise of edge computing (EC), data and computation are increasingly shifted from centralized clouds to edge nodes, improving real-time performance and privacy. However, the resource constraints of edge nodes make them vulnerable to Distributed Denial-of-Service (DDoS) attacks. Traditional passive defense mechanisms struggle to counter diverse attacks due to their delayed response and lack of flexibility. While proactive defense strategies possess dynamism and adaptability, existing solutions often rely solely on either Moving Target Defense (MTD) or deception defense. The former fails to curb attacks at their source, while the latter lacks dynamic adaptability. Moreover, they often address only one type of attack and impose high resource and latency costs. To overcome these challenges, we propose Poseidon, a deep reinforcement learning-based hybrid proactive defense framework. Poseidon integrates the dynamism of MTD with the deceptive nature of deception defense, enabling differentiated responses to both High-rate Distributed Denial-of-Service (HDDoS) and Low-rate Distributed Denial-of-Service (LDDoS) attacks. By leveraging the lightweight characteristics of containers, it achieves resource-efficient protection. The interaction between attacks and defenses is modeled as a Markov Decision Process (MDP), and the Deep Q-Network (DQN) algorithm is employed to dynamically balance defense effectiveness and resource overhead. Experimental results demonstrate that Poseidon significantly outperforms existing MTD schemes across multiple DDoS attack scenarios, achieving up to a 28% improvement in average reward, a 30% enhancement in security, and a 15% increase in service quality. Furthermore, Poseidon effectively ensures service availability while minimizing quality degradation, showcasing considerable practical value.
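As a simplified stand-in for the DQN component, the sketch below uses tabular Q-learning on a toy MDP with made-up states (coarse attack levels), actions (migration vs. decoy), and rewards; the real system learns over a far richer state space.

```python
# Simplified stand-in for the DQN component: tabular Q-learning on a toy MDP
# whose rewards trade security gain against resource cost.
import random

states  = ["idle", "lddos", "hddos"]
actions = ["migrate", "decoy"]
reward  = {("idle", "migrate"): -0.2, ("idle", "decoy"): -0.1,
           ("lddos", "migrate"): 0.3, ("lddos", "decoy"): 1.0,
           ("hddos", "migrate"): 1.0, ("hddos", "decoy"): 0.2}

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1
random.seed(0)
s = "idle"
for _ in range(5000):
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda act: Q[(s, act)]))
    r = reward[(s, a)]
    s_next = random.choice(states)                     # toy attacker dynamics
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, act)] for act in actions) - Q[(s, a)])
    s = s_next

# learned policy: deception against LDDoS, migration against HDDoS
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in states})
```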
Citations: 0
Memory-augmented deep feature extraction and temporal-dependencies prediction for network traffic anomaly detection
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-21 | DOI: 10.1016/j.comnet.2026.112037
Chao Wang, Ping Zhou, Jiuzhen Zeng, Yong Ma, Ruichi Zhang
Traffic anomaly detection is crucial for network security. Most existing unsupervised detection models are based on reconstruction and prediction methods, which are deficient in generalization ability and in processing temporal dependencies. Although a memory module can be introduced to mitigate the weak generalization ability, it encounters challenges such as data distribution drift over time and memory contamination. To address these issues, this paper proposes a novel unsupervised network traffic anomaly detection model, MAFE-TDP, which integrates a transformer-based feature extraction module, a memory module, and a prediction-based temporal-dependencies extraction network. The generalization ability of the model and its robustness to memory contamination are enhanced by introducing the memory module with a FIFO memory replacement strategy and the KNN method. The proposed anomaly scoring method fuses reconstruction error and prediction error, thus enlarging the gap between normal and abnormal data. Evaluation results on four real-world network traffic datasets demonstrate that MAFE-TDP outperforms existing state-of-the-art baseline methods in terms of AUC-ROC and AUC-PR metrics.
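A minimal sketch of the fused scoring idea, assuming an equal-weight combination, a FIFO memory, and a 1-nearest-neighbour lookup (the paper's exact formula and feature extractor are not reproduced here):

```python
from collections import deque
import numpy as np

class FusedScorer:
    def __init__(self, memory_size=100, alpha=0.5):
        self.memory = deque(maxlen=memory_size)        # FIFO replacement strategy
        self.alpha = alpha                             # assumed equal weighting

    def update_memory(self, feature):
        self.memory.append(np.asarray(feature, dtype=float))

    def score(self, feature, predicted, observed_next):
        feature = np.asarray(feature, dtype=float)
        recon = min(np.linalg.norm(feature - m) for m in self.memory)   # 1-NN memory distance
        pred = np.linalg.norm(np.asarray(predicted, dtype=float)
                              - np.asarray(observed_next, dtype=float))
        return self.alpha * recon + (1 - self.alpha) * pred

scorer = FusedScorer()
np.random.seed(0)
for _ in range(50):
    scorer.update_memory(np.random.normal(0, 1, 4))    # "normal" traffic features
print(scorer.score(np.random.normal(0, 1, 4), [0, 0, 0, 0], [0.1, 0, 0, 0]))   # low score
print(scorer.score(np.random.normal(5, 1, 4), [0, 0, 0, 0], [4, 4, 4, 4]))     # high score
```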
Citations: 0
Doubling the speed of large-scale packet classification through compressing decision tree nodes
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112032
Jincheng Zhong, Tao Li, Gaofeng Lv, Shuhui Chen
Packet classification underpins critical network functions such as access control and quality of service. While decision tree-based approaches offer efficiency and scalability, their classification performance is often bottlenecked by excessive memory accesses during tree traversal, primarily due to the use of pointer-based indexing structures necessitated by large node sizes.
This paper proposes a general pointer-elimination paradigm via Extreme Node Compression (ENC). This enables indexing structures to store nodes directly rather than pointers, thereby eliminating one memory indirection per level and nearly halving the number of memory accesses per lookup. To validate this core idea, this paper designs TupleTree-Compress based on the state-of-the-art hash-based decision tree scheme TupleTree. TupleTree-Compress integrates three key techniques (a unified global hash table, fingerprint-based keys, and hash-based sibling linking) to achieve full node compression while preserving correctness and update support.
Furthermore, to demonstrate the generality of our approach, we apply the same optimization paradigm to the state-of-the-art classical decision tree scheme CutSplit, resulting in CutSplit-Compress. Experimental results show that TupleTree-Compress achieves speedups of 2.24×–3.12× over TupleTree and 1.43×–1.91× over DBTable, the current best-performing scheme. Similarly, CutSplit-Compress achieves speedups of 3.19×–3.64× over CutSplit, with improvements of up to 1.58× over DBTable.
Our work demonstrates that aggressive node compression is a powerful and generalizable strategy for boosting packet classification performance, offering a promising direction for optimizing decision tree-based schemes.
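To illustrate the pointer-free lookup idea in miniature (key layout, fingerprint width, and node contents below are assumptions, not the paper's exact data structures): children live directly in one global hash table keyed by parent id, child slot, and a short fingerprint, so traversal performs one hash probe per level instead of chasing a stored pointer.

```python
import hashlib

def fingerprint(key: bytes) -> int:
    """16-bit fingerprint of a lookup key (width chosen arbitrarily for the sketch)."""
    return int.from_bytes(hashlib.blake2b(key, digest_size=2).digest(), "big")

global_table = {}                                   # (parent_id, child_slot, fp) -> node

def insert_child(parent_id, slot, key, node):
    global_table[(parent_id, slot, fingerprint(key))] = node

def lookup_child(parent_id, slot, key):
    return global_table.get((parent_id, slot, fingerprint(key)))

# toy tree: root (id 0) with children addressed by a header-derived slot
insert_child(0, 1, b"tcp/80", {"id": 1, "rule": "allow-http"})
insert_child(0, 2, b"udp/53", {"id": 2, "rule": "allow-dns"})
print(lookup_child(0, 1, b"tcp/80"))                # hit: one hash probe, no pointer chase
print(lookup_child(0, 3, b"tcp/22"))                # miss: None
```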
Citations: 0
Energy-efficient swarm intelligence-based resource allocation scheme for 5G-HCRAN
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112034
Tejas Kishor Patil, Paramveer Kumar, Pavan Kumar Mishra, Sudhakar Pandey
The rapid evolution of 5G-HCRAN (Heterogeneous Cloud Radio Access Network) necessitates innovative resource allocation schemes to meet diverse user demands while optimizing energy efficiency. This research introduces a novel resource allocation scheme explicitly designed for 5G-HCRAN, emphasizing the maximization of throughput and energy efficiency. The proposed methodology employs a hybrid Particle Swarm Optimization-Ant Colony Optimization (PSO-ACO) scheme that combines the strengths of both Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) to achieve a more efficient and effective optimization process. PSO contributes its global search capability via particle-based solution updates, while ACO introduces a pheromone-driven decision mechanism that smoothly adapts to dynamic network conditions. By integrating these complementary behaviors, the hybrid PSO-ACO scheme can evaluate resource-allocation choices more consistently and respond more effectively to network variability. This combined strategy supports more efficient utilization of limited network resources and significantly improves 5G-HCRAN performance. Simulation results validate the superiority of the proposed hybrid resource allocation scheme over standalone methods, demonstrating significant improvements in throughput and better energy efficiency. By addressing the objectives, the proposed hybrid scheme provides a practical and scalable approach for the next generation 5G-HCRAN.
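One way such a PSO-ACO hybrid can be wired together is sketched below on a toy one-dimensional objective (the objective, constants, and pheromone bookkeeping are assumptions, not the paper's scheme): the PSO velocity update gains an extra attraction term toward the position with the strongest pheromone trail, and trails deposited by improved personal bests evaporate each iteration.

```python
import random

def objective(x):                                   # toy cost to minimize (made up)
    return (x - 3.0) ** 2

random.seed(1)
n, iters = 10, 100
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]
gbest = min(pos, key=objective)
pheromone = {round(p, 1): 1.0 for p in pbest}       # trail over coarse positions

for _ in range(iters):
    elite = max(pheromone, key=pheromone.get)       # position with strongest trail
    for i in range(n):
        r1, r2, r3 = random.random(), random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.4 * r1 * (pbest[i] - pos[i])           # cognitive term (PSO)
                  + 1.4 * r2 * (gbest - pos[i])              # social term (PSO)
                  + 0.5 * r3 * (elite - pos[i]))             # pheromone attraction (ACO-style)
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
            key = round(pos[i], 1)
            pheromone[key] = pheromone.get(key, 0.0) + 1.0   # deposit on improvement
    gbest = min(pbest, key=objective)
    pheromone = {k: 0.9 * v for k, v in pheromone.items()}   # evaporation

print(round(gbest, 3))                              # converges toward 3.0
```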
Citations: 0
LTGAT: A lightweight temporal graph attention accelerator for deterministic routing in resource-constrained delay-tolerant non-terrestrial networks
IF 4.6 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112035
Dalia I. Elewaily, Ahmed I. Saleh, Hesham A. Ali, Mohamed M. Abdelsalam
This paper introduces the Lightweight Temporal Graph Attention (LTGAT) model, with the primary aim of developing an efficient routing solution to enhance Delay-Tolerant Networking (DTN) performance in resource-constrained non-terrestrial environments. The primary motivation is to overcome the limitations of traditional deterministic routing methods, such as Contact Graph Routing (CGR), which suffer from significant computational overhead in large-scale time-varying topologies, due to their reliance on repeated contact graph searches. The proposed LTGAT achieves this by integrating a lightweight architecture that combines a two-head Graph Attention Network (GAT) and a Gated Recurrent Unit (GRU) to learn the complex representation of the known spatial-temporal structure in the scheduled contact plan, enabling fast routing decisions with minimal computational and energy demands. The significance of this work is validated through experiments across six realistic simulated lunar scenarios, where LTGAT demonstrates a substantial reduction in delivery times by up to 32 % compared to CGR, with processing times scaled to 105.445–286.712 ms on a Proton 200k On-Board Computer, reflecting improvements up to 89.9–91.0 % over CGR and 22.6–40.2 % over GAUSS. Additionally, LTGAT achieves energy consumption of 15.82–43.01 mJ per routing decision, and preserves CGR’s perfect delivery reliability by achieving a delivery ratio of 1.0 on bundles that CGR itself successfully processes, far outperforming GAUSS (0.25–0.85) on the same bundles, while recovering 29–100 % of the bundles that CGR drops. These results confirm LTGAT’s suitability for resource-limited CubeSat deployments. This research contributes a lightweight, computation-efficient routing framework, offering a critical advancement for resource-constrained non-terrestrial communication systems and providing a foundation for future interplanetary network studies.
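A tiny, hypothetical PyTorch sketch of the "attention within a snapshot, recurrence across snapshots" structure described above; it uses a single attention head, a dense adjacency mask, and made-up sizes, so it is far simpler than LTGAT's two-head design and is not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTemporalGAT(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim, bias=False)
        self.attn = nn.Linear(2 * hid_dim, 1, bias=False)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, x, adj):                  # x: (T, N, F), adj: (N, N) 0/1 contact mask
        T, N, _ = x.shape
        outs = []
        for t in range(T):
            h = self.proj(x[t])                                   # (N, H)
            pair = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                              h.unsqueeze(0).expand(N, N, -1)], dim=-1)
            e = F.leaky_relu(self.attn(pair)).squeeze(-1)         # (N, N) attention logits
            e = e.masked_fill(adj == 0, float("-inf"))            # only attend to contacts
            a = torch.softmax(e, dim=-1)
            outs.append(a @ h)                                    # attention-weighted mixing
        seq = torch.stack(outs, dim=1)                            # (N, T, H)
        out, _ = self.gru(seq)                                    # recurrence over snapshots
        return out[:, -1, :]                                      # per-node embedding

model = TinyTemporalGAT(in_dim=4, hid_dim=8)
x = torch.randn(5, 6, 4)                       # 5 contact-plan snapshots, 6 nodes, 4 features
adj = (torch.rand(6, 6) > 0.5).float()
adj.fill_diagonal_(1)                          # keep self-loops so softmax is well defined
print(model(x, adj).shape)                     # torch.Size([6, 8])
```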
Citations: 0