Vertex-independent spanning trees in data center network BCDC
Jiakang Ma, Baolei Cheng, Yan Wang, Jianxi Fan, Junkai Zhu
Pub Date: 2025-12-30 | DOI: 10.1016/j.comnet.2025.111981 | Computer Networks, vol. 276, Article 111981
The performance of data center networks largely determines cloud computing efficiency. BCDC is a high-performance data center network whose logical graph is exactly the line graph of the n-dimensional crossed cube (CQn). However, there are few studies on its vertex-independent spanning trees (VISTs), and constructing VISTs rooted at an arbitrary vertex of BCDC has remained an open question. In this paper, an algorithm is proposed to construct VISTs in BCDC. First, a parallel algorithm is adopted to construct n-1 trees in CQn. These trees are then transformed into 2n-2 mutually independent trees in BCDC. Subsequently, by hanging vertices on these trees, 2n-2 VISTs rooted at an arbitrary vertex of BCDC are obtained. Finally, we use Python’s Matplotlib and NumPy packages for simulation; the results show that the discrepancy between the average path length and the network diameter remains within 0.5 and that the communication success rate stays above 60% even under a 30% vertex failure rate, which verifies the high efficiency and strong security of the network in data transmission.
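
The defining property of VISTs is that, for every vertex v, the paths from v to the common root in the different trees are internally vertex-disjoint. As a minimal illustration of that property (not the paper's construction algorithm), the following Python sketch checks pairwise vertex-independence of spanning trees encoded as parent dictionaries; the encoding and function names are assumptions made for the example.

# Minimal sketch (not the paper's algorithm): verify that spanning trees rooted
# at the same vertex are vertex-independent, i.e. for every vertex v the root
# paths in different trees share only v and the root. Trees are encoded as
# parent dictionaries {child: parent}; the encoding is an illustrative choice.

def root_path(parent, v, root):
    """Return the path v -> root obtained by following parent pointers."""
    path = [v]
    while v != root:
        v = parent[v]
        path.append(v)
    return path

def are_vertex_independent(trees, root):
    """Check pairwise internal vertex-disjointness of all root paths."""
    vertices = set(trees[0]) | {root}
    for v in vertices - {root}:
        paths = [root_path(t, v, root) for t in trees]
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                if set(paths[i][1:-1]) & set(paths[j][1:-1]):  # exclude v, root
                    return False
    return True

# Toy example: 4-cycle 0-1-2-3-0 rooted at 0, one clockwise and one
# counter-clockwise tree, whose root paths are internally disjoint.
t1 = {1: 0, 2: 1, 3: 2}
t2 = {3: 0, 2: 3, 1: 2}
print(are_vertex_independent([t1, t2], root=0))   # True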
{"title":"Vertex-independent spanning trees in data center network BCDC","authors":"Jiakang Ma , Baolei Cheng , Yan Wang , Jianxi Fan , Junkai Zhu","doi":"10.1016/j.comnet.2025.111981","DOIUrl":"10.1016/j.comnet.2025.111981","url":null,"abstract":"<div><div>The performance of data center networks largely determines cloud computing efficiency. BCDC is a high-performance data center network whose logical graph is exactly the line graph of the <em>n</em>-dimensional crossed cube (<em>CQ<sub>n</sub></em>). However, there are few studies on its vertex-independent spanning trees (VISTs). Until now, constructing VISTs rooted at an arbitrary vertex in BCDC remains an open question. In this paper, an algorithm is proposed to construct the VISTs in BCDC. Firstly, a parallel algorithm is adopted to construct <span><math><mrow><mi>n</mi><mo>−</mo><mn>1</mn></mrow></math></span> trees in <em>CQ<sub>n</sub></em>. Then, we transform these trees into <span><math><mrow><mn>2</mn><mi>n</mi><mo>−</mo><mn>2</mn></mrow></math></span> mutually independent trees in the BCDC. Subsequently, by hanging vertices on these trees, <span><math><mrow><mn>2</mn><mi>n</mi><mo>−</mo><mn>2</mn></mrow></math></span> VISTs rooted at an arbitrary vertex in BCDC are obtained. Finally, we used Python’s Matplotlib and NumPy packages for simulation and obtained results showing that the discrepancy between the average path length and the network diameter remains within 0.5, and the communication success rate stays above 60% even under a 30% vertex failure rate, which verifies the high efficiency and strong security of the network in data transmission.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111981"},"PeriodicalIF":4.6,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Closed-Form Analytics of Multicell Massive MIMO System Using M-MMSE and TPE Techniques in Correlated Environment
Harleen Kaur, Ankush Kansal
Pub Date: 2025-12-29 | DOI: 10.1016/j.comnet.2025.111965 | Computer Networks, vol. 277, Article 111965
This work computes the average ergodic user rate of a multicell massive Multiple-Input Multiple-Output (mMIMO) system based on the Multicell Minimum Mean Squared Error (M-MMSE) and Truncated Polynomial Expansion (TPE) techniques. By applying Random Matrix Theory (RMT) and large-system analysis, a deterministic expression for the system's Signal-to-Interference-plus-Noise Ratio (SINR) under the M-MMSE scheme in uplink and downlink modes is computed, leading to the calculation of the system's average user rate. The M-MMSE scheme involves a Gram matrix inversion, which increases the system's latency and complexity. This problem is addressed by approximating the matrix inverse with TPE, which involves simple operations that can be parallelized. Moreover, the complexity of the TPE technique depends only on the TPE order rather than on the system's dimensions. Based on RMT, the deterministic equivalents required for the SINRs of the TPE scheme in uplink and downlink modes are derived. These deterministic equivalents for the TPE SINRs are optimized to compute the average user rate of the system, matching the performance of the M-MMSE technique at a low TPE order. In Section 6, the system's average user rate is validated against variations of different parameters. The comparison between the M-MMSE and TPE schemes shows that the TPE scheme achieves the required performance at TPE order J = 3. The theoretical results confirm the accuracy of the derived deterministic equivalents.
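
The key computational idea behind TPE is to replace the explicit Gram-matrix inverse with a low-order matrix polynomial. The NumPy sketch below shows a truncated polynomial (Neumann-series style) approximation of a positive-definite inverse; the scaling factor and the order J = 3 are illustrative assumptions, and the sketch does not reproduce the paper's optimized TPE coefficients.

import numpy as np

def tpe_inverse(A, J=3):
    """Approximate the inverse of a Hermitian positive-definite matrix A with
    a truncated polynomial expansion of order J (Neumann-series form):
        A^{-1} ~= alpha * sum_{k=0..J} (I - alpha*A)^k,
    which converges when the spectral radius of (I - alpha*A) is below 1.
    The scaling alpha = 2 / (lmin + lmax) is an illustrative standard choice."""
    evals = np.linalg.eigvalsh(A)
    alpha = 2.0 / (evals.min() + evals.max())
    n = A.shape[0]
    M = np.eye(n) - alpha * A
    term, acc = np.eye(n), np.eye(n)
    for _ in range(J):
        term = term @ M        # M^k
        acc += term            # partial sum of the expansion
    return alpha * acc

# Toy check on a regularized random Gram matrix (8 x 4 channel, assumed sizes).
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))
G = H.T @ H + 0.1 * np.eye(4)
err = np.linalg.norm(tpe_inverse(G, J=3) @ G - np.eye(4))
print(f"order-3 TPE residual ||Ahat_inv @ G - I||_F = {err:.3e}")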
{"title":"Closed-Form Analytics of Multicell Massive MIMO System Using M-MMSE and TPE Techniques in Correlated Environment","authors":"Harleen Kaur, Ankush Kansal","doi":"10.1016/j.comnet.2025.111965","DOIUrl":"10.1016/j.comnet.2025.111965","url":null,"abstract":"<div><div>This work computes the average ergodic user rate for the multicell massive Multiple-Input Multiple-Output (mMIMO) system based on Multicell Minimum Mean Squared Error (M-MMSE) and Truncated Polynomial Expansion (TPE) techniques. By applying Random Matrix Theory (RMT) and large system analysis, the deterministic expression for the system's Signal-to-Interference plus Noise Ratio (SINR) with the M-MMSE scheme in uplink and downlink mode is computed, leading to the system's average user rate calculation. The M-MMSE scheme involves gram matrix inversion, increasing the system's lag and complexity. Therefore, the problem is solved by approximating the inverse of the matrix using TPE, which involves simple operations that can parallelize. Also, the complexity of the TPE technique depends only on the TPE order rather than the system's dimensions. Based on the RMT theory, the deterministic equivalents required for SINRs of the TPE scheme in uplink and downlink modes are derived. These deterministic equivalents for TPE SINRs are optimized to compute the average user rate for the system, matching the M-MMSE technique performance at a lower TPE order. In section 6, the system’s average user rate is validated by varying it with different parameters. The comparison between the M-MMSE and the TPE scheme shows that the TPE scheme achieves the required performance at J=3 TPE order. The theoretical results show the accuracy of derived deterministic equivalents.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 111965"},"PeriodicalIF":4.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Collaborative multi-task offloading in multi-edge system for AI-generated content service
Zhiyuan Li, Jie Sun
Pub Date: 2025-12-29 | DOI: 10.1016/j.comnet.2025.111979 | Computer Networks, vol. 276, Article 111979
As artificial intelligence-generated content (AIGC) services become increasingly prevalent in edge networks, the demand for rapid and efficient processing in latency-sensitive applications continues to grow. Traditional task offloading strategies often struggle to coordinate heterogeneous resources, such as GPU and TPU clusters, resulting in imbalanced load distribution and underutilization of specialized accelerators. To overcome these limitations, we propose the adaptive multi-edge load balancing optimization (AMBO) algorithm, designed to optimize collaborative task scheduling among edge servers. AMBO utilizes an online reinforcement learning approach, decomposing the task offloading process into edge server selection and load balancing functions, which enables intelligent scheduling across nodes with varying computational capacities. Furthermore, by integrating the dueling Deep Q-Network (DQN) framework, AMBO enhances decision-making accuracy and stability in dynamic edge environments. Extensive experimental results demonstrate that AMBO significantly improves task offloading efficiency, reducing task completion time by 79.04% and achieving a task completion rate of 99.89%. These results highlight the algorithm’s strong adaptability and effectiveness in heterogeneous edge computing scenarios, making it well-suited for supporting the next generation of latency-sensitive AIGC services.
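
The dueling DQN that AMBO builds on splits the action-value function into a state-value stream and an advantage stream. A minimal PyTorch sketch of such a head is shown below; the layer sizes and the meaning attached to the state and action spaces (e.g., queue states and candidate edge servers) are illustrative assumptions, not the paper's network.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    The state could encode queue lengths and server capacities; sizes below
    are illustrative assumptions."""
    def __init__(self, state_dim=16, n_actions=8, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# One greedy decision for a batch of two states (e.g., edge-server selection).
net = DuelingQNet()
q = net(torch.randn(2, 16))
print(q.shape, q.argmax(dim=-1))   # torch.Size([2, 8]) and the chosen actions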
{"title":"Collaborative multi-task offloading in multi-edge system for AI-generated content service","authors":"Zhiyuan Li , Jie Sun","doi":"10.1016/j.comnet.2025.111979","DOIUrl":"10.1016/j.comnet.2025.111979","url":null,"abstract":"<div><div>As artificial intelligence-generated content (AIGC) services become increasingly prevalent in edge networks, the demand for rapid and efficient processing in latency-sensitive applications continues to grow. Traditional task offloading strategies often struggle to coordinate heterogeneous resources, such as GPU and TPU clusters, resulting in imbalanced load distribution and underutilization of specialized accelerators. To overcome these limitations, we propose the adaptive multi-edge load balancing optimization (AMBO) algorithm, designed to optimize collaborative task scheduling among edge servers. AMBO utilizes an online reinforcement learning approach, decomposing the task offloading process into edge server selection and load balancing functions, which enables intelligent scheduling across nodes with varying computational capacities. Furthermore, by integrating the dueling Deep Q-Network (DQN) framework, AMBO enhances decision-making accuracy and stability in dynamic edge environments. Extensive experimental results demonstrate that AMBO significantly improves task offloading efficiency, reducing task completion time by 79.04% and achieving a task completion rate of 99.89%. These results highlight the algorithm’s strong adaptability and effectiveness in heterogeneous edge computing scenarios, making it well-suited for supporting the next generation of latency-sensitive AIGC services.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111979"},"PeriodicalIF":4.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Intelligent task management via dynamic multi-region division in LEO satellite networks
Zixuan Song, Zhishu Shen, Xiaoyu Zheng, Qiushi Zheng, Zheng Lei, Jiong Jin
Pub Date: 2025-12-28 | DOI: 10.1016/j.comnet.2025.111976 | Computer Networks, vol. 276, Article 111976
As a key complement to terrestrial networks and a fundamental component of future 6G systems, Low Earth Orbit (LEO) satellite networks are expected to provide high-quality communication services when integrated with ground-based infrastructure, thereby attracting significant research interest. However, the limited onboard resources of satellites and the uneven distribution of computational workloads often result in congestion along inter-satellite links (ISLs) that degrades task processing efficiency. Effectively managing the dynamic and large-scale topology of LEO networks to ensure balanced task distribution remains a critical challenge. To this end, we propose a dynamic multi-region division framework for intelligent task management in LEO satellite networks. The framework optimizes both intra- and inter-region routing to minimize task delay while balancing the utilization of computational and communication resources. Building on this framework, we design a dynamic multi-region division algorithm driven by a Genetic Algorithm (GA), which adaptively adjusts the size of each region according to the workload status of individual satellites. Additionally, we incorporate an adaptive routing algorithm and a task splitting and offloading scheme based on Multi-Agent Deep Deterministic Policy Gradient (MA-DDPG) to effectively accommodate arriving tasks. Simulation results show that the proposed framework outperforms existing methods by improving the task completion rate by up to 5.78%, reducing the average task delay by up to 330.5 ms, and lowering energy consumption per task by up to 0.165 J, demonstrating its effectiveness and scalability for large-scale LEO satellite networks.
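
To make the GA-driven region division concrete, the sketch below is a minimal genetic algorithm (pure NumPy) that assigns satellites to regions so that per-region workload is balanced. The encoding, the truncation selection, the one-point crossover, and the max-min imbalance fitness are illustrative assumptions rather than the operators used in the paper.

import numpy as np

rng = np.random.default_rng(1)
N_SATS, N_REGIONS = 40, 4
workload = rng.uniform(1.0, 10.0, size=N_SATS)   # per-satellite load (assumed)

def imbalance(assignment):
    """Fitness to minimize: gap between the heaviest and lightest region."""
    loads = np.array([workload[assignment == r].sum() for r in range(N_REGIONS)])
    return loads.max() - loads.min()

def evolve(pop_size=60, generations=200, mutation_rate=0.05):
    pop = rng.integers(0, N_REGIONS, size=(pop_size, N_SATS))
    for _ in range(generations):
        fit = np.array([imbalance(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_SATS)                     # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            mask = rng.random(N_SATS) < mutation_rate         # random mutation
            child[mask] = rng.integers(0, N_REGIONS, mask.sum())
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    best = pop[np.argmin([imbalance(ind) for ind in pop])]
    return best, imbalance(best)

best, gap = evolve()
print("best max-min region load gap:", round(float(gap), 2))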
{"title":"Intelligent task management via dynamic multi-region division in LEO satellite networks","authors":"Zixuan Song , Zhishu Shen , Xiaoyu Zheng , Qiushi Zheng , Zheng Lei , Jiong Jin","doi":"10.1016/j.comnet.2025.111976","DOIUrl":"10.1016/j.comnet.2025.111976","url":null,"abstract":"<div><div>As a key complement to terrestrial networks and a fundamental component of future 6G systems, Low Earth Orbit (LEO) satellite networks are expected to provide high-quality communication services when integrated with ground-based infrastructure, thereby attracting significant research interest. However, the limited satellite onboard resources and the uneven distribution of computational workloads often result in congestion along inter-satellite links (ISLs) that degrades task processing efficiency. Effectively managing the dynamic and large-scale topology of LEO networks to ensure balanced task distribution remains a critical challenge. To this end, we propose a dynamic multi-region division framework for intelligent task management in LEO satellite networks. This framework optimizes both intra- and inter-region routing to minimize task delay while balancing the utilization of computational and communication resources. Based on this framework, we propose a dynamic multi-region division algorithm based on the Genetic Algorithm (GA), which adaptively adjusts the size of each region based on the workload status of individual satellites. Additionally, we incorporate an adaptive routing algorithm and a task splitting and offloading scheme based on Multi-Agent Deep Deterministic Policy Gradient (MA-DDPG) to effectively accommodate the arriving tasks. Simulation results show that the proposed framework outperforms existing methods by improving the task completion rate by up to 5.78%, reducing the average task delay by up to 330.5 ms, and lowering energy consumption per task by up to 0.165 J, demonstrating its effectiveness and scalability for large-scale LEO satellite networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111976"},"PeriodicalIF":4.6,"publicationDate":"2025-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Towards a robust transport network with self-adaptive network digital twin
Cláudio Modesto, João Borges, Cleverson Nahum, Lucas Matni, Cristiano Bonato Both, Kleber Cardoso, Glauco Gonçalves, Ilan Correa, Silvia Lins, Andrey Silva, Aldebaro Klautau
Pub Date: 2025-12-27 | DOI: 10.1016/j.comnet.2025.111967 | Computer Networks, vol. 276, Article 111967
The ability of a Network Digital Twin (NDT) to remain aware of changes in its physical counterpart, known as the physical twin (PTwin), is a fundamental condition for timely synchronization, also referred to as twinning. For a transport network, a key requirement is therefore to handle unexpected traffic variability and to adapt dynamically so that the associated virtual model, the virtual twin (VTwin), maintains optimal performance. In this context, we propose a self-adaptive implementation of a novel NDT architecture designed to provide accurate delay predictions even under fluctuating traffic conditions. The architecture addresses a challenge that is essential yet underexplored in the literature: improving the resilience of data-driven NDT platforms to traffic variability and improving synchronization between the VTwin and its physical counterpart. The contributions of this article therefore center on the operational phase of the NDT lifecycle, where telemetry modules monitor incoming traffic and concept drift detection techniques guide retraining decisions aimed at updating and redeploying the VTwin when necessary. We validate the architecture with a network management use case across various emulated network topologies and diverse traffic patterns, demonstrating its effectiveness in preserving acceptable performance and predicting quality of service (QoS) metrics such as delay and jitter under unexpected traffic variation. Using the normalized mean square error as the evaluation metric, the results in all tested topologies show that, after a traffic concept drift, the proposed architecture improves per-flow delay and jitter prediction by at least 64% and 21%, respectively, compared to a configuration without NDT synchronization.
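
The retraining decisions mentioned above hinge on detecting concept drift in monitored traffic. As a minimal, generic illustration (not the detector adopted in the paper), the sketch below runs a Page-Hinkley test over a stream of prediction errors and flags when the VTwin should be retrained and redeployed; the delta and threshold values are illustrative assumptions.

class PageHinkley:
    """Page-Hinkley drift test over a stream of values (e.g. per-flow
    prediction errors). Flags drift when the cumulative deviation from the
    running mean exceeds a threshold. Parameters are illustrative."""
    def __init__(self, delta=0.005, threshold=10.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.n, self.cum, self.min_cum = 0.0, 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.threshold   # True => drift

# Simulated error stream: stable model, then a traffic pattern change at t=300.
import random
random.seed(0)
detector = PageHinkley()
for t in range(600):
    err = random.gauss(0.1, 0.05) if t < 300 else random.gauss(0.8, 0.2)
    if detector.update(err):
        print(f"drift detected at t={t}: retrain and redeploy the VTwin")
        break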
{"title":"Towards a robust transport network with self-adaptive network digital twin","authors":"Cláudio Modesto , João Borges , Cleverson Nahum , Lucas Matni , Cristiano Bonato Both , Kleber Cardoso , Glauco Gonçalves , Ilan Correa , Silvia Lins , Andrey Silva , Aldebaro Klautau","doi":"10.1016/j.comnet.2025.111967","DOIUrl":"10.1016/j.comnet.2025.111967","url":null,"abstract":"<div><div>The ability of the Network digital twin (NDT) to remain aware of changes in its physical counterpart, known as the physical twin (PTwin), is a fundamental condition to enable timely synchronization, also referred to as <em>twinning</em>. In this way, considering a transport network, a key requirement is to handle unexpected traffic variability and dynamically adapt to maintain optimal performance in the associated virtual model, known as the virtual twin (VTwin). In this context, we propose a self-adaptive implementation of a novel NDT architecture designed to provide accurate delay predictions, even under fluctuating traffic conditions. This architecture addresses an essential challenge, underexplored in the literature: improving the resilience of data-driven NDT platforms against traffic variability and improving synchronization between the VTwin and its physical counterpart. Therefore, the contributions of this article rely on NDT lifecycle by focusing on the operational phase, where telemetry modules are used to monitor incoming traffic, and concept drift detection techniques guide retraining decisions aimed at updating and redeploying the VTwin when necessary. We validate our architecture with a network management use case, across various emulated network topologies, and diverse traffic patterns to demonstrate its effectiveness in preserving acceptable performance and predicting quality of service (QoS) metrics under unexpected traffic variation, such as delay and jitter. The results in all tested topologies, using the normalized mean square error as the evaluation metric, demonstrate that our proposed architecture, after a traffic concept drift, achieves a performance improvement in per-flow delay and jitter prediction of at least 64% and 21%, respectively, compared to a configuration without NDT synchronization.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111967"},"PeriodicalIF":4.6,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

HAT: Leveraging hierarchical attention and temporal modeling for API-based malware detection
Zhengyu Zhu, Shan Liao, Lei Zhang, Liang Liu
Pub Date: 2025-12-27 | DOI: 10.1016/j.comnet.2025.111971 | Computer Networks, vol. 276, Article 111971
While runtime parameters have been incorporated to enhance API-based malware detection, existing approaches still fall short in fully capturing the structural and temporal characteristics of API call sequences, thereby limiting their generalization capability. In this paper, we propose HAT, a novel detection method that jointly models API sequences from both structural and temporal perspectives. HAT leverages a hierarchical attention mechanism to learn the varying importance of API names and their parameters, and integrates two complementary temporal modules to uncover execution patterns of malware that are underexplored in prior work. Extensive experiments on multiple datasets demonstrate that HAT consistently outperforms existing methods. Compared to approaches relying only on API names, HAT improves the F1-score by 5.50% to 30.87%. Compared to parameter-augmented approaches, it achieves superior detection and generalization, with F1-score improvements of 4.10% to 7.07%, benefiting from its unified modeling of structural and temporal aspects.
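
Hierarchical attention in this setting amounts to learning per-element importance at one level (e.g., the parameters of an API call) and aggregating upward (e.g., over the calls of a sequence). The PyTorch sketch below shows one additive-attention pooling layer stacked twice in that fashion; the dimensions and the two-level arrangement are illustrative assumptions, not HAT's exact architecture.

import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling: scores each element of a sequence and
    returns their weighted sum. Stacking two of these (parameters -> one API
    call vector, calls -> one sequence vector) gives a hierarchical readout."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, x):                        # x: (batch, seq_len, dim)
        w = torch.softmax(self.score(x), dim=1)  # (batch, seq_len, 1)
        return (w * x).sum(dim=1)                # (batch, dim)

param_pool, call_pool = AttentionPool(), AttentionPool()
# 2 sequences, 10 API calls each, 5 parameter embeddings per call, dim 64.
params = torch.randn(2, 10, 5, 64)
call_vecs = param_pool(params.flatten(0, 1)).view(2, 10, 64)  # per-call vectors
seq_vec = call_pool(call_vecs)                                # per-sequence vector
print(seq_vec.shape)   # torch.Size([2, 64])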
{"title":"HAT: Leveraging hierarchical attention and temporal modeling for API-based malware detection","authors":"Zhengyu Zhu , Shan Liao , Lei Zhang , Liang Liu","doi":"10.1016/j.comnet.2025.111971","DOIUrl":"10.1016/j.comnet.2025.111971","url":null,"abstract":"<div><div>While runtime parameters have been incorporated to enhance API-based malware detection, existing approaches still fall short in fully capturing the structural and temporal characteristics of API call sequences, thereby limiting their generalization capability. In this paper, we propose <strong>HAT</strong>, a novel detection method that jointly models API sequences from both structural and temporal perspectives. HAT leverages a hierarchical attention mechanism to learn the varying importance of API names and their parameters, and integrates two complementary temporal modules to uncover execution patterns of malware that are underexplored in prior work. Extensive experiments on multiple datasets demonstrate that HAT consistently outperforms existing methods. Compared to approaches relying only on API names, HAT improves the F1-score by 5.50% to 30.87%. Compared to parameter-augmented approaches, it achieves superior detection and generalization, with F1-score improvements of 4.10% to 7.07%, benefiting from its unified modeling of structural and temporal aspects.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111971"},"PeriodicalIF":4.6,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Energy-efficient online knowledge distillation for mobile video inference
Guangfeng Guo, Junxing Zhang, Baowei Liu
Pub Date: 2025-12-26 | DOI: 10.1016/j.comnet.2025.111962 | Computer Networks, vol. 276, Article 111962
Wearable devices can assist users experiencing cognitive decline through context-aware scene interpretation, and they should function in real time with sufficient functionality, performance, and usability. However, high-accuracy, low-delay scene interpretation relies on Deep Neural Network (DNN) inference over continuous video streams, which poses significant challenges for wearable devices because of their tight energy budget and the unpredictable impact of network delay. In this paper, we propose a novel framework, EEOKD (Energy-Efficient Online Knowledge Distillation). The framework specializes a high-accuracy, low-cost object detection model that automatically adapts to the target video, utilizes minimal bandwidth, and tolerates variations in network delay. First, we formalize the online knowledge distillation problem and introduce a metric, based on concept drift theory, for choosing when to trigger online training. Second, we propose efficient asynchronous distributed algorithms that leverage the loss gradient to alleviate the impact of delay changes. Third, we propose a novel online knowledge distillation scheme that incorporates freshness-based importance sampling and batch training to enhance the student model’s generalization ability while minimizing the number of training samples and reducing the frequency of weight updates. This method improves energy efficiency by accelerating model convergence and maintains good detection performance even when network delays change considerably. Finally, we implement a system prototype and evaluate its performance and energy efficiency. Experimental results demonstrate that the EEOKD framework achieves a 13% increase in energy efficiency, approximately 60% lower network bandwidth usage, and an average 4% improvement in detection accuracy compared to existing methods.
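
One ingredient highlighted above is freshness-based importance sampling of frames for online distillation. The NumPy sketch below draws training batches with probabilities that decay exponentially with frame age; the exponential weighting and the half-life parameter are illustrative assumptions rather than the paper's exact sampling rule.

import numpy as np

def sample_batch(timestamps, batch_size=8, half_life=30.0, rng=None):
    """Pick a training batch with probability proportional to frame freshness.
    Newer frames (larger timestamps) are exponentially more likely to be
    chosen; half_life controls how fast old frames fade. Illustrative only."""
    rng = rng or np.random.default_rng()
    age = timestamps.max() - timestamps
    weights = np.exp(-np.log(2) * age / half_life)
    probs = weights / weights.sum()
    return rng.choice(len(timestamps), size=batch_size, replace=False, p=probs)

# Buffer of 100 frames captured one per second; sample one distillation batch.
ts = np.arange(100.0)
idx = sample_batch(ts, batch_size=8, rng=np.random.default_rng(0))
print(sorted(idx))   # indices skewed toward the most recent frames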
{"title":"Energy-efficient online knowledge distillation for mobile video inference","authors":"Guangfeng Guo , Junxing Zhang , Baowei Liu","doi":"10.1016/j.comnet.2025.111962","DOIUrl":"10.1016/j.comnet.2025.111962","url":null,"abstract":"<div><div>Wearable devices can assist users in cognitive decline through context-aware scene interpretation. They should function in real time with sufficient functionality, performance, and usability. However, high-accuracy and low-delay scene interpretation rely on the Deep Neural Network (DNN) inference of continuous video streams, which poses significant challenges to wearable devices due to their tight energy budget and unpredictable delay impact. In this paper, we propose a novel framework, EEOKD (Energy-Efficient Online Knowledge Distillation). The framework specializes in a high-accuracy and low-cost object detection model that automatically adapts to the target video, utilizes minimal bandwidth, and tolerates variations in network delay. First, we formalize the online knowledge distillation problem and introduce a metric for choosing the timing of online training based on the concept drift theory. Second, we propose efficient asynchronous distributed algorithms that leverage the loss gradient to alleviate the impact of delay changes. Third, we propose a novel online knowledge distillation scheme that incorporates freshness-based importance sampling and batch training to enhance the student model’s generalization ability while minimizing the number of training samples and reducing the frequency of weight updates. The novel method enhances energy efficiency by accelerating model convergence and maintains good detection performance even when network delays change considerably. Finally, we implement a system prototype and evaluate its performance and energy efficiency. Experimental results demonstrate that our EEOKD framework achieves a 13% increase in energy efficiency, approximately 60% lower network bandwidth usage, and an average 4% improvement in detection accuracy compared to existing methods.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111962"},"PeriodicalIF":4.6,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An efficient and reliable mechanism for Wormhole detection in RPL based IoT networks
Jawad Hassan, Muhammad Yousaf Ali Raza, Adnan Sohail, Muhammad Asim, Zeeshan Pervez
Pub Date: 2025-12-26 | DOI: 10.1016/j.comnet.2025.111968 | Computer Networks, vol. 276, Article 111968
The Internet of Things (IoT) relies heavily on the Routing Protocol for Low-Power and Lossy Networks (RPL) to support large-scale, resource-constrained deployments. However, RPL faces major research challenges, including its susceptibility to routing attacks, limited support for mutual authentication, and dynamic topology variations. In addition, traditional heavyweight cryptographic mechanisms are inefficient for such devices and, although they secure communication, remain ineffective against insider routing attacks. These weaknesses allow adversaries to exploit routing control messages, leading to attacks such as Wormhole, Rank, and DAO Inconsistency. Among these, Wormhole attacks are particularly severe because they exploit colluding nodes to create deceptive low-latency tunnels, misleading neighboring nodes and disrupting the overall routing topology. Motivated by these challenges, this paper presents Efficient and Reliable Wormhole detection for IoT (ERW-IoT), a lightweight path validation mechanism that ensures routing integrity with minimal overhead. Simulation results show that ERW-IoT improves the average packet delivery ratio by 5.5%, reduces energy consumption by 0.986%, optimizes memory utilization by nearly 1%, and achieves a 100% detection rate, demonstrating its practicality and effectiveness in securing RPL-based IoT networks.
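
The abstract does not detail ERW-IoT's path validation mechanism, so the sketch below only illustrates one generic wormhole-detection idea from the literature: a route whose advertised hop count is physically implausible for the distance it covers is suspicious, because a wormhole tunnel collapses many real hops into one apparent link. The radio range and node positions are hypothetical.

import math

MAX_RADIO_RANGE_M = 100.0   # assumed per-hop radio range (hypothetical value)

def plausible_route(src_pos, dst_pos, advertised_hops):
    """Flag routes whose advertised hop count cannot physically cover the
    straight-line distance between the two nodes."""
    distance = math.dist(src_pos, dst_pos)
    min_hops_needed = math.ceil(distance / MAX_RADIO_RANGE_M)
    return advertised_hops >= min_hops_needed

# A node 450 m away advertising a 2-hop route is likely behind a wormhole.
print(plausible_route((0, 0), (450, 0), advertised_hops=2))   # False
print(plausible_route((0, 0), (450, 0), advertised_hops=5))   # True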
{"title":"An efficient and reliable mechanism for Wormhole detection in RPL based IoT networks","authors":"Jawad Hassan , Muhammad Yousaf Ali Raza , Adnan Sohail , Muhammad Asim , Zeeshan Pervez","doi":"10.1016/j.comnet.2025.111968","DOIUrl":"10.1016/j.comnet.2025.111968","url":null,"abstract":"<div><div>The Internet of Things (IoT) relies heavily on the Routing Protocol for Low Power and Lossy Networks (RPL) to support large scale, resource constrained deployments. However, RPL faces major research challenges, including its susceptibility to routing attacks, limited support for mutual authentication, and dynamic topology variations. In addition, the inefficiency of traditional heavy-weight cryptographic mechanisms, though provide secure communication but remain ineffective against insider routing attacks. These weaknesses allow adversaries to exploit routing control messages, leading to attacks such as Wormhole, Rank, and DAO Inconsistency. Among these, Wormhole attacks are particularly severe because they exploit colluding nodes to create deceptive low latency tunnels, misleading neighboring nodes, and disrupting the overall routing topology. Motivated by these challenges, this paper presents Efficient and Reliable Wormhole detection for IoT (AKA ERW-IoT), a lightweight path validation mechanism that ensures routing integrity with minimal overhead. Simulation results show that ERW-IoT improves the average packet delivery ratio by 5.5%, reduces energy consumption by 0.986%, optimizes memory utilization by nearly 1%, and achieves a 100% detection rate, demonstrating its practicality and effectiveness in securing RPL based IoT networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111968"},"PeriodicalIF":4.6,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

HybridGuard: Enhancing minority-class intrusion detection in dew-enabled edge-of-things networks
Binayak Kar, Ujjwal Sahu, Ciza Thomas, Jyoti Prakash Sahoo
Pub Date: 2025-12-25 | DOI: 10.1016/j.comnet.2025.111966 | Computer Networks, vol. 276, Article 111966
Securing Dew-Enabled edge-of-things (EoT) networks against sophisticated intrusions is a task that is both critical and difficult. This paper presents HybridGuard, a state-of-the-art framework that combines Machine Learning and Deep Learning to raise the bar for intrusion detection. HybridGuard addresses data imbalance by performing mutual information-based feature selection to ensure that the most important features are always considered, improving detection performance, especially for minority-class attacks. The proposed framework leverages Wasserstein Conditional Generative Adversarial Networks with Gradient Penalty (WCGAN-GP) to alleviate class imbalance and hence enhance detection precision. The framework integrates a two-phase architecture named “DualNetShield” that introduces advanced network traffic analysis and anomaly detection techniques, enhancing the granular identification of threats within complex EoT environments. Tested on the UNSW-NB15, CIC-IDS-2017, and IOTID20 datasets, HybridGuard demonstrates robust performance over a wide variety of attack scenarios, outperforming existing solutions in adapting to evolving cybersecurity threats. This approach establishes HybridGuard as a powerful tool for safeguarding EoT networks against modern intrusions.
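
The mutual information-based feature selection stage can be sketched with scikit-learn's mutual_info_classif, as below; the synthetic data, the feature count, and the top-5 cutoff are illustrative assumptions, not HybridGuard's configuration.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for flow features and attack labels (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))            # 1000 flows, 20 features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)  # labels depend on features 3 and 7

mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:5]               # keep the 5 most informative features
print("selected feature indices:", top_k)
X_selected = X[:, top_k]                       # reduced feature matrix for training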
{"title":"HybridGuard: Enhancing minority-class intrusion detection in dew-enabled edge-of-things networks","authors":"Binayak Kar , Ujjwal Sahu , Ciza Thomas , Jyoti Prakash Sahoo","doi":"10.1016/j.comnet.2025.111966","DOIUrl":"10.1016/j.comnet.2025.111966","url":null,"abstract":"<div><div>Securing networks in Dew-Enabled edge-of-things (EoT) networks from sophisticated intrusions is a challenge that is at once critical and challenging. This paper presents HybridGuard, a state-of-the-art framework that combines Machine Learning and Deep Learning to raise the bar for intrusion detection. HybridGuard addresses data imbalance by performing mutual information-based feature selection to ensure that the most important features are always considered to improve detection performance, especially for minority attacks. The proposed framework leverages Wasserstein Conditional Generative Adversarial Networks (WCGAN-GP) to alleviate class imbalance, hence enhancing the precision of detection. In the framework, a two-phase architecture named “DualNetShield” was integrated to introduce advanced network traffic analysis and anomaly detection techniques, enhancing the granular identification of threats within complex EoT environments. HybridGuard, tested on UNSW-NB15, CIC-IDS-2017, and IOTID20 datasets, demonstrates robust performance over a wide variety of attack scenarios, outperforming the existing solutions in adaptation to evolving cybersecurity threats. This innovative approach establishes HybridGuard as a powerful tool for safeguarding EoT networks against modern intrusions.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111966"},"PeriodicalIF":4.6,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Joint spectrum allocation and power control for D2D communication and sensing in 6G networks using DRL-based hyper-heuristics
Gabriel Pimenta de Freitas Cardoso, Paulo Henrique Portela De Carvalho, Paulo Roberto de Lira Gondim
Pub Date: 2025-12-25 | DOI: 10.1016/j.comnet.2025.111969 | Computer Networks, vol. 276, Article 111969
The ongoing evolution of mobile communication systems, particularly toward the sixth generation (6G), has opened new frontiers in the integration of communication and sensing technologies. In this context, Industry 4.0 demands efficient and intelligent solutions for supporting a growing number of interconnected devices while ensuring low latency and high spectral efficiency.
This study addresses the complex problem of joint resource allocation in systems that integrate primary communications, device-to-device (D2D) communication, and sensing, with a special focus on power control and spectrum sharing. It proposes a novel hyper-heuristic (HH) strategy powered by Deep Reinforcement Learning (DRL) that dynamically allocates resources and optimizes spectral usage in a 6G-enabled environment. Unlike traditional heuristic-based approaches that rely on fixed rules, the DRL-based HH learns from interactions with the environment and selects appropriate low-level heuristics (LLHs) for managing interference, meeting performance constraints, and improving D2D and sensor operations. A realistic simulation scenario inspired by industrial environments was modeled to evaluate the strategy's effectiveness.
The results show that the method can effectively balance the competing demands of different system components, dynamically adapt to environmental changes, and maintain compliance with detection and transmission constraints. By extending existing models to include D2D communication, channel uncertainties, and spectrum reallocation over time, the study contributes a scalable and intelligent solution for future wireless systems in complex industrial settings.
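
To illustrate how a hyper-heuristic selects among low-level heuristics, the sketch below uses a tabular Q-learning agent as a lightweight stand-in for the paper's DRL selector; the LLH set, the state encoding, and the reward are hypothetical placeholders.

import random
random.seed(0)

# Toy hyper-heuristic loop: a Q-learner picks which low-level heuristic (LLH)
# to apply at each step. A real system would derive the reward from measured
# SINR/throughput changes after applying the chosen LLH.
LLHS = ["raise_power", "lower_power", "swap_channel", "keep"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = {}  # (state, llh) -> estimated value

def choose(state):
    if random.random() < EPS:                     # epsilon-greedy exploration
        return random.choice(LLHS)
    return max(LLHS, key=lambda h: Q.get((state, h), 0.0))

def env_step(state, llh):
    """Hypothetical environment: returns (next_state, reward)."""
    reward = random.uniform(-1, 1) + (0.5 if llh == "swap_channel" else 0.0)
    return (state + 1) % 10, reward

state = 0
for _ in range(5000):
    llh = choose(state)
    nxt, r = env_step(state, llh)
    best_next = max(Q.get((nxt, h), 0.0) for h in LLHS)
    Q[(state, llh)] = Q.get((state, llh), 0.0) + ALPHA * (
        r + GAMMA * best_next - Q.get((state, llh), 0.0))
    state = nxt

print("learned preference in state 0:",
      max(LLHS, key=lambda h: Q.get((0, h), 0.0)))   # likely "swap_channel"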
{"title":"Joint spectrum allocation and power control for D2D communication and sensing in 6G networks using DRL-based hyper-heuristics","authors":"Gabriel Pimenta de Freitas Cardoso, Paulo Henrique Portela De Carvalho, Paulo Roberto de Lira Gondim","doi":"10.1016/j.comnet.2025.111969","DOIUrl":"10.1016/j.comnet.2025.111969","url":null,"abstract":"<div><div>The ongoing evolution of mobile communication systems, particularly toward the sixth generation (6G), has opened new frontiers in the integration of communication and sensing technologies. In this context, Industry 4.0 demands efficient and intelligent solutions for supporting a growing number of interconnected devices while ensuring low latency and high spectral efficiency.</div><div>This study addresses the complex problem of joint resource allocation in systems that integrate primary communications, device-to-device (D2D) communication, and sensing, with a special focus on power control and spectrum sharing.It proposes a novel hyper-heuristic (HH) strategy powered by Deep Reinforcement Learning (DRL) that dynamically allocates resources and optimizes spectral usage in a 6G-enabled environment. Unlike traditional heuristic-based approaches that rely on fixed rules, DRL-based HH learns from interactions with the environment and selects appropriate low-level heuristics (LLHs) for managing interference, meeting performance constraints, and improving D2D and sensor operations. A realistic simulation scenario inspired by industrial environments was modeled for evaluations of the strategy’s effectiveness.</div><div>The results show the method can effectively balance the competing demands of different system components, dynamically adapt to environmental changes, and maintain compliance with detection and transmission constraints. By extending existing models to including D2D communication, channel uncertainties, and spectrum reallocation over time, the study contributes with a scalable and intelligent solution for future wireless systems in complex industrial settings.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"276 ","pages":"Article 111969"},"PeriodicalIF":4.6,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}