Incentive mechanism design in blockchain-based hierarchical federated learning over edge clouds
Pub Date: 2026-01-22 | DOI: 10.1016/j.comnet.2026.112039
Xuanzhang Liu, Jiyao Liu, Xinliang Wei, Yu Wang
Federated learning (FL) is a promising distributed AI paradigm that protects user privacy by training models on local devices (such as IoT devices). However, FL systems face challenges such as high communication overhead and non-transparent model aggregation. To address these issues, integrating blockchain technology into hierarchical federated learning (HFL) to construct a decentralized, low-latency, and transparent learning framework over a cloud-edge-client architecture has gained attention. To ensure the engagement of edge servers and clients, this paper explores incentive mechanism design in a blockchain-based HFL system using a semi-asynchronous aggregation model. We model the resource pricing among clients, edge servers, and task publishers at the cloud as a three-stage Stackelberg game, proving the existence of a Nash equilibrium at which each participant maximizes its own utility. An iterative algorithm based on the alternating direction method of multipliers and backward induction is then proposed to optimize the participants’ strategies. Extensive simulations verify the algorithm’s rapid convergence and demonstrate that the proposed mechanism consistently outperforms baseline strategies across various scenarios in terms of participant utilities. Our approach also achieves up to 7% higher model accuracy than baseline methods, confirming its practical effectiveness.
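As a rough illustration of backward induction in a Stackelberg pricing game, the sketch below solves a much-simplified two-stage version: followers (clients) choose resource contributions that maximize an assumed quadratic-cost utility, and the leader searches for the price that maximizes its own utility given those anticipated responses. The utility forms, client costs, and value-per-unit parameter are hypothetical and are not taken from the paper's three-stage model.

```python
import numpy as np

def follower_best_response(price, costs):
    # Each follower i maximizes u_i = price * x_i - costs_i * x_i**2,
    # giving the closed-form best response x_i* = price / (2 * costs_i).
    return price / (2.0 * costs)

def leader_utility(price, costs, value_per_unit=3.0):
    # The leader anticipates the followers' reactions (backward induction)
    # and earns the value of the total contribution minus the payments made.
    x = follower_best_response(price, costs)
    return (value_per_unit - price) * x.sum()

costs = np.array([0.8, 1.0, 1.5, 2.0])          # hypothetical per-client unit costs
prices = np.linspace(0.01, 3.0, 300)            # grid search stands in for the iterative solver
best_price = max(prices, key=lambda p: leader_utility(p, costs))
contributions = follower_best_response(best_price, costs)
print(f"leader's price: {best_price:.2f}, client contributions: {contributions.round(3)}")
```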
Poseidon: Intelligent proactive defense against DDoS attacks in edge clouds
Pub Date: 2026-01-21 | DOI: 10.1016/j.comnet.2026.112025
Shen Dong, Guozhen Cheng, Wenyan Liu
With the rise of edge computing (EC), data and computation are increasingly shifted from centralized clouds to edge nodes, improving real-time performance and privacy. However, the resource constraints of edge nodes make them vulnerable to Distributed Denial-of-Service (DDoS) attacks. Traditional passive defense mechanisms struggle to counter diverse attacks due to their delayed response and lack of flexibility. While proactive defense strategies possess dynamism and adaptability, existing solutions often rely solely on either Moving Target Defense (MTD) or deception defense. The former fails to curb attacks at their source, while the latter lacks dynamic adaptability. Moreover, they often address only one type of attack and impose high resource and latency costs. To overcome these challenges, we propose Poseidon, a deep reinforcement learning-based hybrid proactive defense framework. Poseidon integrates the dynamism of MTD with the deceptive nature of deception defense, enabling differentiated responses to both High-rate Distributed Denial-of-Service (HDDoS) and Low-rate Distributed Denial-of-Service (LDDoS) attacks. By leveraging the lightweight characteristics of containers, it achieves resource-efficient protection. The interaction between attacks and defenses is modeled as a Markov Decision Process (MDP), and the Deep Q-Network (DQN) algorithm is employed to dynamically balance defense effectiveness and resource overhead. Experimental results demonstrate that Poseidon significantly outperforms existing MTD schemes across multiple DDoS attack scenarios, achieving up to a 28% improvement in average reward, a 30% enhancement in security, and a 15% increase in service quality. Furthermore, Poseidon effectively ensures service availability while minimizing quality degradation, showcasing considerable practical value.
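To make the MDP framing concrete, here is a toy tabular Q-learning loop (a stand-in for the DQN used above) in which a defender chooses between doing nothing, an MTD-style migration, and deploying a deception container, with rewards that trade off security gain against resource cost. The states, actions, and reward values are invented for illustration and do not reflect Poseidon's actual model.

```python
import random

STATES = ["idle", "hddos", "lddos"]        # observed attack condition
ACTIONS = ["none", "migrate", "decoy"]      # do nothing, MTD-style migration, deception container
REWARD = {                                  # assumed security gain minus resource/latency cost
    ("idle",  "none"):  0.2, ("idle",  "migrate"): -0.3, ("idle",  "decoy"): -0.2,
    ("hddos", "none"): -1.0, ("hddos", "migrate"):  0.8, ("hddos", "decoy"):  0.1,
    ("lddos", "none"): -0.6, ("lddos", "migrate"):  0.0, ("lddos", "decoy"):  0.7,
}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
random.seed(0)
state = "idle"
for _ in range(20000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = REWARD[(state, action)]
    next_state = random.choice(STATES)      # attacker behaviour modelled as random here
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```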
Memory-augmented deep feature extraction and temporal-dependencies prediction for network traffic anomaly detection
Pub Date: 2026-01-21 | DOI: 10.1016/j.comnet.2026.112037
Chao Wang, Ping Zhou, Jiuzhen Zeng, Yong Ma, Ruichi Zhang
Traffic anomaly detection is crucial for network security. Most existing unsupervised detection models are based on reconstruction and prediction methods, which have limited generalization ability and handle temporal dependencies poorly. Although a memory module can be introduced to mitigate the weak generalization ability, it faces challenges such as data distribution drift over time and memory contamination. To address these issues, this paper proposes a novel unsupervised network traffic anomaly detection model, MAFE-TDP, which integrates a transformer-based feature extraction module, a memory module, and a prediction-based temporal-dependency extraction network. The generalization ability of the model and its robustness to memory contamination are enhanced by a memory module with a FIFO replacement strategy and a KNN method. The proposed anomaly scoring method fuses reconstruction error and prediction error, thus widening the gap between normal and abnormal data. Evaluation results on four real-world network traffic datasets demonstrate that MAFE-TDP outperforms existing state-of-the-art baseline methods in terms of AUC-ROC and AUC-PR metrics.
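The snippet below sketches, under assumed sizes and thresholds, how a FIFO memory bank with a KNN distance score and a contamination guard can work, together with a fused anomaly score combining reconstruction error, prediction error, and memory distance. The class layout, feature dimension, and weights are hypothetical and are not MAFE-TDP's actual architecture.

```python
from collections import deque
import numpy as np

class FIFOMemory:
    def __init__(self, capacity=256, k=5, update_threshold=1.0):
        self.items = deque(maxlen=capacity)   # FIFO replacement: the oldest prototype is evicted first
        self.k = k
        self.update_threshold = update_threshold

    def score(self, feat):
        # Mean distance to the k nearest stored "normal" prototypes.
        if not self.items:
            return 0.0
        dists = np.linalg.norm(np.stack(list(self.items)) - feat, axis=1)
        return float(np.sort(dists)[: self.k].mean())

    def maybe_update(self, feat):
        # Only memorize features that already look normal, limiting memory contamination.
        if self.score(feat) < self.update_threshold:
            self.items.append(feat)

def anomaly_score(recon_err, pred_err, knn_score, w=(0.4, 0.4, 0.2)):
    # Fused score: reconstruction error + prediction error + memory distance.
    return w[0] * recon_err + w[1] * pred_err + w[2] * knn_score

rng = np.random.default_rng(0)
mem = FIFOMemory()
for _ in range(300):
    mem.maybe_update(rng.normal(0.0, 0.1, size=8))     # benign traffic embeddings
normal = mem.score(rng.normal(0.0, 0.1, size=8))
abnormal = mem.score(rng.normal(2.0, 0.1, size=8))
print(f"memory distance, normal: {normal:.3f}  anomalous: {abnormal:.3f}")
print(f"fused score for the anomaly: {anomaly_score(0.9, 0.7, abnormal):.3f}")
```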
Doubling the speed of large-scale packet classification through compressing decision tree nodes
Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112032
Jincheng Zhong, Tao Li, Gaofeng Lv, Shuhui Chen
Packet classification underpins critical network functions such as access control and quality of service. While decision tree-based approaches offer efficiency and scalability, their classification performance is often bottlenecked by excessive memory accesses during tree traversal, primarily due to the pointer-based indexing structures necessitated by large node sizes.
This paper proposes a general pointer-elimination paradigm via Extreme Node Compression (ENC). ENC enables indexing structures to store nodes directly rather than pointers, thereby eliminating one memory indirection per level and nearly halving the number of memory accesses per lookup. To validate this core idea, this paper designs TupleTree-Compress based on the state-of-the-art hash-based decision tree scheme TupleTree. TupleTree-Compress integrates three key techniques, namely a unified global hash table, fingerprint-based keys, and hash-based sibling linking, to achieve full node compression while preserving correctness and update support.
Furthermore, to demonstrate the generality of our approach, we apply the same optimization paradigm to the state-of-the-art classical decision tree scheme CutSplit, resulting in CutSplit-Compress. Experimental results show that TupleTree-Compress achieves speedups of 2.24×–3.12× over TupleTree and 1.43×–1.91× over DBTable, the current best-performing scheme. Similarly, CutSplit-Compress achieves speedups of 3.19×–3.64× over CutSplit, with improvements of up to 1.58× over DBTable.
Our work demonstrates that aggressive node compression is a powerful and generalizable strategy for boosting packet classification performance, offering a promising direction for optimizing decision tree-based schemes.
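As a loose illustration of the pointer-elimination idea, the sketch below stores tree nodes directly in one global hash table keyed by a short fingerprint of (level, matched prefix), so traversal recomputes child keys instead of chasing stored pointers. The key layout, fingerprint size, and rule payloads are assumptions made for illustration, not the actual TupleTree-Compress design.

```python
import hashlib

TABLE = {}   # unified global hash table: fingerprint key -> node payload stored inline (no child pointers)

def node_key(level, path):
    # Short fingerprint of (level, matched prefix) replaces a full key or pointer.
    return hashlib.blake2b(f"{level}:{path}".encode(), digest_size=4).hexdigest()

def insert(level, path, payload):
    TABLE[node_key(level, path)] = payload

def lookup(bits):
    # Walk the tree by recomputing each child's key from the matched prefix:
    # one hash-table probe per level replaces pointer indirection plus a node fetch.
    path, match = "", None
    for level, bit in enumerate(bits):
        path += bit
        node = TABLE.get(node_key(level, path))
        if node is None:
            break
        match = node
    return match

insert(0, "1", {"rule": "rule-A"})
insert(1, "10", {"rule": "rule-B"})
print(lookup("10"))   # deepest matching node -> {'rule': 'rule-B'}
print(lookup("11"))   # falls back to the level-0 match -> {'rule': 'rule-A'}
```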
Energy-efficient swarm intelligence-based resource allocation scheme for 5G-HCRAN
Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112034
Tejas Kishor Patil, Paramveer Kumar, Pavan Kumar Mishra, Sudhakar Pandey
The rapid evolution of the 5G Heterogeneous Cloud Radio Access Network (5G-HCRAN) necessitates innovative resource allocation schemes to meet diverse user demands while optimizing energy efficiency. This research introduces a resource allocation scheme explicitly designed for 5G-HCRAN that emphasizes maximizing throughput and energy efficiency. The proposed methodology employs a hybrid Particle Swarm Optimization-Ant Colony Optimization (PSO-ACO) scheme that combines the strengths of both metaheuristics to achieve a more efficient and effective optimization process. PSO contributes its global search capability via particle-based solution updates, while ACO introduces a pheromone-driven decision mechanism that adapts smoothly to dynamic network conditions. By integrating these complementary behaviors, the hybrid PSO-ACO scheme can evaluate resource-allocation choices more consistently and respond more effectively to network variability. This combined strategy supports more efficient utilization of limited network resources and significantly improves 5G-HCRAN performance. Simulation results validate the superiority of the proposed hybrid resource allocation scheme over standalone methods, demonstrating significant improvements in throughput and energy efficiency. By addressing these objectives, the proposed hybrid scheme provides a practical and scalable approach for next-generation 5G-HCRAN.
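A minimal sketch of one way a PSO loop can be biased by an ACO-style pheromone term is shown below, applied to a toy energy-efficiency objective over a handful of transmit-power shares. The objective function, the pheromone update rule, and every constant are assumptions made for illustration; they are not the paper's 5G-HCRAN formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, SWARM, ITERS = 6, 20, 200          # six users' transmit-power shares

def efficiency(x):
    # Toy objective: sum rate over total power (bigger is better), x in [0, 1].
    rate = np.log2(1.0 + 5.0 * x)
    return rate.sum() / (x.sum() + 0.1)

pos = rng.uniform(0, 1, (SWARM, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([efficiency(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()
pheromone = gbest.copy()                 # pheromone trail over the solution space

for _ in range(ITERS):
    r1, r2, r3 = rng.random((3, SWARM, DIM))
    vel = (0.7 * vel
           + 1.4 * r1 * (pbest - pos)
           + 1.4 * r2 * (gbest - pos)
           + 0.6 * r3 * (pheromone - pos))        # ACO-style attraction term
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([efficiency(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()
    pheromone = 0.9 * pheromone + 0.1 * gbest      # evaporation plus deposit

print("best efficiency:", round(efficiency(gbest), 3), "allocation:", gbest.round(2))
```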
LTGAT: A lightweight temporal graph attention accelerator for deterministic routing in resource-constrained delay-tolerant non-terrestrial networks
Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112035
Dalia I. Elewaily, Ahmed I. Saleh, Hesham A. Ali, Mohamed M. Abdelsalam
This paper introduces the Lightweight Temporal Graph Attention (LTGAT) model, whose primary aim is to provide an efficient routing solution that enhances Delay-Tolerant Networking (DTN) performance in resource-constrained non-terrestrial environments. The primary motivation is to overcome the limitations of traditional deterministic routing methods such as Contact Graph Routing (CGR), which suffer from significant computational overhead in large-scale time-varying topologies due to their reliance on repeated contact graph searches. LTGAT achieves this through a lightweight architecture that combines a two-head Graph Attention Network (GAT) and a Gated Recurrent Unit (GRU) to learn a complex representation of the known spatial-temporal structure in the scheduled contact plan, enabling fast routing decisions with minimal computational and energy demands. The significance of this work is validated through experiments across six realistic simulated lunar scenarios, where LTGAT reduces delivery times by up to 32% compared to CGR, with processing times of 105.445–286.712 ms when scaled to a Proton 200k On-Board Computer, reflecting improvements of 89.9–91.0% over CGR and 22.6–40.2% over GAUSS. Additionally, LTGAT consumes 15.82–43.01 mJ per routing decision and preserves CGR’s perfect delivery reliability, achieving a delivery ratio of 1.0 on bundles that CGR itself successfully processes, far outperforming GAUSS (0.25–0.85) on the same bundles while recovering 29–100% of the bundles that CGR drops. These results confirm LTGAT’s suitability for resource-limited CubeSat deployments. This research contributes a lightweight, computation-efficient routing framework, offering a critical advancement for resource-constrained non-terrestrial communication systems and providing a foundation for future interplanetary network studies.
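To make the spatial-temporal building blocks concrete, the numpy sketch below runs a single-head attention aggregation over each contact-graph snapshot and feeds the result through a hand-written GRU cell across time steps. The sizes, random weights, and single attention head are assumptions for illustration; LTGAT itself uses a trained two-head GAT and a different overall pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F, H = 4, 3, 3                       # nodes, input features, hidden size
W = rng.normal(size=(F, H))             # random (untrained) attention-layer weights
a = rng.normal(size=(2 * H,))

def gat_layer(x, adj):
    # Attention-weighted mix of each node's neighbours for one contact snapshot.
    h = x @ W
    scores = np.full((N, N), -1e9)
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                scores[i, j] = np.tanh(np.concatenate([h[i], h[j]]) @ a)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = np.where(adj, alpha, 0.0)
    alpha /= alpha.sum(axis=1, keepdims=True) + 1e-9
    return alpha @ h

def gru_step(h_prev, x, p):
    # Standard GRU update carrying temporal memory across contact snapshots.
    z = 1 / (1 + np.exp(-(x @ p["Wz"] + h_prev @ p["Uz"])))
    r = 1 / (1 + np.exp(-(x @ p["Wr"] + h_prev @ p["Ur"])))
    n = np.tanh(x @ p["Wn"] + (r * h_prev) @ p["Un"])
    return (1 - z) * h_prev + z * n

params = {k: rng.normal(scale=0.3, size=(H, H)) for k in ["Wz", "Uz", "Wr", "Ur", "Wn", "Un"]}
contacts = [np.eye(N, dtype=bool) | (rng.random((N, N)) < 0.4) for _ in range(3)]  # scheduled contact snapshots
x = rng.normal(size=(N, F))
h = np.zeros((N, H))
for adj in contacts:
    h = gru_step(h, gat_layer(x, adj), params)   # spatial attention, then temporal memory
print("per-node embeddings used for routing decisions:\n", h.round(2))
```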
Efficient smart home message verification protocol based on Chebyshev chaotic mapping
Pub Date: 2026-01-20 | DOI: 10.1016/j.comnet.2026.112033
Vincent Omollo Nyangaresi, Mohd Shariq, Daisy Nyang’anyi Ondwari, Muhammad Shafiq, Khalid Alsubhi, Mehedi Masud
Smart home networks deploy a myriad of sensors and intelligent devices to collect and disseminate massive volumes of sensitive data, facilitating task automation that enhances comfort, quality of life, efficiency, and sustainability. However, the use of public channels for interactions between users and smart home devices raises serious privacy and security issues. Numerous authentication schemes have been proposed in the recent literature, but most of them are prone to attacks such as offline guessing, privileged-insider, and impersonation attacks. In addition, some of them have complicated architectures that result in high resource consumption. In this paper, efficient Chebyshev polynomials and hashing functions are leveraged to develop a robust authentication protocol for smart homes. A detailed formal security analysis based on Burrows–Abadi–Needham (BAN) logic confirms the robustness of the joint authentication and key negotiation procedures. In addition, informal security analysis shows that the proposed protocol is secure under the Dolev-Yao (D-Y) and Canetti-Krawczyk (C-K) adversary models, mitigating several known security attacks. In terms of performance, the developed scheme incurs relatively low computation, energy, and communication costs.
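The algebraic property that makes Chebyshev polynomials usable for key agreement is the semigroup identity T_r(T_s(x)) = T_s(T_r(x)) = T_{rs}(x). The stdlib-only sketch below verifies this identity over a small prime field and derives a shared key from it; the modulus, secrets, and key derivation are toy assumptions and are not the parameters or message flow of the proposed protocol.

```python
import hashlib

P = 2_147_483_647          # toy prime modulus (far too small for real security)

def chebyshev(n, x, p=P):
    # T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x)  (mod p)
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

x = 123456789              # public seed value
r, s = 4057, 7919          # private exponents of the two parties
alice_pub = chebyshev(r, x)
bob_pub = chebyshev(s, x)
k_alice = chebyshev(r, bob_pub)     # T_r(T_s(x))
k_bob = chebyshev(s, alice_pub)     # T_s(T_r(x))
assert k_alice == k_bob             # semigroup property gives both sides the same secret
session_key = hashlib.sha256(str(k_alice).encode()).hexdigest()
print("shared session key:", session_key[:16], "...")
```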
A platform perspective for the computing continuum: Synergetic orchestration of compute and network resources for hyper-distributed applications
Pub Date: 2026-01-17 | DOI: 10.1016/j.comnet.2026.112029
Nikos Filinis, Ioannis Dimolitsas, Dimitrios Spatharakis, Paolo Bono, Anastasios Zafeiropoulos, Cristina Emilia Costa, Roberto Bruschi, Symeon Papavassiliou
The rapid advancements in technologies across the Computing Continuum have reinforced the need for the interplay of various network and compute orchestration mechanisms within distributed infrastructure architectures to support hyper-distributed application (HDA) deployments. A unified approach to managing heterogeneous components is crucial for reconciling conflicting objectives and creating a synergetic framework. To address these challenges, we present NEPHELE, a platform that realizes a hierarchical, multi-layered orchestration architecture incorporating infrastructure and application orchestration workflows across diverse resource management layers. The proposed platform integrates well-defined components spanning network and multi-cluster compute domains to enable intent-driven, dynamic orchestration. At its core, the Synergetic Meta-Orchestrator (SMO) integrates diverse application requirements and generates deployment plans by interfacing with the underlying orchestrators over distributed compute and network infrastructure. In this work, we present the NEPHELE architecture, enumerate its interaction workflows, and evaluate key components of the overall architecture based on the instantiation and usage of the NEPHELE platform. The platform is evaluated in a multi-domain infrastructure setup to assess the operational overhead of the introduced orchestration functionality, as well as the impact of different topology configurations on resource instantiation times, allocation dynamics, and network latency. Finally, we demonstrate the platform’s effectiveness in orchestrating distributed application graphs under varying placement intents, performance constraints, and workload stress conditions. The evaluation results outline the effectiveness of NEPHELE in orchestrating various infrastructure layers and application lifecycle scenarios through a unified interface.
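As a very rough illustration of intent-driven deployment planning, the sketch below greedily maps application-graph components to clusters that satisfy a latency intent and still have spare CPU. The cluster names, intents, and scoring rule are hypothetical and are not the SMO's actual planning logic.

```python
# Toy intent-driven placement: choose the lowest-latency feasible cluster per component.
clusters = {
    "edge-a":  {"latency_ms": 5,  "free_cpu": 4},
    "edge-b":  {"latency_ms": 8,  "free_cpu": 2},
    "cloud-1": {"latency_ms": 40, "free_cpu": 32},
}

app_graph = [
    {"name": "ingest",    "cpu": 2, "max_latency_ms": 10},
    {"name": "inference", "cpu": 2, "max_latency_ms": 10},
    {"name": "analytics", "cpu": 8, "max_latency_ms": 100},
]

def place(components, clusters):
    plan = {}
    for comp in components:
        candidates = [
            (c["latency_ms"], name) for name, c in clusters.items()
            if c["latency_ms"] <= comp["max_latency_ms"] and c["free_cpu"] >= comp["cpu"]
        ]
        if not candidates:
            raise RuntimeError(f"no feasible cluster for {comp['name']}")
        _, chosen = min(candidates)                   # prefer the lowest-latency feasible cluster
        clusters[chosen]["free_cpu"] -= comp["cpu"]   # reserve capacity for later components
        plan[comp["name"]] = chosen
    return plan

print(place(app_graph, clusters))
# e.g. {'ingest': 'edge-a', 'inference': 'edge-a', 'analytics': 'cloud-1'}
```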
TrafficCL: Contrastive learning on network traffic for accurate, efficient and robust IP cross-regional detection
Pub Date: 2026-01-16 | DOI: 10.1016/j.comnet.2026.112022
Yiyang Huang, Mingxin Cui, Gaopeng Gou, Chang Liu, Yong Wang, Bing Xia, Guoming Ren, Zheyuan Gu, Xiyuan Zhang, Gang Xiong
Dynamic IP technologies, such as IP address pool rotation by Internet operators and elastic IP drift by cloud service providers, are widely adopted, breaking the static binding between IP addresses and geographical locations and posing severe challenges to the accuracy, efficiency, and robustness of IP cross-regional detection. Traditional solutions rely on third-party IP geolocation databases, whose large-scale batch update mode fails to synchronize IP regional attribution in a timely manner and struggles to adapt to dynamic IP changes. This results in insufficient detection accuracy and efficiency, compromising the stability of geographically related network services. To address this issue, this paper proposes TrafficCL, a traffic feature-based IP cross-regional detection method: it constructs a geographically associated traffic feature set, aligns traffic embedding distance with geographical distance via contrastive learning to enhance geographical attributes, integrates data augmentation to improve model robustness, designs a lightweight binary classification task for regional deviation detection, and adopts a targeted update strategy to avoid large-scale update latency. Experimental results show that TrafficCL significantly outperforms the active probing method PoP: on the Beijing cross-district dataset, accuracy increases from 0.781 to 0.982, the F1-score improves by a factor of 2.2, and processing efficiency on ten-thousand-sample batches improves by a factor of 23.6. Under 10% data loss, 10% network-feature fluctuation, or a positional offset of approximately 500 m, the F1-score degrades by less than 3% in all cases, demonstrating excellent robustness. This method effectively improves the accuracy, efficiency, and robustness of IP cross-regional detection and has practical significance for ensuring the stability of geographically related network services.
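To give one concrete (and simplified) instance of a contrastive objective over traffic embeddings, the snippet below computes a pairwise loss that pulls embeddings of flows from the same region together and pushes cross-region pairs apart by a margin. The margin, embedding size, and region labels are assumptions for illustration; TrafficCL's actual objective aligns embedding distance with geographical distance and may differ in form.

```python
import numpy as np

def contrastive_loss(emb, region, margin=1.0):
    # Pairwise contrastive loss over a batch of embeddings with region labels.
    n = len(emb)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(emb[i] - emb[j])
            if region[i] == region[j]:
                total += d ** 2                       # pull same-region pairs together
            else:
                total += max(0.0, margin - d) ** 2    # push cross-region pairs beyond the margin
            pairs += 1
    return total / pairs

rng = np.random.default_rng(3)
emb = rng.normal(size=(6, 8))                          # hypothetical traffic-feature embeddings
region = ["region-a", "region-a", "region-b", "region-b", "region-c", "region-c"]
print("batch contrastive loss:", round(contrastive_loss(emb, region), 3))
```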
LCC-AKA: Lightweight certificateless cross-domain authentication key agreement protocol for IoT devices
Pub Date: 2026-01-16 | DOI: 10.1016/j.comnet.2026.112018
Yingjie Cai, Tianbo Lu, Jiaze Shang, Yanfang Li, Qitai Gong, Hanrui Chen
Authentication key agreement (AKA) protocols are an effective means of achieving secure communication between Internet of Things (IoT) devices. However, existing public key infrastructure-based and identity-based AKA protocols face limitations due to complex certificate management and key escrow issues. Cross-domain communication is also a fundamental requirement in the IoT, yet current solutions addressing this challenge rely on trusted third parties, which increases communication overhead and system complexity during the authentication phase. To address these challenges, we propose a new provably secure lightweight certificateless cross-domain authentication key agreement protocol (LCC-AKA). By introducing a certificateless public key cryptographic mechanism during the registration phase, we eliminate the need for complex certificate management and the limitations of key escrow, while also preventing insider attacks even under the semi-honest Key Generation Center (KGC) assumption. In the cross-domain authentication key agreement phase, we present a mechanism that enables direct cross-domain authentication and key agreement between devices without relying on trusted third parties, using lightweight elliptic curve and hash function operations to achieve efficiency. On the security side, we analyze the vulnerabilities of existing certificateless cross-domain AKA schemes and extend the Real-Or-Random (ROR) model. The LCC-AKA protocol is provably secure under the extended ROR model and BAN logic. Security and performance analyses demonstrate that the LCC-AKA protocol resists both insider and outsider attacks, including public key replacement attacks, while maintaining low computational and communication overhead.
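As a loose, stdlib-only stand-in for the kind of lightweight handshake described above, the sketch below runs a toy Diffie-Hellman exchange (substituting modular exponentiation for the paper's elliptic-curve operations) followed by hash-based key confirmation. The group, identities, and message flow are assumptions for illustration and do not reproduce LCC-AKA's certificateless construction.

```python
import hashlib
import secrets

P = 2**127 - 1      # toy prime modulus (a Mersenne prime; far too small for real use)
G = 3

def h(*parts):
    # Hash over "|"-joined parts, standing in for the protocol's hash operations.
    data = b"|".join(p if isinstance(p, bytes) else str(p).encode() for p in parts)
    return hashlib.sha256(data).hexdigest()

# Each party picks an ephemeral secret and exchanges its public value.
a = secrets.randbelow(P - 2) + 2
b = secrets.randbelow(P - 2) + 2
A, B = pow(G, a, P), pow(G, b, P)

# Both sides derive the same session key bound to the hypothetical identities,
# then exchange hash-based confirmations proving knowledge of that key.
k_device = h("device-01", "gateway", pow(B, a, P))
k_gateway = h("device-01", "gateway", pow(A, b, P))
assert k_device == k_gateway
confirm_device = h(k_device, "confirm-device")
confirm_gateway = h(k_gateway, "confirm-gateway")
print("session key:", k_device[:16], "... confirmations:", confirm_device[:8], confirm_gateway[:8])
```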