
Latest Publications in Computer Networks

Incentive mechanism design in blockchain-based hierarchical federated learning over edge clouds
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-22 · DOI: 10.1016/j.comnet.2026.112039
Xuanzhang Liu , Jiyao Liu , Xinliang Wei , Yu Wang
Federated learning (FL) is a promising distributed AI paradigm for protecting user privacy by training models on local devices (such as IoT devices). However, FL systems face challenges like high communication overhead and non-transparent model aggregation. To address these issues, integrating blockchain technology into hierarchical federated learning (HFL) to construct a decentralized, low latency, and transparent learning framework over a cloud-edge-client architecture has gained attention. To ensure participant engagement from edge servers and clients, this paper explores incentive mechanism design in a blockchain-based HFL system using a semi-asynchronous aggregation model. We model the resource pricing among clients, edge servers, and task publishers at the cloud as a three-stage Stackelberg game, proving the existence of a Nash equilibrium in which each participant could maximize their own utility. An iterative algorithm based on the alternating direction method of multipliers and backward induction is then proposed to optimize strategies. Extensive simulations verify the algorithm’s rapid convergence and demonstrate that our proposed mechanism consistently outperforms baseline strategies across various scenarios in terms of participant utilities. Our approach also achieves up to 7% higher model accuracy than baseline methods, confirming its practical effectiveness.
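To make the backward-induction idea concrete, the following minimal Python sketch solves a stripped-down two-stage version of such a pricing game: a task publisher posts a unit price, each client best-responds with the training effort that maximizes its quadratic-cost utility, and the publisher then searches for its own optimal price. The utility forms, cost coefficients, and the two-stage (rather than three-stage) structure are illustrative assumptions, not the paper's model.

```python
# Backward induction in a toy two-stage Stackelberg pricing game.
# All parameter values are hypothetical.
import numpy as np

costs = np.array([0.8, 1.0, 1.3])  # assumed per-client computation cost coefficients

def client_best_response(price, c):
    # Client utility u(x) = price*x - c*x^2 is maximized at x = price / (2c).
    return price / (2.0 * c)

def publisher_utility(price, alpha=2.0):
    # Publisher values aggregate training effort (scaled by alpha) minus total payment.
    effort = client_best_response(price, costs)
    return alpha * effort.sum() - price * effort.sum()

# Leader's stage: search the price grid, anticipating the followers' best responses.
prices = np.linspace(0.01, 3.0, 300)
best = max(prices, key=publisher_utility)
print(f"equilibrium-ish price: {best:.3f}, publisher utility: {publisher_utility(best):.3f}")
```

With these toy utilities the publisher's payoff is (alpha - price) · price · Σ 1/(2c_i), so the grid search lands near price = alpha/2, illustrating how the leader's optimum is computed from the followers' closed-form responses.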
Citations: 0
Poseidon: Intelligent proactive defense against DDoS attacks in edge clouds
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-21 · DOI: 10.1016/j.comnet.2026.112025
Shen Dong , Guozhen Cheng , Wenyan Liu
With the rise of edge computing (EC), data and computation are increasingly shifted from centralized clouds to edge nodes, improving real-time performance and privacy. However, the resource constraints of edge nodes make them vulnerable to Distributed Denial-of-Service (DDoS) attacks. Traditional passive defense mechanisms struggle to counter diverse attacks due to their delayed response and lack of flexibility. While proactive defense strategies possess dynamism and adaptability, existing solutions often rely solely on either Moving Target Defense (MTD) or deception defense. The former fails to curb attacks at their source, while the latter lacks dynamic adaptability. Moreover, they often address only one type of attack and impose high resource and latency costs. To overcome these challenges, we propose Poseidon, a deep reinforcement learning-based hybrid proactive defense framework. Poseidon integrates the dynamism of MTD with the deceptive nature of deception defense, enabling differentiated responses to both High-rate Distributed Denial-of-Service (HDDoS) and Low-rate Distributed Denial-of-Service (LDDoS) attacks. By leveraging the lightweight characteristics of containers, it achieves resource-efficient protection. The interaction between attacks and defenses is modeled as a Markov Decision Process (MDP), and the Deep Q-Network (DQN) algorithm is employed to dynamically balance defense effectiveness and resource overhead. Experimental results demonstrate that Poseidon significantly outperforms existing MTD schemes across multiple DDoS attack scenarios, achieving up to a 28% improvement in average reward, a 30% enhancement in security, and a 15% increase in service quality. Furthermore, Poseidon effectively ensures service availability while minimizing quality degradation, showcasing considerable practical value.
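As a toy illustration of the MDP formulation, the sketch below runs tabular Q-learning (a simpler stand-in for the paper's DQN) over a two-state attack model, letting the defender learn that migration suits high-rate floods while decoys suit low-rate attacks. All states, actions, and reward values are hypothetical.

```python
# Tabular Q-learning on a toy defense MDP (a stand-in for Poseidon's DQN).
import random

states = ["HDDoS", "LDDoS"]
actions = ["migrate", "decoy", "idle"]
# Assumed expected rewards: MTD-style migration counters high-rate floods,
# deception (decoys) counters low-rate attacks, idling is penalized.
reward = {("HDDoS", "migrate"): 1.0, ("HDDoS", "decoy"): 0.2, ("HDDoS", "idle"): -1.0,
          ("LDDoS", "migrate"): 0.1, ("LDDoS", "decoy"): 0.9, ("LDDoS", "idle"): -0.5}

Q = {(s, a) : 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
s = random.choice(states)
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps \
        else max(actions, key=lambda b: Q[(s, b)])
    r = reward[(s, a)] + random.gauss(0, 0.05)  # noisy feedback from the environment
    s2 = random.choice(states)                  # attacker switches modes at random
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

for st in states:
    print(st, "->", max(actions, key=lambda b: Q[(st, b)]))
```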
Citations: 0
Memory-augmented deep feature extraction and temporal-dependencies prediction for network traffic anomaly detection
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-21 · DOI: 10.1016/j.comnet.2026.112037
Chao Wang , Ping Zhou , Jiuzhen Zeng , Yong Ma , Ruichi Zhang
Traffic anomaly detection is crucial for network security. Most existing unsupervised detection models are based on reconstruction and prediction methods that are deficient in generalization ability and in processing temporal dependencies. Although a memory module can be introduced to mitigate weak generalization, it encounters challenges such as data distribution drift over time and memory contamination. To address these issues, this paper proposes a novel unsupervised network traffic anomaly detection model, MAFE-TDP, which integrates a transformer-based feature extraction module, a memory module, and a prediction-based temporal-dependencies extraction network. The generalization ability of the model and its robustness to memory contamination are enhanced by introducing the memory module with a FIFO memory replacement strategy and a KNN method. The proposed anomaly scoring method fuses reconstruction error and prediction error, thus enlarging the gap between normal and abnormal data. Evaluation results on four real-world network traffic datasets demonstrate that MAFE-TDP outperforms existing state-of-the-art baseline methods in terms of AUC-ROC and AUC-PR metrics.
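A minimal sketch of the memory idea, assuming toy 8-dimensional latent vectors: normal embeddings are written into a fixed-capacity FIFO store (old prototypes are evicted first, which tracks distribution drift), a query is scored by its mean distance to the k nearest memory items, and the final anomaly score fuses reconstruction and prediction error. The encoder itself and all dimensions and weights are placeholders, not MAFE-TDP's actual components.

```python
# FIFO memory with KNN lookup plus a fused anomaly score (illustrative only).
from collections import deque
import numpy as np

class FIFOMemory:
    def __init__(self, capacity=256, k=3):
        self.items = deque(maxlen=capacity)  # oldest prototypes are evicted first
        self.k = k

    def write(self, z):
        self.items.append(np.asarray(z, dtype=float))

    def knn_distance(self, z):
        # Mean distance to the k nearest stored prototypes.
        d = np.linalg.norm(np.stack(list(self.items)) - z, axis=1)
        return float(np.sort(d)[: self.k].mean())

def anomaly_score(recon_err, pred_err, w=0.5):
    # Fusing reconstruction and prediction error widens the normal/abnormal gap.
    return w * recon_err + (1 - w) * pred_err

mem = FIFOMemory()
rng = np.random.default_rng(0)
for _ in range(300):
    mem.write(rng.normal(0, 1, size=8))            # "normal" latent vectors
normal, outlier = rng.normal(0, 1, 8), rng.normal(6, 1, 8)
print(mem.knn_distance(normal), mem.knn_distance(outlier))  # small vs. large
```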
Citations: 0
Doubling the speed of large-scale packet classification through compressing decision tree nodes
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-20 · DOI: 10.1016/j.comnet.2026.112032
Jincheng Zhong , Tao Li , Gaofeng Lv , Shuhui Chen
Packet classification underpins critical network functions such as access control and quality of service. While decision tree-based approaches offer efficiency and scalability, their classification performance is often bottlenecked by excessive memory accesses during tree traversal, primarily due to the use of pointer-based indexing structures necessitated by large node sizes.

This paper proposes a general pointer-elimination paradigm via Extreme Node Compression (ENC). This enables indexing structures to store nodes directly rather than pointers, thereby eliminating one memory indirection per level and nearly halving the number of memory accesses per lookup. To validate this core idea, this paper designs TupleTree-Compress based on TupleTree, the state-of-the-art hash-based decision tree scheme. TupleTree-Compress integrates three key techniques (a unified global hash table, fingerprint-based keys, and hash-based sibling linking) to achieve full node compression while preserving correctness and update support.

Furthermore, to demonstrate the generality of our approach, we apply the same optimization paradigm to CutSplit, the state-of-the-art classical decision tree scheme, resulting in CutSplit-Compress. Experimental results show that TupleTree-Compress achieves speedups of 2.24×–3.12× over TupleTree and 1.43×–1.91× over DBTable, the current best-performing scheme. Similarly, CutSplit-Compress achieves speedups of 3.19×–3.64× over CutSplit, with improvements of up to 1.58× over DBTable.

Our work demonstrates that aggressive node compression is a powerful and generalizable strategy for boosting packet classification performance, offering a promising direction for optimizing decision tree-based schemes.
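The pointer-elimination idea can be illustrated in a few lines: instead of each node storing child pointers, a child is located by hashing (parent key, branch) into one global table, so each level of traversal costs a single hash probe and no pointer dereference. The key derivation, digest size, and toy rule below are illustrative assumptions, not TupleTree-Compress's actual layout.

```python
# Pointer-free tree indexing: children are found by hashing, not by pointers.
import hashlib

table = {}  # global hash table: digest -> compact node record

def slot(parent_key, branch):
    # Derive a child's slot from its parent's key and the branch taken.
    return hashlib.blake2b(f"{parent_key}/{branch}".encode(), digest_size=8).hexdigest()

def insert(parent_key, branch, node):
    table[slot(parent_key, branch)] = node

def lookup(path):
    key, node = "root", None
    for branch in path:          # one hash probe per level, no memory indirection
        key = slot(key, branch)
        node = table.get(key)
        if node is None:
            return None
    return node

insert("root", 0, {"rule": None})
insert(slot("root", 0), 1, {"rule": "allow tcp 10.0.0.0/8"})  # hypothetical rule
print(lookup([0, 1]))  # -> {'rule': 'allow tcp 10.0.0.0/8'}
```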
Citations: 0
Energy-efficient swarm intelligence-based resource allocation scheme for 5G-HCRAN
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-20 · DOI: 10.1016/j.comnet.2026.112034
Tejas Kishor Patil, Paramveer Kumar, Pavan Kumar Mishra, Sudhakar Pandey
The rapid evolution of 5G-HCRAN (Heterogeneous Cloud Radio Access Network) necessitates innovative resource allocation schemes to meet diverse user demands while optimizing energy efficiency. This research introduces a novel resource allocation scheme explicitly designed for 5G-HCRAN, emphasizing the maximization of throughput and energy efficiency. The proposed methodology employs a hybrid Particle Swarm Optimization-Ant Colony Optimization (PSO-ACO) scheme that combines the strengths of both Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) to achieve a more efficient and effective optimization process. PSO contributes its global search capability via particle-based solution updates, while ACO introduces a pheromone-driven decision mechanism that smoothly adapts to dynamic network conditions. By integrating these complementary behaviors, the hybrid PSO-ACO scheme can evaluate resource-allocation choices more consistently and respond more effectively to network variability. This combined strategy supports more efficient utilization of limited network resources and significantly improves 5G-HCRAN performance. Simulation results validate the superiority of the proposed hybrid resource allocation scheme over standalone methods, demonstrating significant improvements in throughput and energy efficiency. By addressing these objectives, the proposed hybrid scheme provides a practical and scalable approach for next-generation 5G-HCRAN.
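A compact sketch of how the two metaheuristics can interlock, under toy assumptions (random channel gains, a bits-per-joule utility, hypothetical PSO coefficients): particles explore continuous power allocations, while an ACO-style pheromone trail, deposited when better solutions are found and evaporated every round, biases the sampled channel assignments.

```python
# Toy PSO-ACO hybrid for power/channel allocation; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_chan, n_particles = 4, 3, 20
gain = rng.uniform(0.5, 2.0, (n_users, n_chan))  # toy channel gains
pher = np.ones((n_users, n_chan))                # pheromone over channel choices

def utility(power, chan):
    # Toy "energy efficiency": sum-rate divided by total transmit power.
    snr = gain[np.arange(n_users), chan] * power
    return np.sum(np.log2(1 + snr)) / (power.sum() + 1e-9)

pos = rng.uniform(0.1, 1.0, (n_particles, n_users))   # per-user power levels
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
gbest, gbest_val = None, -np.inf

for it in range(100):
    # ACO step: sample each user's channel from the pheromone distribution.
    probs = pher / pher.sum(axis=1, keepdims=True)
    chan = np.array([rng.choice(n_chan, p=probs[u]) for u in range(n_users)])
    for i in range(n_particles):
        val = utility(pos[i], chan)
        if val > pbest_val[i]:
            pbest_val[i], pbest[i] = val, pos[i].copy()
        if val > gbest_val:
            gbest_val, gbest = val, pos[i].copy()
            pher[np.arange(n_users), chan] += 0.5  # deposit pheromone on good choices
    pher *= 0.98  # evaporation keeps the search adaptive to changing conditions
    # PSO step: standard velocity/position update toward personal and global bests.
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 1.0)

print(f"best energy efficiency (toy units): {gbest_val:.3f}")
```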
Citations: 0
LTGAT: A lightweight temporal graph attention accelerator for deterministic routing in resource-constrained delay-tolerant non-terrestrial networks
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-20 · DOI: 10.1016/j.comnet.2026.112035
Dalia I. Elewaily , Ahmed I. Saleh , Hesham A. Ali , Mohamed M. Abdelsalam
This paper introduces the Lightweight Temporal Graph Attention (LTGAT) model, with the primary aim of developing an efficient routing solution to enhance Delay-Tolerant Networking (DTN) performance in resource-constrained non-terrestrial environments. The motivation is to overcome the limitations of traditional deterministic routing methods, such as Contact Graph Routing (CGR), which suffer from significant computational overhead in large-scale time-varying topologies due to their reliance on repeated contact graph searches. The proposed LTGAT achieves this by integrating a lightweight architecture that combines a two-head Graph Attention Network (GAT) and a Gated Recurrent Unit (GRU) to learn a complex representation of the known spatial-temporal structure in the scheduled contact plan, enabling fast routing decisions with minimal computational and energy demands. The significance of this work is validated through experiments across six realistic simulated lunar scenarios, where LTGAT demonstrates a substantial reduction in delivery times of up to 32% compared to CGR, with processing times scaled to 105.445–286.712 ms on a Proton 200k On-Board Computer, reflecting improvements of up to 89.9–91.0% over CGR and 22.6–40.2% over GAUSS. Additionally, LTGAT achieves energy consumption of 15.82–43.01 mJ per routing decision, and preserves CGR's perfect delivery reliability by achieving a delivery ratio of 1.0 on bundles that CGR itself successfully processes, far outperforming GAUSS (0.25–0.85) on the same bundles, while recovering 29–100% of the bundles that CGR drops. These results confirm LTGAT's suitability for resource-limited CubeSat deployments. This research contributes a lightweight, computation-efficient routing framework, offering a critical advancement for resource-constrained non-terrestrial communication systems and providing a foundation for future interplanetary network studies.
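The spatial half of such a model can be sketched compactly: a single-head graph-attention layer that, for each node of a toy four-node contact graph, softmax-weights its neighbors' transformed features. Weights are random, tanh replaces the LeakyReLU scoring used in standard GAT, and the GRU temporal stage is omitted, so this illustrates the mechanism rather than LTGAT itself.

```python
# Single-head graph attention over a toy contact graph (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]])          # symmetric contact adjacency (assumed)
X = rng.normal(size=(4, 8))             # node features, e.g. buffer/contact stats
W = rng.normal(size=(8, 8)) * 0.1       # shared linear transform (random)
a = rng.normal(size=(16,)) * 0.1        # attention vector (random)

def gat_layer(X, adj):
    H = X @ W
    out = np.zeros_like(H)
    for i in range(len(X)):
        nbrs = np.where(adj[i])[0]
        # Score each neighbor from the concatenated (self, neighbor) features.
        scores = np.array([np.tanh(np.concatenate([H[i], H[j]]) @ a) for j in nbrs])
        alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over neighbors
        out[i] = (alpha[:, None] * H[nbrs]).sum(axis=0)
    return out

print(gat_layer(X, adj).shape)  # (4, 8): attention-weighted neighbor summaries
```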
Citations: 0
Efficient smart home message verification protocol based on Chebyshev chaotic mapping
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-20 · DOI: 10.1016/j.comnet.2026.112033
Vincent Omollo Nyangaresi , Mohd Shariq , Daisy Nyang’anyi Ondwari , Muhammad Shafiq , Khalid Alsubhi , Mehedi Masud
Smart home networks deploy a myriad of sensors and intelligent devices to collect and disseminate massive amounts of sensitive data, facilitating task automation that enhances comfort, quality of life, efficiency, and sustainability. However, the use of public channels for interactions between users and smart home devices raises serious privacy and security issues. Numerous authentication schemes have been proposed in the recent literature; most of them are prone to security attacks, including offline guessing, privileged-insider, and impersonation attacks. In addition, some of them have complicated architectures that result in high resource consumption. In this paper, efficient Chebyshev polynomials and hashing functions are leveraged to develop a robust authentication protocol for smart homes. A detailed formal security analysis based on Burrows–Abadi–Needham (BAN) logic confirms the robustness of the joint authentication and key negotiation procedures. In addition, informal security analysis shows that the proposed protocol is secure under the Dolev–Yao (D-Y) and Canetti–Krawczyk (C-K) adversary models, mitigating several known security attacks. In terms of performance, the developed scheme incurs relatively low computation, energy, and communication costs.
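The key-agreement core of Chebyshev-based protocols rests on the semigroup property T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), which enables a Diffie–Hellman-style exchange. The sketch below demonstrates this with the trigonometric form T_n(x) = cos(n·arccos x) in floating point; deployed protocols use enhanced Chebyshev maps over finite fields, so the tiny exponents and seed here are purely illustrative.

```python
# Chebyshev-map key agreement via the semigroup property (demo only; real
# protocols work over finite fields, not floating point).
import math

def cheby(n, x):
    # T_n(x) = cos(n * arccos(x)) for x in [-1, 1]
    return math.cos(n * math.acos(x))

x = 0.42            # public seed (assumed)
r, s = 7, 11        # Alice's and Bob's private exponents (toy values)
A = cheby(r, x)     # Alice sends T_r(x) to Bob
B = cheby(s, x)     # Bob sends T_s(x) to Alice
k_alice = cheby(r, B)   # T_r(T_s(x))
k_bob = cheby(s, A)     # T_s(T_r(x))
print(abs(k_alice - k_bob) < 1e-9)  # True: both derive the same key material
```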
Citations: 0
A platform perspective for the computing continuum: Synergetic orchestration of compute and network resources for hyper-distributed applications
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-17 · DOI: 10.1016/j.comnet.2026.112029
Nikos Filinis , Ioannis Dimolitsas , Dimitrios Spatharakis , Paolo Bono , Anastasios Zafeiropoulos , Cristina Emilia Costa , Roberto Bruschi , Symeon Papavassiliou
The rapid advancements in technologies across the Computing Continuum have reinforced the need for the interplay of various network and compute orchestration mechanisms within distributed infrastructure architectures to support hyper-distributed application (HDA) deployments. A unified approach to managing heterogeneous components is crucial for reconciling conflicting objectives and creating a synergetic framework. To address these challenges, we present NEPHELE, a platform that realizes a hierarchical multi-layered orchestration architecture incorporating infrastructure and application orchestration workflows across diverse resource management layers. The proposed platform integrates well-defined components spanning network and multi-cluster compute domains to enable intent-driven, dynamic orchestration. At its core, the Synergetic Meta-Orchestrator (SMO) integrates diverse application requirements, generating deployment plans by interfacing with underlying orchestrators over distributed compute and network infrastructure. In the current work, we present the NEPHELE architecture, enumerate its interaction workflows, and evaluate key components of the overall architecture based on the instantiation and usage of the NEPHELE platform. The platform is evaluated in a multi-domain infrastructure setup to assess the operational overhead of the introduced orchestration functionality, also assessing the impact of different topology configurations on resource instantiation times, allocation dynamics, and network latency. Finally, we demonstrate the platform's effectiveness in orchestrating distributed application graphs under varying placement intents, performance constraints, and workload stress conditions. The evaluation results outline the effectiveness of NEPHELE in orchestrating various infrastructure layers and application lifecycle scenarios through a unified interface.
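A minimal sketch of the meta-orchestration pattern described above: an application intent (latency bound, CPU demand) is matched by a meta-orchestrator against per-domain orchestrators and handed to the cheapest feasible one. All class names, fields, and the placement rule are hypothetical illustrations, not NEPHELE's actual API.

```python
# Toy intent-driven meta-orchestrator; names and placement rule are assumptions.
from dataclasses import dataclass

@dataclass
class Intent:
    app: str
    max_latency_ms: float
    cpu_cores: int

class ClusterOrchestrator:
    def __init__(self, name, latency_ms, free_cores):
        self.name, self.latency_ms, self.free_cores = name, latency_ms, free_cores

    def can_host(self, intent):
        return self.latency_ms <= intent.max_latency_ms and self.free_cores >= intent.cpu_cores

    def deploy(self, intent):
        self.free_cores -= intent.cpu_cores
        return f"{intent.app} -> {self.name}"

class SynergeticMetaOrchestrator:
    def __init__(self, clusters):
        self.clusters = clusters

    def place(self, intent):
        # Prefer the feasible cluster with the lowest network latency.
        feasible = [c for c in self.clusters if c.can_host(intent)]
        if not feasible:
            raise RuntimeError("no cluster satisfies the intent")
        return min(feasible, key=lambda c: c.latency_ms).deploy(intent)

smo = SynergeticMetaOrchestrator([ClusterOrchestrator("edge-1", 5, 4),
                                  ClusterOrchestrator("cloud-1", 40, 64)])
print(smo.place(Intent("video-analytics", max_latency_ms=10, cpu_cores=2)))
```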
Citations: 0
TrafficCL: Contrastive learning on network traffic for accurate, efficient and robust IP cross-regional detection
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-16 · DOI: 10.1016/j.comnet.2026.112022
Yiyang Huang , Mingxin Cui , Gaopeng Gou , Chang Liu , Yong Wang , Bing Xia , Guoming Ren , Zheyuan Gu , Xiyuan Zhang , Gang Xiong
Dynamic IP technologies, such as IP address pool rotation by Internet operators and elastic IP drift by cloud service providers, are widely adopted, breaking the static binding between IP addresses and geographical locations and posing severe challenges to the accuracy, efficiency, and robustness of IP cross-regional detection. Traditional solutions rely on third-party IP geolocation databases, whose large-scale batch update mode fails to synchronize IP regional attribution in a timely manner and struggles to adapt to dynamic IP changes. This results in insufficient detection accuracy and efficiency, compromising the stability of geographically related network services. To address this issue, this paper proposes TrafficCL, a traffic feature-based IP cross-regional detection method: it constructs a geographically associated traffic feature set, aligns traffic embedding distance with geographical distance via contrastive learning to enhance geographical attributes, integrates data augmentation to improve model robustness, designs a lightweight binary classification task for regional deviation detection, and adopts a targeted update strategy to avoid large-scale update latency. Experimental results show that TrafficCL significantly outperforms the active probing method PoP: on the Beijing cross-district dataset, accuracy increases from 0.781 to 0.982, the F1-score improves 2.2-fold, and processing efficiency on ten-thousand-sample batches improves 23.6-fold. When facing 10% data loss, 10% network feature fluctuation, and a positional offset of approximately 500 m, the F1-score degrades by less than 3% in all cases, demonstrating excellent robustness. This method effectively improves the accuracy, efficiency, and robustness of IP cross-regional detection and has practical significance for ensuring the stability of geographically related network services.
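The distance-alignment step can be illustrated with a classic margin-based contrastive loss: embeddings of flows from the same region are pulled together, while flows from different regions are pushed at least a margin apart, so embedding distance comes to track geographical distance. The loss form, margin, and random placeholder embeddings below are assumptions, not TrafficCL's exact objective.

```python
# Margin-based contrastive loss aligning embedding distance with region labels.
import numpy as np

def contrastive_loss(z1, z2, same_region, margin=1.0):
    d = np.linalg.norm(z1 - z2)
    if same_region:
        return d ** 2                    # same region: minimize embedding distance
    return max(0.0, margin - d) ** 2     # different region: enforce a margin

rng = np.random.default_rng(3)
anchor = rng.normal(size=4)                         # placeholder traffic embedding
pos = anchor + rng.normal(0, 0.1, 4)                # same-region flow (close)
neg = rng.normal(size=4)                            # different-region flow
print(contrastive_loss(anchor, pos, True), contrastive_loss(anchor, neg, False))
```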
Citations: 0
LCC-AKA: Lightweight certificateless cross-domain authentication key agreement protocol for IoT devices
IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-16 · DOI: 10.1016/j.comnet.2026.112018
Yingjie Cai , Tianbo Lu , Jiaze Shang , Yanfang Li , Qitai Gong , Hanrui Chen
The authentication key agreement (AKA) protocol is an effective means of achieving secure communication between Internet of Things (IoT) devices. However, existing public-key-infrastructure-based and identity-based AKA protocols face limitations due to complex certificate management and key escrow issues. Furthermore, cross-domain communication is a fundamental requirement for IoT, yet current solutions addressing this challenge rely on trusted third parties, which undoubtedly increases the communication overhead and system complexity during the authentication phase. To address these challenges, we propose a new provably secure lightweight certificateless cross-domain authentication key agreement protocol (LCC-AKA). By introducing a certificateless public key cryptographic mechanism during the registration phase, we eliminate the need for complex certificate management and the limitations of key escrow, while also preventing insider attacks even under the semi-honest Key Generation Center (KGC) assumption. In the cross-domain authentication and key agreement phase, we present a mechanism that enables direct cross-domain authentication and key agreement between devices without relying on trusted third parties, utilizing lightweight elliptic curve and hash function operations to achieve efficiency. In terms of security, we analyze the security vulnerabilities of existing certificateless cross-domain AKA schemes and extend the Real-Or-Random (ROR) model. The LCC-AKA protocol is provably secure under the extended ROR model and BAN logic. Security and performance analyses demonstrate that the LCC-AKA protocol resists both insider and outsider attacks, including public key replacement attacks, while maintaining low computational and communication overhead.
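To illustrate the lightweight "key agreement plus hashing" pattern, the sketch below uses modular Diffie–Hellman as a stand-in for the protocol's elliptic-curve point multiplications and derives the session key by hashing the shared secret with both device identities. The group parameters and identity strings are toy values; this is not a secure or faithful instantiation of LCC-AKA.

```python
# Toy key agreement: modular DH as a stand-in for elliptic-curve operations,
# with identity-bound hash-based key derivation. Demo values only.
import hashlib
import secrets

p = 2**255 - 19   # a well-known prime, used here purely as a demo modulus
g = 2

def keypair():
    x = secrets.randbelow(p - 2) + 1   # private scalar
    return x, pow(g, x, p)             # (private, public)

def session_key(own_priv, peer_pub, id_a=b"devA", id_b=b"devB"):
    shared = pow(peer_pub, own_priv, p)
    # Bind both device identities into the derived key, as hash-based AKA
    # constructions typically do.
    return hashlib.sha256(shared.to_bytes(32, "big") + id_a + id_b).hexdigest()

a_priv, a_pub = keypair()   # device A
b_priv, b_pub = keypair()   # device B
print(session_key(a_priv, b_pub) == session_key(b_priv, a_pub))  # True
```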
Citations: 0