Demand aggregation-based transmission in remote sensing satellite networks
Computer Networks, Volume 278, Article 112066. Pub Date: 2026-01-31. DOI: 10.1016/j.comnet.2026.112066
Jing Chen, Xiaoqiang Di, Yuming Jiang, Hui Qi, Jinyao Liu, Xu Yan
As remote sensing satellite networks develop, directly linking user terminals to satellites to access data is becoming a key trend. To meet growing user demands while managing limited transmission resources, this paper proposes a Demand Aggregation-based Network Utility Maximization Transmission Scheme (DANUMTS). It uses the Named Data Networking (NDN) architecture and demand aggregation based on spatio-temporal attributes of remote sensing data to prevent redundant data transmission and resource waste. The scheme also designs a demand-link matching matrix for demand selection at each hop and establishes a cooperative rate control model between terminals and networks. By applying the Lagrangian dual method, the model is divided into two subproblems to simplify the optimization process and enable real-time decision-making. Simulation results demonstrate that DANUMTS outperforms existing methods in terms of demand completion time, data rate, network throughput, and the number of completed demands, with more significant improvements when demand aggregation opportunities arise.
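As a rough illustration of the decomposition idea, the sketch below solves a generic network utility maximization problem by the Lagrangian dual method: link prices are updated by a subgradient step while each terminal picks its rate against the current prices. The topology, the logarithmic utility, and the step size are assumptions made for this example; it is not the paper's DANUMTS model or its demand-link matching matrix.

```python
import numpy as np

# Minimal sketch of network utility maximization (NUM) solved by Lagrangian
# dual decomposition. Routing matrix, capacities, and step size are assumed
# values for illustration only.

R = np.array([[1, 1, 0],       # routing matrix: link l is used by demand i
              [0, 1, 1]], dtype=float)
c = np.array([10.0, 8.0])      # link capacities (assumed)
lam = np.ones(R.shape[0])      # Lagrange multipliers (link prices)
step = 0.05

for _ in range(2000):
    # Subproblem 1 (per terminal): maximize log(x_i) - x_i * price_i in closed form.
    price = R.T @ lam
    x = 1.0 / np.maximum(price, 1e-9)
    # Subproblem 2 (per link): subgradient update of the link prices.
    lam = np.maximum(0.0, lam + step * (R @ x - c))

print("rates:", x, "link loads:", R @ x)
```

Splitting the problem into a per-terminal rate choice and a per-link price update is what makes real-time, distributed decisions possible, which is the property the abstract emphasizes.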
{"title":"Demand aggregation-based transmission in remote sensing satellite networks","authors":"Jing Chen , Xiaoqiang Di , Yuming Jiang , Hui Qi , Jinyao Liu , Xu Yan","doi":"10.1016/j.comnet.2026.112066","DOIUrl":"10.1016/j.comnet.2026.112066","url":null,"abstract":"<div><div>As remote sensing satellite networks develop, directly linking user terminals to satellites to access data is becoming a key trend. To meet growing user demands while managing limited transmission resources, this paper proposes a Demand Aggregation-based Network Utility Maximization Transmission Scheme(DANUMTS). It uses the NDN architecture and demand aggregation based on spatio-temporal attributes of remote sensing data to prevent redundant data transmission and resource waste. The scheme also designs a demand-link matching matrix for demand selection at each hop and establishes a cooperative rate control model between terminals and networks. By applying the Lagrangian dual method, the model is divided into two subproblems to simplify the optimization process and enable real-time decision-making. Simulation results demonstrate that DANUMTS outperforms existing methods in terms of demand completion time, data rate, network throughput, and the number of completed demands, with more significant improvements when demand aggregation opportunities arise.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"278 ","pages":"Article 112066"},"PeriodicalIF":4.6,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive ring-synchronized hierarchical routing for energy-efficient and congestion-aware data dissemination in mobile wireless sensor networks
Computer Networks, Volume 278, Article 112077. Pub Date: 2026-01-31. DOI: 10.1016/j.comnet.2026.112077
MingXing Lu, Zhao Wang, Hongbo Fan
As a vital part of the Internet of Things (IoT) ecosystem, Mobile Wireless Sensor Networks (MWSNs) enable intelligent monitoring in dynamic settings such as smart cities, healthcare, and disaster management. However, the mobility of sensor and sink nodes degrades Quality of Service (QoS) through unequal energy consumption, congestion near mobile sinks, and unstable routing. To address these problems, this research proposes the Bio-Inspired Dynamic Swarm Routing (BDSR) protocol, an energy-adaptive, self-organizing routing architecture designed for large-scale MWSNs. To achieve congestion-aware and energy-balanced communication, BDSR combines dynamic energy clustering, swarm-driven ring adaptation, and predictive pheromone learning. The protocol automatically adjusts cluster formation and routing weights based on local pheromone gradients, queue usage, and residual energy. Extensive NS-2 simulations show that, compared with state-of-the-art techniques such as SMEOR, AECR, MSHRP, and Hybrid GS-MBO, BDSR increases throughput by 47%, lowers end-to-end delay by 52%, and prolongs network lifetime by 38%. The results confirm the scalability and resilience of BDSR for high-mobility IoT applications that require real-time, energy-efficient, and congestion-tolerant data dissemination.
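The swarm-style routing weight described above can be illustrated with a small next-hop selection rule that combines pheromone level, residual energy, and queue occupancy. The weighting exponents, the neighbour table, and the specific formula below are assumptions for demonstration only, not BDSR's actual update rule.

```python
import random

# Illustrative ACO-style next-hop choice: higher pheromone and residual energy
# raise a neighbour's desirability, a fuller queue lowers it. All values and
# exponents are assumed for the example.

neighbours = {
    "n1": {"pheromone": 0.8, "residual_energy": 0.6, "queue_util": 0.2},
    "n2": {"pheromone": 0.5, "residual_energy": 0.9, "queue_util": 0.7},
    "n3": {"pheromone": 0.3, "residual_energy": 0.4, "queue_util": 0.1},
}
ALPHA, BETA, GAMMA = 1.0, 1.0, 1.0  # assumed relative importance of each factor

def desirability(info):
    return (info["pheromone"] ** ALPHA) \
        * (info["residual_energy"] ** BETA) \
        * ((1.0 - info["queue_util"]) ** GAMMA)

def pick_next_hop(table):
    weights = {n: desirability(i) for n, i in table.items()}
    total = sum(weights.values())
    # Probabilistic choice keeps load spread instead of always picking the best neighbour.
    return random.choices(list(weights), weights=[w / total for w in weights.values()])[0]

print(pick_next_hop(neighbours))
```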
{"title":"Adaptive ring-synchronized hierarchical routing for energy-efficient and congestion-aware data dissemination in mobile wireless sensor networks","authors":"MingXing Lu , Zhao Wang , Hongbo Fan","doi":"10.1016/j.comnet.2026.112077","DOIUrl":"10.1016/j.comnet.2026.112077","url":null,"abstract":"<div><div>A vital part of the Internet of Things (IoT) ecosystem, Mobile Wireless Sensor Networks (MWSNs) allow for intelligent monitoring in dynamic settings including smart cities, healthcare, and disaster management. Quality of Service (QoS) is harmed by the difficulties brought about by the mobility of sensor and sink nodes, such as unequal energy consumption, congestion close to mobile sinks, and unstable routing. This research suggests the Bio-Inspired Dynamic Swarm Routing (BDSR) protocol, an energy-adaptive and self-organizing routing architecture made for large-scale MWSNs, as a solution to these problems. To accomplish congestion-aware and energy-balanced communication, BDSR combines dynamic energy clustering, swarm-driven ring adaptation, and predictive pheromone learning. Based on local pheromone gradients, queue usage, and residual energy, the protocol automatically modifies cluster formation and routing weights. Compared to state-of-the-art techniques like SMEOR, AECR, MSHRP, and Hybrid GS-MBO, extensive NS-2 simulations demonstrate that BDSR increases throughput by 47%, lowers latency and end-to-end delay by 52%, and prolongs network lifetime by 38%. For high-mobility IoT applications that need real-time, energy-efficient, and congestion-tolerant data dissemination, the results validate the scalability and resilience of BDSR.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"278 ","pages":"Article 112077"},"PeriodicalIF":4.6,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing handover decisions with skipping mechanisms in 5G mmWave UDNs using reinforcement learning
Computer Networks, Volume 278, Article 112081. Pub Date: 2026-01-31. DOI: 10.1016/j.comnet.2026.112081
Abate Selamawit Chane, Harun Ur Rashid, Kamrul Hasan, Awoke Loret Abiy, Seong Ho Jeong
The rapid evolution of 5G and emerging technologies is reshaping cellular network architectures. To support the growing demands of these technologies, many network designs now incorporate Ultra-Dense Networks (UDNs), particularly for millimeter wave (mmWave) operation, where dense base station layouts are used to overcome propagation challenges and improve capacity. However, such dense deployments significantly complicate mobility management by triggering more frequent handovers, leading to increased signaling overhead and frequent service disruption, as many of these handovers are redundant or offer minimal benefit. To minimize interruptions caused by frequent handovers (HOs), effective handover decision strategies are critical. Several existing schemes have been developed for low to medium mobility scenarios and typically rely on static decision policies, which fail to account for the dynamic nature of the network. Others apply reinforcement learning techniques, yet their evaluations are often restricted to limited mobility settings and lack validation under high-speed conditions. To address these limitations, we propose a handover decision framework based on deep reinforcement learning (DRL) to intelligently suppress unnecessary handovers in mmWave UDNs. The framework leverages the Advantage Actor-Critic (A2C) algorithm, which is well-suited for learning optimal policies in dynamic network environments. A handover skipping strategy is incorporated to improve mobility robustness. Performance is evaluated using handover rate and throughput as key metrics. Experimental results demonstrate that the proposed scheme effectively learns optimal handover behavior through extensive training and outperforms several benchmark approaches from prior studies. As user speed increases, the proposed approach exhibits the most stable handover performance, with only a 28.74% increase in handover rate, whereas the baselines show increases ranging from 60.7% to 91.6%. It also demonstrates strong resilience to mobility-induced degradation, with just a 10% drop in throughput, significantly lower than the 21.3% to 57.1% drops observed in the baseline schemes. In high-speed scenarios, the integration of dynamic handover skipping further improves the algorithm’s performance, yielding an 82.1% increase in cumulative reward and a 39% improvement in throughput.
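The handover-skipping idea can be sketched with a simple geometric rule: skip a candidate cell when the predicted dwell time inside it is too short to repay the handover cost. The cell geometry, the dwell-time threshold, and the use of a fixed rule are illustrative assumptions; in the paper the A2C agent learns the skip decision from network state rather than applying a fixed formula.

```python
import math

# Dwell time is approximated as the chord length of the UE's straight-line
# path through the cell divided by its speed. Numbers are assumed for the demo.

def predicted_dwell_time(cell_radius_m, offset_m, ue_speed_mps):
    """Time the UE would spend inside a candidate cell, under a straight-path model."""
    if offset_m >= cell_radius_m or ue_speed_mps <= 0:
        return 0.0
    chord = 2.0 * math.sqrt(cell_radius_m**2 - offset_m**2)
    return chord / ue_speed_mps

def should_skip(cell_radius_m, offset_m, ue_speed_mps, min_dwell_s=5.0):
    # Skip the handover if the expected stay is shorter than the assumed threshold.
    return predicted_dwell_time(cell_radius_m, offset_m, ue_speed_mps) < min_dwell_s

# A 100 m cell crossed 80 m off-centre at 30 m/s yields only ~4 s of dwell: skip it.
print(should_skip(100.0, 80.0, 30.0))   # True
# The same cell crossed near its centre at 10 m/s is worth the handover.
print(should_skip(100.0, 10.0, 10.0))   # False
```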
{"title":"Optimizing handover decisions with skipping mechanisms in 5G mmWave UDNs using reinforcement learning","authors":"Abate Selamawit Chane, Harun Ur Rashid, Kamrul Hasan, Awoke Loret Abiy, Seong Ho Jeong","doi":"10.1016/j.comnet.2026.112081","DOIUrl":"10.1016/j.comnet.2026.112081","url":null,"abstract":"<div><div>The rapid evolution of 5G and emerging technologies is reshaping cellular network architectures. In order to support the growing demands for these technologies, many network designs now incorporate Ultra-Dense Networks (UDNs), particularly in the millimeter wave (mmWave) operations, where dense base station layouts are utilized to overcome propagation challenges and improve capacity. However, such dense deployments significantly complicate mobility management by triggering more frequent handovers, leading to increased signaling overhead and frequent service disruption, as many of these handovers are redundant or offer minimal benefit. To minimize interruptions caused by frequent handovers (HOs), effective handover decision strategies are critical. Several existing schemes have been developed for low to medium mobility scenarios and typically rely on static decision policies, which fail to account for the dynamic nature of the network. Others apply reinforcement learning techniques, yet their evaluations are often restricted to limited mobility settings and lack validation under high-speed conditions. To address these limitations, we propose a handover decision framework based on deep reinforcement learning (DRL) to intelligently suppress unnecessary handovers in mmWave UDNs. The framework leverages the Advantage Actor-Critic (A2C) algorithm, which is well-suited for learning optimal policies in dynamic network environments. A handover skipping strategy is incorporated to improve mobility robustness. Performance is evaluated using handover rate and throughput as key metrics. Experimental results demonstrate that the proposed scheme effectively learns optimal handover behavior through extensive training and outperforms several benchmark approaches from prior studies. As user speed increases, the proposed approach exhibits the most stable handover performance, with only a 28.74% increase in handover rate and outperforms the baselines, which show increases ranging from 60.7% to 91.6%. It also demonstrates strong resilience to mobility-induced degradation, with just a 10% drop in throughput, significantly lower than the 21.3% to 57.1% drops observed in the baseline schemes. In high-speed scenarios, the integration of dynamic handover skipping further improves the algorithm’s performance, yielding an 82.1% increase in cumulative reward and a 39% improvement in throughput.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"278 ","pages":"Article 112081"},"PeriodicalIF":4.6,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-functional certification of edge-computing satellite systems
Computer Networks, Volume 278, Article 112036. Pub Date: 2026-01-31. DOI: 10.1016/j.comnet.2026.112036
Filippo Berto, Marco Anisetti, Qiyang Zhang, Shangguang Wang, Claudio A. Ardagna
Satellite telecommunication networks are playing an increasingly pivotal role in modern communication infrastructures, owing to their expansive coverage, high reliability, and growing capabilities in computing, storage, and bandwidth. In response to evolving market demands, mobile network operators are progressively integrating satellite systems with edge-cloud computing platforms to deliver advanced networking functionalities within a unified architecture. This integration places strong demands on the non-functional assessment (e.g., reliability, availability, and resource efficiency) of satellite-based edge nodes, introducing unprecedented challenges due to their unique operational constraints. In this paper, we propose a lightweight certification framework tailored for satellite computing systems, designed to assess and validate the non-functional posture of satellite edge networks. Our approach explicitly addresses the distinctive characteristics of satellite environments, including intermittent connectivity and constrained resource availability. We validate the proposed scheme through a realistic testbed implementation, modeling a 5G-enabled satellite edge node based on the Tiansuan satellite constellation, an experimental platform jointly developed by Beijing University of Posts and Telecommunications, Spacety, and Peking University.
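A certification check of this kind can be sketched as probe evidence compared against declared non-functional targets. The property names, target values, and evidence format below are assumptions for illustration; the paper defines its own certification scheme and evidence model for satellite edge nodes.

```python
# Illustrative check: probe evidence for a satellite edge node is evaluated
# against assumed availability and response-time targets, and a pass/fail
# verdict is produced. This is a toy model, not the paper's framework.

evidence = [
    {"reachable": True,  "rtt_ms": 48.0},
    {"reachable": True,  "rtt_ms": 95.0},
    {"reachable": False, "rtt_ms": None},   # e.g. a missed contact window
    {"reachable": True,  "rtt_ms": 61.0},
]
targets = {"availability": 0.70, "p95_rtt_ms": 120.0}

def evaluate(evidence, targets):
    ok = [e for e in evidence if e["reachable"]]
    availability = len(ok) / len(evidence)
    rtts = sorted(e["rtt_ms"] for e in ok)
    p95 = rtts[min(len(rtts) - 1, int(0.95 * len(rtts)))]
    certified = availability >= targets["availability"] and p95 <= targets["p95_rtt_ms"]
    return {"availability": availability, "p95_rtt_ms": p95, "certified": certified}

print(evaluate(evidence, targets))
```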
{"title":"Non-functional certification of edge-computing satellite systems","authors":"Filippo Berto , Marco Anisetti , Qiyang Zhang , Shangguang Wang , Claudio A. Ardagna","doi":"10.1016/j.comnet.2026.112036","DOIUrl":"10.1016/j.comnet.2026.112036","url":null,"abstract":"<div><div>Satellite telecommunication networks are playing an increasingly pivotal role in modern communication infrastructures, owing to their expansive coverage, high reliability, and growing capabilities in computing, storage, and bandwidth. In response to evolving market demands, mobile network operators are progressively integrating satellite systems with edge-cloud computing platforms to deliver advanced networking functionalities within a unified architecture. This integration places strong demands on the non-functional assessment (e.g., reliability, availability, and resource efficiency) of satellite-based edge nodes, introducing unprecedented challenges due to their unique operational constraints. In this paper, we propose a lightweight certification framework tailored for satellite computing systems, designed to assess and validate the non-functional posture of satellite edge networks. Our approach explicitly addresses the distinctive characteristics of satellite environments, including intermittent connectivity and constrained resource availability. We validate the proposed scheme through a realistic testbed implementation, modeling a 5G-enabled satellite edge node based on the Tiansuan satellite constellation, an experimental platform jointly developed by Beijing University of Posts and Telecommunications, Spacety, and Peking University.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"278 ","pages":"Article 112036"},"PeriodicalIF":4.6,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph-based fast-flux domain detection using graph neural networks
Computer Networks, Volume 278, Article 112075. Pub Date: 2026-01-30. DOI: 10.1016/j.comnet.2026.112075
Wei Xiong, Yang Wang, Haiyang Jiang, Hongtao Guan
Fast-flux domains are frequently exploited by cybercriminals to perform various attacks, making their detection crucial for maintaining network security. Traditional detection methods rely on manually defined statistical indicators to characterize the spatial distribution of a domain’s associated hosts, including the resolved hosts and authoritative name servers. However, given the increasingly decentralized nature of internet services, these statistical indicators may fail to capture this feature completely, resulting in inaccurate detection. To address this limitation, our proposed method leverages a graph structure to not only provide a more comprehensive representation of the existing feature but also incorporate a supplementary feature that considers the spatial distribution between a domain’s client and the resolved hosts assigned to it. At the same time, we customize a graph sampling method to avoid a significant increase in detection time caused by excessive graph size. To determine whether the constructed graph represents a fast-flux or benign domain, twelve types of Graph Neural Network (GNN) models, formed by pairwise combinations of three graph convolution methods and four graph pooling methods, are examined. Evaluation datasets are constructed from both public sources and real-world data, demonstrating that the GAT-SAG model performs optimally among the twelve GNN models and significantly outperforms state-of-the-art statistics-based models in terms of accuracy, with only a tolerable increase in time consumption.
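The GAT-SAG combination named above can be sketched with PyTorch Geometric: graph attention layers interleaved with self-attention graph pooling, followed by a mean readout and a two-class head. Feature dimensions, layer sizes, and the toy graph are assumptions for illustration and do not reflect the paper's configuration or its DNS-derived node features.

```python
import torch
from torch import nn
from torch_geometric.nn import GATConv, SAGPooling, global_mean_pool

class GATSAGClassifier(nn.Module):
    """Toy GAT + SAG-pooling classifier for a per-domain host graph (assumed sizes)."""
    def __init__(self, in_dim=8, hidden=32):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=2, concat=False)
        self.pool1 = SAGPooling(hidden, ratio=0.5)
        self.conv2 = GATConv(hidden, hidden, heads=2, concat=False)
        self.head = nn.Linear(hidden, 2)   # fast-flux vs. benign

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = torch.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))

# Toy graph: 4 hosts/name servers with 8 features each and a few edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
batch = torch.zeros(4, dtype=torch.long)
logits = GATSAGClassifier()(x, edge_index, batch)
print(logits.shape)   # torch.Size([1, 2])
```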
{"title":"Graph-based fast-flux domain detection using graph neural networks","authors":"Wei Xiong , Yang Wang , Haiyang Jiang , Hongtao Guan","doi":"10.1016/j.comnet.2026.112075","DOIUrl":"10.1016/j.comnet.2026.112075","url":null,"abstract":"<div><div>Fast-flux domains are frequently exploited by cybercriminals to perform various attacks, making their detection crucial for maintaining network security. Traditional detection methods rely on manually defined statistical indicators to characterize the spatial distribution of a domain’s associated hosts, including the resolved hosts and authoritative name servers. However, given the increasingly decentralized nature of internet services, these statistical indicators may fail to capture the feature completely, resulting in inaccurate detection. To address this limitation, our proposed method leverages a graph structure to not only provide a more comprehensive representation of the existing feature but also incorporate a supplementary feature considering the spatial distribution between a domain’s client and the resolved hosts assigned to it. At the same time, we customize a graph sampling method to avoid significant increase in detection time caused by excessive graph size. To determine whether the constructed graph represents a fast-flux or benign domain, twelve types of Graph Neural Network (GNN) models, formed by pairwise combinations of three graph convolution methods and four graph pooling methods, are examined. Evaluation datasets are constructed from both public sources and real-world data, demonstrating that the GAT-SAG model performs optimally among the twelve GNN models and significantly outperforms state-of-the-art statistics-based models in terms of accuracy, with only a tolerable increase in time consumption.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"278 ","pages":"Article 112075"},"PeriodicalIF":4.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation
Computer Networks, Volume 277, Article 112060. Pub Date: 2026-01-28. DOI: 10.1016/j.comnet.2026.112060
Zhipeng Qin, Hanbing Yan, Xiangyu Li, Peng Wang
With advancements in encrypted network communication technologies, botnets increasingly use encrypted DNS traffic to spread covertly and execute attacks. Botnet traffic exhibits diverse and complex behaviors, and detecting botnets within encrypted DNS traffic poses challenges, such as high concealment, low detection efficiency, and difficulties in feature matching. To address these issues, this paper proposes a botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation. This method utilizes an adaptive feature extraction algorithm to capture encrypted DNS traffic features, applies spatial clustering based on traffic characteristics for multi-classification of botnets, and adopts a multi-level knowledge distillation strategy to develop several specialized botnet detection models. These models operate in parallel, enhancing detection efficiency and accuracy. Experimental results demonstrate that this approach significantly reduces computational complexity while maintaining high precision, improving detection efficiency and real-time capabilities.
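A minimal sketch of the distillation step is shown below: a student branch is trained against a softened copy of the teacher's outputs plus the ordinary hard-label loss. The temperature, loss weighting, and the idea of one student per traffic cluster are assumptions for demonstration; the paper's multi-level strategy is more elaborate.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch: 16 flows classified as botnet vs. benign (random values for the demo).
student_logits = torch.randn(16, 2)
teacher_logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
print(distillation_loss(student_logits, teacher_logits, labels))
```

Running several such distilled students in parallel, one per traffic cluster, is what lets the scheme trade a large teacher's accuracy for low per-branch inference cost.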
{"title":"A botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation","authors":"Zhipeng Qin , Hanbing Yan , Xiangyu Li , Peng Wang","doi":"10.1016/j.comnet.2026.112060","DOIUrl":"10.1016/j.comnet.2026.112060","url":null,"abstract":"<div><div>With advancements in encrypted network communication technologies, botnets increasingly use encrypted DNS traffic to spread covertly and execute attacks. Botnet traffic exhibits diverse and complex behaviors, and detecting botnets within encrypted DNS traffic poses challenges, such as high concealment, low detection efficiency, and difficulties in feature matching. To address these issues, this paper proposes a botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation. This method utilizes an adaptive feature extraction algorithm to capture encrypted DNS traffic features, applies spatial clustering based on traffic characteristics for multi-classification of botnets, and adopts a multi-level knowledge distillation strategy to develop several specialized botnet detection models. These models operate in parallel, enhancing detection efficiency and accuracy. Experimental results demonstrate that this approach significantly reduces computational complexity while maintaining high precision, improving detection efficiency and real-time capabilities.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 112060"},"PeriodicalIF":4.6,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An explainable transformer-based model for phishing email detection: A large language model approach
Computer Networks, Volume 277, Article 112061. Pub Date: 2026-01-28. DOI: 10.1016/j.comnet.2026.112061
Mohammad Amaz Uddin, Md Mahiuddin, Iqbal H. Sarker
Phishing is a serious cyber threat in which attackers send deceptive emails intended to steal confidential information or cause financial harm. Attackers, often posing as trustworthy entities, exploit technological advancements and sophistication to make the detection and prevention of phishing more challenging. Despite extensive academic research, phishing detection remains an ongoing and formidable challenge in the cybersecurity landscape. In this research paper, we present a fine-tuned transformer-based masked language model, RoBERTa (Robustly Optimized BERT Pretraining Approach), for phishing email detection. In the detection process, we employ a phishing email dataset and apply preprocessing techniques to clean the data and address class imbalance, thereby enhancing model performance. The results of the experiment demonstrate that our fine-tuned model outperforms traditional machine learning models with an accuracy of 98.45%. To ensure model transparency and user trust, we propose a hybrid explanation approach, LITA (LIME-Transformer Attribution), which integrates the potential of Local Interpretable Model-Agnostic Explanations (LIME) and Transformers Interpret methods. The proposed method provides more consistent and user-friendly insights, mitigating local attribution inconsistencies between the two explanation approaches. Moreover, the study highlights the model’s ability to explain its predictions by presenting positive and negative contribution scores using LIME, Transformers Interpret, and LITA.
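The two ingredients named in the abstract, a RoBERTa sequence classifier and a LIME explanation of a single prediction, can be sketched as follows. The snippet loads the public roberta-base checkpoint as a placeholder (in the paper the model is fine-tuned on a phishing corpus first); the example email and LIME sampling budget are assumptions for illustration, and the hybrid LITA attribution itself is not reproduced here.

```python
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification
from lime.lime_text import LimeTextExplainer

# Placeholder model: the base checkpoint with a fresh 2-class head; a real
# detector would be fine-tuned on labelled phishing/legitimate emails first.
tok = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def predict_proba(texts):
    # LIME expects a function mapping a list of texts to class probabilities.
    enc = tok(list(texts), padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["legitimate", "phishing"])
email = "Your account is locked. Verify your password at http://example.com now."
exp = explainer.explain_instance(email, predict_proba, num_features=6, num_samples=500)
print(exp.as_list())   # per-token positive/negative contribution scores
```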
{"title":"An explainable transformer-based model for phishing email detection: A large language model approach","authors":"Mohammad Amaz Uddin , Md Mahiuddin , Iqbal H. Sarker","doi":"10.1016/j.comnet.2026.112061","DOIUrl":"10.1016/j.comnet.2026.112061","url":null,"abstract":"<div><div>Phishing email is a serious cyber threat that tries to deceive users by sending false emails with the intention of stealing confidential information or causing financial harm. Attackers, often posing as trustworthy entities, exploit technological advancements and sophistication to make the detection and prevention of phishing more challenging. Despite extensive academic research, phishing detection remains an ongoing and formidable challenge in the cybersecurity landscape. In this research paper, we present a fine-tuned transformer-based masked language model, RoBERTa (Robustly Optimized BERT Pretraining Approach), for phishing email detection. In the detection process, we employ a phishing email dataset and apply the preprocessing techniques to clean and address the class imbalance issues, thereby enhancing model performance. The results of the experiment demonstrate that our fine-tuned model outperforms traditional machine learning models with an accuracy of 98.45%. To ensure model transparency and user trust, we propose a hybrid explanation approach, LITA (LIME-Transformer Attribution), which integrates the potential of Local Interpretable Model-Agnostic Explanations (LIME) and Transformers Interpret methods. The proposed method provides more consistent and user-friendly insights, mitigating local attribution inconsistencies between the two explanation approaches. Moreover, the study highlights the model’s ability to generate its predictions by presenting positive and negative contribution scores using LIME, Transformers Interpret, and LITA.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 112061"},"PeriodicalIF":4.6,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Threshold-based eavesdropper detection for partial intercept-resend attack in noisy BB84 quantum key distribution
Computer Networks, Volume 277, Article 112058. Pub Date: 2026-01-26. DOI: 10.1016/j.comnet.2026.112058
Francesco Fiorini, Rosario G. Garroppo, Michele Pagano
Quantum Key Distribution (QKD) protocols are critical for ensuring secure communication against the threats posed by post-quantum technologies. Among these, the BB84 protocol remains the most widely studied and implemented QKD scheme, providing a foundation for secure communication based on the principles of quantum mechanics. This paper investigates the BB84 protocol under a partial intercept-resend attack in a realistic scenario that accounts for system noise. In this context, existing attack detection methods rely on estimating the quantum bit error rate (QBER) in the portion of key bits exchanged over the classical channel to identify the attack. The proposed approach introduces a novel scheme in which the two communicating parties agree on the maximum fraction of shared key bits that can be correctly intercepted by the attacker. This parameter can be configured according to the security requirements of the application. The paper first presents the theoretical model for computing this parameter, which is subsequently used to develop a threshold-based detection method. Unlike other detection methods for intercept-resend attacks, the proposed scheme is independent of the interception density and relies solely on the system noise and the application’s security requirements. Finally, an enhanced version of the Python Quantum Solver library is implemented to test the proposed method using the Qiskit framework. Simulation results demonstrate the high accuracy and very low false negative rate of the proposed method, with a slight degradation in performance observed when the actual interception rate approaches the threshold defined by the security requirements.
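The detection idea can be sketched numerically: intercept-resend introduces roughly a 25% error rate on the intercepted fraction of qubits, so a QBER threshold can be set from the channel noise plus the largest attacker exposure the application tolerates. The noise level, tolerated fraction, sample size, and the simple additive error model below are illustrative assumptions; the paper derives its threshold from a more careful model of how many key bits the attacker learns correctly.

```python
import numpy as np

rng = np.random.default_rng(7)
noise = 0.03             # intrinsic channel error probability (assumed)
max_tolerated_rho = 0.2  # largest intercepted fraction the application accepts (assumed)
n_check = 2000           # sifted bits sacrificed for error estimation

# Threshold: QBER expected when the attacker intercepts exactly the tolerated fraction.
threshold = noise + 0.25 * max_tolerated_rho

def observed_qber(true_rho):
    # Simple additive model: channel noise plus 25% error on the intercepted fraction.
    p_err = noise + 0.25 * true_rho
    return rng.binomial(n_check, p_err) / n_check

for rho in (0.0, 0.1, 0.4):
    q = observed_qber(rho)
    print(f"rho={rho:.1f}  QBER={q:.3f}  attack detected: {q > threshold}")
```

With these assumed numbers, no attack and a light attack stay below the threshold, while heavy interception pushes the estimated QBER clearly above it, mirroring the trade-off between security requirements and false negatives discussed in the abstract.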
{"title":"Threshold-based eavesdropper detection for partial intercept-resend attack in noisy BB84 quantum key distribution","authors":"Francesco Fiorini, Rosario G. Garroppo, Michele Pagano","doi":"10.1016/j.comnet.2026.112058","DOIUrl":"10.1016/j.comnet.2026.112058","url":null,"abstract":"<div><div>Quantum Key Distribution (QKD) protocols are critical for ensuring secure communication against the threats posed by post-quantum technologies. Among these, the BB84 protocol remains the most widely studied and implemented QKD scheme, providing a foundation for secure communication based on the principles of quantum mechanics. This paper investigates the BB84 protocol under a partial intercept-resend attack in a realistic scenario that accounts for system noise. In this context, existing attack detection methods rely on estimating the quantum bit error rate (QBER) in the portion of key bits exchanged over the classical channel to identify the attack. The proposed approach introduces a novel scheme in which the two communicating parties agree on the maximum fraction of shared key bits that can be correctly intercepted by the attacker. This parameter can be configured according to the security requirements of the application. The paper first presents the theoretical model for computing this parameter, which is subsequently used to develop a threshold-based detection method. Unlike other detection methods for intercept-resend attacks, the proposed scheme is independent of the interception density and relies solely on the system noise and the application’s security requirements. Finally, an enhanced version of the Python Quantum Solver library is implemented to test the proposed method using the Qiskit framework. Simulation results demonstrate the high accuracy and very low false negative rate of the proposed method, with a slight degradation in performance observed when the actual interception rate approaches the threshold defined by the security requirements.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 112058"},"PeriodicalIF":4.6,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive approach for the onboarding, orchestration, and validation of network applications
Computer Networks, Volume 277, Article 112057. Pub Date: 2026-01-25. DOI: 10.1016/j.comnet.2026.112057
Rafael Direito, Kostis Trantzas, Jorge Gallego-Madrid, Ana Hermosilla, Diogo Gomes, Christos Tranoris, Rui L.A. Aguiar, Antonio Skarmeta, Spyros Denazis
The advent of 5G and Beyond 5G networks has propelled the development of innovative applications and services that harness network programmability, data from management and control interfaces, and the capabilities of network slicing. However, ensuring these applications function as intended and effectively utilize 5G/B5G capabilities remains a challenge, mainly due to their reliance on complex interactions with control plane Network Functions. This work addresses this issue by proposing a novel architecture to enhance the onboarding, orchestration, and validation of 5G/B5G-capable applications and services, while enabling the creation of application-tailored network slices. By integrating DevOps principles into the NFV ecosystem, the proposed architecture automates workflows for deployment, testing, and validation, while adhering to standardized onboarding models and continuous integration practices. Furthermore, we address the realization of this architecture as a platform that supports extensive testing across multiple dimensions, including 5G readiness, security, performance, scalability, and availability. Besides introducing the platform, this work also demonstrates its feasibility through the orchestration and validation of an automotive application that manages virtual On-Board Units within a 5G-enabled environment. The obtained results underscore the effectiveness of the proposed architecture, as well as the performance and scalability of the platform that materializes it. By integrating DevOps principles, our work helps reduce deployment complexity, automate testing and validation, and enhance the reliability of next-generation Network Applications, thereby accelerating their time-to-market.
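The CI-style validation workflow can be sketched as a pipeline of named phases whose verdicts are aggregated into an onboarding decision. The phase names and trivial checks below are placeholders invented for the example; the platform described in the paper drives real deployments and tests across its 5G readiness, security, performance, scalability, and availability dimensions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Phase:
    """One validation phase of the (assumed) onboarding pipeline."""
    name: str
    check: Callable[[], bool]

def run_pipeline(phases: List[Phase]) -> bool:
    verdicts = {}
    for phase in phases:
        verdicts[phase.name] = phase.check()
        print(f"{phase.name}: {'PASS' if verdicts[phase.name] else 'FAIL'}")
    return all(verdicts.values())

# Placeholder phases; real checks would call the orchestrator and test harness.
pipeline = [
    Phase("onboarding-descriptor-valid", lambda: True),
    Phase("deployment-healthy", lambda: True),
    Phase("5g-readiness-exposure-api", lambda: True),
    Phase("performance-under-load", lambda: True),
]
print("application validated:", run_pipeline(pipeline))
```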
{"title":"A comprehensive approach for the onboarding, orchestration, and validation of network applications","authors":"Rafael Direito , Kostis Trantzas , Jorge Gallego-Madrid , Ana Hermosilla , Diogo Gomes , Christos Tranoris , Rui L.A. Aguiar , Antonio Skarmeta , Spyros Denazis","doi":"10.1016/j.comnet.2026.112057","DOIUrl":"10.1016/j.comnet.2026.112057","url":null,"abstract":"<div><div>The advent of 5G and Beyond 5G networks has propelled the development of innovative applications and services that harness network programmability, data from management and control interfaces, and the capabilities of network slicing. However, ensuring these applications function as intended and effectively utilize 5G/B5G capabilities remains a challenge, mainly due to their reliance on complex interactions with control plane Network Functions. This work addresses this issue by proposing a novel architecture to enhance the onboarding, orchestration, and validation of 5G/B5G-capable applications and services, while enabling the creation of application-tailored network slices. By integrating DevOps principles into the NFV ecosystem, the proposed architecture automates workflows for deployment, testing, and validation, while adhering to standardized onboarding models and continuous integration practices. Furthermore, we also address the realization of such architecture into a platform that supports extensive testing across multiple dimensions, including 5G readiness, security, performance, scalability, and availability. Besides introducing such a platform, this work also demonstrates its feasibility through the orchestration and validation of an automotive application that manages virtual On-Board Units within a 5G-enabled environment. The obtained results underscore the effectiveness of the proposed architecture, as well as the performance and scalability of the platform that materializes it. By integrating DevOps principles, our work aids in reducing deployment complexity, automating testing and validation, and enhancing the reliability of next-generation Network Applications, therefore accelerating their time-to-market.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 112057"},"PeriodicalIF":4.6,"publicationDate":"2026-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey of learning-based intrusion detection systems for in-vehicle networks
Computer Networks, Volume 277, Article 112031. Pub Date: 2026-01-23. DOI: 10.1016/j.comnet.2026.112031
Muzun Althunayyan, Amir Javed, Omer Rana
Connected and Autonomous Vehicles (CAVs) have advanced modern transportation by improving the efficiency, safety, and convenience of mobility through automation and connectivity, yet they remain vulnerable to cybersecurity threats, particularly through the insecure Controller Area Network (CAN) bus. Cyberattacks can have devastating consequences in connected vehicles, including the loss of control over critical systems, necessitating robust security solutions. In-vehicle Intrusion Detection Systems (IDSs) offer a promising approach by detecting malicious activities in real time. This survey provides a comprehensive review of state-of-the-art research on learning-based in-vehicle IDSs, focusing on Machine Learning (ML), Deep Learning (DL), and Federated Learning (FL) approaches. Based on the reviewed studies, we critically examine existing IDS approaches, categorising them by the types of attacks they detect (known, unknown, and combined known-unknown attacks) while identifying their limitations. We also review the evaluation metrics used in research, emphasising the need to consider multiple criteria to meet the requirements of safety-critical systems. Additionally, we analyse FL-based IDSs and highlight their limitations. By doing so, this survey helps identify effective security measures, address existing limitations, and guide future research toward more resilient and adaptive protection mechanisms, ensuring the safety and reliability of CAVs.
{"title":"A survey of learning-based intrusion detection systems for in-vehicle networks","authors":"Muzun Althunayyan , Amir Javed , Omer Rana","doi":"10.1016/j.comnet.2026.112031","DOIUrl":"10.1016/j.comnet.2026.112031","url":null,"abstract":"<div><div>Connected and Autonomous Vehicles (CAVs) have advanced modern transportation by improving the efficiency, safety, and convenience of mobility through automation and connectivity, yet they remain vulnerable to cybersecurity threats, particularly through the insecure Controller Area Network (CAN) bus. Cyberattacks can have devastating consequences in connected vehicles, including the loss of control over critical systems, necessitating robust security solutions. In-vehicle Intrusion Detection Systems (IDSs) offer a promising approach by detecting malicious activities in real time. This survey provides a comprehensive review of state-of-the-art research on learning-based in-vehicle IDSs, focusing on Machine Learning (ML), Deep Learning (DL), and Federated Learning (FL) approaches. Based on the reviewed studies, we critically examine existing IDS approaches, categorising them by the types of attacks they detect-known, unknown, and combined known-unknown attacks-while identifying their limitations. We also review the evaluation metrics used in research, emphasising the need to consider multiple criteria to meet the requirements of safety-critical systems. Additionally, we analyse FL-based IDSs and highlight their limitations. By doing so, this survey helps identify effective security measures, address existing limitations, and guide future research toward more resilient and adaptive protection mechanisms, ensuring the safety and reliability of CAVs.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"277 ","pages":"Article 112031"},"PeriodicalIF":4.6,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}