Pub Date: 2026-01-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104389
Muddasar Laghari , Yuanchang Zhong , Muhammad Junaid Tahir , Muhammad Adil
In response to cyber attacks targeting the Internet of Vehicles (IoV) ecosystem, we propose SIoV-DS, a secure framework addressing inter-vehicle communication, intra-vehicle networks, and infrastructure threats using a zero-trust approach. Vehicle data is first encoded with a Variational Autoencoder (V-AE) to mitigate inference attacks, then analyzed by an Extended Long Short-Term Memory (EX-LSTM) detector capable of identifying diverse attacks, including Denial of Service (DoS), spoofing, and malware. For interpretability, Shapley Additive Explanations (SHAP) provide insights into EX-LSTM decisions, assisting Security Operations Center (SOC) analysts. SIoV-DS is deployed over a Software-Defined Networking (SDN) architecture to ensure scalability. Evaluations on CIC-IoV2024 and Edge-IIoTset2022 datasets demonstrate high accuracy (99.78% and 95.01%, respectively), while inference-time analysis confirms feasibility for real-time detection, effectively securing the IoV ecosystem against advanced cyber threats.
Title: "SIoV-IDS: SDN-enabled zero-trust framework for explainable intrusion detection in IoVs using Variational Autoencoders and EX-LSTM" (Journal of Network and Computer Applications, vol. 245, Article 104389)
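The SHAP step attributes the EX-LSTM's score for a flow to its individual input features. As a toy illustration of the underlying Shapley computation (the feature names and scoring function below are invented for the example, not taken from the paper):

```python
from itertools import permutations


def shapley_values(features, v):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings (feasible only for small feature sets)."""
    phi = {f: 0.0 for f in features}
    count = 0
    for order in permutations(features):
        coalition = set()
        for f in order:
            before = v(frozenset(coalition))
            coalition.add(f)
            phi[f] += v(frozenset(coalition)) - before
        count += 1
    return {f: s / count for f, s in phi.items()}


# Hypothetical additive score with one interaction term, standing in
# for a detector's output on a single flow.
WEIGHTS = {"pkt_rate": 0.5, "payload_entropy": 0.3, "src_novelty": 0.2}


def score(coalition):
    s = sum(WEIGHTS[f] for f in coalition)
    if {"pkt_rate", "src_novelty"} <= coalition:
        s += 0.1  # interaction: high packet rate from a novel source
    return s


phi = shapley_values(list(WEIGHTS), score)
# Efficiency property: attributions sum to the full-model score.
assert abs(sum(phi.values()) - score(frozenset(WEIGHTS))) < 1e-9
```

The interaction bonus is split evenly between the two participating features, so `pkt_rate` ends up with 0.5 + 0.05; this additivity is what makes SHAP summaries digestible for SOC analysts.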
Pub Date: 2026-01-01 | Epub Date: 2025-10-30 | DOI: 10.1016/j.jnca.2025.104369
Xinyu Fan , Shiyuan Xu , Yibo Cao , Xue Chen , Yu Chen , Tianrun Xu
The rapid development of intelligent transportation systems (ITS) has raised higher requirements for traffic data sharing and collaboration. As an effective solution, the vehicular ad-hoc network (VANET) has emerged to support real-time data transfer between vehicles and infrastructure. However, VANET faces challenges in data security and privacy. To alleviate these, many conditional privacy-preserving authentication (CPPA) schemes have been proposed. CPPA utilizes signature technology to ensure message authenticity while enabling effective tracing of malicious vehicles. Unfortunately, traditional CPPA schemes fail to consider the security of secret keys stored in tamper-proof devices (TPDs). Additionally, most existing schemes still suffer from excessive computational and communication overhead. In this paper, we propose CPPA-SKU, an efficient CPPA scheme with message recovery for VANET. CPPA-SKU introduces a secret key update method using a secure pseudo-random function and Shamir's secret sharing to prevent key leakage in TPDs. Additionally, CPPA-SKU enables the recovery of relevant messages, eliminating the need to embed messages in signatures and thereby reducing communication overhead. Furthermore, CPPA-SKU is implemented on the elliptic curve cryptosystem, which avoids expensive bilinear pairing operations while ensuring signature security. We also formally prove the security of CPPA-SKU in the random oracle model. Comprehensive performance evaluations indicate that CPPA-SKU reduces computational overhead by approximately 1.3×–2.8× and communication overhead by approximately 1.5×–3.5×.
Title: "CPPA-SKU: Towards efficient conditional privacy-preserving authentication protocol with secret key update in VANET" (Journal of Network and Computer Applications, vol. 245, Article 104369)
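The key-update construction builds on Shamir's secret sharing. A minimal sketch of (k, n) sharing over a prime field (the field prime and parameters below are illustrative, not the scheme's actual curve-order arithmetic):

```python
import random

P = 2**127 - 1  # Mersenne prime field; a stand-in for the curve order


def split(secret, k, n):
    """Shamir (k, n) sharing: pick a random degree-(k-1) polynomial with
    f(0) = secret; the shares are the points (x, f(x)) for x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]

    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret


key = 0x1234ABCD
shares = split(key, k=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 shares suffice
assert reconstruct(shares[2:5]) == key
```

Fewer than k shares reveal nothing about the key, which is what lets the scheme refresh TPD-held keys without any single party holding the whole secret.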
Pub Date: 2026-01-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104397
Xiujun Wang, Wenlong Dong, Wenjie Hu, Juyan Li
With the rapid development of Internet of Things (IoT) technology, smart home systems have significantly enhanced the convenience and automation level of users’ daily lives. However, as sensitive data is transmitted between smart devices over open channels, the security risks associated with data transmission have become increasingly prominent. Authentication and key exchange (AKE) protocols are designed to facilitate identity authentication and confidential communication between smart devices. However, existing AKE protocols often suffer from low efficiency and poor scalability. These limitations make them unsuitable for resource-constrained IoT devices and unable to provide secure mutual authentication. To tackle these challenges, this study introduces a blockchain-assisted lightweight authentication scheme for smart homes. The proposed scheme integrates biometric authentication and device credentials to achieve multi-factor authentication. Meanwhile, blockchain technology is employed to record and protect interactions between users and smart devices, thereby enhancing the security, transparency, and auditability of the communication process. Formal security analysis under the Random Oracle Model (ROM) confirms the scheme’s key confidentiality. Furthermore, informal analysis demonstrates its robustness against common threats, including replay, man-in-the-middle, impersonation, and device capture attacks. Benchmarks against existing protocols demonstrate that our design incurs the least computational, communication, and energy overhead. It achieves this efficiency while preserving robust security and scalability, making it ideal for resource-limited smart-home devices.
Title: "A blockchain-assisted lightweight authentication scheme for smart home environments" (Journal of Network and Computer Applications, vol. 245, Article 104397)
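The abstract combines biometric and device-credential factors with mutual authentication. A minimal stdlib sketch of the general pattern, assuming an HMAC-based challenge-response (the key-derivation mix and identifiers are hypothetical, not the paper's protocol):

```python
import hashlib
import hmac
import secrets


def derive_device_key(master, device_id, biometric_digest):
    """Hypothetical multi-factor binding: the session-auth key mixes a
    registered device credential with a biometric template digest."""
    return hashlib.sha256(master + device_id + biometric_digest).digest()


def respond(key, challenge):
    """Prove knowledge of the key without transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


# Registration: both sides end up holding the same derived key.
master = secrets.token_bytes(32)
bio = hashlib.sha256(b"fingerprint-template").digest()
k_dev = derive_device_key(master, b"lamp-01", bio)
k_hub = derive_device_key(master, b"lamp-01", bio)

# Mutual challenge-response: fresh random nonces defeat replay, and each
# side verifies the other's HMAC in constant time.
c_hub, c_dev = secrets.token_bytes(16), secrets.token_bytes(16)
assert hmac.compare_digest(respond(k_dev, c_hub), respond(k_hub, c_hub))
assert hmac.compare_digest(respond(k_hub, c_dev), respond(k_dev, c_dev))
```

Symmetric HMAC keeps per-message cost low on constrained devices; in the paper's design the blockchain additionally logs these interactions for auditability.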
Pub Date: 2026-01-01 | Epub Date: 2025-11-19 | DOI: 10.1016/j.jnca.2025.104390
Xiaole Li , Yinghui Jiang , Xing Wang , Jiuru Wang , Lei Gao , Shanwen Yi
After a disaster occurs, rapid data evacuation among cloud data centers is of great importance. Data evacuation optimization is a two-stage process comprising destination selection and flow scheduling. The two stages are interdependent, and evacuation efficiency is affected simultaneously by evacuation distance, bandwidth allocation ratio, and total evacuation flow. The mutual constraints among these factors make it difficult to find or approximate the optimal solution via single-objective optimization. This paper proposes a new two-stage data evacuation strategy using multi-objective reinforcement learning, with evacuation flow optimization as the central objective across both stages. In the first stage, it simultaneously minimizes total path length and maximizes total available bandwidth to determine the source-destination pair for every evacuation transfer. In the second stage, it simultaneously allocates proportional bandwidth and maximizes total evacuation flow to find a path and allocate bandwidth for every transfer. The reward function is defined by classifying candidate sets, so the search targets the optimal solution while ensuring that feasible solutions are obtained. A Chebyshev scalarization function is used to evaluate action rewards and optimize the action-selection process. Performance comparisons against state-of-the-art algorithms are conducted under different data volumes and network scales. Simulation results demonstrate that the new strategy outperforms the other algorithms, with higher evacuation efficiency, good convergence, and robustness.
Title: "Data evacuation optimization using multi-objective reinforcement learning" (Journal of Network and Computer Applications, vol. 245, Article 104390)
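Chebyshev scalarization turns a vector of objective rewards into a single score by penalizing the worst weighted gap to an ideal point. A minimal sketch with invented candidate paths and weights (not the paper's actual state or reward design):

```python
def chebyshev_scalarize(rewards, weights, ideal):
    """Weighted Chebyshev scalarization: the score of a reward vector is
    the largest weighted distance to the ideal point (lower is better)."""
    return max(w * (z - r) for r, w, z in zip(rewards, weights, ideal))


# Toy second-stage choice: candidate paths scored on two objectives,
# (allocated bandwidth, total evacuation flow), both normalized to [0, 1]
# and both to be maximized.
candidates = {
    "path-A": (0.9, 0.4),   # great bandwidth, poor flow
    "path-B": (0.6, 0.7),   # balanced
    "path-C": (0.5, 0.5),
}
ideal = (1.0, 1.0)
weights = (0.5, 0.5)

best = min(candidates,
           key=lambda a: chebyshev_scalarize(candidates[a], weights, ideal))
# path-B wins: unlike a weighted sum, Chebyshev scoring refuses to let
# strength in one objective paper over weakness in the other.
```

This balancing behavior is the usual reason to prefer Chebyshev scalarization over linear weighting when objectives constrain each other, as distance, bandwidth, and flow do here.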
Pub Date: 2026-01-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104393
José Angel Sánchez Martín , Victor Mitrana , Mihaela Păun , José Ramón Sánchez Couso
We continue the investigation of simulating different network topologies in networks whose nodes host processors inspired by DNA splicing. These networks are called networks of splicing processors. It has previously been shown that every network of splicing processors, regardless of its topology, can be converted by a direct construction into an equivalent network with a desired topology, especially a common one such as a star, grid, or complete (full-mesh) graph. A short discussion highlights the importance of the wheel graph topology in relation to biology and DNA computing. This work completes the study by giving an effective construction of a wheel (ring-star) network of splicing processors that is equivalent to an arbitrary network. The size and time complexity of the construction are evaluated. Finally, we discuss a very preliminary simulation of the networks considered here using recent technologies and strategies suited to their massive data and parallel-processing requirements.
Title: "Networks of splicing processors: Wheel graph topology simulation" (Journal of Network and Computer Applications, vol. 245, Article 104393)
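The wheel (ring-star) topology at the heart of the construction is simple to state: a hub node connected to every node of a ring. A small sketch of its adjacency structure:

```python
def wheel_adjacency(n):
    """Wheel graph W_n: node 0 is the hub, nodes 1..n form a ring, and
    the hub connects to every ring node (hence 'ring-star')."""
    adj = {0: set(range(1, n + 1))}
    for i in range(1, n + 1):
        nxt = 1 if i == n else i + 1          # close the ring at n -> 1
        adj.setdefault(i, set()).update({0, nxt})
        adj.setdefault(nxt, set()).add(i)
    return adj


w = wheel_adjacency(6)
assert len(w[0]) == 6                          # hub touches every ring node
assert all(len(w[i]) == 3 for i in range(1, 7))  # ring node: hub + 2 neighbours
assert sum(len(v) for v in w.values()) // 2 == 12  # W_n has 2n edges
```

The hub gives any two processors a two-hop communication path while the ring preserves local neighbour exchange, which is what makes the topology attractive for simulating arbitrary networks.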
Pub Date: 2026-01-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104358
Muhammad Hasnain , Nadeem Javaid , Abdul Khader Jilani Saudagar , Neeraj Kumar
Intrusion Detection (ID) in the Internet of Secure Things (IoST) has become increasingly critical due to the rising frequency and sophistication of cyber-attacks, which can lead to severe consequences such as data breaches, financial losses, and service disruptions. These risks are further intensified in resource-constrained environments, where limited computational capacity and rapidly evolving threats make accurate and efficient detection challenging. In this study, a data-efficient ID framework tailored for such environments is proposed, leveraging active learning and meta-heuristic optimization techniques. The framework systematically addresses three critical limitations commonly observed in traditional models: data imbalance, inefficient hyperparameter tuning, and dependency on large labeled datasets. First, to mitigate class imbalance, adaptive synthetic sampling generates synthetic instances for minority classes, enhancing learning in complex regions of the feature space. Next, for hyperparameter optimization, the Sandpiper Optimization (SO) algorithm fine-tunes the regularization parameter of Logistic Regression (LR), yielding significant improvements in model generalization. Finally, the challenge of limited labeled data is addressed through two active learning strategies: Active Learning Uncertainty-based (ALU) and Active Learning Entropy-based (ALE). These strategies selectively query the most informative samples from the unlabeled pool, ensuring maximum learning with minimal annotation effort. The proposed models are evaluated on two benchmark datasets: a wireless sensor network dataset (WSN-DS) and a network intrusion detection dataset (CIC-IDS-DS). Simulation results demonstrate that the proposed models outperform the base LR model. LRALE achieves improvements of 10.48% and 3.16% in accuracy, 19.48% and 3.16% in recall, and 7.23% and 1.04% in F1-score on the WSN-DS and CIC-IDS-DS datasets, respectively. LRALU shows improvements of 18.18% and 2.11% in accuracy, 18.18% and 2.11% in recall, and 14.63% and 2.08% in Receiver Operating Characteristic-Area Under the Curve (ROC-AUC). Similarly, LRSO achieves improvements of 9.09% and 2.11% in accuracy, 9.09% and 1.05% in recall, and 9.76% and 3.12% in ROC-AUC on the WSN-DS and CIC-IDS-DS datasets, respectively. To ensure model generalization and stability across data partitions, rigorous 10-fold cross-validation is conducted. Model interpretability is further enhanced using eXplainable artificial intelligence techniques, including Local Interpretable Model-agnostic Explanations and Shapley Additive Explanations, to elucidate feature contributions and improve transparency. Additionally, statistical significance testing through paired t-tests confirms the robustness and reliability of the proposed models. Overall, this framework introduces a comprehensive, annotation-efficient, and transparent ID solution that significantly advances the domain.
Title: "An intelligent and explainable intrusion detection framework for Internet of Sensor Things using generalizable optimized active Machine Learning" (Journal of Network and Computer Applications, vol. 245, Article 104358)
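The two query strategies, ALU (least-confidence) and ALE (entropy), can be sketched directly from a classifier's predicted class probabilities (the pool contents below are invented for illustration):

```python
import math


def uncertainty(probs):
    """ALU score: 1 - max class probability (least-confident sampling)."""
    return 1.0 - max(probs)


def entropy(probs):
    """ALE score: Shannon entropy of the predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def query(pool, score, k):
    """Pick the k most informative unlabeled samples to send for labeling."""
    return sorted(pool, key=lambda s: score(pool[s]), reverse=True)[:k]


# Hypothetical classifier outputs over an unlabeled pool of flows
# (probabilities over three classes: normal, DoS, probe).
pool = {
    "flow-1": (0.98, 0.01, 0.01),   # confident -> little to learn
    "flow-2": (0.40, 0.35, 0.25),   # ambiguous -> worth annotating
    "flow-3": (0.70, 0.20, 0.10),
}
assert query(pool, uncertainty, 1) == ["flow-2"]
assert query(pool, entropy, 1) == ["flow-2"]
```

Both scores agree here, but they can diverge: least-confidence looks only at the top class, while entropy accounts for how probability mass is spread over all classes.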
Pub Date: 2026-01-01 | Epub Date: 2025-10-30 | DOI: 10.1016/j.jnca.2025.104371
Wenhui Yu , Jinyao Liu , Xiaoqiang Di , Pei Xiao , Hui Qi
The diversity of network forms and services makes it challenging for the TCP protocol to achieve good performance. The current XQUIC implementation of the QUIC protocol still adopts TCP's heuristic congestion control mechanisms, resulting in limited performance gains. In recent years, reinforcement learning-based congestion control has emerged as an effective alternative to traditional strategies, but existing algorithms are not optimized for dynamic network characteristics. In this paper, we propose a deep reinforcement learning-based congestion control algorithm, Dynamic Network Congestion Control for QUIC Based on PPO (DNCCQ-PPO). To address the heterogeneity of dynamic network training environments, we introduce a novel sampling interaction mechanism, action space, and reward function, and propose an asynchronous distributed training scheme. Additionally, we develop a generalized reinforcement learning framework for congestion control algorithm development that supports XQUIC, and verify the performance of DNCCQ-PPO within this framework. Experimental results demonstrate the algorithm's fast convergence and excellent training performance. In performance tests, DNCCQ-PPO achieves throughput comparable to that of CUBIC while reducing latency by 54.78%. In multi-stream fairness tests, it outperforms several mainstream algorithms. In satellite network simulations, DNCCQ-PPO maintains high throughput while reducing latency by 69.58% and 72.77% compared to CUBIC and PCC, respectively.
Title: "DNCCQ-PPO: A dynamic network congestion control algorithm based on deep reinforcement learning for XQUIC" (Journal of Network and Computer Applications, vol. 245, Article 104371)
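RL congestion controllers hinge on the reward design. The functional form and weights below are illustrative assumptions, not DNCCQ-PPO's actual reward function, but they show the usual throughput-minus-delay-and-loss shape that produces low-latency behavior like the numbers above:

```python
def cc_reward(throughput, latency, loss, base_rtt, a=1.0, b=0.7, c=2.0):
    """Illustrative congestion-control reward: pay for delivered
    throughput (normalized to link capacity), charge for queueing delay
    beyond the base RTT and for packet loss. Weights a, b, c are
    assumptions for the sketch."""
    queueing = max(0.0, latency - base_rtt) / base_rtt
    return a * throughput - b * queueing - c * loss


# An agent that fills the pipe without building queues should beat one
# that squeezes out a little extra throughput via standing queues and loss.
calm = cc_reward(throughput=0.95, latency=52, loss=0.0, base_rtt=50)
greedy = cc_reward(throughput=1.00, latency=150, loss=0.02, base_rtt=50)
assert calm > greedy
```

Normalizing the delay penalty by the base RTT is one common way to keep a single reward usable across paths as different as data-center links and satellite hops.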
Pub Date: 2026-01-01 | DOI: 10.1016/j.jnca.2025.104392
Shangping Wang, Juanjuan Ma, Qi Huang, Xiaoling Xie
As the application scope of blockchain technology continues to expand, challenges arise in the state verification of blockchain systems based on account models. Traditionally, Merkle Patricia Tries maintain the world state, and verifying a specific data block requires a step-by-step proof up to the root node. This guarantees data integrity, but in large-scale systems it still suffers from inefficient verification and updating, insufficient security, and growing storage demand, which degrades the performance of blockchain networks. In this paper, we propose a Verkle-Accumulator-Based Multiple State Verifiable and Updatable (VA-MSVU) scheme for blockchain. The scheme integrates the Verkle tree (VT), Verkle accumulator (VA), KZG polynomial commitments, and aggregated proofs to verify the integrity of multiple account states in batches. By mapping account states to the VT, our approach enhances security, reduces the size of state data, and improves both verification speed and update efficiency. Simulation results show that the VA-MSVU scheme has a smaller proof size and faster verification than existing storage structures, demonstrating its simplicity and efficiency. For verifying multiple account states, the scheme's aggregated proofs have significant advantages over KZG polynomial commitments and single-point proofs, excelling in proof size and in verification and update rate. In addition, by adjusting the branching factor of the Verkle tree, a trade-off between computational and communication overhead is achieved, improving the system's adaptability to different network scenarios.
Verkle-Accumulator-Based Multiple State Verifiable and Updatable (VA-MSVU) scheme for blockchain
Shangping Wang, Juanjuan Ma, Qi Huang, Xiaoling Xie
DOI: 10.1016/j.jnca.2025.104392. Journal of Network and Computer Applications, vol. 245, Article 104392, publication date 2026-01-01.
Community detection is a vital task in social network analysis, enabling the extraction of hidden structures and relationships. However, existing diffusion-based local community detection algorithms often depend on similarity-based scoring, which frequently fails to identify influential core nodes for label expansion. To address these shortcomings, we propose the local detecting and structuring communities (LDSC) method, which integrates structural and relational insights with graph-based metrics and deep learning for refined community detection. LDSC stands out by combining Local Influence (LI) and Adaptive Absorbing Strength (AAS) metrics with GraphSAGE-based boundary refinement and adaptive community merging, tackling persistent challenges such as scalability, boundary ambiguity, and structural cohesion left unmet by prior methods. The method unfolds in four key phases: (1) Core Node Detection, employing a distinctive metric fusing LI and AAS to identify structurally significant nodes; (2) Label Diffusion, dynamically propagating labels from core nodes to neighbors for precise community formation; (3) Boundary Node Reassignment, using GraphSAGE to resolve ambiguities; and (4) Adaptive Community Merging, using a local merging strategy to enhance cohesion. Evaluations on synthetic LFR benchmarks and real-world networks (e.g., Karate, Dolphins, DBLP, Amazon, LiveJournal, Orkut) demonstrate LDSC's superiority over baseline methods (e.g., LPA, CNM, WalkTrap, Louvain) and state-of-the-art approaches (e.g., Leiden, Infomap, LSMD, CLD_GE, FluidC, LCDR, LS), achieving perfect NMI/ARI (1.0) on Karate and Dolphins, top NMI on LiveJournal (0.92) and Orkut (0.65), average scores of 0.85 NMI and 0.75 ARI, and >15% NMI improvement in large-scale networks such as DBLP, showcasing strong accuracy, stability, and efficiency.
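The label-diffusion phase described in the abstract can be illustrated with a generic seed-based propagation sketch (a hypothetical minimal version, not the authors' LDSC implementation, which additionally uses LI/AAS core scoring and GraphSAGE boundary refinement):

```python
from collections import deque

def diffuse_labels(adj: dict, cores: dict) -> dict:
    """BFS-style label diffusion: each pre-labeled core node spreads its
    community label outward to unlabeled neighbors; the first label to
    reach a node sticks. `adj` maps node -> set of neighbors, `cores`
    maps core node -> community label."""
    labels = dict(cores)
    queue = deque(cores)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in labels:          # first label to arrive sticks
                labels[v] = labels[u]
                queue.append(v)
    return labels

# Two triangles joined by a single bridge edge (2-3); nodes 0 and 4
# play the role of detected core nodes.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = diffuse_labels(adj, {0: "A", 4: "B"})
```

Here each triangle inherits its core's label; nodes near the bridge (such as node 3) are exactly the boundary cases that LDSC's reassignment phase revisits.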
Community detection via core node identification and local label diffusion with GraphSAGE boundary refinement in complex networks
Asgarali Bouyer, Pouya Shahgholi, Bahman Arasteh, Amin Golzari Oskouei, Xiaoyang Liu
DOI: 10.1016/j.jnca.2025.104399. Journal of Network and Computer Applications, vol. 245, Article 104399, publication date 2026-01-01.
Pub Date: 2026-01-01. Epub Date: 2025-11-14. DOI: 10.1016/j.jnca.2025.104391
Boris Bellalta, Miguel Casasnovas, Ferran Maura, Alejandro Rodríguez, Juan S. Marquerie, Pablo L. García, Francesc Wilhelmi, Josep Blat
This paper evaluates the performance of Wi-Fi networks for interactive Virtual Reality (VR) streaming with adaptive bitrate control. It focuses on the interaction between VR traffic characteristics and Wi-Fi link-layer mechanisms, studying how this relationship impacts key performance indicators such as throughput, latency, and user scalability. We begin by outlining the architecture, operation, traffic patterns, and performance demands of cloud/edge split-rendering VR systems. Then, using simulations, we investigate both single-user scenarios — examining the effects of modulation and coding schemes (MCSs) and user-to-access point (AP) distance on bitrate sustainability and latency — and multi-user scenarios, assessing how many concurrent VR users a single AP can support. Results show that the use of adaptive bitrate (ABR) streaming, as exemplified by our NeSt-VR algorithm, significantly outperforms constant bitrate (CBR) approaches, enhancing user capacity and resilience to changing channel propagation conditions. To validate the simulation findings, we conduct an experimental evaluation using Rooms, an open-source eXtended Reality (XR) content creation platform. The experimental results closely match the simulations, reinforcing the conclusion that adaptive bitrate control substantially improves Wi-Fi’s ability to support reliable, multiuser interactive VR streaming.
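The adaptive-bitrate idea the abstract credits for the capacity gains can be sketched as a single control step of a latency-driven policy (a hypothetical illustration with invented thresholds and step sizes; the actual NeSt-VR algorithm is specified in the paper, not here):

```python
def adapt_bitrate(bitrate_mbps: float, rtt_ms: float,
                  target_ms: float = 20.0, backoff: float = 0.85,
                  min_mbps: float = 10.0, max_mbps: float = 100.0) -> float:
    """One step of a simple latency-driven ABR policy: back off
    multiplicatively when measured round-trip delay exceeds the target,
    probe upward additively when there is clear headroom. All constants
    are illustrative defaults, not values from NeSt-VR."""
    if rtt_ms > target_ms:
        bitrate_mbps *= backoff       # congestion: cut the encoder bitrate
    elif rtt_ms < 0.8 * target_ms:
        bitrate_mbps += 2.0           # headroom: probe for more quality
    return max(min_mbps, min(max_mbps, bitrate_mbps))

# Sustained high latency drives the stream down toward a sustainable rate,
# which is how ABR trades peak quality for multi-user capacity.
r = 100.0
for _ in range(5):
    r = adapt_bitrate(r, rtt_ms=35.0)
```

A CBR stream would hold 100 Mbps regardless of the measured delay, which is the behavior the simulations show saturating the AP as users are added.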
Understanding the Wi-Fi and VR streaming interplay: A comprehensible simulation and experimental study
Journal of Network and Computer Applications, vol. 245, Article 104391.