Pub Date: 2025-09-19 | DOI: 10.1016/j.jnca.2025.104323
Ali Nikoutadbir , Sajjad Torabi , Sadegh Bolouki
This paper addresses the challenge of achieving secure consensus in a vehicular platoon under dual deception attacks using an event-triggered control approach. The platoon consists of a leader and multiple follower vehicles that intermittently exchange position and velocity information to maintain stability. The study focuses on two types of deception attacks: gain modification attacks, where controller gains are manipulated, and false data injection attacks, which compromise sensor and control data integrity to destabilize the platoon. The research analyzes the duration, frequency, and impact of these attacks on system stability. To address these challenges, a robust event-triggered control scheme is proposed to ensure secure consensus despite the attacks. Sufficient consensus conditions are derived for both distributed static and dynamic event-triggered control schemes, considering constraints on attack duration and frequency. The influence of system matrices and triggering parameters on attack resilience is also analyzed. Additionally, a topology-switching scheme is introduced as a mitigation strategy when attack conditions exceed tolerable limits. The effectiveness of the proposed methodology is validated through simulations across various case studies, demonstrating its ability to maintain platoon stability under dual deception attacks.
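The static event-triggering idea described above can be sketched in simulation: a follower transmits its state only when the deviation from its last broadcast value exceeds a fixed fraction of its current state norm. Everything below (gains, threshold, double-integrator error dynamics) is an illustrative assumption, not the paper's actual controller or consensus conditions.

```python
import numpy as np

def static_trigger(error, state, sigma=0.3):
    # Static rule: fire an event when the broadcast error exceeds a
    # fixed fraction of the local state norm (sigma is a design knob).
    return np.linalg.norm(error) > sigma * np.linalg.norm(state)

def simulate_follower(steps=200, dt=0.05, sigma=0.3):
    x = np.array([1.0, 0.5])      # (position error, velocity error) w.r.t. leader
    x_hat = x.copy()              # last value broadcast to neighbours
    k1, k2 = 1.5, 2.0             # illustrative controller gains
    events = 0
    for _ in range(steps):
        if static_trigger(x_hat - x, x, sigma):
            x_hat = x.copy()      # event: transmit the fresh state
            events += 1
        u = -k1 * x_hat[0] - k2 * x_hat[1]   # control uses broadcast state only
        x = x + dt * np.array([x[1], u])     # double-integrator error dynamics
    return np.linalg.norm(x), events

final_err, events = simulate_follower()
```

The point of the event rule is that `events` stays well below the number of simulation steps while the tracking error still shrinks, i.e. communication is saved without losing practical convergence.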
{"title":"Secure event-triggered control for vehicle platooning against dual deception attacks","authors":"Ali Nikoutadbir , Sajjad Torabi , Sadegh Bolouki","doi":"10.1016/j.jnca.2025.104323","DOIUrl":"10.1016/j.jnca.2025.104323","url":null,"abstract":"<div><div>This paper addresses the challenge of achieving secure consensus in a vehicular platoon under dual deception attacks using an event-triggered control approach. The platoon consists of a leader and multiple follower vehicles that intermittently exchange position and velocity information to maintain stability. The study focuses on two types of deception attacks: gain modification attacks, where controller gains are manipulated, and false data injection attacks, which compromise sensor and control data integrity to destabilize the platoon. The research analyzes the duration, frequency, and impact of these attacks on system stability. To address these challenges, a robust event-triggered control scheme is proposed to ensure secure consensus despite the attacks. Sufficient consensus conditions are derived for both distributed static and dynamic event-triggered control schemes, considering constraints on attack duration and frequency. The influence of system matrices and triggering parameters on attack resilience is also analyzed. Additionally, a topology-switching scheme is introduced as a mitigation strategy when attack conditions exceed tolerable limits. 
The effectiveness of the proposed methodology is validated through simulations across various case studies, demonstrating its ability to maintain platoon stability under dual deception attacks.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104323"},"PeriodicalIF":8.0,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145157537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-18 | DOI: 10.1016/j.jnca.2025.104324
Maram Helmy , Mohamed S. Hassan , Mahmoud H. Ismail , Usman Tariq
Traditional Adaptive Bitrate (ABR) algorithms in Dynamic Adaptive Streaming over HTTP (DASH) rely on basic throughput estimation techniques that often struggle to adapt quickly to network fluctuations. As users move across different transportation modes or switch from one access point to another (e.g., Wi-Fi to cellular networks or between 4G/5G cells), available bandwidth can vary sharply, causing interruptions and abrupt quality shifts that undermine the ability of conventional ABR algorithms to provide seamless playback and maintain high quality-of-experience (QoE). To address these issues, this paper introduces a novel and comprehensive framework that significantly enhances the adaptability and intelligence of ABR algorithms. The proposed solution integrates three key components: a transformer-based throughput prediction model, a Mobility-Aware Throughput Prediction engine (MATH-P), and a Handoff-Aware Throughput Prediction engine (HATH-P). The transformer-based model outperforms state-of-the-art approaches in predicting throughput for both 4G and 5G networks, leveraging its ability to capture complex temporal patterns and long-term dependencies. The MATH-P engine adapts throughput predictions to varying mobility scenarios, while the HATH-P engine manages seamless transitions by accurately predicting 4G/5G handoff events and selecting the appropriate throughput prediction model. The proposed systems were integrated into existing ABR algorithms, replacing traditional throughput estimation techniques. Experimental results demonstrate that the MATH-P and HATH-P engines significantly improve video streaming performance, reducing stall durations, enhancing video quality, and ensuring smoother playback.
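To illustrate how a predicted-throughput estimate plugs into a rate-based ABR rule in place of a classic estimator, here is a minimal sketch; the bitrate ladder, safety margin, and harmonic-mean baseline are illustrative assumptions, not the paper's implementation.

```python
# Rate-based ABR rule: pick the highest ladder bitrate that fits within a
# safety fraction of the (predicted) throughput. Ladder values are illustrative.
BITRATE_LADDER_KBPS = [300, 750, 1200, 2400, 4800]

def harmonic_mean(samples_kbps):
    # Classic throughput estimator that learned prediction engines aim to replace;
    # the harmonic mean is conservative under bursty samples.
    return len(samples_kbps) / sum(1.0 / s for s in samples_kbps)

def select_bitrate(predicted_kbps, safety=0.9):
    # Keep a 10% headroom below the prediction to absorb estimation error.
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= safety * predicted_kbps]
    return max(feasible) if feasible else BITRATE_LADDER_KBPS[0]
```

Swapping `harmonic_mean` of past samples for a model's forecast is the integration point the abstract describes: the selection rule stays the same, only the throughput input changes.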
{"title":"Autoformer-based mobility and handoff-aware prediction for QoE enhancement in adaptive video streaming in 4G/5G networks","authors":"Maram Helmy , Mohamed S. Hassan , Mahmoud H. Ismail , Usman Tariq","doi":"10.1016/j.jnca.2025.104324","DOIUrl":"10.1016/j.jnca.2025.104324","url":null,"abstract":"<div><div>Traditional Adaptive Bitrate (ABR) algorithms in Dynamic Adaptive Streaming over HTTP (DASH) rely on basic throughput estimation techniques that often struggle to quickly adapt to network fluctuations. As users move across different transportation modes or change from one access point to another (e.g., Wi-Fi to cellular networks or between 4G/5G cells), available bandwidth can vary sharply, causing interruptions, abrupt quality shifts, which impact the ability of conventional ABR algorithms to provide seamless playback and maintain high quality-of-experience (QoE). To address these issues, this paper introduces a novel and comprehensive framework that significantly enhances the adaptability and intelligence of ABR algorithms. The proposed solution integrates three key components: a transformer-based throughput prediction model, a Mobility-Aware Throughput Prediction engine (MATH-P), and a Handoff-Aware Throughput Prediction engine (HATH-P). The transformer-based model outperforms state-of-the-art approaches in predicting throughput for both 4G and 5G networks, leveraging its ability to capture complex temporal patterns and long-term dependencies. The MATH-P engine adapts throughput predictions to varying mobility scenarios, while the HATH-P one manages seamless transitions by accurately predicting 4G/5G handoff events and selecting the appropriate throughput prediction model. The proposed systems were integrated into existing ABR algorithms, replacing traditional throughput estimation techniques. 
Experimental results demonstrate that the MATH-P and HATH-P engines significantly improve video streaming performance, reducing stall durations, enhancing video quality, and ensuring smoother playback.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104324"},"PeriodicalIF":8.0,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104333
Kaixi Wang , Yunhe Cui , Guowei Shen , Chun Guo , Yi Chen , Qing Qian
The flow table overflow attack on SDN switches is a destructive attack that exhausts the computing and storage resources of SDN switches, severely disrupting the normal communication functions of SDN networks. Graph neural networks (GNNs) are now being employed to detect flow table overflow attacks in SDN. When a flow graph is constructed, flow features are commonly used as node attributes to represent the characteristics of flow table overflow attacks. However, a graph relying solely on these nodes and attributes may not capture all the nuances of the attack. Additionally, GNN models may struggle to capture the relationships between different flow graphs over time, which degrades detection accuracy. To address these issues, we introduce PRAETOR, a detection method for flow table overflow attacks that leverages a packet flow graph and a dynamic spatio-temporal graph neural network. In particular, PRAETOR introduces the PaFlo-Graph algorithm and the EGST model. The PaFlo-Graph algorithm generates a packet flow graph for each flow, using packet-level information to construct a more detailed graph that better reflects the characteristics of flow table overflow attacks. The EGST model is a dynamic spatio-temporal graph convolutional network designed to detect flow table overflow attacks by analyzing packet flow graphs. Experiments were conducted under two network topologies, where we used tcpreplay to replay packets from the bigFlow dataset to simulate SDN network flows. We also employed sFlow to sample packet features. Based on the sampled data, two datasets were constructed, each containing 1,760 network flows. For each packet, eight key features were extracted to represent its characteristics. The evaluation metrics include TPR, TNR, accuracy, precision, recall, F1-score, confusion matrix, ROC curves, and PR curves.
Experimental results show that the proposed PaFlo-Graph algorithm generates more detailed flow graphs compared to KNN and CRAM, resulting in an average improvement of 6.49% in accuracy and 8.7% in precision. Furthermore, the overall detection framework, PRAETOR, achieves detection accuracies of 99.66% and 99.44% on Topo1 and Topo2, respectively. The precision scores reach 99.32% and 99.72%, and the F1-scores are 99.57% and 100%, respectively, indicating superior detection performance compared to other methods.
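A minimal sketch of building a packet-level flow graph of the kind described, with one node per packet and temporal edges linking consecutive packets of the same flow; the field names and construction details are hypothetical and do not reproduce the actual PaFlo-Graph algorithm.

```python
# Hypothetical packet-flow-graph builder: each packet becomes a node carrying
# its feature fields; edges link consecutive packets of the same flow, so the
# graph preserves per-packet detail that flow-level aggregates would lose.
def build_packet_flow_graph(packets):
    """packets: list of dicts with 'flow_id', 'ts', and feature fields."""
    nodes, edges = [], []
    last_in_flow = {}                       # flow_id -> node index of last packet
    for pkt in sorted(packets, key=lambda p: p["ts"]):
        idx = len(nodes)
        nodes.append({k: v for k, v in pkt.items() if k != "flow_id"})
        prev = last_in_flow.get(pkt["flow_id"])
        if prev is not None:
            edges.append((prev, idx))       # temporal edge within the flow
        last_in_flow[pkt["flow_id"]] = idx
    return nodes, edges
```

The resulting node/edge lists are the natural input representation for a graph convolutional detector operating on per-flow graphs.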
{"title":"PRAETOR:Packet flow graph and dynamic spatio-temporal graph neural network-based flow table overflow attack detection method","authors":"Kaixi Wang , Yunhe Cui , Guowei Shen , Chun Guo , Yi Chen , Qing Qian","doi":"10.1016/j.jnca.2025.104333","DOIUrl":"10.1016/j.jnca.2025.104333","url":null,"abstract":"<div><div>The flow table overflow attack on SDN switches is considered to be a destructive attack in SDN. By exhausting the computing and storage resources of SDN switches, this attack severely disrupts the normal communication functions of SDN networks. Graph neural networks are now being employed to detect flow table overflow attacks in SDN. When a flow graph is constructed, flow features are commonly utilized as nodes to represent the characteristics of flow table overflow attacks. However, a graph solely relying on these nodes and attributes may not fully encompass all the nuances of the flow table overflow attack. Additionally, GNN model may be difficult in capturing the graph information between different flow graphs over time, thus decreasing the detection accuracy of packet flow graph. To address these issues, we introduce PRAETOR, a detection method for flow table overflow attacks that leverages a packet flow graph and a dynamic spatio-temporal graph neural network. More particularly, The PaFlo-Graph algorithm and the EGST model are introduced by PRAETOR. The PaFlo-Graph algorithm generates a packet flow graph for each flow. It utilizes packet information to construct the graph with more detail, thereby better reflecting the characteristics of flow table overflow attacks. The EGST model is a dynamic spatio-temporal graph convolutional network designed to detect flow table overflow attacks by analyzing packet flow graphs. Experiments were conducted under two network topologies, where we used tcpreplay to replay packets from the bigFlow dataset to simulate SDN network flow. We also employed sFlow to sample packet features. 
Based on the sampled data, two datasets were constructed, each containing 1,760 network flows. For each packet, eight key features were extracted to represent its characteristics. The evaluation metrics include TPR, TNR, accuracy, precision, recall, F1-score, confusion matrix, ROC curves, and PR curves. Experimental results show that the proposed PaFlo-Graph algorithm generates more detailed flow graphs compared to KNN and CRAM, resulting in an average improvement of 6.49% in accuracy and 8.7% in precision. Furthermore, the overall detection framework, PRAETOR, achieves detection accuracies of 99.66% and 99.44% on Topo1 and Topo2, respectively. The precision scores reach 99.32% and 99.72%, and the F1-scores are 99.57% and 100%, respectively, indicating superior detection performance compared to other methods.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104333"},"PeriodicalIF":8.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104329
Zhiyuan Li , Yujie Jin
Traffic classification is essential for effective intrusion detection and network management. However, with the pervasive use of encryption technologies, traditional machine learning-based and deep learning-based methods often fall short in capturing the fine-grained details of encrypted traffic. To address these limitations, we propose a memory-enhanced LSTM model based on the Swin Transformer for multi-class encrypted traffic classification. Our approach first reconstructs raw encrypted traffic by converting each flow into a single-channel image. A hierarchical attention network, incorporating both byte-level and packet-level attention, then performs comprehensive feature extraction on these traffic images. The resulting feature maps are subsequently classified to identify traffic flow categories. By combining the long-term dependency modeling of LSTM with the Swin Transformer's strengths in feature extraction, our model effectively captures global features across diverse traffic types. Furthermore, we enhance the LSTM with memory attention, enabling the model to focus on finer-grained information. Experimental results on three public datasets (USTC-TFC2016, ISCX-VPN2016, and CIC-IoT2022) show that our model, ST-MemA, improves classification accuracy to 99.43%, 98.96%, and 98.21% and F1-score to 0.9936, 0.9826, and 0.9746, respectively. The results also demonstrate that our proposed model outperforms current state-of-the-art models in classification accuracy and computational efficiency.
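The flow-to-image preprocessing step can be illustrated as follows; the 28x28 size, byte truncation, and zero-padding are common choices in this line of work but are assumptions here, not the paper's exact pipeline.

```python
import numpy as np

# Illustrative flow-to-image step: take the first 784 bytes of a flow
# (zero-padded if shorter), scale to [0, 1], and reshape into a 28x28
# single-channel image suitable for a vision backbone.
IMG_SIDE = 28

def flow_to_image(flow_bytes: bytes) -> np.ndarray:
    n = IMG_SIDE * IMG_SIDE
    buf = np.frombuffer(flow_bytes[:n].ljust(n, b"\x00"), dtype=np.uint8)
    return buf.reshape(IMG_SIDE, IMG_SIDE).astype(np.float32) / 255.0
```

Treating the byte sequence as an image is what lets windowed vision models such as the Swin Transformer extract spatial (byte-locality) patterns from encrypted payloads.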
{"title":"ST-MemA: Leveraging Swin Transformer and memory-enhanced LSTM for encrypted traffic classification","authors":"Zhiyuan Li , Yujie Jin","doi":"10.1016/j.jnca.2025.104329","DOIUrl":"10.1016/j.jnca.2025.104329","url":null,"abstract":"<div><div>Traffic classification is essential for effective intrusion detection and network management. However, with the pervasive use of encryption technologies, traditional machine learning-based and deep learning-based methods often fall short in capturing the fine-grained details in encrypted traffic. To address these limitations, we propose a memory-enhanced LSTM model based on Swin Transformer for multi-class encrypted traffic classification. Our approach first reconstructs raw encrypted traffic by converting each flow into single-channel images. A hierarchical attention network, incorporating both byte-level and packet-level attention, then performs comprehensive feature extraction on these traffic images. The resulting feature maps are subsequently classified to identify traffic flow categories. By combining the long-term dependency capabilities of LSTM with the Swin Transformer’s strengths in feature extraction, our model effectively captures global features across diverse traffic types. Furthermore, we enhance LSTM with memory attention, enabling the model to focus on more fine-grained information. Experimental results on three public datasets—USTC-TFC2016, ISCX-VPN2016, and CIC-IoT2022 show that our model, ST-MemA, improves the classification accuracy to 99.43%, 98.96% and 98.21% and <span><math><msub><mrow><mi>F</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>-score to 0.9936, 0.9826 and 0.9746, respectively. 
The results also demonstrate that our proposed model outperforms current state-of-the-art models in classification accuracy and computational efficiency.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104329"},"PeriodicalIF":8.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104332
Xijian Luo , Jun Xie , Liqin Xiong , Yaqun Liu , Yuan He
Deploying unmanned aerial vehicle-mounted base stations (UAV-BSs) in post-disaster areas or battlefields, where ground infrastructure is missing or destroyed, can quickly restore communication coverage. Given the unstable and hostile nature of such environments, the ability to maintain the connectivity of the UAV-BS network must also be considered. In this paper, we study the deployment of UAV-BSs to provide full coverage for users with different quality-of-service (QoS) demands. The objective is to minimize the number of UAV-BSs under the constraints of user demands and UAV-BS service capacities. Moreover, in the absence of ground base stations, we also aim to construct a bi-connected topology for the UAV-BS network. However, the formulated problem, as a special instance of the geometric disk cover (GDC) problem, is NP-hard. To tackle it, we propose a heuristic algorithm, named Improved QoS-Prior Coverage and bi-Connectivity (IQP2C), which separately solves the user-coverage and bi-connected-topology-construction subproblems. First, IQP2C provides full coverage for users with a minimum number of covering UAVs. Then, we propose an altitude-cluster-based method, extending the 2-D Hamilton cycle, to construct bi-connectivity for the UAV-BS network. Simulation results validate the effectiveness of IQP2C in meeting different QoS demands and constructing a fault-tolerant topology. Moreover, IQP2C outperforms other baselines in terms of the number of UAV-BSs required for user coverage, the number required for bi-connectivity, and running time.
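A simple greedy heuristic for the capacity-constrained disk-cover subproblem conveys the flavor of the coverage step: repeatedly place a base station where it covers the most uncovered users. The candidate-center choice, radius, and capacity below are illustrative assumptions, not IQP2C's actual procedure.

```python
import math

# Greedy sketch of a geometric-disk-cover step: repeatedly place a UAV-BS at
# the uncovered user position that covers the most remaining uncovered users
# within its radius, subject to a per-UAV capacity (all values illustrative).
def greedy_cover(users, radius=100.0, capacity=8):
    uncovered = set(range(len(users)))
    placements = []
    while uncovered:
        best_center, best_set = None, set()
        for i in uncovered:                       # candidate centers: user positions
            covered = {j for j in uncovered
                       if math.dist(users[j], users[i]) <= radius}
            covered = set(sorted(covered)[:capacity])   # respect service capacity
            if len(covered) > len(best_set):
                best_center, best_set = users[i], covered
        placements.append(best_center)
        uncovered -= best_set
    return placements
```

Restricting candidate centers to user positions keeps the search finite; this is the standard discretization trick for GDC-style heuristics and guarantees termination, since each placement covers at least one user.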
{"title":"Fault-tolerant 3-D topology construction of UAV-BSs for full coverage of users with different QoS demands","authors":"Xijian Luo , Jun Xie , Liqin Xiong , Yaqun Liu , Yuan He","doi":"10.1016/j.jnca.2025.104332","DOIUrl":"10.1016/j.jnca.2025.104332","url":null,"abstract":"<div><div>Deploying Unmanned aerial vehicle mounted base stations (UAV-BSs) in post-disaster areas or battlefields, where the ground infrastructures are missing or destroyed, can quickly restore communication coverage. Due to the unstable and hostile properties of the environments, the ability to maintain the connectivity of the UAV-BSs network should be considered. In this paper, we study the deployment of UAV-BSs to provide full coverage for users with different quality of service (QoS) demands. The objective is to minimize the number of UAV-BSs under the constraints of user demands and UAV-BS service abilities. Besides, in absence of ground base stations, we also aim to construct a bi-connected topology for the UAV-BS network. However, the formulated problem, as a special instance of the geometric disk cover (GDC) problem, is NP-hard. To tackle this problem, we propose a heuristic algorithm, named Improved QoS-Prior Coverage and bi-Connectivity (IQP2C), by separately solving the user coverage and bi-connected topology construction subproblems. Firstly, IQP2C provides full coverage for users with minimum covering UAVs. Then, we propose an altitude-cluster-based method extending from the 2-D Hamilton cycle to construct bi-connectivity for the UAV-BS network. Simulation results validate the effectiveness of IQP2C in meeting different QoS demands and constructing fault-tolerant topology. 
Moreover, IQP2C outperforms other baselines in terms of minimized number of UAV-BSs for user coverage, minimized number of UAV-BSs for bi-connectivity as well as running time.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104332"},"PeriodicalIF":8.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104331
Khizar Hameed , Faiqa Maqsood , Zhenfei Wang
This paper proposes a comprehensive framework based on Artificial Intelligence (AI)-enhanced Zero-Knowledge Proofs (ZKPs) to improve the security, privacy, scalability, and efficiency of forensic investigations in multi-cloud environments, a growing concern for the cybersecurity and digital forensics domains. With the growing vulnerability of data storage and inefficient processing in cloud computing landscapes, forensic investigations confront privacy-preservation, data-integrity, and interoperability issues across cloud providers. Despite existing frameworks, few adaptive solutions holistically address these challenges. To bridge this gap, we propose a suite of frameworks: an Adaptive Multi-Cloud Forensic Integration Framework (A-MCFIF), a Multi-Factor Access Control Framework (MACF), an Adaptive ZKP Optimization Framework (AZOF), and a Privacy-Enhanced Data Security Framework (PDSF). Incorporating AI-enhanced ZKPs and Multi-Factor Authentication (MFA), these frameworks secure data and improve the efficiency of proof generation and verification while meeting privacy regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). Our extensive evaluation of the proposed framework covers computing efficiency, memory consumption, data-handling efficiency, scalability, overall performance, and cost-effectiveness. We also analyse verification latency to assess the framework's real-time processing capabilities, which surpass those of existing solutions. Furthermore, our research considers cloud-specific threat models such as insider threats and data breaches, and demonstrates, both mathematically and empirically, the benefits of the proposed framework in counteracting these risks and resisting privacy breaches. Finally, we bring new insights and contribute to the development of secure, privacy-compliant, and efficient forensic processes, offering a comprehensive solution for forensic investigations in increasingly sophisticated cloud environments.
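As a concrete reference point for what "proof generation and verification" means at the protocol level, here is a toy non-interactive Schnorr proof of knowledge of a discrete logarithm, a standard sigma-protocol building block for ZKP systems. The group parameters are small demo values chosen for illustration; this is not the paper's construction and is not production-safe.

```python
import hashlib
import secrets

# Toy non-interactive Schnorr proof (Fiat-Shamir transform): the prover shows
# knowledge of x with y = G^x mod P without revealing x. Demo parameters only.
P = 2**127 - 1          # prime modulus (Mersenne prime, for the demo)
G = 3                   # generator (assumed for illustration)
Q = P - 1               # exponent range used by the demo

def prove(secret_x):
    y = pow(G, secret_x, P)                     # public key
    r = secrets.randbelow(Q)
    t = pow(G, r, P)                            # commitment
    c = int.from_bytes(hashlib.sha256(f"{y}{t}".encode()).digest(), "big") % Q
    s = (r + c * secret_x) % Q                  # response hides x behind r
    return y, t, s

def verify(y, t, s):
    c = int.from_bytes(hashlib.sha256(f"{y}{t}".encode()).digest(), "big") % Q
    # Accept iff G^s == t * y^c (mod P), which holds exactly when s = r + c*x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

Proof generation costs two modular exponentiations and verification two more, which is why frameworks in this space focus on optimizing and batching exactly these operations.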
{"title":"Artificial intelligence-enhanced zero-knowledge proofs for privacy-preserving digital forensics in cloud environments","authors":"Khizar Hameed , Faiqa Maqsood , Zhenfei Wang","doi":"10.1016/j.jnca.2025.104331","DOIUrl":"10.1016/j.jnca.2025.104331","url":null,"abstract":"<div><div>This paper proposed an Artificial Intelligence (AI) enhanced Zero Knowledge Proofs (ZKPs) based comprehensive framework used to improve security, privacy, scalability and efficiency in forensic investigations for the multi-cloud environment, a growing concern for cybersecurity and digital forensics domains. With the growing invulnerability of data storage and inefficient processing in cloud computing landscapes, forensic investigations confront privacy preservation, data integrity, and interoperability issues amongst various cloud providers. Despite existing frameworks, there are few adaptive solutions that holistically solve these challenges. To address such issues and challenges, we propose a suite of frameworks, including an Adaptive Multi-Cloud Forensic Integration Framework (A-MCFIF), Multi-Factor Access Control Framework (MACF), Adaptive ZKP Optimization Framework (AZOF), and Privacy Enhanced Data Security Framework (PDSF) to bridge this gap. Incorporating AI-enhanced ZKP and Multi-Factor Authentication (MFA), these frameworks secure data and improve the efficiency of proof generation and verification while meeting privacy regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). Our extensive evaluation of the proposed framework included computing efficiency, memory consumption, data handling efficiency, scalability, overall performance, and cost-effectiveness. We also analyse verification latency to assess the framework’s real-time processing capabilities, which overcome existing solutions. 
Furthermore, our research includes cloud-specific threat models such as insider threats and data breaches and shows the benefits of the proposed framework for counteracting these risks by proving mathematical and empirical security against privacy breaches. Finally, we bring new insights and contribute to the development of secure, privacy-compliant, and efficient forensic processes, which are elaborated as a comprehensive solution for more reconstructive forensic initiatives in increasingly sophisticated cloud environments.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104331"},"PeriodicalIF":8.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16 | DOI: 10.1016/j.jnca.2025.104334
Xinyue Jiang , Chunming Wu , Zhengyan Zhou , Di Wang , Dezhang Kong , Muhammad Khurram Khan , Xuan Liu
To acquire per-hop flow-level information, existing works have made significant contributions to offloading network measurement onto data center switches. Nevertheless, they still face challenges due to increasingly complex measurement tasks and massive network traffic. In this paper, we introduce FlowTracker, a flow measurement primitive in the data plane. Our key innovation is a hash-based data structure with constant size and collision resolution, which allows fine-grained and real-time monitoring of various flow statistics. We have fully implemented a FlowTracker prototype on a testbed and used real-world packet traces to evaluate its performance. The results demonstrate FlowTracker's efficiency across different measurement tasks. For example, with ∼0.5 MB of memory, FlowTracker can accurately identify 98% of heavy hitters out of 25K flows, with an average relative error of 1.28%. It also achieves 92.27% higher accuracy in packet delay estimation and 121.83% higher flow set coverage compared to competitors with only 64 KB of memory. Furthermore, FlowTracker imposes minimal overhead, requiring just ∼0.04% extra bandwidth for large-scale network processing. With these capabilities, FlowTracker can provide network operators with deep insights into, and efficient flow control of, their networks.
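A constant-size hash table with a simple collision-resolution policy conveys the flavor of such a data-plane structure; the "keep the heavier flow, decay the incumbent" eviction rule below is a common approximation heuristic chosen for illustration, not FlowTracker's actual design.

```python
import zlib

# Minimal sketch of a constant-size hash table for per-flow counters: memory
# is fixed at nbuckets slots regardless of flow count, and bucket collisions
# are resolved by favoring the heavier flow (illustrative policy).
class FlowTable:
    def __init__(self, nbuckets=1024):
        self.buckets = [None] * nbuckets      # each slot: [flow_key, count]

    def _index(self, key):
        return zlib.crc32(key.encode()) % len(self.buckets)

    def update(self, key, inc=1):
        idx = self._index(key)
        slot = self.buckets[idx]
        if slot is None or slot[0] == key:
            self.buckets[idx] = [key, (slot[1] if slot else 0) + inc]
        elif inc > slot[1]:
            self.buckets[idx] = [key, inc]    # collision: evict the lighter flow
        else:
            slot[1] -= inc                    # otherwise decay the incumbent

    def query(self, key):
        slot = self.buckets[self._index(key)]
        return slot[1] if slot and slot[0] == key else 0
```

The fixed memory footprint is what makes structures like this deployable on switch ASICs: updates touch one bucket per packet, and accuracy degrades gracefully for light flows while heavy hitters are retained.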
{"title":"FlowTracker: A refined and versatile data plane measurement approach","authors":"Xinyue Jiang , Chunming Wu , Zhengyan Zhou , Di Wang , Dezhang Kong , Muhammad Khurram Khan , Xuan Liu","doi":"10.1016/j.jnca.2025.104334","DOIUrl":"10.1016/j.jnca.2025.104334","url":null,"abstract":"<div><div>To acquire per-hop flow level information, existing works have made significant contributions to offloading network measurement onto data center switches. Despite this, they still pose challenges due to increasingly complex measurement tasks and massive network traffic. In this paper, we introduce FlowTracker, a flow measurement primitive in the data plane. Our key innovation is a hash-based data structure with constant size and collision resolution, which allows fine-grained and real-time monitoring of various flow statistics. We have fully implemented a FlowTracker prototype on a testbed and used real-world packet traces to evaluate its performance. The results demonstrate FlowTracker’s efficiency under different measurement tasks. For example, with <span><math><mo>∼</mo></math></span>0.5 MB of memory, FlowTracker can accurately estimate 98% heavy hitter out of 25K flows, with an average relative error of 1.28%. It also achieves 92.27% higher accuracy in packet delay estimation and 121.83% higher flow set coverage compared to competitors with only 64 KB of memory. Furthermore, FlowTracker imposes minimal overhead, requiring just <span><math><mo>∼</mo></math></span>0.04% extra bandwidth for large-scale network processing. 
With these capabilities, FlowTracker can provide network operators with deep insights and efficient flow control of their networks.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104334"},"PeriodicalIF":8.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145134833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16, DOI: 10.1016/j.jnca.2025.104328
Wenrui Jiang, Yongjian Liao, Qishan Gao, Han Xu, Hongwei Wang
Data collaboration allows multiple parties to jointly share and modify data stored in the cloud server. Because unauthorized users may tamper with requests sent by authorized users, replacing their content with whatever the attackers wish to send, secure data collaboration in cloud computing requires both integrity protection of requests and precise privilege verification of users. However, while maintaining data integrity, it is difficult for current signature schemes to satisfy the following demands simultaneously: fine-grained access control, high scalability, a flexible and controllable hierarchical delegation mechanism, and efficient signing and verification. Therefore, we designed a scalable and flexible hierarchical attribute-based signature (HABS) model and proposed a signing-policy HABS construction that uses a linear secret sharing scheme to build the access structure. Furthermore, we proved the unforgeability of our HABS scheme in the standard model. We also analyzed and tested the performance of our HABS scheme against a related scheme, finding that ours incurs less signing computation in large-scale systems with complex policies. Finally, we provided a concrete application scenario of HABS for data collaboration based on cloud computing.
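The signing policy above is encoded with a linear secret sharing scheme (LSSS). As a rough illustration of the linear-reconstruction idea behind LSSS, the sketch below implements its threshold special case, Shamir sharing over a prime field; the field size, threshold, and polynomial coefficients are illustrative choices, not values from the paper.

```python
# Minimal Shamir (t, n) sharing over a prime field: the threshold special
# case of the linear secret sharing schemes used to encode signing policies.
P = 2_147_483_647  # a Mersenne prime (illustrative field size)

def share(secret, t, n, coeffs):
    """Split `secret` into n shares; any t of them reconstruct it.
    `coeffs` are the t-1 random polynomial coefficients (fixed here)."""
    poly = [secret] + list(coeffs)           # degree t-1 polynomial
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42, t=3, n=5, coeffs=[7, 11])
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 42
```

A full LSSS generalizes this: shares are rows of a share-generating matrix, and any authorized attribute set yields a linear combination recovering the secret.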
{"title":"Secure and efficient data collaboration in cloud computing: Flexible delegation via hierarchical attribute-based signature","authors":"Wenrui Jiang, Yongjian Liao, Qishan Gao, Han Xu, Hongwei Wang","doi":"10.1016/j.jnca.2025.104328","DOIUrl":"10.1016/j.jnca.2025.104328","url":null,"abstract":"<div><div>Data collaboration allows multiple parties to jointly share and modify data stored in the cloud server. As unauthorized users may create or modify the shared data as they want by tampering with requests sent by authorized users to replace them with what the unauthorized users want to send, secure data collaboration in cloud computing requires data integrity protection of requests and precise privilege verification of users. However, while maintaining data integrity, it is difficult for current signature schemes to achieve the following demands: fine-grained access control, high scalability, a flexible and controllable hierarchical delegation mechanism, and efficient signing and verification. Therefore, we designed a scalable and flexible hierarchical attribute-based signature (HABS) model and proposed a signing policy HABS construction using the linear secret sharing scheme to construct an access structure. Furthermore, we proved the unforgeability of our HABS scheme in the standard model. We also analyzed and tested the performance of our HABS scheme and related scheme, and we found that our scheme has less signing computation consumption in large-scale systems with complex policies. 
Finally, we provided a specified application scenario of HABS used in data collaboration based on cloud computing.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104328"},"PeriodicalIF":8.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16, DOI: 10.1016/j.jnca.2025.104327
Lifan Pan , Hao Guo , Wanxin Li
Federated Learning (FL) is an emerging machine learning paradigm that enables multiple parties to collaboratively train a shared model while preserving data privacy. However, malicious clients pose a significant threat to FL systems: their interference not only degrades model performance but also exacerbates the unfairness of the global model caused by data heterogeneity, leading to inconsistent performance across clients. We propose C-PFL, a committee-based personalized FL framework that improves both robustness and personalization. In contrast to prior approaches such as FedProto (which relies on the exchange of class prototypes), Ditto (which employs regularization between global and local models), and FedBABU (which freezes the classifier head during federated training), C-PFL introduces two principal innovations. First, C-PFL adopts a split-model design, updating only a shared backbone during global training while fine-tuning a personalized head locally. Second, a dynamic committee of high-contribution clients validates submitted updates without requiring public data, filtering low-quality or adversarial contributions before aggregation. Experiments on MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and AGNews show that C-PFL outperforms six state-of-the-art personalized FL baselines by up to 2.89% in non-adversarial settings, and by as much as 6.96% with 40% malicious clients. These results demonstrate C-PFL's ability to sustain high accuracy and stability across diverse non-IID scenarios, even with significant adversarial participation.
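As a hedged illustration of committee-style update filtering (the paper's actual validation rule is not given in the abstract), the sketch below scores each client update by cosine similarity to the mean update of a committee of trusted clients and drops dissimilar ones before aggregation; `committee_filter` and the similarity threshold are hypothetical names and choices, not C-PFL's.

```python
import numpy as np

def committee_filter(updates, committee_idx, sim_threshold=0.0):
    """Keep indices of updates whose cosine similarity to the mean
    committee update meets the threshold (hypothetical criterion)."""
    ref = np.mean([updates[i] for i in committee_idx], axis=0)
    kept = []
    for j, u in enumerate(updates):
        denom = np.linalg.norm(u) * np.linalg.norm(ref)
        sim = float(u @ ref) / denom if denom > 0 else 0.0
        if sim >= sim_threshold:
            kept.append(j)
    return kept

# Honest updates point roughly the same way; one adversarial update is flipped.
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]),
           np.array([1.1, 0.9]), np.array([-1.0, -1.0])]
committee = [0, 1]  # high-contribution clients act as validators
print(committee_filter(updates, committee))  # → [0, 1, 2]; index 3 is dropped
```

In C-PFL this kind of check is applied only to the shared backbone updates, since the personalized head never leaves the client.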
{"title":"C-PFL: A committee-based personalized federated learning framework","authors":"Lifan Pan , Hao Guo , Wanxin Li","doi":"10.1016/j.jnca.2025.104327","DOIUrl":"10.1016/j.jnca.2025.104327","url":null,"abstract":"<div><div>Federated Learning (FL) is an emerging machine learning paradigm that enables multiple parties to train a shared model while preserving data privacy collaboratively. However, malicious clients pose a significant threat to FL systems. This interference not only deteriorates model performance but also exacerbates the unfairness of the global model caused by data heterogeneity, leading to inconsistent performance across clients. We propose C-PFL, a committee-based personalized FL framework that improves both robustness and personalization. In contrast to prior approaches such as FedProto (which relies on the exchange of class prototypes), Ditto (which employs regularization between global and local models), and FedBABU (which freezes the classifier head during federated training), C-PFL introduces two principal innovations. C-PFL adopts a split-model design, updating only a shared backbone during global training while fine-tuning a personalized head locally. A dynamic committee of high-contribution clients validates submitted updates without public data, filtering low-quality or adversarial contributions before aggregation. Experiments on MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and AGNews show that C-PFL outperforms six state-of-the-art personalized FL baselines by up to 2.89% in non-adversarial settings, and by as much as 6.96% under 40% malicious clients. 
These results demonstrate C-PFL’s ability to sustain high accuracy and stability across diverse non-IID scenarios, even with significant adversarial participation.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104327"},"PeriodicalIF":8.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-14, DOI: 10.1016/j.jnca.2025.104325
Wei Wei , Jie Huang , Qinghui Zhang , Tao Ma , Peng Li
Network infrastructure protection is critical for ensuring robustness against attacks and failures, yet existing approaches fundamentally limit their scope by addressing either node or edge vulnerabilities in isolation — an unrealistic assumption given real-world scenarios where both element types may fail simultaneously. Our work makes three key advances beyond the current state of the art. First, we introduce the novel concept of hybrid connectivity as a unified robustness metric that properly accounts for concurrent node-edge failures, demonstrating through theoretical analysis that traditional single-element metrics require prohibitively high connectivity thresholds. Second, we develop the first practical solution for large-scale networks via our hybrid cut-tree mapping algorithm, which employs an extended node cut formulation with dynamic programming to identify all vulnerable node-edge combinations in linear time — a dramatic complexity reduction from the exponential scaling of existing linear programming methods. Third, we prove and exploit a fundamental structural property that shielding any edge spanning tree plus leaf edges guarantees target hybrid connectivity, enabling our edge spanning tree algorithm to deliver near-optimal solutions at unprecedented scale. Experimental validation confirms our approach maintains 100% protection effectiveness (with no more than 6% cost overhead versus optimal) in small graphs while achieving 99.9% protection coverage in large-scale networks — outperforming all existing heuristics in protection cost while providing a 10^5-times speedup over traditional methods.
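A loose sketch of the spanning-tree-plus-leaf-edges shielding idea, under the simplifying assumption that "leaf edges" means edges incident to leaves of the chosen tree (the paper's exact construction may differ); `shield_set` is a hypothetical helper, not the authors' algorithm.

```python
from collections import deque

def shield_set(n, edges):
    """Build a BFS spanning tree of an undirected graph on nodes 0..n-1,
    then add every edge incident to a tree leaf to the shielded set."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    tree, seen, q = set(), {0}, deque([0])
    while q:                      # BFS from node 0 builds the spanning tree
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.add(frozenset((u, v)))
                q.append(v)
    deg = {v: 0 for v in range(n)}
    for e in tree:
        for v in e:
            deg[v] += 1
    leaves = {v for v, d in deg.items() if d == 1}  # tree leaves
    extra = {frozenset((u, v)) for u, v in edges
             if u in leaves or v in leaves}
    return tree | extra

# 4-cycle: the tree has 3 edges; the leaves pull in the remaining cycle edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(shield_set(4, edges)))  # → 4
```

The appeal of such a rule is that it is near-linear in graph size, which is what makes shielding tractable at the scales the abstract targets.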
{"title":"Hybrid connectivity-oriented efficient shielding for robustness enhancement in large-scale networks","authors":"Wei Wei , Jie Huang , Qinghui Zhang , Tao Ma , Peng Li","doi":"10.1016/j.jnca.2025.104325","DOIUrl":"10.1016/j.jnca.2025.104325","url":null,"abstract":"<div><div>Network infrastructure protection is critical for ensuring robustness against attacks and failures, yet existing approaches fundamentally limit their scope by addressing either node or edge vulnerabilities in isolation — an unrealistic assumption given real-world scenarios where both element types may fail simultaneously. Our work makes three key advances beyond current state-of-the-art: First, we introduce the novel concept of hybrid connectivity as a unified robustness metric that properly accounts for concurrent node-edge failures, demonstrating through theoretical analysis that traditional single-element metrics require prohibitively high connectivity thresholds. Second, we develop the first practical solution for large-scale networks via our hybrid cut-tree mapping algorithm, which employs an extended node cut formulation with dynamic programming to identify all vulnerable node-edge combinations in linear time — a dramatic complexity reduction from the exponential scaling of existing linear programming methods. Third, we prove and exploit a fundamental structural property that shielding any edge spanning tree plus leaf edges guarantees target hybrid connectivity, enabling our edge spanning tree algorithm to deliver near-optimal solutions at unprecedented scale. 
Experimental validation confirms our approach maintains 100% protection effectiveness (with no more than 6% cost overhead versus optimal) in small graphs while achieving 99.9% protection coverage in large-scale networks — outperforming all existing heuristics in protection cost while providing a <span><math><mrow><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>5</mn></mrow></msup></mrow></math></span> times speedup over traditional methods.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104325"},"PeriodicalIF":8.0,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145093955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}