Understanding the Wi-Fi and VR streaming interplay: A comprehensible simulation and experimental study
Pub Date: 2025-11-14 | DOI: 10.1016/j.jnca.2025.104391
Boris Bellalta, Miguel Casasnovas, Ferran Maura, Alejandro Rodríguez, Juan S. Marquerie, Pablo L. García, Francesc Wilhelmi, Josep Blat
This paper evaluates the performance of Wi-Fi networks for interactive Virtual Reality (VR) streaming with adaptive bitrate control. It focuses on the interaction between VR traffic characteristics and Wi-Fi link-layer mechanisms, studying how this relationship impacts key performance indicators such as throughput, latency, and user scalability. We begin by outlining the architecture, operation, traffic patterns, and performance demands of cloud/edge split-rendering VR systems. Then, using simulations, we investigate both single-user scenarios — examining the effects of modulation and coding schemes (MCSs) and user-to-access point (AP) distance on bitrate sustainability and latency — and multi-user scenarios, assessing how many concurrent VR users a single AP can support. Results show that the use of adaptive bitrate (ABR) streaming, as exemplified by our NeSt-VR algorithm, significantly outperforms constant bitrate (CBR) approaches, enhancing user capacity and resilience to changing channel propagation conditions. To validate the simulation findings, we conduct an experimental evaluation using Rooms, an open-source eXtended Reality (XR) content creation platform. The experimental results closely match the simulations, reinforcing the conclusion that adaptive bitrate control substantially improves Wi-Fi’s ability to support reliable, multi-user interactive VR streaming.
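The NeSt-VR controller itself is not specified in this abstract; as a rough illustration of what adaptive bitrate control means for interactive VR, the minimal sketch below lowers the encoder bitrate when the measured round-trip time rises above a target and probes upward when there is headroom. All names, units, and thresholds (target_rtt_ms, step_mbps) are illustrative assumptions, not NeSt-VR's actual logic.

```python
# Hedged sketch of a generic ABR controller for interactive VR streaming.
# This is NOT the paper's NeSt-VR algorithm; every name, unit, and
# threshold below is an illustrative assumption.

def adapt_bitrate(current_mbps, rtt_ms, target_rtt_ms=20.0,
                  min_mbps=10.0, max_mbps=100.0, step_mbps=5.0):
    """Return the next video bitrate given the measured round-trip time."""
    if rtt_ms > target_rtt_ms * 1.2:        # congestion: back off
        return max(min_mbps, current_mbps - step_mbps)
    if rtt_ms < target_rtt_ms * 0.8:        # headroom: probe upward
        return min(max_mbps, current_mbps + step_mbps)
    return current_mbps                     # within the comfort band: hold

# Example: a loaded Wi-Fi link pushes RTT to 30 ms, so the controller
# steps the bitrate down from 60 to 55 Mbps.
print(adapt_bitrate(60.0, 30.0))  # -> 55.0
```

A CBR source, by contrast, would keep sending at 60 Mbps regardless of the measured RTT, which is why the abstract reports it degrading faster under contention and channel variation.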
{"title":"Understanding the Wi-Fi and VR streaming interplay: A comprehensible simulation and experimental study","authors":"Boris Bellalta, Miguel Casasnovas, Ferran Maura, Alejandro Rodríguez, Juan S. Marquerie, Pablo L. García, Francesc Wilhelmi, Josep Blat","doi":"10.1016/j.jnca.2025.104391","DOIUrl":"10.1016/j.jnca.2025.104391","url":null,"abstract":"<div><div>This paper evaluates the performance of Wi-Fi networks for interactive Virtual Reality (VR) streaming with adaptive bitrate control. It focuses on the interaction between VR traffic characteristics and Wi-Fi link-layer mechanisms, studying how this relationship impacts key performance indicators such as throughput, latency, and user scalability. We begin by outlining the architecture, operation, traffic patterns, and performance demands of cloud/edge split-rendering VR systems. Then, using simulations, we investigate both single-user scenarios — examining the effects of modulation and coding schemes (MCSs) and user-to-access point (AP) distance on bitrate sustainability and latency — and multi-user scenarios, assessing how many concurrent VR users a single AP can support. Results show that the use of adaptive bitrate (ABR) streaming, as exemplified by our NeSt-VR algorithm, significantly outperforms constant bitrate (CBR) approaches, enhancing user capacity and resilience to changing channel propagation conditions. To validate the simulation findings, we conduct an experimental evaluation using Rooms, an open-source eXtended Reality (XR) content creation platform. The experimental results closely match the simulations, reinforcing the conclusion that adaptive bitrate control substantially improves Wi-Fi’s ability to support reliable, multiuser interactive VR streaming.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104391"},"PeriodicalIF":8.0,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145531234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-preserving trajectory data publication: A distributed approach without trusted servers
Pub Date: 2025-11-14 | DOI: 10.1016/j.jnca.2025.104388
Jong Wook Kim, Beakcheol Jang
The widespread adoption of mobile devices, coupled with the rapid advancement of GPS and positioning technologies, has led to a significant increase in the collection of trajectory data. This trajectory data serves as a critical resource for numerous applications, leading to an increasing demand for its sharing and publication. However, the sensitive nature of trajectory data poses significant privacy risks, necessitating the development of privacy-preserving publication schemes. Differential privacy (DP) has emerged as a leading approach for protecting individual trajectories during data publication, but many existing approaches rely on a trusted central server, an assumption that is unrealistic in practical settings. In this paper, we present DistTraj, a novel distributed framework for privacy-preserving trajectory data publishing that eliminates the need for a trusted central server. The proposed framework leverages a distributed clustering scheme to generalize trajectories without relying on a centralized trusted server. To improve the effectiveness of DP in this decentralized setting, we propose a method to establish a tighter bound on the global sensitivity of the DP mechanism within the clustering process. Through extensive experiments on real-world datasets, we demonstrate that the proposed DistTraj framework, even without relying on a trusted central server, achieves performance comparable to state-of-the-art central server-based methods. These results show that DistTraj successfully balances privacy preservation and data utility in decentralized environments, where trusting a central server is impractical or infeasible.
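DistTraj's tighter global-sensitivity bound is not detailed in the abstract, but its payoff can be seen in the standard Laplace mechanism, where the noise scale is the sensitivity divided by the privacy budget epsilon; a smaller proven sensitivity therefore means proportionally less noise at the same epsilon. A minimal sketch of that textbook mechanism, with illustrative numbers (this is not DistTraj itself):

```python
# Standard Laplace mechanism for epsilon-DP release of a numeric query.
# Noise scale = sensitivity / epsilon, so a tighter sensitivity bound
# directly shrinks the noise at a fixed privacy budget.
import numpy as np

def private_count(true_count, sensitivity, epsilon):
    """Release a count under epsilon-DP via the Laplace mechanism."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# With epsilon = 1.0, halving the proven sensitivity (2 -> 1) halves the
# expected noise magnitude added to, e.g., a cluster's trajectory count.
print(private_count(120, sensitivity=1, epsilon=1.0))
```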
{"title":"Privacy-preserving trajectory data publication: A distributed approach without trusted servers","authors":"Jong Wook Kim , Beakcheol Jang","doi":"10.1016/j.jnca.2025.104388","DOIUrl":"10.1016/j.jnca.2025.104388","url":null,"abstract":"<div><div>The widespread adoption of mobile devices, coupled with the rapid advancement of GPS and positioning technologies, has led to a significant increase in the collection of trajectory data. This trajectory data serves as a critical resource for numerous applications, leading to an increasing demand for its sharing and publication. However, the sensitive nature of trajectory data poses significant privacy risks, necessitating the development of privacy-preserving publication schemes. Differential privacy (DP) has emerged as a leading approach for protecting individual trajectories during data publication, but many existing approaches rely on a trusted central server, an assumption that is unrealistic in practical settings. In this paper, we present DistTraj, a novel distributed framework for privacy-preserving trajectory data publishing that eliminates the need for a trusted central server. The proposed framework leverages a distributed clustering scheme to generalize trajectories without relying on a centralized trusted server. To improve the effectiveness of DP in this decentralized setting, we propose a method to establish a tighter bound on the global sensitivity of the DP mechanism within the clustering process. Through extensive experiments on real-world datasets, we demonstrate that the proposed DistTraj framework, even without relying on a trusted central server, achieves performance comparable to state-of-the-art central server-based methods. These results show that DistTraj successfully balances privacy preservation and data utility in decentralized environments, where trusting a central server is impractical or infeasible.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104388"},"PeriodicalIF":8.0,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145531184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic searchable symmetric encryption with efficient conjunctive query and non-interactive real deletion
Pub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104387
Zhengwei Ren, Pei He, Rongwei Yu, Li Deng, Yan Tong, Shiwei Xu
Dynamic searchable symmetric encryption (DSSE) enables users to perform update and search operations over encrypted data on cloud servers. However, many DSSE schemes cannot efficiently perform conjunctive queries containing multiple keywords, limiting their search capabilities, and those schemes that do support conjunctive queries fail to achieve real deletion, degrading the efficiency of subsequent searches. In this paper, we propose a DSSE scheme that simultaneously supports conjunctive queries and non-interactive real deletion. For a conjunctive query containing multiple keywords, we reorder the keywords so that the keyword contained in the fewest documents comes first. The documents containing this keyword are located, and the remaining keywords are then checked against this candidate set to obtain the final search result. Moreover, a cuckoo filter is adopted to store the ciphertext to be searched, making conjunctive queries efficient. We deploy two search databases on the cloud server to achieve non-interactive real deletion: thanks to these two databases, deleted ciphertext is physically removed from the cloud server with no impact on subsequent searches, improving their efficiency. Our scheme uses only a few hash functions and a pseudorandom function while still achieving forward and backward privacy. We conduct a formal security analysis and extensive experimental evaluations, showing that our scheme has efficiency advantages in both the update and search processes.
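The query-planning idea the abstract describes, evaluating the keyword with the fewest matching documents first, can be illustrated over a plaintext inverted index. The real scheme operates on encrypted tokens and a cuckoo filter, so this sketch captures only the ordering logic; all names are hypothetical.

```python
# Hedged sketch of "rarest keyword first" conjunctive-query planning over
# a plaintext inverted index. The paper's scheme does this over encrypted
# structures; only the reordering-and-filtering idea is shown here.

def conjunctive_search(index, keywords):
    """index: dict mapping keyword -> set of document ids."""
    # Put the keyword with the smallest posting list first, so the
    # candidate set starts as small as possible.
    ordered = sorted(keywords, key=lambda kw: len(index.get(kw, set())))
    candidates = set(index.get(ordered[0], set()))
    # Check the remaining keywords only against the candidate set.
    for kw in ordered[1:]:
        candidates &= index.get(kw, set())
        if not candidates:
            break
    return candidates

index = {"alpha": {1, 2, 3, 4}, "beta": {2, 3}, "gamma": {2, 3, 4}}
print(conjunctive_search(index, ["alpha", "beta", "gamma"]))  # -> {2, 3}
```

Starting from the smallest posting list keeps the candidate set, and hence the number of membership checks against the remaining keywords, as small as possible.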
{"title":"Dynamic searchable symmetric encryption with efficient conjunctive query and non-interactive real deletion","authors":"Zhengwei Ren , Pei He , Rongwei Yu , Li Deng , Yan Tong , Shiwei Xu","doi":"10.1016/j.jnca.2025.104387","DOIUrl":"10.1016/j.jnca.2025.104387","url":null,"abstract":"<div><div>Dynamic searchable symmetric encryption (DSSE) enables users to perform update and search operations over encrypted data on cloud servers. However, many DSSE schemes are unable to efficiently perform conjunctive queries containing multiple keywords, limiting their search capabilities. Those DSSE schemes supporting conjunctive query fail to achieve real deletion, affecting the efficiencies of subsequent searches. In this paper, we propose a DSSE scheme supporting conjunctive query and non-interactive real deletion simultaneously. For a conjunctive query containing multiple keywords, we adjust the positions of these keywords so that the keyword contained by the least number of document(s) is at the forefront of the conjunctive query. The document(s) containing this keyword are then located, and on the basis of the document(s) the remaining keywords are checked to obtain the final search result. Moreover, cuckoo filter is adopted to store the ciphertext to be searched, making the conjunctive query efficient. We deploy two search databases on the cloud server to achieve non-interactive real deletion. Benefiting from these two databases, the deleted ciphertext will be physically removed from the cloud server with no impact on subsequent searches, improving search efficiencies of subsequent searches. Our scheme only utilizes a few hash functions and a pseudorandom function, while the forward privacy and backward privacy are still achieved. We conduct a formal security analysis and extensive experimental evaluations, showing that our scheme has efficiency advantages in both update and search processes.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104387"},"PeriodicalIF":8.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145531185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task scheduling of cloud computing system by frilled lizard optimization with time varying expansion mixed function oscillation and horned lizard camouflage strategy
Pub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104386
Hao-Ming Song, Si-Wen Zhang, Jie-Sheng Wang, Cheng Xing, Yu-Feng Sun, Yu-Cai Wang, Xiao-Fei Sui
With the increasing complexity and scale of cloud computing systems, task scheduling optimization has become critical for improving resource utilization, enhancing service reliability, and reducing overall energy consumption. Traditional swarm intelligence algorithms often struggle to achieve an effective balance between global exploration and local exploitation, leading to premature convergence or sub-optimal solutions, particularly in large-scale and high-dimensional problem scenarios. To address these challenges, this study proposes a Time Varying Mixed Function Frilled Lizard Optimization algorithm (TMCFLO) that incorporates a horned lizard-inspired camouflage strategy to increase population diversity and prevent premature convergence, alongside a novel mixed function oscillation mechanism, combining sine, cosine, power, logarithm, and Gaussian functions, to enhance local search precision and convergence efficiency. A time-varying expansion factor is further introduced to dynamically regulate the oscillation amplitude, ensuring adaptive adjustment of search behavior throughout the optimization process. Extensive evaluations on the CEC 2022 benchmark set demonstrate that TMCFLO outperforms classical algorithms, including PSO, ACO, WOA, AOA, POA, ZOA, HO, RLLPSO, and IHBA, achieving up to 26 percent improvement in optimization accuracy. In practical cloud computing task scheduling experiments with 1500 and 3000 tasks, TMCFLO achieves the lowest single-task energy consumption of 0.2196, the lowest total energy consumption of 658.80, and the highest energy efficiency of 4.5569, confirming its effectiveness, scalability, and energy efficiency for complex cloud scheduling problems.
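The exact TMCFLO update rule is not given in the abstract; the sketch below only illustrates how an oscillation term built from the named function families (sine, cosine, power, logarithm, Gaussian) can be damped by a time-varying expansion factor, so that early iterations explore widely and late iterations fine-tune. The coefficients and decay schedule are assumptions.

```python
# Hedged sketch of a "mixed function oscillation" term damped by a
# time-varying expansion factor. This is NOT the TMCFLO update rule;
# all weights and the linear decay schedule are illustrative.
import math
import random

def mixed_oscillation(t, t_max):
    """Oscillation perturbation for iteration t out of t_max."""
    progress = t / t_max
    expansion = 2.0 * (1.0 - progress)   # time-varying factor: shrinks to 0
    wave = math.sin(2 * math.pi * progress) + math.cos(4 * math.pi * progress)
    power_term = progress ** 2
    log_term = math.log(1.0 + progress)
    gauss = math.exp(-((progress - 0.5) ** 2) / 0.08)
    return expansion * (wave + power_term + log_term + gauss) * random.uniform(-1, 1)

# Early-stage amplitude (usually large: exploration) versus late-stage
# amplitude (near zero: exploitation), since the expansion factor decays.
print(abs(mixed_oscillation(10, 1000)))
print(abs(mixed_oscillation(990, 1000)))
```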
{"title":"Task scheduling of cloud computing system by frilled lizard optimization with time varying expansion mixed function oscillation and horned lizard camouflage strategy","authors":"Hao-Ming Song, Si-Wen Zhang, Jie-Sheng Wang, Cheng Xing, Yu-Feng Sun, Yu-Cai Wang, Xiao-Fei Sui","doi":"10.1016/j.jnca.2025.104386","DOIUrl":"10.1016/j.jnca.2025.104386","url":null,"abstract":"<div><div>With the increasing complexity and scale of cloud computing systems, task scheduling optimization has become critical for improving resource utilization, enhancing service reliability, and reducing overall energy consumption. Traditional swarm intelligence algorithms often struggle to achieve an effective balance between global exploration and local exploitation, leading to premature convergence or sub-optimal solutions, particularly in large-scale and high-dimensional problem scenarios. To address these challenges, this study proposes a Time Varying Mixed Function Frilled Lizard Optimization algorithm (TMCFLO) that incorporates a horned lizard-inspired camouflage strategy to increase population diversity and prevent premature convergence, alongside a novel mixed function oscillation mechanism, combining sine, cosine, power, logarithm, and Gaussian functions, to enhance local search precision and convergence efficiency. A time-varying expansion factor is further introduced to dynamically regulate oscillation amplitude, ensuring adaptive adjustment of search behavior throughout the optimization process. Extensive evaluations on the CEC 2022 benchmark set demonstrate that TMCFLO outperforms classical algorithms, including PSO, ACO, WOA, AOA, POA, ZOA, HO, RLLPSO and IHBA, achieving up to 26 percent improvement in optimization accuracy. In practical cloud computing task scheduling experiments with 1500 and 3000 tasks, TMCFLO achieves the lowest single task energy consumption of 0.2196, the lowest total energy consumption of 658.80, and the highest energy efficiency of 4.5569, confirming its effectiveness, scalability, and energy-efficient superiority for complex cloud scheduling problems.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104386"},"PeriodicalIF":8.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design, implementation, and performance evaluation of a high-performance and high-precision NetFlow/IPFIX flow-monitoring system on a P4 hardware switch
Pub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104385
Shie-Yuan Wang, Tzu-Ching Lin
High-performance and high-precision flow monitoring is a crucial function for network management, network bandwidth usage accounting and billing, network security, network forensics, and other important tasks. Nowadays, many commercial switches/routers provide the sFlow, NetFlow, or IPFIX scheme for monitoring the flows traversing a network. sFlow is widely supported by many switches/routers because it uses a sampling-based method, which greatly reduces both the CPU processing load on a switch/router and the network bandwidth required to transmit flow data to a remote collector. However, many small flows may go undetected, and the estimated flow data (e.g., the packet count and byte count) for detected flows can deviate significantly from the ground truth.
NetFlow, which is Cisco Systems’ proprietary technology, does not use a sampling-based method by default; instead, it tries to collect complete and correct flow data for every flow. However, as link speeds and flow arrival rates continue to increase, NetFlow also provides a sampling-based option to reduce the CPU utilization of the switch/router. Because NetFlow is proprietary, an Internet Engineering Task Force (IETF) working group has defined IPFIX as an open flow information export protocol based on NetFlow Version 9. The requirements for IPFIX are defined in RFC 3917. IPFIX is essentially the same as NetFlow Version 9.
Because of its high CPU demand, NetFlow is currently supported only on very high-end switches/routers, and its design and implementation on these commercial devices are not published in the literature. In this paper, we design and implement a high-performance and high-precision NetFlow/IPFIX system on a Programming Protocol-independent Packet Processors (P4) hardware switch. Based on a 20 Gbps playback of a packet trace gathered on an Internet backbone link, experimental results show that our novel method significantly outperforms the typical design and implementation method of NetFlow/IPFIX on a P4 hardware switch. For example, for the number of flows detected during the trace period, our method outperforms the typical method by a factor of 5.72; for the number of flows whose packet and byte counts are correctly counted, it does so by a factor of 8.57.
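The paper implements flow accounting in P4 hardware, which is not reproduced here; as a minimal illustration of the bookkeeping that NetFlow/IPFIX-style exact monitoring performs per packet (one record per five-tuple, with packet and byte counters), consider this Python sketch. The record layout and field names are assumptions for illustration only.

```python
# Minimal sketch of NetFlow/IPFIX-style exact per-flow accounting: one
# record per five-tuple, updated on every packet. The paper does this in
# the P4 data plane; this dict-based version shows only the bookkeeping.
from collections import defaultdict

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(packet):
    """Update the flow record for the packet's five-tuple."""
    key = (packet["src_ip"], packet["dst_ip"],
           packet["src_port"], packet["dst_port"], packet["proto"])
    rec = flows[key]
    rec["packets"] += 1
    rec["bytes"] += packet["length"]

account({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
         "src_port": 4321, "dst_port": 443, "proto": "TCP", "length": 1500})
print(dict(flows))
```

Sampling-based schemes such as sFlow update such records only for a subset of packets, which is exactly why small flows can be missed and counters can deviate from the ground truth.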
{"title":"Design, implementation, and performance evaluation of a high-performance and high-precision NetFlow/IPFIX flow-monitoring system on a P4 hardware switch","authors":"Shie-Yuan Wang , Tzu-Ching Lin","doi":"10.1016/j.jnca.2025.104385","DOIUrl":"10.1016/j.jnca.2025.104385","url":null,"abstract":"<div><div>High-performance and high-precision flow monitoring is a crucial function for network management, network bandwidth usage accounting and billing, network security, network forensics, and other important tasks. Nowadays, many commercial switches/routers provide either sFlow, NetFlow, or IPFIX scheme for monitoring the flows traversing a network. sFlow is a scheme widely supported by many switches/routers due to its using a sampling-based method, which greatly reduces the CPU processing load on a switch/router and the network bandwidth required to transmit flow data to a remote collector. However, many small flows may go undetected and the estimated flow data (e.g., the packet count and byte count) for detected flows can significantly deviate from their ground truth.</div><div>NetFlow, which is Cisco Systems’ proprietary technology, does not use a sampling-based method by default. Instead, it tries to collect complete and correct flow data for every flow. However, as the link speed and the flow arrival rate continue to increase, NetFlow also provides a sampling-based option to reduce the CPU utilization of the switch/router. Because NetFlow is proprietary, an Internet Engineering Task Force (IETF) working group has defined IPFIX as an open flow information export protocol based on NetFlow Version 9. The requirements for IPFIX are defined in the RFC 3917 standards. Basically, IPFIX is the same as NetFlow Version 9.</div><div>Due to its high demand on the CPU of the switch/router, currently NetFlow is supported only on very high-end switches/routers and its design and implementation on these commercial switches/routers are not published in the literature. In this paper, we design and implement a high-performance and high-precision NetFlow/IPFIX system on a Programming Protocol-independent Packet Processors (P4) hardware switch. Based on a 20 Gbps playback of a packet trace gathered on an Internet backbone link, experimental results show that our novel method significantly outperforms the typical design and implementation method of NetFlow/IPFIX on a P4 hardware switch. For example, for the number of detected flows during the trace period, our method outperforms the typical method by a factor of 5.72. As for the number of flows whose packet and byte counts are correctly counted, our method outperforms the typical method by a factor of 8.57.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104385"},"PeriodicalIF":8.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145528949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIoV-IDS: SDN-enabled zero-trust framework for explainable intrusion detection in IoVs using Variational Autoencoders and EX-LSTM
Pub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104389
Muddasar Laghari, Yuanchang Zhong, Muhammad Junaid Tahir, Muhammad Adil
In response to cyber attacks targeting the Internet of Vehicles (IoV) ecosystem, we propose SIoV-DS, a secure framework addressing inter-vehicle communication, intra-vehicle networks, and infrastructure threats using a zero-trust approach. Vehicle data is first encoded with a Variational Autoencoder (V-AE) to mitigate inference attacks, then analyzed by an Extended Long Short-Term Memory (EX-LSTM) detector capable of identifying diverse attacks, including Denial of Service (DoS), spoofing, and malware. For interpretability, Shapley Additive Explanations (SHAP) provide insights into EX-LSTM decisions, assisting Security Operations Center (SOC) analysts. SIoV-DS is deployed over a Software-Defined Networking (SDN) architecture to ensure scalability. Evaluations on CIC-IoV2024 and Edge-IIoTset2022 datasets demonstrate high accuracy (99.78% and 95.01%, respectively), while inference-time analysis confirms feasibility for real-time detection, effectively securing the IoV ecosystem against advanced cyber threats.
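The V-AE and EX-LSTM architectures are not detailed in the abstract; the sketch below wires up a generic "encode with a VAE, classify with an LSTM" pipeline in PyTorch to make the two-stage design concrete. Layer sizes, input dimensions, and the five-class head are assumptions, not the paper's configuration.

```python
# Hedged sketch of an "encode with a VAE, classify with an LSTM" pipeline
# like the one the abstract outlines. Not the paper's V-AE/EX-LSTM; all
# dimensions and the two-stage wiring are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=40, latent=16):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent)
        self.logvar = nn.Linear(in_dim, latent)
    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)   # reparameterization trick

class Detector(nn.Module):
    def __init__(self, latent=16, hidden=32, classes=5):
        super().__init__()
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)
    def forward(self, z_seq):                      # (batch, time, latent)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1])               # classify from last step

enc, det = Encoder(), Detector()
x = torch.randn(8, 20, 40)       # 8 traffic windows of 20 feature frames
z = enc(x)                       # per-frame latent codes hide raw features
print(det(z).shape)              # -> torch.Size([8, 5]) attack-class logits
```

Encoding frames into a latent space before detection is what lets the framework expose only latent codes, rather than raw vehicle features, to downstream components.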
{"title":"SIoV-IDS: SDN-enabled zero-trust framework for explainable intrusion detection in IoVs using Variational Autoencoders and EX-LSTM","authors":"Muddasar Laghari , Yuanchang Zhong , Muhammad Junaid Tahir , Muhammad Adil","doi":"10.1016/j.jnca.2025.104389","DOIUrl":"10.1016/j.jnca.2025.104389","url":null,"abstract":"<div><div>In response to cyber attacks targeting the Internet of Vehicles (IoV) ecosystem, we propose <strong>SIoV-DS</strong>, a secure framework addressing inter-vehicle communication, intra-vehicle networks, and infrastructure threats using a zero-trust approach. Vehicle data is first encoded with a <em>Variational Autoencoder (V-AE)</em> to mitigate inference attacks, then analyzed by an <em>Extended Long Short-Term Memory (EX-LSTM)</em> detector capable of identifying diverse attacks, including Denial of Service (DoS), spoofing, and malware. For interpretability, <em>Shapley Additive Explanations (SHAP)</em> provide insights into EX-LSTM decisions, assisting Security Operations Center (SOC) analysts. SIoV-DS is deployed over a <em>Software-Defined Networking (SDN)</em> architecture to ensure scalability. Evaluations on <em>CIC-IoV2024</em> and <em>Edge-IIoTset2022</em> datasets demonstrate high accuracy (99.78% and 95.01%, respectively), while inference-time analysis confirms feasibility for real-time detection, effectively securing the IoV ecosystem against advanced cyber threats.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104389"},"PeriodicalIF":8.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145531187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal multi-access edge computing system deployment in private 5G networks for multi-story construction sites
Pub Date: 2025-11-13 | DOI: 10.1016/j.jnca.2025.104384
Shi-Yu Zhang, Chun-Cheng Lin, Zhen-Yin Annie Chen, Der-Jiunn Deng
Driven by the swift progression of smart construction, the number of sensors and smart devices on construction sites has increased dramatically, posing new challenges to data processing and communications. Conventional cloud computing frameworks can hardly meet the requirements for processing the enormous volume of real-time data from construction sites, while existing approaches to deploying multi-access edge computing (MEC) servers overlook the energy usage of MEC servers as well as the unique physical and network security requirements of the multi-story structure of complex construction sites. Therefore, this work presents a mathematical programming model for private 5G network MEC systems on smart construction sites that considers installation, connectivity, energy consumption, security maintenance, and cybersecurity, and solves it with a hybrid metaheuristic that combines the simplified harmony search (SHS) and variable neighborhood search (VNS) algorithms. The deployment of private 5G network edge computing servers and base stations is an NP-hard problem for which conventional mathematical models may fall short of finding practical, optimal solutions. Our hybrid algorithm integrates the global search capability of SHS with the local search efficiency of VNS to comprehensively explore the solution space, providing a robust yet implementable method for complex optimization. The efficacy of this approach is validated through experimental evaluations in real-world construction site scenarios, demonstrating notable advantages in solution quality, stability, energy consumption, and overall cost reduction. Results show that the proposed algorithm significantly reduces costs related to installation, security maintenance, and data protection while satisfying diverse constraints, making it a promising solution for deploying MEC systems in private 5G networks for smart construction sites.
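The paper's SHS and VNS operators and its deployment encoding are not reproduced here; the following sketch only illustrates the hybrid pattern the abstract describes, a harmony-memory global step interleaved with a variable-neighborhood local refinement, on a toy continuous objective. The cost function, bounds, and neighborhood radii are illustrative assumptions.

```python
# Hedged sketch of a harmony search + variable neighborhood search hybrid:
# a global harmony-memory step plus VNS-style refinement of the incumbent.
# Toy continuous version; the paper's discrete deployment operators differ.
import random

def hybrid_shs_vns(cost, dim, iters=200, memory_size=10, hmcr=0.9):
    memory = [[random.uniform(-5, 5) for _ in range(dim)]
              for _ in range(memory_size)]
    memory.sort(key=cost)                 # memory[0] is the incumbent best
    for _ in range(iters):
        # Harmony step: each variable comes from memory or is drawn anew.
        new = [random.choice(memory)[d] if random.random() < hmcr
               else random.uniform(-5, 5) for d in range(dim)]
        if cost(new) < cost(memory[-1]):  # replace the worst harmony
            memory[-1] = new
            memory.sort(key=cost)
        # VNS step: perturb the best solution with growing radii.
        best = memory[0]
        for radius in (0.1, 0.5, 1.0):
            trial = [x + random.gauss(0, radius) for x in best]
            if cost(trial) < cost(best):
                memory[0] = trial
                break                     # improved: restart at smallest radius
    return min(memory, key=cost)

sphere = lambda v: sum(x * x for x in v)
print(sphere(hybrid_shs_vns(sphere, dim=5)))  # typically near 0
```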
{"title":"Optimal multi-access edge computing system deployment in private 5G networks for multi-story construction sites","authors":"Shi-Yu Zhang , Chun-Cheng Lin , Zhen-Yin Annie Chen , Der-Jiunn Deng","doi":"10.1016/j.jnca.2025.104384","DOIUrl":"10.1016/j.jnca.2025.104384","url":null,"abstract":"<div><div>Driven by the swift progression of smart construction, the number of sensors and smart devices on construction sites has increased dramatically, posing new challenges to data processing and communications. However, conventional cloud computing framework can hardly meet the requirement for processing enormous real-time data from construction sites, while existing approaches to deploying multi-access edge computing (MEC) servers overlooked the energy usage of MEC servers, as well as the unique physical and network security requirements within the multi-story structure of complex construction sites. Therefore, this work presents a mathematical programming model for private 5G network MEC systems on smart construction sites considering installation, connectivity, energy consumption, security maintenance, and cybersecurity; and further solve it with a hybrid metaheuristic approach that combines simplified harmony search (SHS) and variable neighborhood search (VNS) algorithms. The deployment of private 5G network edge computing servers and base stations is recognized as an NP-hard problem, where conventional mathematical models may fall short in finding practical, optimal solutions. Our proposed hybrid algorithm integrates the global search capability of SHS with the local search efficiency of VNS to comprehensively explore the solution space, providing a robust yet implementable method for complex optimization. The efficacy of this approach is validated through experimental evaluations in real-world construction site scenarios, demonstrating notable advantages in solution quality, stability, energy consumption, and overall cost reduction. Results show that the proposed algorithm significantly minimizes costs related to installation, security maintenance, and data protection, fulfilling diverse constraints effectively and making it a promising solution of deploying the MEC systems in private 5G networks for smart construction sites.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104384"},"PeriodicalIF":8.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145531188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge-AI: A systematic review on architectures, applications, and challenges
Pub Date: 2025-11-08 | DOI: 10.1016/j.jnca.2025.104375
Himanshu Gauttam, Garima Nain, K.K. Pattanaik, Paulo Mendes
The evolution of computing technologies and the generation of massive amounts of data have fueled the development of Artificial Intelligence (AI), and specifically Deep Learning (DL), solutions that extract key patterns from data and generate insights and knowledge useful for optimized service execution. Traditional cloud-based execution of DL solutions faces several challenges, such as latency, data privacy, and reliability, while trying to meet service requirements. In contrast, the limited computing and storage resources at the edge pose daunting challenges in executing resource-intensive DL solutions closer to the customer. This scenario led to the birth of an interdisciplinary research field named Edge-AI or Edge-Intelligence, which aims to mitigate the limitations of cloud- and edge-based DL execution. In this context, this work proposes a reference layered Edge-AI framework to ensure the successful deployment of the Edge-Intelligence paradigm, encompassing three novel layers for the optimization of edge infrastructure, edge inference, and edge training. The work presents a detailed investigation and analysis of the schemes centered around these layers. Furthermore, it discusses potential application domains for Edge-AI, delves into a set of potential limitations, and identifies future research directions in Edge-AI infrastructure deployment, inference, and training, the functionalities needed to deploy and use robust, sustainable, and efficient intelligent edge networks.
{"title":"Edge-AI: A systematic review on architectures, applications, and challenges","authors":"Himanshu Gauttam , Garima Nain , K.K. Pattanaik , Paulo Mendes","doi":"10.1016/j.jnca.2025.104375","DOIUrl":"10.1016/j.jnca.2025.104375","url":null,"abstract":"<div><div>The evolution of computing technologies and the generation of massive amounts of data fueled the development of <em>Artificial Intelligence</em> (AI), specifically <em>Deep Learning</em> (DL), solutions to extract key patterns from data, and the generation of insights and knowledge useful to achieve optimized service execution. Traditional cloud-based execution of DL solutions faces several challenges, such as latency, data privacy, and reliability, while trying to meet service requirements. In contrast, the limited computing and storage resources on the edge pose daunting challenges in executing resource-intensive DL solutions closer to the customer. This scenario led to the birth of an interdisciplinary research field named Edge-AI or Edge-Intelligence, aiming to mitigate the limitations of cloud and edge-based DL executions. In this context, this work proposes a reference layered Edge-AI framework to ensure the successful deployment of the Edge-Intelligence paradigm, encompassing three novel layers for the optimization of edge infrastructure, edge inference, and edge training. The work presents a detailed investigation and analysis of the schemes centered around the above-listed layers of the proposed Edge-AI framework. Furthermore, this work discusses potential application domains for Edge-AI, delving into a set of potential limitations, and ending up identifying future research directions in terms of Edge-AI infrastructure deployment, inference and training, which are functionalities needed to deploy and use robust, sustainable, and efficient intelligent edge networks.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104375"},"PeriodicalIF":8.0,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145461584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive study of the 6LoWSD protocol architecture with respect to scalability and mobility for SDN-enabled IoT networks
Pub Date: 2025-11-06 | DOI: 10.1016/j.jnca.2025.104370
Wanbanker Khongbuh, Goutam Saha
The Internet of Things (IoT) and software-defined networking (SDN) have opened up new opportunities for innovation. Many limitations of IoT systems can be rectified with SDN concepts, so the combination of SDN and IoT has tremendous potential across application domains. As the number of IoT devices grows, scalability must be further improved. Mobility is another significant challenge in IoT environments: maintaining seamless mobility and persistent connectivity for IoT devices operating over large-scale or geographically dispersed environments remains a significant research problem. Developing scalable, mobile, and adaptive network architectures is therefore crucial for SDN-enabled IoT ecosystems. We introduce a comprehensive approach to address these challenges in SDN-enabled IoT networks, proposing a new protocol, 6LoWSD, based on SDN's OpenFlow and the IoT stack's 6LoWPAN. This investigation emphasizes the techniques by which the proposed 6LoWSD improves scalability and mobility. Experiments with the proposed protocol were performed both on physical devices and on a simulated platform, and the results, compared with the 6LoWPAN counterpart, were found to be satisfactory.
{"title":"A comprehensive study of the 6LoWSD protocol architecture with respect to scalability and mobility for SDN-enabled IoT networks","authors":"Wanbanker Khongbuh , Goutam Saha","doi":"10.1016/j.jnca.2025.104370","DOIUrl":"10.1016/j.jnca.2025.104370","url":null,"abstract":"<div><div>The Internet of Things (IoT) and software-defined networks (SDN) have opened up new opportunities for innovation. Many of the limitations of the IoT system can be rectified with the SDN concepts. Thus, the combination of SDN and IoT has tremendous potential in various application domains. As the number of IoT devices is increasing with time, the scalability issues need to be further improved. Another significant challenge in IoT environments is mobility. Maintaining seamless mobility and persistent connectivity for IoT devices operating over large-scale or geographically dispersed environments presents a significant research challenge. But scalability and mobility are complex challenges. Developing scalable, mobile, and adaptive network architectures is crucial for SDN-enabled IoT ecosystems. Using SDN-enabled IoT networks, we introduced a comprehensive approach to address these challenges. Here, a new protocol based on OpenFlow of SDN and 6LoWPAN of the IoT system, namely, 6LoWSD has been proposed. In this investigation, emphasis has been placed on techniques on how the proposed 6LoWSD can improve scalability and mobility issues. In this study, experiments with the proposed protocol were performed using physical devices and a simulated platform. The results were compared with the 6LoWPAN counterpart and were found to be satisfactory.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104370"},"PeriodicalIF":8.0,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145461588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards IT/OT integration in industry digitalization: A comprehensive survey
Pub Date: 2025-11-04 | DOI: 10.1016/j.jnca.2025.104373
Riccardo Venanzi, Giuseppe Di Modica, Luca Foschini, Paolo Bellavista
According to both academic and industry perspectives, the Fourth Industrial Revolution has brought about a paradigm shift in the manufacturing sector, enabling companies to enhance their competitiveness in the global market. To achieve this goal, manufacturing companies will need to undertake a deep digital transformation, primarily by introducing advanced Information Technology (IT) into traditionally less digitalized departments, such as shop floors, where Operational Technology (OT) currently dominates. Practitioners believe that fully achieving the objectives of the Industry 4.0 revolution requires a progressive and tight integration between IT and OT departments. In this scenario, communication technologies are expected to play a pivotal role in facilitating the integration process, but other, more recent advanced IT solutions have also proven helpful. In particular, the topic of IT/OT integration has attracted significant attention from various research communities seeking to identify both the opportunities and the challenges associated with its implementation. Although some good surveys of those works have appeared in the literature, to the best of our knowledge no comprehensive review fully dedicated to the topic of IT/OT convergence has yet been conducted. In this paper, we propose a holistic approach to examine the various dimensions of IT/OT integration, which we classify into five interconnected realms: Communication, IT-Driven Support to OT, Human Centricity, Advanced Industrial Control Systems, and Cybersecurity. Furthermore, we develop a realm-oriented taxonomy to organize the surveyed works in a structured manner, offering readers a clear overview of the current state of the literature, along with insights into unexplored opportunities and future directions for IT/OT integration.
{"title":"Towards IT/OT integration in industry digitalization: A comprehensive survey","authors":"Riccardo Venanzi, Giuseppe Di Modica, Luca Foschini, Paolo Bellavista","doi":"10.1016/j.jnca.2025.104373","DOIUrl":"10.1016/j.jnca.2025.104373","url":null,"abstract":"<div><div>According to both academic and industry perspectives, the Fourth Industrial Revolution has brought about a paradigm shift in the manufacturing sector enabling companies to enhance their competitiveness in the global market. To achieve this goal, manufacturing companies will need to undertake a deep digital transformation, primarily by introducing advanced Information Technology (IT) into traditionally less digitalized departments, such as shop floors, where Operational Technology (OT) currently dominate. For the full achievement of Industry 4.0 revolution objectives, practitioners believe in the strong requirement of a progressive and tight integration between IT and OT departments. In the depicted scenario, communication technologies are expected to play a pivotal role in facilitating the integration process, but other more recent and advanced IT have also proven helpful. In particular, the topic of IT/OT integration has attracted significant attention from various research communities that have sought to identify both the opportunities and challenges associated with its implementation. Although some good surveys of those works have appeared in the literature, to the best of our knowledge, no comprehensive review has yet been conducted that is fully dedicated to the topic of IT/OT convergence. In this paper, we propose a holistic approach to examine the various dimensions of IT/OT integration, which we classify into five interconnected realms, Communication, IT-Driven Support to OT, Human Centricity, Advanced Industrial Control Systems, and cybersecurity. Furthermore, we develop a realm-oriented taxonomy to organize the surveyed works in a structured manner, offering readers a clear overview of the current state of the literature, along with insights into unexplored opportunities and future directions for IT/OT integration.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104373"},"PeriodicalIF":8.0,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145441548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}