5G slicing under the hood: An in-depth analysis of 5G RAN features and configurations
Pub Date: 2025-09-13 | DOI: 10.1016/j.jnca.2025.104298
André Perdigão, José Quevedo, Rui L. Aguiar
There has been extensive discussion on the benefits and improvements that 5G networks can bring to industry operations, particularly with network slicing. However, to fully realize network slices, it is essential to thoroughly understand the mechanisms available within a 5G network that can be used to adapt network performance. This paper surveys and describes existing 5G network configurations and assesses the performance impact of several of them on a real-world commercial standalone (SA) 5G network, bridging the gap between purely theoretical mathematical models and realizations with existing equipment. The paper discusses how these features affect communication performance with respect to industrial requirements.
The survey describes and demonstrates the performance impact of various 5G configurations, enabling readers to understand the capabilities of current 5G networks and learn how to leverage 5G technology to enhance industrial operations. This knowledge is also crucial to fully realize network slices tailored to industrial requirements.
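As a concrete illustration of the kind of configuration this survey deals with (not an example taken from the paper), the sketch below models how a slice request might be expressed with 3GPP identifiers: an S-NSSAI (Slice/Service Type plus optional Slice Differentiator) and a per-flow 5QI. The mapping function, the dataclass, and the specific threshold values are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SliceRequest:
    """Illustrative slice request; field values are examples, not 3GPP-mandated choices."""
    sst: int                  # Slice/Service Type (e.g. 1=eMBB, 2=URLLC, 3=mMTC per TS 23.501)
    sd: Optional[str]         # optional Slice Differentiator
    five_qi: int              # 5G QoS Identifier requested for the dominant flow
    target_latency_ms: float  # application-level latency target
    target_rate_mbps: float   # application-level throughput target

def pick_slice(latency_ms: float, rate_mbps: float) -> SliceRequest:
    # Hypothetical mapping from industrial requirements to a slice request:
    # tight latency -> URLLC-style slice, otherwise an eMBB-style slice.
    if latency_ms <= 10:
        return SliceRequest(sst=2, sd=None, five_qi=82,
                            target_latency_ms=latency_ms, target_rate_mbps=rate_mbps)
    return SliceRequest(sst=1, sd=None, five_qi=9,
                        target_latency_ms=latency_ms, target_rate_mbps=rate_mbps)

print(pick_slice(5, 20))    # motion-control-like requirement
print(pick_slice(100, 50))  # video-backhaul-like requirement
```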
{"title":"5G slicing under the hood: An in-depth analysis of 5G RAN features and configurations","authors":"André Perdigão, José Quevedo, Rui L. Aguiar","doi":"10.1016/j.jnca.2025.104298","DOIUrl":"10.1016/j.jnca.2025.104298","url":null,"abstract":"<div><div>There has been extensive discussion on the benefits and improvements that 5G networks can bring to industry operations, particularly with network slicing. However, to fully realize network slices, it is essential to thoroughly understand the mechanisms available within a 5G network that can be used to adapt network performance. This paper surveys and describes existing 5G network configurations and assesses the performance impact of several configurations using a real-world commercial standalone (SA) 5G network, bringing the challenges between purely theoretical mathematical models into realizations with existing equipment. The paper discusses how these features impact communication performance according to industrial requirements.</div><div>The survey describes and demonstrates the performance impact of various 5G configurations, enabling readers to understand the capabilities of current 5G networks and learn how to leverage 5G technology to enhance industrial operations. This knowledge is also crucial to fully realize network slices tailored to industrial requirements.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104298"},"PeriodicalIF":8.0,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145059766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinforcement learning based mobile charging sequence scheduling algorithm for optimal stochastic event detection in wireless rechargeable sensor networks
Pub Date: 2025-09-13 | DOI: 10.1016/j.jnca.2025.104301
Jinglin Li , Haoran Wang , Sen Zhang , Peng-Yong Kong , Wendong Xiao
Mobile charging provides a new way to replenish energy in Wireless Rechargeable Sensor Networks (WRSNs), where a Mobile Charger (MC) charges sensor nodes sequentially according to a mobile charging sequence schedule. Event detection is an essential WRSN application, but when events occur stochastically, Mobile Charging Sequence Scheduling for Optimal Stochastic Event Detection (MCSS-OSED) becomes challenging, and the non-deterministic detection behavior of the sensors complicates it further. This paper proposes a novel Multistage Exploration Q-learning Algorithm (MEQA) for MCSS-OSED based on reinforcement learning. In MEQA, the MC acts as the agent and explores the space of mobile charging sequences through interactions with the environment to maximize the Quality of Event Detection (QED), which is evaluated by jointly considering the sensing probability of each sensor and the probability that events occur in the monitored region. In particular, a new multistage exploration policy improves exploration efficiency by selecting currently suboptimal actions with a certain probability, and a novel reward function evaluates each MC charging action according to the real-time detection contribution of the sensor. Simulation results show that MEQA is effective for MCSS-OSED and outperforms existing classical algorithms.
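To make the exploration idea concrete, here is a minimal, self-contained sketch of a tabular Q-learning step with a multistage-style exploration rule that, with some probability, picks the current second-best (suboptimal) action instead of the greedy one. It illustrates the general mechanism described in the abstract, not the authors' MEQA implementation; the state/action encoding and all parameter values are assumptions.

```python
import random
from collections import defaultdict

Q = defaultdict(float)     # Q[(state, action)] -> value
alpha, gamma = 0.1, 0.9    # assumed learning rate / discount factor
p_suboptimal = 0.2         # assumed probability of picking the current second-best action

def select_action(state, actions):
    # Rank actions by current Q-value (best first).
    ranked = sorted(actions, key=lambda a: Q[(state, a)], reverse=True)
    if len(ranked) > 1 and random.random() < p_suboptimal:
        return ranked[1]   # explore: take the current suboptimal action
    return ranked[0]       # exploit: take the greedy action

def update(state, action, reward, next_state, next_actions):
    best_next = max(Q[(next_state, a)] for a in next_actions) if next_actions else 0.0
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy usage: a state is the set of nodes still to charge, an action is the next node to visit;
# the reward would reflect the real-time detection contribution of the charged sensor.
state, actions = ("n1", "n2", "n3"), ["n1", "n2", "n3"]
a = select_action(state, actions)
remaining = [x for x in state if x != a]
update(state, a, reward=1.0, next_state=tuple(remaining), next_actions=remaining)
```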
{"title":"Reinforcement learning based mobile charging sequence scheduling algorithm for optimal stochastic event detection in wireless rechargeable sensor networks","authors":"Jinglin Li , Haoran Wang , Sen Zhang , Peng-Yong Kong , Wendong Xiao","doi":"10.1016/j.jnca.2025.104301","DOIUrl":"10.1016/j.jnca.2025.104301","url":null,"abstract":"<div><div>Mobile charging provides a new way for energy replenishment in Wireless Rechargeable Sensor Network (WRSN), where the Mobile Charger (MC) is employed for charging sensor nodes sequentially according to the mobile charging sequence scheduling result. Event detection is an essential application of WRSN, but when the events occur stochastically, Mobile Charging Sequence Scheduling for Optimal Stochastic Event Detection (MCSS-OSED) is difficult and challenging, and the non-deterministic detection property of the sensor makes MCSS-OSED complicated further. This paper proposes a novel Multistage Exploration Q-learning Algorithm (MEQA) for MCSS-OSED based on reinforcement learning. In MEQA, MC is taken as the agent to explore the space of the mobile charging sequences via the interactions with the environment for the optimal Quality of Event Detection (QED) evaluated by both considering the sensing probability of the sensor and the probability that events may occur in the monitoring region. Particularly, a new multistage exploration policy is designed for MC to improve the exploration efficiency by selecting the current suboptimal actions with a certain probability, and a novel reward function is presented to evaluate the MC charging action according to the real-time detection contribution of the sensor. Simulation results show that MEQA is efficient for MCSS-OSED and superior to the existing classical algorithms.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104301"},"PeriodicalIF":8.0,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145120317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RoamML distributed continual learning: Adaptive and flexible data-driven response for disaster recovery operations
Pub Date: 2025-09-09 | DOI: 10.1016/j.jnca.2025.104322
Simon Dahdal , Sara Cavicchi , Alessandro Gilli , Filippo Poltronieri , Mauro Tortonesi , Niranjan Suri , Cesare Stefanelli
In the aftermath of natural disasters, Human Assistance & Disaster Recovery (HADR) operations have to deal with disrupted communication networks and constrained resources. Such harsh conditions make high-communication-overhead ML approaches, whether centralized or distributed, impractical, thus hindering the adoption of AI solutions for a critical function of HADR operations: building accurate and up-to-date situational awareness. To address this issue we developed Roaming Machine Learning (RoamML), a novel Distributed Continual Learning Framework designed for HADR operations and based on the premise that moving an ML model is more efficient and robust than either large dataset transfers or frequent model parameter updates. RoamML deploys a mobile AI agent that incrementally trains models across network nodes containing not-yet-processed data; at each stop, the agent initiates a local training phase to update its internal ML model parameters. To prioritize the processing of strategically valuable data, RoamML Agents follow a navigation system based on the concept of Data Gravity, leveraging Multi-Criteria Decision Making techniques to simultaneously consider several objectives for Agent routing optimization, including model learning efficiency and network resource utilization, while blending subjective insights from expert judgments with objective metrics derived from quantifiable data to determine each next hop. We conducted extensive experiments to evaluate RoamML, demonstrating the framework's ability to train ML models efficiently in highly dynamic, resource-constrained environments. RoamML achieves performance similar to centralized ML training under ideal network conditions and outperforms it in a more realistic scenario with reduced network resources, ultimately saving up to 75% in bandwidth utilization across all experiments.
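As an illustration of the Data Gravity routing idea, the sketch below scores candidate next hops with a weighted sum of normalized criteria such as unprocessed data volume, expected learning gain, and link cost. It is a simplification under assumed criteria and weights, not RoamML's actual navigation system, in which the weights would come from the Multi-Criteria Decision Making step that blends expert judgment with measured metrics.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    data_volume_mb: float  # unprocessed data held by the node
    expected_gain: float   # e.g. estimated training benefit, normalized to 0..1
    link_cost: float       # e.g. normalized latency/bandwidth penalty, 0..1

# Assumed weights for the illustrative score.
WEIGHTS = {"data_volume": 0.4, "gain": 0.4, "cost": 0.2}

def next_hop(candidates):
    max_vol = max(c.data_volume_mb for c in candidates) or 1.0
    def score(c):
        return (WEIGHTS["data_volume"] * (c.data_volume_mb / max_vol)
                + WEIGHTS["gain"] * c.expected_gain
                - WEIGHTS["cost"] * c.link_cost)
    return max(candidates, key=score)

nodes = [Candidate("shelter-A", 800, 0.6, 0.3),
         Candidate("drone-2", 150, 0.9, 0.7),
         Candidate("field-hq", 400, 0.5, 0.1)]
print(next_hop(nodes).name)  # the agent would migrate to this node and train locally
```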
{"title":"RoamML distributed continual learning: Adaptive and flexible data-driven response for disaster recovery operations","authors":"Simon Dahdal , Sara Cavicchi , Alessandro Gilli , Filippo Poltronieri , Mauro Tortonesi , Niranjan Suri , Cesare Stefanelli","doi":"10.1016/j.jnca.2025.104322","DOIUrl":"10.1016/j.jnca.2025.104322","url":null,"abstract":"<div><div>In the aftermath of natural disasters, Human Assistance & Disaster Recovery (HADR) operations have to deal with disrupted communication networks and constrained resources. Such harsh conditions make high-communication-overhead ML approaches — either centralized or distributed — impractical, thus hindering the adoption of AI solutions to implement a critical function for HADR operations: building accurate and up-to-date situational awareness. To address this issue we developed Roaming Machine Learning (RoamML), a novel Distributed Continual Learning Framework designed for HADR operations and based on the premise that moving an ML model is more efficient and robust than either large dataset transfers or frequent model parameter updates. RoamML deploys a mobile AI agent that incrementally train models across network nodes containing yet unprocessed data; at each stop, the agent initiate a local training phase to update its internal ML model parameters. To prioritize the processing of strategically valuable data, RoamML Agents follow a navigation system based upon the concept of Data Gravity, leveraging Multi-Criteria Decision Making techniques to simultaneously consider many objectives for Agent routing optimization, including model learning efficiency and network resource utilization, while seamlessly blending subjective insights from expert judgments with objective metrics derived from quantifiable data to determine each next hop. We conducted extensive experiments to evaluate RoamML, demonstrating the framework’s efficiency to train ML models under highly dynamic, resource-constrained environments. RoamML achieves similar performance to centralized ML training under ideal network conditions and outperforms it in a more realistic scenario with reduced network resources, ultimately saving up to 75% in bandwidth utilization across all experiments.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104322"},"PeriodicalIF":8.0,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145049037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectrum allocation method for millimeter-wave train-ground communication in high-speed rail based on multi-agent attention
Pub Date: 2025-09-08 | DOI: 10.1016/j.jnca.2025.104293
Yong Chen, Jiaojiao Yuan, Huaju Liu, Zhaofeng Xin
With the advancement of high-speed railways toward intelligent systems, a large number of IoT devices have been deployed in both onboard and trackside systems. The resulting surge in data transmission has intensified competition for spectrum resources, thereby significantly increasing the demand for train-ground communication systems with high capacity, low latency, and strong interference resilience. The millimeter wave (mmWave) frequency band provides a large bandwidth to support massive data transmission from IoT devices. To address the issues of low network capacity, high interference, and low spectral efficiency in mmWave train-ground communication systems under 5G-R for high-speed railways, we propose a multi-agent attention mechanism for mmWave spectrum allocation in train-ground communication. First, we analyze the spectrum requirements of mmWave base stations (BSs) and onboard mobile relay stations (MRSs), construct a spectrum resource allocation model with the optimization objective of maximizing system network capacity, and transform it into a Markov decision process (MDP). Next, considering the need for coordinated spectrum allocation and interference suppression between mmWave BSs and MRSs, we develop a resource optimization strategy using the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. Specifically, we incorporate a multi-head attention mechanism into the critic network of the MADDPG algorithm. This enhancement enables coordinated global-local strategy optimization through attention weight computation, thereby improving decision-making efficiency. Simulation results demonstrate that, compared to existing methods, our algorithm achieves superior spectrum allocation performance, significantly increases network capacity while reducing interference levels, and meets the spectrum requirements of high-speed railway (HSR) communication systems.
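The following is a minimal sketch (assuming PyTorch, with made-up dimensions) of how a multi-head attention layer can sit inside a centralized MADDPG-style critic so that each agent's Q-value attends to the other agents' observation-action embeddings. It illustrates the general design pattern named in the abstract, not the paper's exact network architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, n_agents, embed_dim=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, embed_dim)       # per-agent (obs, action) embedding
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.q_head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs, act):
        # obs: (batch, n_agents, obs_dim), act: (batch, n_agents, act_dim)
        x = torch.relu(self.embed(torch.cat([obs, act], dim=-1)))  # (batch, n_agents, embed_dim)
        attended, _ = self.attn(x, x, x)                           # each agent attends to all agents
        return self.q_head(attended).squeeze(-1)                   # (batch, n_agents) Q-values

critic = AttentionCritic(obs_dim=10, act_dim=2, n_agents=3)
q = critic(torch.randn(8, 3, 10), torch.randn(8, 3, 2))
print(q.shape)  # torch.Size([8, 3])
```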
{"title":"Spectrum allocation method for millimeter-wave train-ground communication in high-speed rail based on multi-agent attention","authors":"Yong Chen, Jiaojiao Yuan, Huaju Liu, Zhaofeng Xin","doi":"10.1016/j.jnca.2025.104293","DOIUrl":"10.1016/j.jnca.2025.104293","url":null,"abstract":"<div><div>With the advancement of high-speed railways toward intelligent systems, a large number of IoT devices have been deployed in both onboard and trackside systems. The resulting surge in data transmission has intensified competition for spectrum resources, thereby significantly increasing the demand for train-ground communication systems with high capacity, low latency, and strong interference resilience.The millimeter wave (mmWave) frequency band provides a large bandwidth to support massive data transmission from IoT devices. Aiming at addressing the issues of low network capacity, high interference, and low spectral efficiency in mmWave train-ground communication systems under 5G-R for high-speed railways, we propose a multi-agent attention mechanism for mmWave spectrum allocation in train-ground communication. First, we analyzed the spectrum requirements of mmWave BS and onboard MRS, constructed a spectrum resource allocation model with the optimization objective of maximizing system network capacity, and transformed it into a Markov decision process (MDP) model. Next, considering the need for coordinated spectrum allocation and interference suppression between mmWave BS and MRS, we develop a resource optimization strategy using the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. Specifically, we combine multi head attention mechanism to improve the Critic network of MADDPG algorithm. This enhancement enables coordinated global–local strategy optimization through attention weight computation, thereby improving decision-making efficiency. Simulation results demonstrate that compared to existing methods, our algorithm achieves superior spectrum allocation performance, significantly increases network capacity while reducing interference levels, and meets the spectrum requirements of HSR communication systems.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104293"},"PeriodicalIF":8.0,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145049036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient optimal relay design for wireless sensor network in underground mines
Pub Date: 2025-09-06 | DOI: 10.1016/j.jnca.2025.104303
Md Zahangir Alam , Mohamed Lassaad Ammari , Abbas Jamalipour , Paul Fortier
The transceiver design for multi-hop multiple-input multiple-output (MIMO) relays is very challenging, and for a large-scale network it is not economical to send the signal through all possible links. Instead, we can find the best source-to-destination path, i.e., the one that gives the highest end-to-end signal-to-noise ratio (SNR). In this paper, we provide a linear minimum mean squared error (MMSE) based multi-hop multi-terminal MIMO non-regenerative half-duplex amplify-and-forward (AF) parallel relay design for a wireless sensor network (WSN) in underground mines. The transceiver design of such a network becomes very complex. We can simplify a complex multi-terminal parallel relay system into a series of links using selection relaying, where transmission from the source to a relay, from relay to relay, and finally from a relay to the destination takes place through the relay that provides the best link performance at each hop. Selecting the best relay with traditional techniques is not easy in our case, and we need a strategy to find the best path among a large number of hidden paths. We first find the set of simplified series multi-hop MIMO best relays from source to destination using an optimum path selection technique from the literature. Then we develop a joint optimum design of the source precoder, the relay amplifier, and the receiver matrices using the full channel diagonalizing technique followed by the Lagrange strong duality principle with known channel state information (CSI). Finally, simulation results show excellent agreement with the numerical analysis, demonstrating the effectiveness of the proposed framework.
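To illustrate the path-selection step in isolation (a sketch, not the paper's joint transceiver design), the snippet below scores each candidate source-to-destination path with a commonly used end-to-end SNR expression for multi-hop amplify-and-forward chains, gamma_e2e = 1 / (prod_i(1 + 1/gamma_i) - 1), and keeps the best path. The per-hop SNRs and the candidate relay paths are made-up inputs.

```python
import math

def e2e_snr_af(hop_snrs_db):
    """End-to-end SNR (dB) of a multi-hop AF chain from its per-hop SNRs (dB)."""
    hop_lin = [10 ** (s / 10) for s in hop_snrs_db]
    prod = math.prod(1 + 1 / g for g in hop_lin)
    return 10 * math.log10(1 / (prod - 1))

def best_path(candidate_paths):
    """candidate_paths: dict mapping a path label to its list of per-hop SNRs in dB."""
    return max(candidate_paths.items(), key=lambda kv: e2e_snr_af(kv[1]))

# Hypothetical candidate paths through different relays in a mine gallery.
paths = {
    "S-R1-R4-D": [18.0, 15.0, 20.0],
    "S-R2-D":    [12.0, 14.0],
    "S-R3-R5-D": [22.0, 9.0, 25.0],
}
label, snrs = best_path(paths)
print(label, round(e2e_snr_af(snrs), 2), "dB")
```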
{"title":"Energy-efficient optimal relay design for wireless sensor network in underground mines","authors":"Md Zahangir Alam , Mohamed Lassaad Ammari , Abbas Jamalipour , Paul Fortier","doi":"10.1016/j.jnca.2025.104303","DOIUrl":"10.1016/j.jnca.2025.104303","url":null,"abstract":"<div><div>The transceiver design for multi-hop multiple-input multiple-output (MIMO) relay is very challenging, and for a large scale network, it is not economical to send the signal through all possible links. Instead, we can find the best path from source-to-destination that gives the highest end-to-end signal-to-noise ratio (SNR). In this paper, we provide a linear minimum mean squared error (MMSE) based multi-hop multi-terminal MIMO non-regenerative half-duplex amplify-and-forward (AF) parallel relay design for a wireless sensor network (WSN) in an underground mines. The transceiver design of such a network becomes very complex. We can simplify a complex multi-terminal parallel relay system into a series of links using selection relaying, where transmission from the source to the relay, relay to relay, and finally relay to the destination will take place using the best relay that provides the best link performance among others. The best relay selection using the traditional technique in our case is not easy, and we need a strategy to find the best path from a large number of hidden paths. We first find the set of simplified series multi-hop MIMO best relays from source to destination using the optimum path selection technique found in the literature. Then we develop a joint optimum design of the source precoder, the relay amplifier, and the receiver matrices using the full channel diagonalizing technique followed by the Lagrange strong duality principle with known channel state information (CSI). Finally, simulation results show an excellent agreement with numerical analysis demonstrating the effectiveness of the proposed framework.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104303"},"PeriodicalIF":8.0,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145049249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From cloud to edge: dynamic placement optimization of business processes in IIoT networks
Pub Date: 2025-09-02 | DOI: 10.1016/j.jnca.2025.104317
Md Razon Hossain , Alistair Barros , Colin Fidge
Breakthroughs in edge computing offer new prospects for businesses to extend Industrial Internet of Things (IIoT) networks beyond analytics to actionable processing. In particular, cloud-based business processes, which provide administrative actions and rules through workflow-sequenced activities, can be streamlined on the edge for low-latency access in physical spaces. Although this advances business controls, particularly for critical events in industrial applications, it faces operational barriers. Edge devices, which support high-volume and competing demands from a large number of sensors, vary in capacity, reliability, and proximity to sensors and cloud gateways. This warrants a highly efficient placement of process activities, from cloud to edge, under a variety of constraints, including resource demand, capacity, and compatibility, in order to satisfy timeliness requirements. In contrast to related IIoT optimization research, including work on singleton service placement, business processes pose new challenges. Not only do sets of dependent activities have to be considered for co-deployment, but the semantics of timing constraints need to be respected, given alternative, parallel, and iterative control-flow paths in processes. In addition, instantiation (replication) to scale activities for increasing data volumes poses further deployment constraints, i.e., on sets of nodes supporting dynamic instantiation of order-dependent activities. Here we present an optimization strategy for business processes that addresses these challenges. We first conceptualize processes as coherent fragments to precisely derive both responsiveness and throughput execution-time heuristics and formulate a multi-objective process placement problem. Next, we develop a genetic algorithm-based process placement procedure. To adapt to fluctuating event frequencies, we support an interplay between scaling algorithms for service instances and process placement optimization. Validation through an industrial safety monitoring use case drawn from the construction industry shows that our approach improves timeliness responses by almost one-third and more than doubles execution throughput compared to existing methods.
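As a deliberately simplified sketch of a genetic-algorithm placement (not the paper's procedure), a chromosome below assigns each process activity to an edge or cloud node, and the fitness trades an estimated response time against capacity violations. The node capacities, demands, latency figures, and penalty weight are all assumed for illustration.

```python
import random

NODES = {"edge1": 4, "edge2": 2, "cloud": 100}               # node -> capacity units (assumed)
DEMAND = {"detect": 2, "assess": 1, "notify": 1, "log": 1}   # activity -> demand (assumed)
LATENCY = {"edge1": 5, "edge2": 8, "cloud": 60}              # ms to reach the sensors (assumed)
ACTIVITIES = list(DEMAND)

def fitness(assign):  # assign: activity -> node; lower is better
    response = sum(LATENCY[assign[a]] for a in ACTIVITIES)   # crude responsiveness proxy
    load = {n: 0 for n in NODES}
    for a, n in assign.items():
        load[n] += DEMAND[a]
    overload = sum(max(0, load[n] - NODES[n]) for n in NODES)
    return response + 1000 * overload                        # penalize infeasible placements

def mutate(assign):
    child = dict(assign)
    child[random.choice(ACTIVITIES)] = random.choice(list(NODES))
    return child

pop = [{a: random.choice(list(NODES)) for a in ACTIVITIES} for _ in range(30)]
for _ in range(200):                                         # simple elitist evolutionary loop
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = min(pop, key=fitness)
print(best, fitness(best))
```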
{"title":"From cloud to edge: dynamic placement optimization of business processes in IIoT networks","authors":"Md Razon Hossain , Alistair Barros , Colin Fidge","doi":"10.1016/j.jnca.2025.104317","DOIUrl":"10.1016/j.jnca.2025.104317","url":null,"abstract":"<div><div>Breakthroughs in edge computing offer new prospects for businesses to extend Industrial Internet of Things (IIoT) networks beyond analytics to actionable processing. In particular, cloud-based business processes, which provide administrative actions and rules through workflow-sequenced activities, can be streamlined on the edge for low-latency access in physical spaces. Although this advances business controls, particularly for critical events of industrial applications, it faces operational barriers. Edge devices, which support high volume and competing demands from a large number of sensors, vary in capacity, reliability, and proximity to sensors and cloud gateways. This warrants a highly efficient placement of process activities, from cloud to edge, given a variety of constraints, including resource demand, capacity, and compatibility, to satisfy timeliness constraints. In contrast to the related IIoT optimization research underway, including those of singleton service placements, business processes pose new challenges. Not only do sets of dependent activities have to be considered for co-deployment, but the meaning of timing constraints needs to be respected, given alternative, parallel, and iterative control-flow paths in processes. In addition, instantiation (replication) to scale activities for increasing data volumes poses further deployment constraints, i.e., on sets of nodes supporting dynamic instantiation of order-dependent activities. Here we present an optimization strategy for business processes that addresses these challenges. We first conceptualize processes in coherent fragments to precisely derive both responsiveness and throughput execution time heuristics and formulate a multi-objective process placement problem. Next, we develop a genetic algorithm-based process placement procedure. To adapt to fluctuating event frequencies, we support an interplay between scaling algorithms for service instances and process placement optimization. Validation through an industrial safety monitoring use case drawn from the construction industry shows that our approach improves timeliness responses by almost one-third and more than doubles execution throughput compared to existing methods.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104317"},"PeriodicalIF":8.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145009289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NoRDEx: A decentralized optimistic non-repudiation protocol for data exchanges
Pub Date: 2025-09-02 | DOI: 10.1016/j.jnca.2025.104291
Fernando Román-García, Juan Hernández-Serrano, Oscar Esparza
This article introduces the Non-Repudiable Data Exchange (NoRDEx) protocol, designed to ensure non-repudiation in data exchanges. Unlike traditional non-repudiation and fair exchange protocols, NoRDEx can be considered decentralized as it eliminates the need for a centralized Trusted Third Party (TTP) by using a Distributed Ledger Technology (DLT) to store cryptographic proofs without revealing the exchanged message. NoRDEx is an optimistic non-repudiation protocol, as it only uses the DLT in case of a dispute. The protocol has been implemented and tested in real-world environments, with performance assessments covering cost, overhead, and execution time. A formal security analysis using the Syverson Van Oorschot (SVO) logical model demonstrates NoRDEx’s ability to resolve disputes securely.
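As a toy illustration of the core idea (cryptographic proofs that commit to an exchanged message without revealing it, anchored on a ledger only when needed), the sketch below hashes the exchanged ciphertext and records the digest as dispute evidence. It is a simplification under assumed primitives, not the NoRDEx message flow; real non-repudiation additionally requires digital signatures and the protocol's dispute-resolution steps.

```python
import hashlib
import json
import time

LEDGER = []  # stand-in for the DLT; written only if a dispute arises (optimistic behavior)

def proof_of_exchange(exchange_id: str, ciphertext: bytes) -> dict:
    # The proof commits to the ciphertext without revealing its content.
    return {"exchange_id": exchange_id,
            "digest": hashlib.sha256(ciphertext).hexdigest(),
            "timestamp": int(time.time())}

def raise_dispute(proof: dict) -> None:
    LEDGER.append(json.dumps(proof, sort_keys=True))  # anchor the proof on the ledger

def verify_claim(proof: dict, claimed_ciphertext: bytes) -> bool:
    return hashlib.sha256(claimed_ciphertext).hexdigest() == proof["digest"]

msg = b"encrypted payload exchanged off-chain"
p = proof_of_exchange("ex-42", msg)
raise_dispute(p)              # executed only when the parties disagree
print(verify_claim(p, msg))   # True: the ciphertext matches the anchored proof
```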
{"title":"NoRDEx: A decentralized optimistic non-repudiation protocol for data exchanges","authors":"Fernando Román-García, Juan Hernández-Serrano, Oscar Esparza","doi":"10.1016/j.jnca.2025.104291","DOIUrl":"10.1016/j.jnca.2025.104291","url":null,"abstract":"<div><div>This article introduces the Non-Repudiable Data Exchange (NoRDEx) protocol, designed to ensure non-repudiation in data exchanges. Unlike traditional non-repudiation and fair exchange protocols, NoRDEx can be considered decentralized as it eliminates the need for a centralized Trusted Third Party (TTP) by using a Distributed Ledger Technology (DLT) to store cryptographic proofs without revealing the exchanged message. NoRDEx is an optimistic non-repudiation protocol, as it only uses the DLT in case of a dispute. The protocol has been implemented and tested in real-world environments, with performance assessments covering cost, overhead, and execution time. A formal security analysis using the Syverson Van Oorschot (SVO) logical model demonstrates NoRDEx’s ability to resolve disputes securely.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104291"},"PeriodicalIF":8.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145059796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A realistic trust model evaluation platform for the Social Internet of Things (REACT-SIoT)
Pub Date: 2025-09-02 | DOI: 10.1016/j.jnca.2025.104302
Marius Becherer , Omar K. Hussain , Frank den Hartog , Yu Zhang , Michael Zipperle
The Social Internet of Things (SIoT) enables cross-organisational collaboration for various industrial applications. However, evaluating trust models within such environments remains challenging due to their context-dependent dynamics. Existing evaluation platforms often rely on overly domain-specific or generic datasets, overlooking the inherent uncertainty and dynamicity of real-world SIoT settings. Additionally, there is a lack of practical platforms to assess the feasibility and effectiveness of trust models across diverse scenarios. In this study, we present the Realistic Trust Model Evaluation Platform for the Social Internet of Things (REACT-SIoT) to rigorously assess trust models in SIoT environments, thereby facilitating trustworthy collaboration for sustainable IoT transformations. REACT-SIoT addresses 21 identified requirements essential for simulating a realistic SIoT environment, covering heterogeneity, dynamicity, incompleteness, uncertainty, interdependency, and authentic real-world dynamics. We developed a configurable evaluation procedure that mitigates dataset bias and supports the assessment of both existing and newly developed trust models under various scenario-dependent settings. A real-world example demonstrates the platform's capability to satisfy these requirements effectively. Our analysis reveals that REACT-SIoT meets all defined requirements and outperforms existing evaluation environments in terms of accuracy, trust convergence, and robustness. The platform has been successfully applied to existing trust models, showcasing its applicability and enabling comparative assessments that were previously constrained by disparate evaluation settings and datasets. In conclusion, REACT-SIoT offers a highly adaptable evaluation framework that ensures unbiased and comprehensive trust model assessments in SIoT environments. This platform bridges a critical gap in trust evaluation research, enabling the comparison and validation of trust models across diverse, realistic scenarios, thereby supporting the development of more resilient and trustworthy collaborative SIoT systems.
{"title":"A realistic trust model evaluation platform for the Social Internet of Things (REACT-SIoT)","authors":"Marius Becherer , Omar K. Hussain , Frank den Hartog , Yu Zhang , Michael Zipperle","doi":"10.1016/j.jnca.2025.104302","DOIUrl":"10.1016/j.jnca.2025.104302","url":null,"abstract":"<div><div>The Social Internet of Things (SIoT) enables cross-organisational collaboration for various industrial applications. However, evaluating trust models within such environments remains challenging due to context-dependent dynamics in SIoT environments. Existing evaluation platforms often rely on overly domain-specific or generic datasets, overlooking the inherent uncertainty and dynamicity of real-world SIoT settings. Additionally, there is a lack of practical platforms to assess the feasibility and effectiveness of trust models across diverse scenarios. In this study, we present the Realistic Trust Model Evaluation Platform for the Social Internet of Things (REACT-SIoT) to rigorously assess trust models in SIoT environments, thereby facilitating trustworthy collaboration for sustainable IoT transformations. REACT-SIoT addresses 21 identified requirements essential for simulating a realistic SIoT environment, including categories of heterogeneity, dynamicity, incompleteness, uncertainty, interdependency, and authentic real-world dynamics. We developed a configurable evaluation procedure that mitigates dataset bias and supports the assessment of both existing and newly developed trust models under various scenario-dependent settings. A real-world example demonstrates the platform’s capability to satisfy these requirements effectively. Our analysis reveals that REACT-SIoT meets all defined requirements and outperforms existing evaluation environments based on accuracy, trust convergence, and robustness criteria. The platform has been successfully applied to existing trust models, showcasing its applicability and enabling comparative assessments that were previously constrained by disparate evaluation settings and datasets. In conclusion, REACT-SIoT offers a highly- adaptable evaluation framework that ensures unbiased and comprehensive trust model assessments in SIoT environments. This platform bridges a critical gap in trust evaluation research, enabling the comparison and validation of trust models across diverse, realistic scenarios, thereby supporting the development of more resilient and trustworthy collaborative SIoT systems.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104302"},"PeriodicalIF":8.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145007485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain-based Deep Learning Models for Intrusion Detection in Industrial Control Systems: Frameworks and Open Issues
Pub Date: 2025-09-01 | DOI: 10.1016/j.jnca.2025.104286
Devi Priya V.S. , Sibi Chakkaravarthy Sethuraman , Muhammad Khurram Khan
Critical infrastructure and industrial systems are becoming increasingly networked and equipped with computing and communications tools. To manage processes and automate them where possible, Industrial Control Systems (ICS) manage a variety of components, including monitoring tools and software platforms. Increasingly complex data now flows over these networks, including data (past), money (present), and brains (future). To predictably detect specific services and patterns (deep learning) and to automatically check authenticity and transfer value (blockchain), deep learning and blockchain are being integrated into ICS networks. We therefore conducted a thorough examination of the models published in the literature in order to understand how to integrate machine learning and blockchain efficiently and successfully for intrusion detection services. We also provide useful guidance for future research in this area by noting significant issues that must be addressed before substantial deployments of IDS models in ICS.
{"title":"Blockchain-based Deep Learning Models for Intrusion Detection in Industrial Control Systems: Frameworks and Open Issues","authors":"Devi Priya V.S. , Sibi Chakkaravarthy Sethuraman , Muhammad Khurram Khan","doi":"10.1016/j.jnca.2025.104286","DOIUrl":"10.1016/j.jnca.2025.104286","url":null,"abstract":"<div><div>Critical infrastructure and industrial systems are both becoming more and more networked and equipped with computing and communications tools. To manage processes and automate them where possible, Industrial Control Systems (ICS) manage a variety of components, including monitoring tools and software platforms. More complicated data is now being run on the networks, including data(past), money(present), and brains (future). In order to predictably detect specific services and patterns (deep learning) and automatically check authenticity and transfer value (blockchain), deep learning and blockchain are integrated into the ICS network. Hence, we conducted a thorough examination of the models published in the literature in order to comprehend how to integrate machine learning and blockchain efficiently and successfully for intrusion detection services. We also provide useful guidance for future research in this area by noting significant issues that must be addressed before substantial deployments of IDS models in ICS.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104286"},"PeriodicalIF":8.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145108681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unveiling cybersecurity mysteries: A comprehensive survey on digital forensics trends, threats, and solutions in network security
Pub Date: 2025-09-01 | DOI: 10.1016/j.jnca.2025.104296
Tuba Arif , David Camacho , Jong Hyuk Park
The field of digital forensics is undergoing a paradigm shift because security breaches now extend beyond any single conventional domain: mobile devices, databases, networks, multimedia, cloud platforms, and the Internet of Things (IoT) all require a comprehensive approach. This study reveals a high level of ambiguity and process redundancy within the subdomains of digital forensics through a Systematic Literature Review (SLR). To address this, we propose a high-level theoretical metamodel that unifies tasks, operations, procedures, and research methods across the many subdomains, helping forensic investigators organize and integrate evidence during digital investigations. The study also discusses the necessity of global perspectives in digital forensics research and provides a qualitative evaluation of past surveys, highlighting common difficulties, obstacles, and key issues across domains, whereas earlier surveys concentrated on individual domains. The findings offer a multidimensional understanding of the difficulties in digital forensics, and the suggested metamodel helps create a more cohesive and integrated approach to digital investigations, establishing an environment for further study and collaboration in this crucial domain.
{"title":"Unveiling cybersecurity mysteries: A comprehensive survey on digital forensics trends, threats, and solutions in network security","authors":"Tuba Arif , David Camacho , Jong Hyuk Park","doi":"10.1016/j.jnca.2025.104296","DOIUrl":"10.1016/j.jnca.2025.104296","url":null,"abstract":"<div><div>The field of digital forensics is undergoing a paradigm shift because security breaches are now occurring outside of conventional domains such as mobile devices, databases, networks, multimedia, cloud platforms, and the Internet of Things (IoT) all require a complete approach. This study report reveals a high level of ambiguities and process redundancies within the subdomains of digital forensics through the completion of a Systematic Literature Review (SLR). To address this, we suggest a high-level theoretical metamodel that unifies tasks, operations, procedures, and methods of research across many subdomains that will help forensic investigators during digital investigations to organize and integrate evidence. The study also discusses the necessity of global perspectives in research on digital forensics and provides a qualitative evaluation of past surveys, highlighting similar difficulties, obstacles, and key issues across domains, whereas earlier surveys concentrated on domains. The findings through examination offer a multidimensional knowledge of the difficulties in digital forensics and suggested metamodels help to create a more cohesive and integrated approach to digital investigations, establishing an environment for further study and collaborations in this crucial domain.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104296"},"PeriodicalIF":8.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145003700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}