Pub Date: 2025-11-01 | Epub Date: 2025-09-06 | DOI: 10.1016/j.jnca.2025.104303
Md Zahangir Alam , Mohamed Lassaad Ammari , Abbas Jamalipour , Paul Fortier
The transceiver design for multi-hop multiple-input multiple-output (MIMO) relaying is very challenging, and for a large-scale network it is not economical to send the signal through all possible links. Instead, we can find the source-to-destination path that gives the highest end-to-end signal-to-noise ratio (SNR). In this paper, we provide a linear minimum mean squared error (MMSE) based multi-hop multi-terminal MIMO non-regenerative half-duplex amplify-and-forward (AF) parallel relay design for a wireless sensor network (WSN) in underground mines. The transceiver design of such a network is very complex, but we can simplify a complex multi-terminal parallel relay system into a series of links using selection relaying, where each transmission (source to relay, relay to relay, and finally relay to destination) takes place through the relay that provides the best link performance among its peers. Best relay selection using traditional techniques is not easy in our case, and we need a strategy to find the best path among a large number of hidden paths. We first find the set of simplified series multi-hop MIMO best relays from source to destination using an optimum path selection technique from the literature. Then we develop a joint optimum design of the source precoder, the relay amplifier, and the receiver matrices using the full channel diagonalizing technique followed by the Lagrange strong duality principle, assuming known channel state information (CSI). Finally, simulation results show excellent agreement with the numerical analysis, demonstrating the effectiveness of the proposed framework.
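The path selection step can be illustrated with a max-min (bottleneck-SNR) search. The paper relies on an optimum path selection technique from the literature that is not reproduced here, so the sketch below is only a common proxy: each candidate route is scored by its weakest hop, and the route with the strongest bottleneck SNR is chosen via a widest-path variant of Dijkstra's algorithm.

```python
import heapq

def best_relay_path(graph, src, dst):
    """Widest-path (max-min SNR) relay selection sketch.

    `graph` maps node -> {neighbor: link_snr}.  Among all source-to-
    destination routes, pick the one whose weakest hop SNR is largest.
    """
    best = {src: float("inf")}          # best[v] = max bottleneck SNR src -> v
    prev = {}
    heap = [(-float("inf"), src)]       # max-heap via negated keys
    while heap:
        neg_bn, u = heapq.heappop(heap)
        bn = -neg_bn
        if u == dst:
            break
        if bn < best.get(u, -float("inf")):
            continue                    # stale heap entry
        for v, snr in graph[u].items():
            cand = min(bn, snr)         # bottleneck along the extended path
            if cand > best.get(v, -float("inf")):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    # reconstruct the hop sequence from dst back to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]
```

For a two-relay toy network this picks the route whose worst link is still the strongest available, which is the series-of-best-relays simplification the abstract describes.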
Title: Energy-efficient optimal relay design for wireless sensor network in underground mines. Journal of Network and Computer Applications, vol. 243, Article 104303.
Pub Date: 2025-11-01 | Epub Date: 2025-08-19 | DOI: 10.1016/j.jnca.2025.104288
Matta Krishna Kumari , Nikhil Tripathi
The TCP/IP architecture has been the backbone of the Internet for decades, but its host-centric design is becoming less suitable for today's data-centric communication demands. As the demand for efficient content distribution and retrieval grows, Named Data Networking (NDN) emerges as a promising alternative. NDN shifts the focus from host-centric to data-centric networking, with packets routed based on content names rather than IP addresses. A key feature of NDN is in-network caching, which attempts to reduce latency, alleviate network congestion, and enhance content availability. However, known NDN caching schemes do not consider dynamic content demand, which changes with respect to time and location, causing end users to encounter relatively higher content access latency. To address this challenge, in this paper we propose a novel dynamic behaviour strategy that can be integrated into known NDN caching schemes. This strategy enables NDN routers to make cooperative decisions and move the content copy to the edge router that requests the content most frequently. We comprehensively evaluate the performance of state-of-the-art NDN caching schemes with and without our proposed dynamic strategy using several real-world topologies. Our experimental results show that incorporating dynamic behaviour into these schemes leads to significantly better outcomes in terms of cache hit ratio (CHR), content access latency, and path stretch. Specifically, the best improvements include a threefold increase in CHR, an 80% reduction in content access latency, and nearly a 45% decrease in path stretch. As an aside, we also develop a framework for the Icarus simulator to automate the performance assessment of different NDN caching schemes on a large number of real-world topologies.
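The cooperative migration idea, moving a cached copy toward the edge router that requests it most often, can be sketched as follows. The class and method names are hypothetical and the migration threshold is an assumed parameter, not taken from the paper:

```python
from collections import Counter, defaultdict

class DynamicCacheManager:
    """Illustrative sketch of demand-driven content placement: track which
    edge router requests each content most often and migrate the cached
    copy there once that router's share of requests crosses a threshold."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.requests = defaultdict(Counter)   # content -> Counter(edge router)
        self.location = {}                     # content -> current holder

    def record_request(self, content, edge_router):
        """Register one request; return the new holder if a migration fires."""
        self.requests[content][edge_router] += 1
        return self._maybe_migrate(content)

    def _maybe_migrate(self, content):
        counts = self.requests[content]
        edge, hits = counts.most_common(1)[0]
        total = sum(counts.values())
        if hits / total >= self.threshold and self.location.get(content) != edge:
            self.location[content] = edge      # cooperative move toward demand
            return edge
        return None
```

A real deployment would reset or decay the counters so the placement keeps tracking demand as it shifts over time and location, which is exactly the dynamic behaviour the strategy targets.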
Title: Adaptive NDN caching: Leveraging dynamic behaviour for enhanced efficiency. Journal of Network and Computer Applications, vol. 243, Article 104288.
Pub Date: 2025-11-01 | Epub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104333
Kaixi Wang , Yunhe Cui , Guowei Shen , Chun Guo , Yi Chen , Qing Qian
The flow table overflow attack on SDN switches is considered a destructive attack in software-defined networking (SDN). By exhausting the computing and storage resources of SDN switches, this attack severely disrupts the normal communication functions of SDN networks. Graph neural networks (GNNs) are now being employed to detect flow table overflow attacks in SDN. When a flow graph is constructed, flow features are commonly utilized as nodes to represent the characteristics of flow table overflow attacks. However, a graph relying solely on these nodes and attributes may not fully capture all the nuances of the flow table overflow attack. Additionally, a GNN model may have difficulty capturing how graph information changes between different flow graphs over time, decreasing the detection accuracy achievable with packet flow graphs. To address these issues, we introduce PRAETOR, a detection method for flow table overflow attacks that leverages a packet flow graph and a dynamic spatio-temporal graph neural network. Specifically, PRAETOR introduces the PaFlo-Graph algorithm and the EGST model. The PaFlo-Graph algorithm generates a packet flow graph for each flow, utilizing packet information to construct a more detailed graph that better reflects the characteristics of flow table overflow attacks. The EGST model is a dynamic spatio-temporal graph convolutional network designed to detect flow table overflow attacks by analyzing packet flow graphs. Experiments were conducted under two network topologies, where we used tcpreplay to replay packets from the bigFlow dataset to simulate SDN network flow. We also employed sFlow to sample packet features. Based on the sampled data, two datasets were constructed, each containing 1,760 network flows. For each packet, eight key features were extracted to represent its characteristics. The evaluation metrics include TPR, TNR, accuracy, precision, recall, F1-score, confusion matrix, ROC curves, and PR curves.
Experimental results show that the proposed PaFlo-Graph algorithm generates more detailed flow graphs compared to KNN and CRAM, resulting in an average improvement of 6.49% in accuracy and 8.7% in precision. Furthermore, the overall detection framework, PRAETOR, achieves detection accuracies of 99.66% and 99.44% on Topo1 and Topo2, respectively. The precision scores reach 99.32% and 99.72%, and the F1-scores are 99.57% and 100%, respectively, indicating superior detection performance compared to other methods.
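The per-flow packet graph idea can be reduced to a minimal sketch. The exact PaFlo-Graph construction and its eight packet features are not reproduced here; this only assumes one node per packet feature vector, with directed edges following arrival order so temporal structure within the flow is preserved for a GNN:

```python
def build_packet_flow_graph(packets):
    """Minimal illustrative packet graph (structure assumed, not the exact
    PaFlo-Graph algorithm): each packet's feature vector becomes a node,
    and a directed edge links every packet to its successor in arrival
    order, exposing intra-flow temporal structure to a graph model."""
    nodes = list(packets)                            # one node per packet
    edges = [(i, i + 1) for i in range(len(nodes) - 1)]
    return nodes, edges
```

A spatio-temporal GNN such as the EGST model described above would then consume a sequence of such graphs, one per sampling window, to track how flows evolve over time.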
Title: PRAETOR: Packet flow graph and dynamic spatio-temporal graph neural network-based flow table overflow attack detection method. Journal of Network and Computer Applications, vol. 243, Article 104333.
Pub Date: 2025-11-01 | Epub Date: 2025-09-17 | DOI: 10.1016/j.jnca.2025.104329
Zhiyuan Li , Yujie Jin
Traffic classification is essential for effective intrusion detection and network management. However, with the pervasive use of encryption technologies, traditional machine learning-based and deep learning-based methods often fall short in capturing the fine-grained details of encrypted traffic. To address these limitations, we propose a memory-enhanced LSTM model based on Swin Transformer for multi-class encrypted traffic classification. Our approach first reconstructs raw encrypted traffic by converting each flow into single-channel images. A hierarchical attention network, incorporating both byte-level and packet-level attention, then performs comprehensive feature extraction on these traffic images. The resulting feature maps are subsequently classified to identify traffic flow categories. By combining the long-term dependency capabilities of LSTM with the Swin Transformer's strengths in feature extraction, our model effectively captures global features across diverse traffic types. Furthermore, we enhance the LSTM with memory attention, enabling the model to focus on more fine-grained information. Experimental results on three public datasets (USTC-TFC2016, ISCX-VPN2016, and CIC-IoT2022) show that our model, ST-MemA, improves classification accuracy to 99.43%, 98.96%, and 98.21% and F1-score to 0.9936, 0.9826, and 0.9746, respectively. The results also demonstrate that our proposed model outperforms current state-of-the-art models in classification accuracy and computational efficiency.
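The flow-to-image step described above can be sketched as follows. The abstract does not state an image size, so the 28x28 layout and zero padding are assumptions chosen only for illustration:

```python
def flow_to_image(payload: bytes, side: int = 28):
    """Convert the leading bytes of a flow into a single-channel image.

    Sketch of the common traffic-to-image preprocessing: take the first
    side*side bytes, zero-pad if the flow is shorter, and lay the raw byte
    values out as pixel intensities in a side x side grid (the 28x28 size
    is an assumption, not taken from the paper).
    """
    n = side * side
    buf = payload[:n].ljust(n, b"\x00")          # truncate or zero-pad
    return [list(buf[r * side:(r + 1) * side]) for r in range(side)]
```

Each flow thus becomes a fixed-size grayscale array that a vision-style backbone such as a Swin Transformer can consume directly.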
Title: ST-MemA: Leveraging Swin Transformer and memory-enhanced LSTM for encrypted traffic classification. Journal of Network and Computer Applications, vol. 243, Article 104329.
Pub Date: 2025-11-01 | Epub Date: 2025-07-18 | DOI: 10.1016/j.jnca.2025.104274
Rufino Cabrera , Jon Montalban , Orlando Landrove , Erick Jimenez , Eneko Iradier , Pablo Angueira , Sung-Ik Park , Sunhyoung Kwon , Namho Hur
The current architecture of terrestrial broadcast networks limits the evolution of terrestrial audiovisual (broadcasting) services. A drastic change is required before the unavoidable future convergence with 5G/6G networks for more advanced services, including interactivity, 360° video, and more. Recent studies have explored the feasibility of incorporating a higher entity called a Broadcast Core Network (BCN), based on the equivalent model in mobile communication networks. This new paradigm requires upgrading the existing access network to manage both the modules of the new core network and those corresponding to the radio access technology. The present work proposes an intelligent software-based node (bcNode) that manages the Radio Access Network (RAN) and the BCN instructions, after analyzing aspects such as the state of the art of CN-based RAN architectures and the challenges and limitations of the current broadcasting network. This manuscript details the main blocks of the Broadcast Node in relation to transmitting facilities. The proposal also explains the extensions to the ATSC 3.0 ALP protocol necessary to support new services. Finally, the paper presents numerical results that evaluate the performance of the proposal based on standard KPI parameters and compares it with the legacy infrastructure.
Title: An enhanced RAN for future converged audiovisual services: The bcNode. Journal of Network and Computer Applications, vol. 243, Article 104274.
Pub Date: 2025-11-01 | Epub Date: 2025-09-13 | DOI: 10.1016/j.jnca.2025.104298
André Perdigão, José Quevedo, Rui L. Aguiar
There has been extensive discussion of the benefits and improvements that 5G networks can bring to industrial operations, particularly with network slicing. However, to fully realize network slices, it is essential to thoroughly understand the mechanisms available within a 5G network that can be used to adapt network performance. This paper surveys and describes existing 5G network configurations and assesses the performance impact of several of them using a real-world commercial standalone (SA) 5G network, bridging the gap between purely theoretical mathematical models and realizations with existing equipment. The paper discusses how these features affect communication performance with respect to industrial requirements.
The survey describes and demonstrates the performance impact of various 5G configurations, enabling readers to understand the capabilities of current 5G networks and learn how to leverage 5G technology to enhance industrial operations. This knowledge is also crucial to fully realize network slices tailored to industrial requirements.
Title: 5G slicing under the hood: An in-depth analysis of 5G RAN features and configurations. Journal of Network and Computer Applications, vol. 243, Article 104298.
Pub Date: 2025-11-01 | Epub Date: 2025-08-25 | DOI: 10.1016/j.jnca.2025.104300
Yufeng Wang , Hao Xu , Jianhua Ma , Qun jin
Network Intrusion Detection Systems (NIDS) are essential for identifying and mitigating network threats in increasingly complex and dynamic network environments. Owing to the benefits of automatic feature extraction and powerful expressive capability, Deep Neural Network (DNN) based NIDS have seen widespread deployment. Considering the extremely high annotation cost, i.e., the extreme difficulty of labeling anomalous samples for supervised DNN-based NIDS schemes, many practical NIDS schemes are unsupervised: they either use generative approaches, such as an encoder-decoder structure, to identify deviating samples without labeled intrusion data, or employ discriminative methods that design pretext tasks to construct additional supervisory signals from the given data. However, the former generates only a single reconstruction of each input sample, lacking a holistic view of the sample's latent distribution, while the latter focuses on learning a global perspective of samples, often neglecting internal structures. To address these issues, this paper proposes MTRC, a novel self-supervised NIDS framework based on multiple-Transformer data reconstruction with contrastive learning, combining the generative and discriminative paradigms. Our paper's contributions are threefold. First, a cross-feature correlation module is proposed to convert each tabular network traffic record into an original data view that effectively captures cross-feature correlations. Second, inspired by multiple-view reconstruction and contrastive learning, multiple encoder-decoder structured Transformers generate different views for each original data view; each reconstructed view is intentionally made semantically similar to the original data view while the reconstructed views remain diverse from one another, aiming to holistically capture the latent features of normal data samples.
Experimental results on multiple real network traffic datasets demonstrate that MTRC outperforms state-of-the-art unsupervised and self-supervised NIDS schemes, achieving superior performance in terms of AUC-ROC, AUC-PR, and F1-score metrics. The MTRC source code is publicly available at: https://github.com/sunyifen/MTRC.
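The generative half of this design, scoring a sample by how well all reconstructed views match it, can be reduced to a minimal sketch. The actual MTRC scoring, Transformer reconstructions, and contrastive term are not reproduced here; this only illustrates mean reconstruction error across multiple views:

```python
def anomaly_score(sample, reconstructions):
    """Mean squared reconstruction error across K views (simplified sketch).

    The intuition: a normal sample is reconstructed well by every decoder,
    so its average error across views is low; an anomalous sample deviates
    from the learned normal distribution and scores high.
    """
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    return sum(mse(sample, r) for r in reconstructions) / len(reconstructions)
```

In practice a threshold on this score (chosen on validation data) separates normal traffic from intrusions, which is what the AUC-ROC and AUC-PR metrics above summarize across all possible thresholds.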
Title: MTRC: A self-supervised network intrusion detection framework based on multiple Transformers enabled data reconstruction with contrastive learning. Journal of Network and Computer Applications, vol. 243, Article 104300.
Pub Date: 2025-11-01 | Epub Date: 2025-09-13 | DOI: 10.1016/j.jnca.2025.104301
Jinglin Li , Haoran Wang , Sen Zhang , Peng-Yong Kong , Wendong Xiao
Mobile charging provides a new way for energy replenishment in Wireless Rechargeable Sensor Networks (WRSN), where the Mobile Charger (MC) charges sensor nodes sequentially according to the mobile charging sequence scheduling result. Event detection is an essential application of WRSN, but when events occur stochastically, Mobile Charging Sequence Scheduling for Optimal Stochastic Event Detection (MCSS-OSED) becomes challenging, and the non-deterministic detection property of the sensors further complicates it. This paper proposes a novel Multistage Exploration Q-learning Algorithm (MEQA) for MCSS-OSED based on reinforcement learning. In MEQA, the MC is taken as the agent that explores the space of mobile charging sequences through interactions with the environment for the optimal Quality of Event Detection (QED), evaluated by considering both the sensing probability of the sensor and the probability that events occur in the monitoring region. In particular, a new multistage exploration policy is designed for the MC to improve exploration efficiency by selecting current suboptimal actions with a certain probability, and a novel reward function is presented to evaluate the MC's charging actions according to the real-time detection contribution of the sensor. Simulation results show that MEQA is efficient for MCSS-OSED and superior to existing classical algorithms.
{"title":"Reinforcement learning based mobile charging sequence scheduling algorithm for optimal stochastic event detection in wireless rechargeable sensor networks","authors":"Jinglin Li , Haoran Wang , Sen Zhang , Peng-Yong Kong , Wendong Xiao","doi":"10.1016/j.jnca.2025.104301","DOIUrl":"10.1016/j.jnca.2025.104301","url":null,"abstract":"<div><div>Mobile charging provides a new way for energy replenishment in Wireless Rechargeable Sensor Networks (WRSN), where the Mobile Charger (MC) charges sensor nodes sequentially according to the mobile charging sequence scheduling result. Event detection is an essential application of WRSN, but when events occur stochastically, Mobile Charging Sequence Scheduling for Optimal Stochastic Event Detection (MCSS-OSED) becomes challenging, and the non-deterministic detection property of the sensors further complicates it. This paper proposes a novel Multistage Exploration Q-learning Algorithm (MEQA) for MCSS-OSED based on reinforcement learning. In MEQA, the MC is taken as the agent that explores the space of mobile charging sequences through interactions with the environment for the optimal Quality of Event Detection (QED), evaluated by considering both the sensing probability of the sensor and the probability that events occur in the monitoring region. In particular, a new multistage exploration policy is designed for the MC to improve exploration efficiency by selecting current suboptimal actions with a certain probability, and a novel reward function is presented to evaluate the MC's charging actions according to the real-time detection contribution of the sensor. 
Simulation results show that MEQA is efficient for MCSS-OSED and superior to existing classical algorithms.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104301"},"PeriodicalIF":8.0,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145120317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
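The multistage exploration idea above, occasionally taking the current *suboptimal* action instead of a uniformly random one, can be sketched with tabular Q-learning. This is a toy illustration, not the paper's WRSN simulator: the environment, the per-node "detection contribution" rewards, and the state encoding (state = last node charged) are all hypothetical.

```python
import random

def multistage_explore(q_row, eps):
    """Pick the greedy action, or the current runner-up with probability eps."""
    ranked = sorted(range(len(q_row)), key=lambda a: q_row[a], reverse=True)
    if len(ranked) > 1 and random.random() < eps:
        return ranked[1]          # current suboptimal action
    return ranked[0]              # greedy action

def train(n_nodes=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    # Toy "detection contribution" of charging each node: the reward signal.
    reward = [0.1, 0.9, 0.4, 0.6]
    # State = last node charged; action = next node to charge.
    q = [[0.0] * n_nodes for _ in range(n_nodes)]
    state = 0
    for _ in range(episodes):
        a = multistage_explore(q[state], eps)
        r = reward[a] if a != state else 0.0  # no gain re-charging same node
        q[state][a] += alpha * (r + gamma * max(q[a]) - q[state][a])
        state = a
    return q

q = train()
```

Under plain greedy selection the agent would stay stuck charging node 0 (its Q-row starts all-zero); the runner-up exploration is what lets it discover the high-contribution node 1.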
Pub Date : 2025-11-01Epub Date: 2025-09-01DOI: 10.1016/j.jnca.2025.104286
Devi Priya V.S. , Sibi Chakkaravarthy Sethuraman , Muhammad Khurram Khan
Critical infrastructure and industrial systems are becoming increasingly networked and equipped with computing and communications tools. To manage processes and automate them where possible, Industrial Control Systems (ICS) coordinate a variety of components, including monitoring tools and software platforms. Increasingly complex data now runs over these networks, including data (past), money (present), and brains (future). To reliably detect specific services and patterns (deep learning) and to automatically verify authenticity and transfer value (blockchain), deep learning and blockchain are integrated into the ICS network. Hence, we conducted a thorough examination of the models published in the literature to understand how to integrate machine learning and blockchain efficiently and successfully for intrusion detection services. We also provide useful guidance for future research in this area by noting significant issues that must be addressed before substantial deployments of IDS models in ICS.
{"title":"Blockchain-based Deep Learning Models for Intrusion Detection in Industrial Control Systems: Frameworks and Open Issues","authors":"Devi Priya V.S. , Sibi Chakkaravarthy Sethuraman , Muhammad Khurram Khan","doi":"10.1016/j.jnca.2025.104286","DOIUrl":"10.1016/j.jnca.2025.104286","url":null,"abstract":"<div><div>Critical infrastructure and industrial systems are becoming increasingly networked and equipped with computing and communications tools. To manage processes and automate them where possible, Industrial Control Systems (ICS) coordinate a variety of components, including monitoring tools and software platforms. Increasingly complex data now runs over these networks, including data (past), money (present), and brains (future). To reliably detect specific services and patterns (deep learning) and to automatically verify authenticity and transfer value (blockchain), deep learning and blockchain are integrated into the ICS network. Hence, we conducted a thorough examination of the models published in the literature to understand how to integrate machine learning and blockchain efficiently and successfully for intrusion detection services. We also provide useful guidance for future research in this area by noting significant issues that must be addressed before substantial deployments of IDS models in ICS.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104286"},"PeriodicalIF":8.0,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145108681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-01Epub Date: 2025-09-02DOI: 10.1016/j.jnca.2025.104291
Fernando Román-García, Juan Hernández-Serrano, Oscar Esparza
This article introduces the Non-Repudiable Data Exchange (NoRDEx) protocol, designed to ensure non-repudiation in data exchanges. Unlike traditional non-repudiation and fair exchange protocols, NoRDEx can be considered decentralized as it eliminates the need for a centralized Trusted Third Party (TTP) by using a Distributed Ledger Technology (DLT) to store cryptographic proofs without revealing the exchanged message. NoRDEx is an optimistic non-repudiation protocol, as it only uses the DLT in case of a dispute. The protocol has been implemented and tested in real-world environments, with performance assessments covering cost, overhead, and execution time. A formal security analysis using the Syverson Van Oorschot (SVO) logical model demonstrates NoRDEx’s ability to resolve disputes securely.
{"title":"NoRDEx: A decentralized optimistic non-repudiation protocol for data exchanges","authors":"Fernando Román-García, Juan Hernández-Serrano, Oscar Esparza","doi":"10.1016/j.jnca.2025.104291","DOIUrl":"10.1016/j.jnca.2025.104291","url":null,"abstract":"<div><div>This article introduces the Non-Repudiable Data Exchange (NoRDEx) protocol, designed to ensure non-repudiation in data exchanges. Unlike traditional non-repudiation and fair exchange protocols, NoRDEx can be considered decentralized as it eliminates the need for a centralized Trusted Third Party (TTP) by using a Distributed Ledger Technology (DLT) to store cryptographic proofs without revealing the exchanged message. NoRDEx is an optimistic non-repudiation protocol, as it only uses the DLT in case of a dispute. The protocol has been implemented and tested in real-world environments, with performance assessments covering cost, overhead, and execution time. A formal security analysis using the Syverson Van Oorschot (SVO) logical model demonstrates NoRDEx’s ability to resolve disputes securely.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"243 ","pages":"Article 104291"},"PeriodicalIF":8.0,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145059796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
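The optimistic pattern the NoRDEx abstract describes, signed proofs over a message hash exchanged off-chain, with the ledger touched only on dispute and never receiving the message itself, can be sketched as follows. This is a heavily simplified illustration, not the NoRDEx wire format: the `Ledger` class and key names are hypothetical, and HMAC stands in for the asymmetric signatures a real deployment would use.

```python
import hashlib
import hmac

class Ledger:
    """Stand-in DLT: append-only store of proof digests, no message content."""
    def __init__(self):
        self.entries = []

    def publish(self, proof_digest):
        self.entries.append(proof_digest)
        return len(self.entries) - 1

def make_proof(key, message):
    # The proof binds the sender to H(message) without revealing the message.
    digest = hashlib.sha256(message).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "sig": sig}

def verify_proof(key, message, proof):
    digest = hashlib.sha256(message).hexdigest()
    ok_hash = digest == proof["hash"]
    ok_sig = hmac.compare_digest(
        proof["sig"],
        hmac.new(key, digest.encode(), hashlib.sha256).hexdigest())
    return ok_hash and ok_sig

ledger = Ledger()
alice_key = b"alice-secret"       # hypothetical shared/signing key
msg = b"sensor batch #42"         # hypothetical payload
proof = make_proof(alice_key, msg)

# Happy path: the receiver verifies off-chain; the ledger is never used.
ok = verify_proof(alice_key, msg, proof)

# Dispute path: only the proof digest (not msg) is published for arbitration.
idx = ledger.publish(proof["hash"])
```

The "optimistic" property shows up in the happy path never touching the ledger, and the privacy property in the dispute path publishing only `H(message)`.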