A resilient fog-enabled IoV architecture: Adaptive post-quantum security framework with generalized signcryption and blockchain-enhanced trust management
Pub Date: 2025-10-20 | DOI: 10.1016/j.jnca.2025.104367
Junhao Li, Qiang Nong, Ziyu Liu
Vehicular Fog Computing (VFC) extends the fog computing paradigm to empower the Internet of Vehicles (IoV) by delivering ubiquitous computing and ultra-low latency, features critical to applications such as autonomous driving and collision avoidance. However, the dynamic and open nature of this architecture presents significant challenges in implementing robust security measures, ensuring data integrity, and safeguarding user privacy. Furthermore, most existing solutions fail to adequately prioritize the distinct requirements of safety-critical and non-safety-critical IoV services, thereby limiting their adaptability across heterogeneous application scenarios. Consequently, there is a growing need for flexible and resilient dynamic security mechanisms that optimize resource utilization in latency-sensitive and computationally intensive IoV systems. Additionally, IoV systems must be equipped with defenses against evolving threats, including the emerging risk of quantum computing attacks. To address these challenges, this paper proposes a Quantum-resistant Blockchain-Assisted Generalized Signcryption (QBGS) protocol for vehicular fog computing. It combines post-quantum cryptography with adaptive trust orchestration, tailored specifically for next-generation IoV systems that require decentralized trust management and service-differentiated security. Unlike conventional static security methods, QBGS dynamically adapts its cryptographic operations (encryption, signature, or signcryption) to evolving environmental factors such as traffic density and threat severity, enabling context-aware security adjustments that enhance both efficiency and resilience. Moreover, QBGS incorporates a blockchain-integrated fog layer that supports lightweight protocols designed to curb the dissemination of false information. Through extensive theoretical analysis and systematic simulations based on an urban traffic case study, we demonstrate the practicality of QBGS for post-quantum secure IoV.
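As a concrete illustration of the service-differentiated dispatch that generalized signcryption enables, the sketch below selects encryption-only, signature-only, or full signcryption per message from its context. This is a minimal sketch under stated assumptions: the policy thresholds are hypothetical, and SHAKE-256 keystreams and HMAC tags stand in for real post-quantum primitives; it is not the QBGS construction itself.

```python
# Illustrative mode dispatch for a generalized signcryption interface.
# The threat threshold and the symmetric stand-ins are assumptions.
import hashlib, hmac, os
from enum import Enum

class Mode(Enum):
    ENCRYPT = 1      # confidentiality only (non-safety telemetry)
    SIGN = 2         # authenticity only (public safety broadcasts)
    SIGNCRYPT = 3    # both (safety-critical unicast)

def select_mode(safety_critical: bool, confidential: bool, threat_level: float) -> Mode:
    # Hypothetical policy: escalate to full signcryption under high threat.
    if threat_level > 0.7 or (safety_critical and confidential):
        return Mode.SIGNCRYPT
    return Mode.SIGN if safety_critical else Mode.ENCRYPT

def protect(msg: bytes, key: bytes, mode: Mode) -> dict:
    out = {"mode": mode.name, "nonce": os.urandom(16)}
    if mode in (Mode.ENCRYPT, Mode.SIGNCRYPT):
        # Stream-cipher stand-in: keystream derived with SHAKE-256.
        ks = hashlib.shake_256(key + out["nonce"]).digest(len(msg))
        msg = bytes(a ^ b for a, b in zip(msg, ks))
    out["payload"] = msg
    if mode in (Mode.SIGN, Mode.SIGNCRYPT):
        # MAC stand-in for a post-quantum signature.
        out["tag"] = hmac.new(key, out["nonce"] + msg, hashlib.sha3_256).digest()
    return out

if __name__ == "__main__":
    key = os.urandom(32)
    pkt = protect(b"brake event at junction 12", key, select_mode(True, True, 0.9))
    print(pkt["mode"], len(pkt["payload"]))
```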
{"title":"A resilient fog-enabled IoV architecture: Adaptive post-quantum security framework with generalized signcryption and blockchain-enhanced trust management","authors":"Junhao Li, Qiang Nong, Ziyu Liu","doi":"10.1016/j.jnca.2025.104367","DOIUrl":"10.1016/j.jnca.2025.104367","url":null,"abstract":"<div><div>Vehicular Fog Computing (VFC) extends the fog computing paradigms to empower the Internet of Vehicles (IoV) by delivering ubiquitous computing and ultra-low latency-features critical to applications such as autonomous driving and collision avoidance. However, the dynamic and open nature of this architecture presents significant challenges in implementing robust security measures, ensuring the integrity of data, and safeguarding user privacy. Furthermore, most existing solutions fail to adequately prioritize the distinct requirements of safety-critical and non-safety-critical IoV services, thereby limiting their adaptability across heterogeneous application scenarios. Consequently, there is a growing need to develop flexible and resilient dynamic security mechanisms that optimize resource utilization in latency-sensitive and computationally intensive IoV systems. Additionally, IoVs systems must be equipped with defenses against evolving threats, including the emerging risk of quantum computing attacks. To address these challenges, this paper proposes a Quantum-resistant Blockchain-Assisted Generalized Signcryption (QBGS) protocol for vehicular fog computing. It synergizes post-quantum cryptography with adaptive trust orchestration, tailored specifically for next-generation IoV systems that require decentralized trust management and service-differentiated security. Unlike conventional static security methods, QBGS dynamically adjusts cryptographic operations such as encryption, signature, and signcryption to evolving environmental factors such as traffic density and threat severity. This enables context-aware security adjustments that enhance both efficiency and resilience. Moreover, QBGS incorporates a blockchain-integrated fog layer that supports lightweight protocols designed to curb the dissemination of false information. Through extensive theoretical analysis and systematic simulations based on an urban traffic case study, we demonstrate the practicality of QBGS for post-quantum secure IoV.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104367"},"PeriodicalIF":8.0,"publicationDate":"2025-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145364128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancement and optimization of FlexE technology within metro transport networks
Pub Date: 2025-10-18 | DOI: 10.1016/j.jnca.2025.104365
Mu Liang, Chen Zhang, Tao Huang
Flexible Ethernet (FlexE) technology is a groundbreaking solution for addressing diverse service requirements and network-slicing demands in 5G networks, enabling high-bandwidth, low-latency, and efficient multi-service transmission. However, current FlexE technology suffers from inefficient bandwidth adjustment, primarily due to its slow overhead insertion mechanism, a problem particularly evident in metro transport networks (MTNs). This inefficiency not only prolongs service reconfiguration time but also wastes significant bandwidth along end-to-end network paths. Furthermore, the latency of overhead configuration necessitates substantial buffer capacity at network nodes to store pending data, imposing considerable storage pressure on network equipment. In this study, we propose an innovative overhead frame insertion mechanism that addresses these critical limitations while maintaining full compliance with FlexE standards. The proposed method features a streamlined overhead block structure that enables simultaneous and continuous transmission of all overhead information, significantly accelerating service-to-timeslot mapping and reducing link establishment time. Moreover, the proposed mechanism integrates seamlessly with alignment marker insertion in the Physical Coding Sublayer (PCS) and maintains full compatibility with the IEEE 802.3 standard, simplifying overhead block extraction and data processing at the receiving end. Simulation results demonstrate that, compared to existing FlexE technology, our solution achieves up to a 20-fold improvement in bandwidth adjustment time while substantially reducing buffer requirements and optimizing bandwidth utilization across the entire network infrastructure.
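A toy calculation makes the motivation concrete: when overhead blocks are spread one per calendar period, a receiver must wait many periods to assemble a whole overhead frame, whereas back-to-back transmission needs only one. The constants below (8 blocks per overhead frame, one insertion per 1023 repetitions of a 20-block calendar) follow common FlexE descriptions but are assumptions here, not taken from the paper.

```python
# Toy model: payload blocks a receiver observes before collecting one full
# overhead frame, under spread vs. contiguous overhead insertion.
def blocks_until_frame(oh_per_frame: int, spacing: int, contiguous: bool) -> int:
    """Payload blocks transmitted before one full overhead frame is received."""
    if contiguous:
        return spacing  # wait for the next insertion point, then read all blocks
    return spacing * oh_per_frame  # one overhead block per insertion point

OH_PER_FRAME, SPACING = 8, 1023 * 20  # assumed FlexE-like parameters
spread = blocks_until_frame(OH_PER_FRAME, SPACING, contiguous=False)
burst = blocks_until_frame(OH_PER_FRAME, SPACING, contiguous=True)
print(f"spread insertion:     {spread} payload blocks per overhead frame")
print(f"contiguous insertion: {burst} payload blocks ({spread // burst}x faster)")
```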
{"title":"Enhancement and optimization of FlexE technology within metro transport networks","authors":"Mu Liang , Chen Zhang , Tao Huang","doi":"10.1016/j.jnca.2025.104365","DOIUrl":"10.1016/j.jnca.2025.104365","url":null,"abstract":"<div><div>Flexible Ethernet (FlexE) technology represents a groundbreaking solution for addressing diverse service requirements and network slicing demands in 5G networks, enabling high-bandwidth, low-latency, and efficient multi-service transmission. However, the current FlexE technology suffers from inefficient bandwidth adjustment, primarily due to its slow overhead insertion mechanism, particularly evident in metro transport networks (MTNs). This inefficiency not only prolongs service reconfiguration time but also leads to significant bandwidth resource wastage along end-to-end network paths. Furthermore, the latency overhead configuration necessitates substantial buffer capacity at network nodes to store pending data, imposing considerable storage pressure on network equipment. In this study, we propose an innovative overhead frame insertion mechanism that addresses these critical limitations while maintaining full compliance with FlexE standards. The proposed method features a streamlined overhead block structure that enables simultaneous and continuous transmission of all overhead information, significantly accelerating service-to-timeslot mapping and reducing link establishment time. Moreover, the proposed mechanism seamlessly integrates with the alignment marker insertion in Physical Coding Sublayer (PCS) and maintains full compatibility with IEEE 802.3 standard, simplifying overhead block extraction and data processing at the receiving end. Simulation results demonstrate that compared to existing FlexE technology, our solution achieves up to a 20-fold improvement in bandwidth adjustment time while substantially reducing buffer requirements and optimizing bandwidth utilization across the entire network infrastructure.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104365"},"PeriodicalIF":8.0,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145364131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trace-distance based end-to-end entanglement fidelity with information preservation in quantum networks
Pub Date: 2025-10-17 | DOI: 10.1016/j.jnca.2025.104366
Pankaj Kumar, Binayak Kar, Shan-Hsiang Shen
Quantum networks have the potential to revolutionize communication and computation by outperforming their classical counterparts. Many quantum applications depend on the reliable distribution of high-fidelity entangled pairs between distant nodes. However, due to decoherence and channel noise, entanglement fidelity degrades exponentially with distance, posing a significant challenge to maintaining robust quantum communication. To address this, we propose two strategies to enhance end-to-end (E2E) fidelity and information preservation in quantum networks. First, we employ closeness centrality to identify optimal intermediary nodes that minimize average path length. Second, we introduce the Trace-Distance based Path Purification (TDPP) algorithm, which fuses topological and quantum state information to support fidelity-aware routing decisions. TDPP leverages closeness centrality and trace-distance to identify paths that optimize both network efficiency and entanglement fidelity. Simulation results demonstrate that our approach significantly improves network throughput and E2E entanglement fidelity, outperforming existing routing methods while enhancing information preservation.
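The sketch below illustrates the two ingredients named above: trace distance as a link-quality score and closeness centrality for relay ranking. It assumes each link holds a Werner state of known fidelity and treats path cost as the sum of per-link trace distances to the ideal Bell state; both are simplifying assumptions for illustration, not the paper's exact TDPP algorithm.

```python
# Fidelity-aware routing sketch: score links by trace distance to |Phi+>,
# route on least cumulative trace distance, rank relays by closeness centrality.
import numpy as np
import networkx as nx

PHI = np.zeros((4, 4))
PHI[0, 0] = PHI[0, 3] = PHI[3, 0] = PHI[3, 3] = 0.5  # |Phi+><Phi+|

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    # T(rho, sigma) = 0.5 * ||rho - sigma||_1 via eigenvalues of a Hermitian matrix.
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def werner(F: float) -> np.ndarray:
    # Werner state with fidelity F to the Bell state (assumed link noise model).
    return F * PHI + (1 - F) / 3 * (np.eye(4) - PHI)

G = nx.Graph()
for u, v, F in [("A", "B", .95), ("B", "D", .90), ("A", "C", .99), ("C", "D", .98), ("B", "C", .97)]:
    G.add_edge(u, v, td=trace_distance(werner(F), PHI))  # equals 1 - F for Werner links

cc = nx.closeness_centrality(G)                  # used to rank candidate relays
path = nx.shortest_path(G, "A", "D", weight="td")
print("fidelity-aware path:", path,
      "relay centralities:", {n: round(cc[n], 2) for n in path})
```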
{"title":"Trace-distance based end-to-end entanglement fidelity with information preservation in quantum networks","authors":"Pankaj Kumar, Binayak Kar, Shan-Hsiang Shen","doi":"10.1016/j.jnca.2025.104366","DOIUrl":"10.1016/j.jnca.2025.104366","url":null,"abstract":"<div><div>Quantum networks have the potential to revolutionize communication and computation by outperforming their classical counterparts. Many quantum applications depend on the reliable distribution of high-fidelity entangled pairs between distant nodes. However, due to decoherence and channel noise, entanglement fidelity degrades exponentially with distance, posing a significant challenge to maintaining robust quantum communication. To address this, we propose two strategies to enhance end-to-end (E2E) fidelity and information preservation in quantum networks. First, we employ closeness centrality to identify optimal intermediary nodes that minimize average path length. Second, we introduce the Trace-Distance based Path Purification (TDPP) algorithm, which fuses topological and quantum state information to support fidelity-aware routing decisions. TDPP leverages closeness centrality and trace-distance to identify paths that optimize both network efficiency and entanglement fidelity. Simulation results demonstrate that our approach significantly improves network throughput and E2E entanglement fidelity, outperforming existing routing methods while enhancing information preservation.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104366"},"PeriodicalIF":8.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145364206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost-effective container elastic scaling and scheduling under multi-resource constraints
Pub Date: 2025-10-17 | DOI: 10.1016/j.jnca.2025.104359
Hongjian Li, Yu Tian, Yuzheng Cui, Xiaolin Duan
Recent advancements in containerization and Kubernetes have solidified their status as mainstream paradigms for service delivery. However, existing Kubernetes scaling mechanisms often suffer from limitations such as suboptimal utilization of multi-dimensional resources, reliance on historical workload patterns, and an inability to adapt quickly to real-time workload fluctuations. To overcome these limitations, this study introduces two cost-effective resource scheduling strategies. First, a hybrid control-theoretic vertical scaling algorithm is proposed that operates under multi-resource constraints. Leveraging Prometheus monitoring data spanning diverse resource metrics, the algorithm performs dynamic resource optimization through a hierarchical decision-making model that combines feedforward prediction with feedback correction. Second, a synergistic vertical–horizontal elastic scaling framework, MR-CEHA, is developed. This framework classifies resource states using multi-level thresholds and integrates a cost-sensitive optimization model to balance instance-level resource allocation with cluster-level scaling operations. Experimental evaluations demonstrate substantial improvements: under surge load conditions, the SLA violation rate decreased by 16.5%; during load reduction scenarios, energy consumption dropped by 39.4%; and in mixed workload environments, energy usage declined by 16.6% while the SLA violation rate fell by 37.8%. These findings advance both the theoretical understanding and the practical realization of efficient resource utilization and service stability in Kubernetes-based cloud deployments, offering value for academic exploration and industrial implementation.
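To make the feedforward-plus-feedback idea concrete, the sketch below sets a pod's CPU request from a workload forecast (feedforward) and corrects the residual against observed utilization (feedback, PI-style). The gains, target utilization, and naive trend forecast are illustrative assumptions, not the paper's tuned controller.

```python
# Minimal hybrid vertical scaler: feedforward forecast + PI-style feedback.
class HybridVerticalScaler:
    def __init__(self, target_util=0.65, kp=0.5, ki=0.1):
        self.target, self.kp, self.ki, self.i_err = target_util, kp, ki, 0.0

    def forecast(self, history):
        # Feedforward: naive linear trend extrapolation of CPU usage (cores).
        return history[-1] + (history[-1] - history[-2]) if len(history) > 1 else history[-1]

    def recommend(self, history, observed_cpu, current_request):
        ff = self.forecast(history) / self.target           # request that hits target util
        err = observed_cpu / current_request - self.target  # feedback on real utilization
        self.i_err += err
        correction = (self.kp * err + self.ki * self.i_err) * current_request
        return max(ff + correction, 0.1)                    # never shrink below 0.1 core

scaler = HybridVerticalScaler()
next_req = scaler.recommend([0.8, 1.1, 1.5], observed_cpu=1.4, current_request=2.0)
print(f"next CPU request: {next_req:.2f} cores")
```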
{"title":"Cost-effective container elastic scaling and scheduling under multi-resource constraints","authors":"Hongjian Li , Yu Tian , Yuzheng Cui , Xiaolin Duan","doi":"10.1016/j.jnca.2025.104359","DOIUrl":"10.1016/j.jnca.2025.104359","url":null,"abstract":"<div><div>Recent advancements in containerization and Kubernetes have solidified their status as mainstream paradigms for service delivery. However, existing Kubernetes scaling mechanisms often suffer from limitations, such as suboptimal utilization of multi-dimensional resources, reliance on historical workload patterns, and inability to adapt quickly to real-time workload fluctuations. To overcome these limitations, this study introduces two cost-effective resource scheduling strategies. First, a hybrid control-theoretic vertical scaling algorithm is proposed, operating under multi-resource constraints. This algorithm leverages Prometheus monitoring data encompassing diverse resource metrics. It facilitates dynamic resource optimization through a hierarchical decision-making model that combines feedforward prediction with feedback correction mechanisms. Second, a synergistic vertical–horizontal elastic scaling framework, namely the MR-CEHA framework proposed in this work, is developed. This framework classifies resource states using multi-level thresholds and integrates a cost-sensitive optimization model to balance instance-level resource allocation with cluster-level scaling operations. Experimental evaluations demonstrate substantial improvements: under surge load conditions, the SLA violation rate decreased by 16.5%; during load reduction scenarios, energy consumption dropped by 39.4%; and in mixed workload environments, energy usage declined by 16.6% while simultaneously achieving a 37.8% reduction in SLA violation rate. These findings contribute both to the theoretical understanding and the practical advancement of efficient resource utilization and service stability in Kubernetes-based cloud deployments, offering meaningful value for academic exploration and industrial implementation.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104359"},"PeriodicalIF":8.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145364129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced spreading factor allocation and backscatter communication via membership based tuna swarm optimization for LoRa protocol
Pub Date: 2025-10-13 | DOI: 10.1016/j.jnca.2025.104360
Swathika R., Dilip Kumar S.M.
LoRa (Long Range), an Internet of Things (IoT) communication method based on spread spectrum modulation, has recently enabled ultra-long-distance transmission. Data collisions occur frequently in networks with many nodes, and the effective data rate often suffers over ultra-long-distance links. This work examines several kinds of data collisions in LoRa wireless networks, most of which are influenced by the assignment of the Spreading Factor (SF). The study also explores the integration of Membership-based Tuna Swarm Optimization (MTSO) with LoRa modulation in Backscatter Communications (BackCom). An analytical framework is established to examine the error rate performance of the simulated network under consideration. Under restricted network resources, MTSO is employed to implement an SF redistribution mechanism, thereby increasing the terminal capacity of the LoRa gateway. Without increasing network or gateway capacity, the proposed technique reduces the frequency of data collisions. This paper addresses the reallocation of SFs as the number of terminals increases, presenting an SF selection mechanism and an iterative SF inspection method to ensure independent data rates for each communication link. Specifically, assuming cancellation of Radio-Frequency Interference (RFI), this paper derives new exact and approximate closed-form expressions for the Bit Error Rate (BER), Symbol Error Rate (SER), and Frame Error Rate (FER). The findings show that as the Signal-to-Noise Ratio (SNR) increases, the system's BER, FER, and SER performance also improves when the SF variables are tuned.
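The sketch below shows a baseline SF selection plus a simple redistribution pass: each node gets the fastest SF its link SNR supports, and overcrowded SFs spill their weakest members to the next slower SF to spread collisions. The demodulation-floor SNRs follow commonly cited SX127x values, and the greedy spill with a per-SF capacity cap is an assumption standing in for the paper's MTSO-driven redistribution.

```python
# Threshold-based SF assignment with a greedy redistribution pass (illustrative).
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}  # dB

def lowest_sf(snr_db: float) -> int:
    # Fastest SF the link supports; SF12 is the fallback for the weakest links.
    return next((sf for sf in range(7, 13) if snr_db >= SNR_FLOOR[sf]), 12)

def redistribute(node_snrs: dict, cap: int) -> dict:
    """Keep the strongest links on each fast SF; spill the rest to a slower SF."""
    assign = {n: lowest_sf(s) for n, s in node_snrs.items()}
    for sf in range(7, 12):
        members = sorted((n for n, a in assign.items() if a == sf),
                         key=lambda n: node_snrs[n], reverse=True)
        for n in members[cap:]:   # overflow: a slower SF is always demodulable
            assign[n] = sf + 1
    return assign

nodes = {"n1": -5.0, "n2": -6.0, "n3": -7.0, "n4": -11.0, "n5": -19.0}
print(redistribute(nodes, cap=2))  # one crowded SF7 node spills into SF8
```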
{"title":"Enhanced spreading factor allocation and backscatter communication via membership based tuna swarm optimization for LoRa protocol","authors":"Swathika R., Dilip Kumar S.M.","doi":"10.1016/j.jnca.2025.104360","DOIUrl":"10.1016/j.jnca.2025.104360","url":null,"abstract":"<div><div>With spread spectrum modulation, LoRa (Long-Range), an Internet of Things (IoT) communication method, enables ultra-long-distance transmission in the recent times. Data conflicts occur frequently in networks with many nodes, and the equivalent rate often suffers in ultra-long-distance transmissions. This work examines several kinds of data collisions in LoRa wireless networks, most of which are influenced by the assignment of the Spreading Factor (SF). The study also explores the integration of Membership based Tuna Swarm Optimization (MTSO) with LoRa modulation into Backscatter Communications (BackCom). An analytical structure is established to examine the error rate efficiency of the network simulation under consideration. With restricted network resources, MTSO is employed to implement an SF redistribution mechanism, thereby increasing the terminal capacity of the LoRa gateway. Without increasing network or gateway capacity, the proposed technique reduces the frequency of data collisions. This paper addresses the reallocation of SF as the number of terminals increases, presenting an SF selection mechanism and an iterative SF inspection method to ensure independent data rates for each communication link. Specifically, assuming canceled Radio-Frequency Interference (RFI), this paper derives new precise and estimated closed-form equations for the Bit Error Rate (BER), Symbol Error Rate (SER), and Frame Error Rate (FER). The findings show that as the Signal-To-Noise Ratio (SNR) increases, the system’s BER, FER, and SER efficiency also improve when the SF variables are tuned.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104360"},"PeriodicalIF":8.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing service level agreement in cloud computing with smart virtual machine scheduling using clustered differential evolution and deep learning
Pub Date: 2025-10-11 | DOI: 10.1016/j.jnca.2025.104361
Tassawar Ali, Hikmat Ullah Khan, Babar Nazir, Fawaz Khaled Alarfaj, Mohammed Alreshoodi
Cloud computing is expanding rapidly due to the increasing demand for scalable and efficient services. This growth necessitates more extensive physical infrastructure to accommodate the growing workload. However, managing these workloads effectively presents challenges, particularly in optimizing virtual machine (VM) scheduling. Traditional reactive scheduling methods respond to workload changes only after they occur; such approaches struggle in dynamic cloud environments, leading to performance inefficiencies, frequent VM migrations, and service-level agreement (SLA) violations. This study introduces IntelliSchNet, a novel VM scheduling approach designed to address these challenges. IntelliSchNet uses a deep learning model whose neuron weights are optimized using agglomerative clustering-based differential evolution to accurately predict future workloads. Based on these predictions, an intelligent scheduling plan allocates VMs to suitable hosts. The strategy prioritizes non-overloaded hosts to maximize resource utilization, reduce VM migrations, and thereby minimize SLA violations. The core methodology integrates a clustered adaptation of the differential evolution algorithm to fine-tune deep neural network parameters. Real-world data from Google's datacenters is used for training, consisting of traces collected from a production cluster with over 11,000 machines and more than 650,000 jobs, ensuring reliable and practical workload predictions. The effectiveness of IntelliSchNet is evaluated using nine performance metrics on actual cloud workload datasets. The major findings highlight a significant improvement in VM scheduling efficiency: IntelliSchNet reduces SLA violations by up to 44%, ensuring more stable and reliable cloud services, which enhances service dependability and customer satisfaction. In conclusion, IntelliSchNet outperforms traditional scheduling methods by optimizing workload placement and resource allocation. Its proactive approach enhances cloud system stability, efficiency, and scalability, contributing to a more sustainable and high-performing cloud computing environment.
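For readers unfamiliar with the optimizer at the core of this approach, the sketch below implements a plain rand/1/bin differential evolution loop; the paper's variant additionally clusters the population with agglomerative clustering, which is omitted here, and the sphere objective is a stand-in for the network's prediction loss.

```python
# Plain differential evolution (rand/1/bin): mutate with scaled difference
# vectors, crossover against the current member, keep the trial if it improves.
import random

def differential_evolution(loss, dim, pop_size=20, F=0.8, CR=0.9, gens=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [ai + F * (bi - ci) if random.random() < CR else xi
                     for xi, ai, bi, ci in zip(x, a, b, c)]
            if loss(trial) < loss(x):    # greedy selection
                pop[i] = trial
    return min(pop, key=loss)

best = differential_evolution(lambda w: sum(v * v for v in w), dim=4)
print([round(v, 3) for v in best])       # should approach the optimum at 0
```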
{"title":"Optimizing service level agreement in cloud computing with smart virtual machine scheduling using clustered differential evolution and deep learning","authors":"Tassawar Ali , Hikmat Ullah Khan , Babar Nazir , Fawaz Khaled Alarfaj , Mohammed Alreshoodi","doi":"10.1016/j.jnca.2025.104361","DOIUrl":"10.1016/j.jnca.2025.104361","url":null,"abstract":"<div><div>Cloud computing is expanding rapidly due to the increasing demand for scalable and efficient services. This growth necessitates more extensive physical infrastructure to accommodate the growing workload. However, managing these workloads effectively presents issues, particularly in optimizing virtual machine (VM) scheduling. Traditional reactive scheduling methods respond to workload changes only after they occur. These approaches struggle in dynamic cloud environments, leading to performance inefficiencies, frequent VM migrations, and service-level agreement (SLA) violations. The purpose of this study is to introduce IntelliSchNet, a novel VM scheduling approach designed to address these challenges. IntelliSchNet uses a deep learning model in which the feature weights of its neurons are optimized using agglomerative clustering-based differential evolution to accurately predict future workloads. Based on these predictions, an intelligent scheduling plan is created to allocate VMs to suitable hosts. The strategy prioritizes non-overloaded hosts to maximize resource utilization and reduce VM migrations, and hence minimizes SLA violations. The basic methodology includes integrating a clustered adaptation of the differential evolution algorithm to fine-tune deep neural network parameters. Real-world data from Google's datacenters is used for training, consisting of traces collected from a production cluster with over 11,000 machines and more than 650,000 jobs, ensuring reliable and practical workload predictions. The effectiveness of IntelliSchNet is evaluated using nine different performance metrics on actual cloud workload datasets. The major findings highlight a significant improvement in VM scheduling efficiency. IntelliSchNet reduces SLA violations by up to 44 %, ensuring more stable and reliable cloud services. This reduction enhances service dependability and increases customer satisfaction. In conclusion, IntelliSchNet outperforms traditional scheduling methods by optimizing workload placement and resource allocation. Its proactive approach enhances cloud system stability, efficiency, and scalability. These improvements contribute to a more sustainable and high-performing cloud computing environment.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104361"},"PeriodicalIF":8.0,"publicationDate":"2025-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DS-RAM: A dynamic sharding and reputation-based auditing mechanisms for blockchain consensus in IIoT
Pub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104362
Jiali Zheng, Jinhui Chen, Shuainan Liu
Sharding is an effective strategy for improving the scalability of blockchain, especially for the massive data processing demands of Industrial Internet of Things (IIoT) scenarios. However, existing sharding schemes often overlook factors such as node reputation, resource capacity, and historical behavior, leading to imbalanced resource allocation, which in turn delays real-time data processing and compromises system security. The blockchain consensus mechanism determines how nodes reach agreement and is thus the core of system efficiency and security. However, traditional consensus mechanisms lack effective detection of malicious nodes and sufficient supervision of consensus nodes, leaving the system vulnerable to attacks and malicious behavior. To address these issues, this paper proposes DS-RAM (Dynamic Sharding and Reputation-based Auditing Mechanism), a dynamic sharding mechanism based on the weighted K-Medoids and Canopy algorithms. It comprehensively considers factors such as node geographical location, reputation, interaction frequency, and historical behavior to optimize node allocation, ensuring a balanced distribution of sharding resources and thus improving system throughput and security. Additionally, DS-RAM introduces an auditing node module that provides additional, reputation-based supervision of consensus nodes, enabling timely detection and isolation of potentially malicious nodes and thereby effectively enhancing the fault tolerance of the consensus mechanism and overall system security. Simulation results demonstrate that, compared to traditional sharding schemes and reputation-based blockchains, the proposed method effectively improves sharding security and blockchain sharding performance.
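The sketch below illustrates the shard-assignment core: a weighted k-medoids clustering over per-node features such as location, reputation, and interaction frequency, so that per-feature weights trade geography against trust. The feature weights and toy data are assumptions, and the Canopy pre-clustering stage is omitted.

```python
# Weighted k-medoids sketch for reputation-aware shard formation.
import random

def wdist(a, b, w):
    # Weighted Euclidean distance over (location_x, location_y, reputation, ...).
    return sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)) ** 0.5

def k_medoids(points, k, w, iters=50):
    medoids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:   # assign each node to its nearest medoid
            clusters[min(range(k), key=lambda i: wdist(p, medoids[i], w))].append(p)
        # Update each medoid to the member minimizing total intra-cluster distance.
        medoids = [min(c, key=lambda m: sum(wdist(p, m, w) for p in c)) if c else medoids[i]
                   for i, c in enumerate(clusters)]
    return medoids, clusters

random.seed(1)
nodes = [(random.random(), random.random(), random.uniform(0, 1)) for _ in range(30)]
_, shards = k_medoids(nodes, k=3, w=(1.0, 1.0, 2.0))   # reputation weighted highest
print([len(s) for s in shards])                        # shard sizes
```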
ArtPerception: ASCII art-based jailbreak on LLMs with recognition pre-test
Pub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104356
Guan-Yan Yang, Tzu-Yu Cheng, Ya-Wen Teng, Farn Wang, Kuo-Hui Yeh
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM’s recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework’s real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure’s content filters. Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks.
Content Warning: This paper includes potentially harmful and offensive model outputs.
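The recognition pre-test scores how well a model transcribes ASCII art, with the paper's Modified Levenshtein Distance (MLD) as the metric. The paper's specific modifications are not reproduced here; the sketch below shows the classic edit-distance dynamic program that such a metric builds on, normalized for a pre-test-style score.

```python
# Classic Levenshtein distance (two-row DP); baseline for an MLD-style metric.
def levenshtein(s: str, t: str) -> int:
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

# Pre-test scoring idea: lower normalized distance = better art recognition.
guess, target = "b0mb", "bomb"   # hypothetical model transcription vs. intent
print(levenshtein(guess, target) / max(len(guess), len(target)))  # 0.25
```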
{"title":"ArtPerception: ASCII art-based jailbreak on LLMs with recognition pre-test","authors":"Guan-Yan Yang , Tzu-Yu Cheng , Ya-Wen Teng , Farn Wang , Kuo-Hui Yeh","doi":"10.1016/j.jnca.2025.104356","DOIUrl":"10.1016/j.jnca.2025.104356","url":null,"abstract":"<div><div>The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM’s recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework’s real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure’s content filters. Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks.</div><div>Content Warning: This paper includes potentially harmful and offensive model outputs.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104356"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intelligent and explainable intrusion detection framework for Internet of Sensor Things using generalizable optimized active Machine Learning
Pub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104358
Muhammad Hasnain, Nadeem Javaid, Abdul Khader Jilani Saudagar, Neeraj Kumar
<div><div>Intrusion Detection (ID) in the Internet of Secure Things (IoST) has become increasingly critical due to the rising frequency and sophistication of cyber-attacks, which can lead to severe consequences such as data breaches, financial losses, and service disruptions. These risks are further intensified in computationally limited environments, where limited computational capacity and rapidly evolving threats make accurate and efficient detection challenging. In this study, a data-efficient ID framework tailored for resource-constrained environments is proposed by leveraging active learning and meta-heuristic optimization techniques. The proposed framework systematically addresses three critical limitations commonly observed in traditional models: data imbalance, inefficient hyperparameter tuning, and dependency on large labeled datasets. Initially, to mitigate class imbalance, adaptive synthetic sampling generates synthetic instances for minority classes, thereby enhancing learning in complex regions of the feature space. Next, for hyperparameter optimization, the Sandpiper Optimization (SO) algorithm fine-tunes the regularization parameter of Logistic Regression (LR), yielding significant improvements in model generalization. Finally, the challenge of limited labeled data is addressed through two active learning strategies: Active Learning Uncertainty-based (ALU) and Active Learning Entropy-based (ALE). These strategies selectively query the most informative samples from the unlabeled pool, ensuring maximum learning with minimal annotation effort. The performance of the proposed models is evaluated on two benchmark datasets: the wireless sensor networks and network intrusion detection datasets. Simulation results demonstrate that proposed models outperform base model LR. LRALE achieves improvements of 10.48% and 3.16% in accuracy, 19.48% and 3.16% in recall, and 7.23% and 1.04% in F1-score on WSN-DS and CIC-IDS-DS datasets, respectively. LRALU shows improvements of 18.18% and 2.11% in accuracy, 18.18% and 2.11% in recall, and 14.63% and 2.08% in Receiver Operating Characteristic-Area Under the Curve (ROC-AUC). Similarly, LRSO achieves improvements of 9.09% and 2.11% in accuracy, 9.09% and 1.05% in recall, and 9.76% and 3.12% in ROC-AUC on WSN-DS and CIC-IDS-DS datasets, respectively. To ensure model generalization and stability across different data partitions, a rigorous 10-fold cross-validation is conducted. Model interpretability is further enhanced using eXplainable artificial intelligence techniques, including Local interpretable model-agnostic explanations and Shapley additive explanations, to elucidate feature contributions and improve transparency. Additionally, statistical significance testing through paired <em>t</em>-tests confirms the robustness and reliability of the proposed models. Overall, this framework introduces a comprehensive, annotation-efficient, and transparent ID solution that significantly advances the domain, m
{"title":"An intelligent and explainable intrusion detection framework for Internet of Sensor Things using generalizable optimized active Machine Learning","authors":"Muhammad Hasnain , Nadeem Javaid , Abdul Khader Jilani Saudagar , Neeraj Kumar","doi":"10.1016/j.jnca.2025.104358","DOIUrl":"10.1016/j.jnca.2025.104358","url":null,"abstract":"<div><div>Intrusion Detection (ID) in the Internet of Secure Things (IoST) has become increasingly critical due to the rising frequency and sophistication of cyber-attacks, which can lead to severe consequences such as data breaches, financial losses, and service disruptions. These risks are further intensified in computationally limited environments, where limited computational capacity and rapidly evolving threats make accurate and efficient detection challenging. In this study, a data-efficient ID framework tailored for resource-constrained environments is proposed by leveraging active learning and meta-heuristic optimization techniques. The proposed framework systematically addresses three critical limitations commonly observed in traditional models: data imbalance, inefficient hyperparameter tuning, and dependency on large labeled datasets. Initially, to mitigate class imbalance, adaptive synthetic sampling generates synthetic instances for minority classes, thereby enhancing learning in complex regions of the feature space. Next, for hyperparameter optimization, the Sandpiper Optimization (SO) algorithm fine-tunes the regularization parameter of Logistic Regression (LR), yielding significant improvements in model generalization. Finally, the challenge of limited labeled data is addressed through two active learning strategies: Active Learning Uncertainty-based (ALU) and Active Learning Entropy-based (ALE). These strategies selectively query the most informative samples from the unlabeled pool, ensuring maximum learning with minimal annotation effort. The performance of the proposed models is evaluated on two benchmark datasets: the wireless sensor networks and network intrusion detection datasets. Simulation results demonstrate that proposed models outperform base model LR. LRALE achieves improvements of 10.48% and 3.16% in accuracy, 19.48% and 3.16% in recall, and 7.23% and 1.04% in F1-score on WSN-DS and CIC-IDS-DS datasets, respectively. LRALU shows improvements of 18.18% and 2.11% in accuracy, 18.18% and 2.11% in recall, and 14.63% and 2.08% in Receiver Operating Characteristic-Area Under the Curve (ROC-AUC). Similarly, LRSO achieves improvements of 9.09% and 2.11% in accuracy, 9.09% and 1.05% in recall, and 9.76% and 3.12% in ROC-AUC on WSN-DS and CIC-IDS-DS datasets, respectively. To ensure model generalization and stability across different data partitions, a rigorous 10-fold cross-validation is conducted. Model interpretability is further enhanced using eXplainable artificial intelligence techniques, including Local interpretable model-agnostic explanations and Shapley additive explanations, to elucidate feature contributions and improve transparency. Additionally, statistical significance testing through paired <em>t</em>-tests confirms the robustness and reliability of the proposed models. 
Overall, this framework introduces a comprehensive, annotation-efficient, and transparent ID solution that significantly advances the domain, m","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104358"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145384600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph neural network enhanced Internet of Things node classification with different node connections
Pub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104363
Mohammad Abrar Shakil Sejan, Md Habibur Rahman, Md Abdul Aziz, Rana Tabassum, Iqra Hameed, Nidal Nasser, Hyoung-Kyu Song
The Internet of Things (IoT) has profoundly impacted human life by providing ubiquitous connectivity and unique advantages. As the demand for IoT applications continues to grow, the number of connected devices is increasing at a rapid pace. This growth poses challenges in identifying data sources and managing data in large networks. The graph data structure offers a meaningful way to represent IoT networks, where nodes represent devices and edges represent their connections. In this study, we convert IoT networks into graph representations using two approaches: fully connected node graphs and randomly connected node graphs. Graph neural networks (GNNs) are highly effective for processing graph data, as they can capture relationships within graph structures based on their topological properties. We utilize GNNs to perform node classification tasks for IoT networks, investigating seven different GNN models on both complete and random graphs. The experimental results indicate that the SAGEConv model achieves high classification accuracy under dense network conditions. Additionally, the CHEBYSHEVConv model performs well with fully connected graphs, while the TAGConv model demonstrates strong performance with randomly connected graphs.
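A minimal version of the best-performing family above is a two-layer GraphSAGE node classifier, sketched below with PyTorch Geometric's SAGEConv. The tiny four-node graph, feature size, and training hyperparameters are placeholders standing in for the IoT-network graphs the study uses.

```python
# Two-layer GraphSAGE node classifier on a toy IoT device graph.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import SAGEConv

class SageNet(torch.nn.Module):
    def __init__(self, in_dim, hidden, classes):
        super().__init__()
        self.c1, self.c2 = SAGEConv(in_dim, hidden), SAGEConv(hidden, classes)

    def forward(self, x, edge_index):
        return self.c2(torch.relu(self.c1(x, edge_index)), edge_index)

# Four IoT devices, bidirectional links, 8 features each, 3 device classes.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
data = Data(x=torch.randn(4, 8), edge_index=edge_index, y=torch.tensor([0, 1, 2, 1]))

model = SageNet(8, 16, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):   # full-batch node-classification training loop
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(data.x, data.edge_index), data.y)
    loss.backward()
    opt.step()
print(model(data.x, data.edge_index).argmax(dim=1))   # predicted device classes
```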
{"title":"Graph neural network enhanced Internet of Things node classification with different node connections","authors":"Mohammad Abrar Shakil Sejan , Md Habibur Rahman , Md Abdul Aziz , Rana Tabassum , Iqra Hameed , Nidal Nasser , Hyoung-Kyu Song","doi":"10.1016/j.jnca.2025.104363","DOIUrl":"10.1016/j.jnca.2025.104363","url":null,"abstract":"<div><div>Internet of Things (IoT) has profoundly impacted human life by providing ubiquitous connectivity and unique advantages. As the demand for IoT applications continues to grow, the number of connected devices is increasing at a rapid pace. This growth poses challenges in identifying data sources and managing data in large networks. The graph data structure offers a meaningful way to represent IoT networks, where nodes represent devices and edges represent their connections. In this study, we convert IoT networks into graph representations, considering two approaches: fully connected node graphs and randomly connected node graphs. Graph neural networks (GNNs) are highly effective for processing graph data, as they can capture relationships within graph structures based on their topological properties. We utilize GNNs to perform node classification tasks for IoT networks. Seven different GNN models were investigated to perform node classification tasks on both complete and random graphs. The experimental results indicate that the SAGEConv model achieves high classification accuracy under dense network conditions. Additionally, the CHEBYSHEVConv model performs well with fully connected graphs, while the TAGConv model demonstrates strong performance with randomly connected graphs.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104363"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}