
Journal of Network and Computer Applications: Latest Articles

A resilient fog-enabled IoV architecture: Adaptive post-quantum security framework with generalized signcryption and blockchain-enhanced trust management
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-20 | DOI: 10.1016/j.jnca.2025.104367
Junhao Li, Qiang Nong, Ziyu Liu
Vehicular Fog Computing (VFC) extends the fog computing paradigm to empower the Internet of Vehicles (IoV) by delivering ubiquitous computing and ultra-low latency, features critical to applications such as autonomous driving and collision avoidance. However, the dynamic and open nature of this architecture presents significant challenges in implementing robust security measures, ensuring data integrity, and safeguarding user privacy. Furthermore, most existing solutions fail to adequately prioritize the distinct requirements of safety-critical and non-safety-critical IoV services, thereby limiting their adaptability across heterogeneous application scenarios. Consequently, there is a growing need for flexible and resilient dynamic security mechanisms that optimize resource utilization in latency-sensitive and computationally intensive IoV systems. Additionally, IoV systems must be equipped with defenses against evolving threats, including the emerging risk of quantum computing attacks. To address these challenges, this paper proposes a Quantum-resistant Blockchain-Assisted Generalized Signcryption (QBGS) protocol for vehicular fog computing. It synergizes post-quantum cryptography with adaptive trust orchestration, tailored specifically for next-generation IoV systems that require decentralized trust management and service-differentiated security. Unlike conventional static security methods, QBGS dynamically adapts cryptographic operations such as encryption, signature, and signcryption to evolving environmental factors such as traffic density and threat severity. This enables context-aware security adjustments that enhance both efficiency and resilience. Moreover, QBGS incorporates a blockchain-integrated fog layer that supports lightweight protocols designed to curb the dissemination of false information. Through extensive theoretical analysis and systematic simulations based on an urban traffic case study, we demonstrate the practicality of QBGS for post-quantum secure IoV.
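Generalized signcryption lets a single primitive operate as encryption-only, signature-only, or full signcryption. As a hedged illustration of the kind of context-aware mode selection the abstract describes (the thresholds and policy below are hypothetical and not taken from the paper), a dispatcher might look like:

```python
from enum import Enum

class Mode(Enum):
    ENCRYPT = "encryption"      # confidentiality only
    SIGN = "signature"          # authenticity only
    SIGNCRYPT = "signcryption"  # both, in one primitive

def select_mode(safety_critical: bool, threat_level: float,
                traffic_density: float) -> Mode:
    """Toy policy with hypothetical thresholds: safety-critical traffic always
    gets full signcryption; otherwise the mode escalates with observed threat
    level, and under dense traffic the cheapest adequate primitive is chosen
    to save per-message computation."""
    if safety_critical or threat_level >= 0.7:
        return Mode.SIGNCRYPT
    if threat_level >= 0.3 and traffic_density < 0.8:
        return Mode.SIGN
    return Mode.ENCRYPT
```

The point of such a dispatcher is that the cryptographic cost tracks context: only messages that need both secrecy and authenticity under elevated threat pay for the full primitive.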
Citations: 0
Enhancement and optimization of FlexE technology within metro transport networks
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-18 | DOI: 10.1016/j.jnca.2025.104365
Mu Liang, Chen Zhang, Tao Huang
Flexible Ethernet (FlexE) technology represents a groundbreaking solution for addressing diverse service requirements and network-slicing demands in 5G networks, enabling high-bandwidth, low-latency, and efficient multi-service transmission. However, current FlexE technology suffers from inefficient bandwidth adjustment, primarily due to its slow overhead insertion mechanism, a problem particularly evident in metro transport networks (MTNs). This inefficiency not only prolongs service reconfiguration time but also leads to significant bandwidth wastage along end-to-end network paths. Furthermore, the latency of overhead configuration necessitates substantial buffer capacity at network nodes to store pending data, imposing considerable storage pressure on network equipment. In this study, we propose an innovative overhead frame insertion mechanism that addresses these critical limitations while maintaining full compliance with FlexE standards. The proposed method features a streamlined overhead block structure that enables simultaneous and continuous transmission of all overhead information, significantly accelerating service-to-timeslot mapping and reducing link establishment time. Moreover, the proposed mechanism integrates seamlessly with alignment marker insertion in the Physical Coding Sublayer (PCS) and maintains full compatibility with the IEEE 802.3 standard, simplifying overhead block extraction and data processing at the receiving end. Simulation results demonstrate that, compared with existing FlexE technology, our solution achieves up to a 20-fold improvement in bandwidth adjustment time while substantially reducing buffer requirements and optimizing bandwidth utilization across the entire network infrastructure.
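FlexE multiplexes clients onto a PHY through a calendar of 5 Gb/s timeslots (20 slots on a 100G PHY), and a bandwidth adjustment amounts to advertising a new calendar in the overhead and switching over. A minimal sketch of that slot bookkeeping (illustrative only; real FlexE negotiates A/B calendar copies via overhead frames, which is exactly the step the paper accelerates):

```python
SLOTS_100G = 20  # a 100G FlexE PHY carries 20 calendar slots of 5 Gb/s each

def allocate_slots(calendar, client_id, n_slots):
    """Claim n_slots free calendar slots for a client and return their indices.
    In FlexE, the updated calendar would be advertised via overhead before
    sender and receiver switch to it atomically."""
    free = [i for i, owner in enumerate(calendar) if owner is None]
    if len(free) < n_slots:
        raise ValueError("insufficient calendar capacity")
    chosen = free[:n_slots]
    for i in chosen:
        calendar[i] = client_id
    return chosen

def client_rate_gbps(calendar, client_id):
    """Bandwidth seen by a client: 5 Gb/s per owned slot."""
    return 5 * sum(1 for owner in calendar if owner == client_id)
```

The faster the new calendar can be conveyed in overhead, the shorter the window during which buffered client data must wait for the switchover.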
Citations: 0
Trace-distance based end-to-end entanglement fidelity with information preservation in quantum networks
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-17 | DOI: 10.1016/j.jnca.2025.104366
Pankaj Kumar, Binayak Kar, Shan-Hsiang Shen
Quantum networks have the potential to revolutionize communication and computation by outperforming their classical counterparts. Many quantum applications depend on the reliable distribution of high-fidelity entangled pairs between distant nodes. However, due to decoherence and channel noise, entanglement fidelity degrades exponentially with distance, posing a significant challenge to maintaining robust quantum communication. To address this, we propose two strategies to enhance end-to-end (E2E) fidelity and information preservation in quantum networks. First, we employ closeness centrality to identify optimal intermediary nodes that minimize average path length. Second, we introduce the Trace-Distance based Path Purification (TDPP) algorithm, which fuses topological and quantum state information to support fidelity-aware routing decisions. TDPP leverages closeness centrality and trace-distance to identify paths that optimize both network efficiency and entanglement fidelity. Simulation results demonstrate that our approach significantly improves network throughput and E2E entanglement fidelity, outperforming existing routing methods while enhancing information preservation.
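The two quantities TDPP combines are standard: closeness centrality, (n-1) divided by the sum of shortest-path distances, and the trace distance T(rho, sigma) = (1/2)||rho - sigma||_1. A self-contained sketch for unweighted graphs and 2x2 density matrices (illustrative helpers, not the paper's implementation):

```python
import math
from collections import deque

def closeness_centrality(adj, node):
    """(n - 1) / (sum of BFS shortest-path distances from `node`),
    for a connected, unweighted graph given as an adjacency dict."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def trace_distance_2x2(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of |eigenvalues| of the Hermitian
    difference rho - sigma, for 2x2 density matrices as nested lists."""
    a = complex(rho[0][0] - sigma[0][0]).real  # diagonals of density matrices are real
    d = complex(rho[1][1] - sigma[1][1]).real
    b = rho[0][1] - sigma[0][1]                # off-diagonal may be complex
    mean = (a + d) / 2
    rad = math.sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    return (abs(mean + rad) + abs(mean - rad)) / 2
```

For example, orthogonal pure states are perfectly distinguishable (trace distance 1), while a pure state and the maximally mixed state sit at distance 0.5; a fidelity-aware router can fold such distances into its path cost alongside centrality.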
Citations: 0
Cost-effective container elastic scaling and scheduling under multi-resource constraints
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-17 | DOI: 10.1016/j.jnca.2025.104359
Hongjian Li, Yu Tian, Yuzheng Cui, Xiaolin Duan
Recent advancements in containerization and Kubernetes have solidified their status as mainstream paradigms for service delivery. However, existing Kubernetes scaling mechanisms often suffer from limitations such as suboptimal utilization of multi-dimensional resources, reliance on historical workload patterns, and an inability to adapt quickly to real-time workload fluctuations. To overcome these limitations, this study introduces two cost-effective resource scheduling strategies. First, a hybrid control-theoretic vertical scaling algorithm is proposed that operates under multi-resource constraints. This algorithm leverages Prometheus monitoring data encompassing diverse resource metrics and facilitates dynamic resource optimization through a hierarchical decision-making model that combines feedforward prediction with feedback correction. Second, a synergistic vertical–horizontal elastic scaling framework, MR-CEHA, is developed. This framework classifies resource states using multi-level thresholds and integrates a cost-sensitive optimization model to balance instance-level resource allocation with cluster-level scaling operations. Experimental evaluations demonstrate substantial improvements: under surge load conditions, the SLA violation rate decreased by 16.5%; during load reduction scenarios, energy consumption dropped by 39.4%; and in mixed workload environments, energy usage declined by 16.6% while the SLA violation rate fell by 37.8%. These findings contribute to both the theoretical understanding and the practical advancement of efficient resource utilization and service stability in Kubernetes-based cloud deployments, offering meaningful value for academic exploration and industrial implementation.
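As a hedged sketch of the multi-level-threshold idea (the thresholds and action names below are hypothetical; MR-CEHA's actual classifier and cost model are more elaborate), a decision function might prefer cheap in-place vertical resizes and reserve horizontal scale-out for genuine overload:

```python
def scaling_action(cpu, mem, low=0.3, high=0.8):
    """Classify the instance's resource state with multi-level thresholds and
    pick an action. Vertical resizing is tried first because it is cheaper
    (no new replica, no rescheduling); horizontal scale-out handles overload.
    cpu/mem are utilization fractions in [0, 1]; thresholds are illustrative."""
    peak = max(cpu, mem)          # the most pressured dimension drives the decision
    if peak > high:
        return "scale_out"        # add replicas (horizontal)
    if peak > (low + high) / 2:
        return "resize_up"        # grow requests/limits in place (vertical)
    if peak < low:
        return "resize_down"      # reclaim over-provisioned resources
    return "hold"                 # comfortable band: do nothing
```

Taking the maximum over resource dimensions is one simple way to honor multi-resource constraints: a pod that is memory-bound but CPU-idle still escalates.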
Citations: 0
Enhanced spreading factor allocation and backscatter communication via membership based tuna swarm optimization for LoRa protocol
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-13 | DOI: 10.1016/j.jnca.2025.104360
Swathika R., Dilip Kumar S.M.
LoRa (Long Range), an Internet of Things (IoT) communication method based on spread-spectrum modulation, has recently enabled ultra-long-distance transmission. Data collisions occur frequently in networks with many nodes, and effective data rates often suffer in ultra-long-distance transmission. This work examines several kinds of data collision in LoRa wireless networks, most of which are influenced by the assignment of the Spreading Factor (SF). The study also explores the integration of Membership-based Tuna Swarm Optimization (MTSO) with LoRa modulation into Backscatter Communication (BackCom). An analytical structure is established to examine the error-rate efficiency of the simulated network. With restricted network resources, MTSO is employed to implement an SF redistribution mechanism, thereby increasing the terminal capacity of the LoRa gateway. Without increasing network or gateway capacity, the proposed technique reduces the frequency of data collisions. This paper addresses the reallocation of SFs as the number of terminals increases, presenting an SF selection mechanism and an iterative SF inspection method to ensure independent data rates for each communication link. Specifically, assuming canceled Radio-Frequency Interference (RFI), this paper derives new exact and approximate closed-form expressions for the Bit Error Rate (BER), Symbol Error Rate (SER), and Frame Error Rate (FER). The findings show that as the Signal-to-Noise Ratio (SNR) increases, the system's BER, FER, and SER efficiency also improve when the SF variables are tuned.
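SF assignment trades airtime for sensitivity: each SF step roughly halves the data rate but tolerates about 2.5 dB worse SNR. A minimal SNR-threshold allocator illustrates the baseline that MTSO-style redistribution improves on (the demodulation floors below are typical SX127x datasheet values, and the margin is an assumed parameter, not the paper's mechanism):

```python
# Approximate demodulation-floor SNRs (dB) per SF for SX127x-class radios;
# exact values vary by chipset and bandwidth.
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def assign_sf(snr_db, margin_db=1.0):
    """Pick the smallest (fastest, shortest-airtime) SF whose demodulation
    floor plus a safety margin is still below the measured link SNR;
    fall back to SF12 at the cell edge."""
    for sf in range(7, 13):
        if snr_db >= SNR_FLOOR[sf] + margin_db:
            return sf
    return 12
```

Because shorter airtime means fewer overlapping transmissions, pushing nodes to the smallest feasible SF is also a collision-reduction lever, which is why SF reallocation matters as terminal counts grow.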
Citations: 0
Optimizing service level agreement in cloud computing with smart virtual machine scheduling using clustered differential evolution and deep learning
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-11 | DOI: 10.1016/j.jnca.2025.104361
Tassawar Ali, Hikmat Ullah Khan, Babar Nazir, Fawaz Khaled Alarfaj, Mohammed Alreshoodi
Cloud computing is expanding rapidly due to the increasing demand for scalable and efficient services. This growth necessitates more extensive physical infrastructure to accommodate the growing workload. However, managing these workloads effectively presents challenges, particularly in optimizing virtual machine (VM) scheduling. Traditional reactive scheduling methods respond to workload changes only after they occur. These approaches struggle in dynamic cloud environments, leading to performance inefficiencies, frequent VM migrations, and service-level agreement (SLA) violations. This study introduces IntelliSchNet, a novel VM scheduling approach designed to address these challenges. IntelliSchNet uses a deep learning model whose feature weights are optimized using agglomerative-clustering-based differential evolution to accurately predict future workloads. Based on these predictions, an intelligent scheduling plan allocates VMs to suitable hosts. The strategy prioritizes non-overloaded hosts to maximize resource utilization, reduce VM migrations, and hence minimize SLA violations. The methodology integrates a clustered adaptation of the differential evolution algorithm to fine-tune deep neural network parameters. Real-world data from Google's datacenters is used for training, consisting of traces collected from a production cluster with over 11,000 machines and more than 650,000 jobs, ensuring reliable and practical workload predictions. The effectiveness of IntelliSchNet is evaluated using nine performance metrics on actual cloud workload datasets. The major findings highlight a significant improvement in VM scheduling efficiency: IntelliSchNet reduces SLA violations by up to 44%, ensuring more stable and reliable cloud services. This reduction enhances service dependability and increases customer satisfaction. In conclusion, IntelliSchNet outperforms traditional scheduling methods by optimizing workload placement and resource allocation. Its proactive approach enhances cloud system stability, efficiency, and scalability, contributing to a more sustainable and high-performing cloud computing environment.
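The core optimizer here is differential evolution. A plain DE/rand/1/bin loop on a toy objective illustrates the mutation, crossover, and greedy-selection cycle (a generic sketch standing in for the paper's clustered variant, which additionally groups the population via agglomerative clustering before mutation):

```python
import random

def differential_evolution(loss, dim, pop_size=20, F=0.5, CR=0.9,
                           iters=200, seed=1):
    """DE/rand/1/bin: for each target vector, build a mutant from three
    distinct random peers (a + F*(b - c)), mix it in via binomial crossover,
    and keep the trial only if it does not worsen the loss."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [loss(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutant gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            f = loss(trial)
            if f <= fit[i]:           # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the paper's setting, the vector being evolved would encode neural-network feature weights and the loss would be workload-prediction error; the sphere function below is only a smoke test of the optimizer itself.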
Citations: 0
DS-RAM: A dynamic sharding and reputation-based auditing mechanisms for blockchain consensus in IIoT
IF 8.0 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-10-10 | DOI: 10.1016/j.jnca.2025.104362
Jiali Zheng, Jinhui Chen, Shuainan Liu
Sharding is an effective strategy for improving the scalability of blockchain, especially in the context of massive data processing in Industrial Internet of Things (IIoT) scenarios. However, existing sharding schemes often overlook factors such as node reputation, resource capacity, and historical behavior, leading to imbalanced resource allocation, which in turn delays real-time data processing and compromises system security. The blockchain consensus mechanism determines how nodes reach agreement and is the core of system efficiency and security. However, traditional consensus mechanisms lack effective detection of malicious nodes and provide insufficient supervision of consensus nodes, leaving the system vulnerable to attacks and malicious behavior. To address these issues, this paper proposes DS-RAM (Dynamic Sharding and Reputation-based Auditing Mechanism), a dynamic sharding mechanism based on the weighted K-Medoids and Canopy algorithms. It comprehensively considers factors such as node geographical location, reputation, interaction frequency, and historical behavior to optimize node allocation, ensuring a balanced distribution of sharding resources and thus improving system throughput and security. Additionally, DS-RAM introduces an auditing-node module that provides additional supervision of consensus nodes based on the reputation mechanism, enabling timely detection and isolation of potential malicious nodes and thereby effectively enhancing the fault tolerance of the consensus mechanism and system security. Simulation results demonstrate that, compared to traditional sharding schemes and reputation-based blockchains, the proposed method effectively improves sharding security and blockchain sharding performance.
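The weighted K-Medoids idea behind the shard assignment can be sketched as follows: each node is a feature vector (e.g. location, reputation, interaction frequency), a per-feature weight vector biases the distance metric, and each medoid anchors one shard. The feature layout and weights here are hypothetical illustrations, not values from the paper.

```python
import numpy as np

# Weighted K-Medoids (PAM-style update): assign each node to its nearest
# medoid under a feature-weighted Euclidean distance, then re-pick each
# medoid as the member minimizing total in-shard distance.
def weighted_kmedoids(X, k, weights, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(X)

    def dist(a, b):
        return float(np.sqrt(np.sum(w * (a - b) ** 2)))

    # Precompute the pairwise weighted distance matrix.
    D = np.array([[dist(X[i], X[j]) for j in range(n)] for i in range(n)])
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            costs = D[np.ix_(members, members)].sum(axis=0)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

# Four hypothetical nodes forming two obvious shards.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
medoids, labels = weighted_kmedoids(X, k=2, weights=[1.0, 1.0])
```

Raising a feature's weight makes differences in that feature dominate the shard assignment, which is one way the scheme could prioritize, say, reputation over geography.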
{"title":"DS-RAM: A dynamic sharding and reputation-based auditing mechanisms for blockchain consensus in IIoT","authors":"Jiali Zheng,&nbsp;Jinhui Chen,&nbsp;Shuainan Liu","doi":"10.1016/j.jnca.2025.104362","DOIUrl":"10.1016/j.jnca.2025.104362","url":null,"abstract":"<div><div>Sharding is an effective strategy to improve the scalability of blockchain, especially in the context of massive data processing in Industrial Internet of Things (IIoT) scenarios. However, existing sharding schemes often overlook factors such as node reputation, resource capacity, and historical behavior, leading to imbalanced resource allocation, which in turn causes delays in real-time data processing and compromises system security. The blockchain consensus mechanism determines how nodes reach consensus, serving as the core of system efficiency and security. However, traditional consensus mechanisms lack effective detection of malicious nodes and insufficient supervision of consensus nodes, making the system vulnerable to attacks and malicious actions. To address these issues, this paper proposes DS-RAM (Dynamic Sharding and Reputation-based Auditing Mechanism), a dynamic sharding mechanism based on the weighted K-Medoids and Canopy algorithms. It comprehensively considers factors such as node geographical location, reputation, interaction frequency, and historical behavior to optimize node allocation, ensuring balanced distribution of sharding resources, thus improving system throughput and security. Additionally, DS-RAM introduces an auditing node module, which provides additional supervision of consensus nodes based on the reputation mechanism, enabling timely detection and isolation of potential malicious nodes, thereby effectively enhancing the fault tolerance of the consensus mechanism and system security. 
Simulation results demonstrate that, compared to traditional sharding schemes and reputation-based blockchains, the proposed method can effectively improve sharding security and blockchain sharding performance.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104362"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145261939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
ArtPerception: ASCII art-based jailbreak on LLMs with recognition pre-test ArtPerception:基于ASCII艺术的llm越狱与识别预测试
IF 8 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-10-10 DOI: 10.1016/j.jnca.2025.104356
Guan-Yan Yang , Tzu-Yu Cheng , Ya-Wen Teng , Farn Wang , Kuo-Hui Yeh
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM’s recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework’s real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure’s content filters. Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks.
Content Warning: This paper includes potentially harmful and offensive model outputs.
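The MLD metric above builds on the classic Levenshtein edit distance; the paper's specific modification is not reproduced here. The sketch below shows the standard dynamic-programming base plus a length-normalized score as one plausible way to grade an LLM's ASCII-art recognition output against the intended word (the normalization is an assumption, not the paper's definition).

```python
# Classic Levenshtein distance via a rolling 1-D dynamic-programming row.
def levenshtein(a: str, b: str) -> int:
    # prev[j] holds the edit distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def recognition_score(expected: str, recognized: str) -> float:
    # 1.0 = exact match, 0.0 = entirely different (length-normalized).
    if not expected and not recognized:
        return 1.0
    return 1.0 - levenshtein(expected, recognized) / max(len(expected), len(recognized))
```

A pre-test phase could compute such a score per rendering style and keep only the parameters whose score clears a chosen threshold.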
{"title":"ArtPerception: ASCII art-based jailbreak on LLMs with recognition pre-test","authors":"Guan-Yan Yang ,&nbsp;Tzu-Yu Cheng ,&nbsp;Ya-Wen Teng ,&nbsp;Farn Wang ,&nbsp;Kuo-Hui Yeh","doi":"10.1016/j.jnca.2025.104356","DOIUrl":"10.1016/j.jnca.2025.104356","url":null,"abstract":"<div><div>The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM’s recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework’s real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure’s content filters. 
Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks.</div><div>Content Warning: This paper includes potentially harmful and offensive model outputs.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104356"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An intelligent and explainable intrusion detection framework for Internet of Sensor Things using generalizable optimized active Machine Learning 一个智能的、可解释的传感器物联网入侵检测框架,使用可推广的优化主动机器学习
IF 8 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-10-10 DOI: 10.1016/j.jnca.2025.104358
Muhammad Hasnain , Nadeem Javaid , Abdul Khader Jilani Saudagar , Neeraj Kumar
Intrusion Detection (ID) in the Internet of Secure Things (IoST) has become increasingly critical due to the rising frequency and sophistication of cyber-attacks, which can lead to severe consequences such as data breaches, financial losses, and service disruptions. These risks are further intensified in resource-constrained environments, where limited computational capacity and rapidly evolving threats make accurate and efficient detection challenging. In this study, a data-efficient ID framework tailored for resource-constrained environments is proposed by leveraging active learning and meta-heuristic optimization techniques. The proposed framework systematically addresses three critical limitations commonly observed in traditional models: data imbalance, inefficient hyperparameter tuning, and dependency on large labeled datasets. First, to mitigate class imbalance, adaptive synthetic sampling generates synthetic instances for minority classes, thereby enhancing learning in complex regions of the feature space. Next, for hyperparameter optimization, the Sandpiper Optimization (SO) algorithm fine-tunes the regularization parameter of Logistic Regression (LR), yielding significant improvements in model generalization. Finally, the challenge of limited labeled data is addressed through two active learning strategies: Active Learning Uncertainty-based (ALU) and Active Learning Entropy-based (ALE). These strategies selectively query the most informative samples from the unlabeled pool, ensuring maximum learning with minimal annotation effort. The performance of the proposed models is evaluated on two benchmark datasets: the wireless sensor networks and network intrusion detection datasets. Simulation results demonstrate that the proposed models outperform the base model LR. LRALE achieves improvements of 10.48% and 3.16% in accuracy, 19.48% and 3.16% in recall, and 7.23% and 1.04% in F1-score on the WSN-DS and CIC-IDS-DS datasets, respectively. LRALU shows improvements of 18.18% and 2.11% in accuracy, 18.18% and 2.11% in recall, and 14.63% and 2.08% in Receiver Operating Characteristic-Area Under the Curve (ROC-AUC). Similarly, LRSO achieves improvements of 9.09% and 2.11% in accuracy, 9.09% and 1.05% in recall, and 9.76% and 3.12% in ROC-AUC on the WSN-DS and CIC-IDS-DS datasets, respectively. To ensure model generalization and stability across different data partitions, rigorous 10-fold cross-validation is conducted. Model interpretability is further enhanced using eXplainable artificial intelligence techniques, including Local interpretable model-agnostic explanations and Shapley additive explanations, to elucidate feature contributions and improve transparency. Additionally, statistical significance testing through paired t-tests confirms the robustness and reliability of the proposed models. Overall, this framework introduces a comprehensive, annotation-efficient, and transparent ID solution that significantly advances the domain, making it well suited for practical deployment in IoST environments.
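The two query strategies described above (ALU and ALE) can be sketched directly from the current model's class-probability estimates on the unlabeled pool: least-confidence picks the samples with the smallest top-class probability, and the entropy variant picks those with the highest predictive entropy. The probability matrix here is illustrative; in the paper the probabilities would come from the SO-tuned logistic regression.

```python
import numpy as np

# ALU-style query: least-confidence sampling (smallest max class probability).
def query_uncertainty(probs, n_queries):
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:n_queries]

# ALE-style query: maximum predictive-entropy sampling.
def query_entropy(probs, n_queries):
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:n_queries]

# Three unlabeled samples with hypothetical two-class probabilities:
# sample 1 is the least certain, so both strategies should pick it first.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.80, 0.20]])
```

For binary classifiers the two rankings coincide; they can diverge in multi-class settings, where entropy accounts for the full distribution rather than only the top class.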
{"title":"An intelligent and explainable intrusion detection framework for Internet of Sensor Things using generalizable optimized active Machine Learning","authors":"Muhammad Hasnain ,&nbsp;Nadeem Javaid ,&nbsp;Abdul Khader Jilani Saudagar ,&nbsp;Neeraj Kumar","doi":"10.1016/j.jnca.2025.104358","DOIUrl":"10.1016/j.jnca.2025.104358","url":null,"abstract":"&lt;div&gt;&lt;div&gt;Intrusion Detection (ID) in the Internet of Secure Things (IoST) has become increasingly critical due to the rising frequency and sophistication of cyber-attacks, which can lead to severe consequences such as data breaches, financial losses, and service disruptions. These risks are further intensified in computationally limited environments, where limited computational capacity and rapidly evolving threats make accurate and efficient detection challenging. In this study, a data-efficient ID framework tailored for resource-constrained environments is proposed by leveraging active learning and meta-heuristic optimization techniques. The proposed framework systematically addresses three critical limitations commonly observed in traditional models: data imbalance, inefficient hyperparameter tuning, and dependency on large labeled datasets. Initially, to mitigate class imbalance, adaptive synthetic sampling generates synthetic instances for minority classes, thereby enhancing learning in complex regions of the feature space. Next, for hyperparameter optimization, the Sandpiper Optimization (SO) algorithm fine-tunes the regularization parameter of Logistic Regression (LR), yielding significant improvements in model generalization. Finally, the challenge of limited labeled data is addressed through two active learning strategies: Active Learning Uncertainty-based (ALU) and Active Learning Entropy-based (ALE). These strategies selectively query the most informative samples from the unlabeled pool, ensuring maximum learning with minimal annotation effort. 
The performance of the proposed models is evaluated on two benchmark datasets: the wireless sensor networks and network intrusion detection datasets. Simulation results demonstrate that proposed models outperform base model LR. LRALE achieves improvements of 10.48% and 3.16% in accuracy, 19.48% and 3.16% in recall, and 7.23% and 1.04% in F1-score on WSN-DS and CIC-IDS-DS datasets, respectively. LRALU shows improvements of 18.18% and 2.11% in accuracy, 18.18% and 2.11% in recall, and 14.63% and 2.08% in Receiver Operating Characteristic-Area Under the Curve (ROC-AUC). Similarly, LRSO achieves improvements of 9.09% and 2.11% in accuracy, 9.09% and 1.05% in recall, and 9.76% and 3.12% in ROC-AUC on WSN-DS and CIC-IDS-DS datasets, respectively. To ensure model generalization and stability across different data partitions, a rigorous 10-fold cross-validation is conducted. Model interpretability is further enhanced using eXplainable artificial intelligence techniques, including Local interpretable model-agnostic explanations and Shapley additive explanations, to elucidate feature contributions and improve transparency. Additionally, statistical significance testing through paired &lt;em&gt;t&lt;/em&gt;-tests confirms the robustness and reliability of the proposed models. 
Overall, this framework introduces a comprehensive, annotation-efficient, and transparent ID solution that significantly advances the domain, m","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"245 ","pages":"Article 104358"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145384600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Graph neural network enhanced Internet of Things node classification with different node connections 图神经网络增强了不同节点连接的物联网节点分类
IF 8 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-10-10 DOI: 10.1016/j.jnca.2025.104363
Mohammad Abrar Shakil Sejan , Md Habibur Rahman , Md Abdul Aziz , Rana Tabassum , Iqra Hameed , Nidal Nasser , Hyoung-Kyu Song
The Internet of Things (IoT) has profoundly impacted human life by providing ubiquitous connectivity and unique advantages. As demand for IoT applications continues to grow, the number of connected devices is increasing rapidly. This growth poses challenges in identifying data sources and managing data in large networks. The graph data structure offers a natural way to represent IoT networks, where nodes represent devices and edges represent their connections. In this study, we convert IoT networks into graph representations using two approaches: fully connected node graphs and randomly connected node graphs. Graph neural networks (GNNs) are highly effective for processing graph data, as they capture relationships within graph structures based on their topological properties. We utilize GNNs to perform node classification tasks for IoT networks, investigating seven different GNN models on both complete and random graphs. The experimental results indicate that the SAGEConv model achieves high classification accuracy under dense network conditions. Additionally, the CHEBYSHEVConv model performs well with fully connected graphs, while the TAGConv model demonstrates strong performance with randomly connected graphs.
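The mean-aggregation step at the heart of SAGE-style layers can be sketched in a few lines: each node's new representation combines its own features with the mean of its neighbors' features, followed by a nonlinearity. The weights below are random placeholders rather than a trained model, and the tiny device graph is illustrative only.

```python
import numpy as np

# One SAGE-style layer: transform self features and mean-aggregated neighbor
# features separately, sum, and apply ReLU.
def sage_mean_layer(A, X, W_self, W_neigh):
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # isolated nodes keep zero aggregate
    neigh_mean = (A @ X) / deg               # mean over each node's neighbors
    return np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)

# A 4-device graph: devices 0-1 and 2-3 connected (symmetric adjacency,
# no self-loops), with one-hot node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
rng = np.random.default_rng(0)
H = sage_mean_layer(A, X, rng.standard_normal((4, 2)),
                    rng.standard_normal((4, 2)))
```

Stacking such layers and ending with a softmax over device classes yields the node-classification setup evaluated above; on a fully connected graph the neighbor mean is identical for every node, which is one reason model rankings can differ between complete and random topologies.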
{"title":"Graph neural network enhanced Internet of Things node classification with different node connections","authors":"Mohammad Abrar Shakil Sejan ,&nbsp;Md Habibur Rahman ,&nbsp;Md Abdul Aziz ,&nbsp;Rana Tabassum ,&nbsp;Iqra Hameed ,&nbsp;Nidal Nasser ,&nbsp;Hyoung-Kyu Song","doi":"10.1016/j.jnca.2025.104363","DOIUrl":"10.1016/j.jnca.2025.104363","url":null,"abstract":"<div><div>Internet of Things (IoT) has profoundly impacted human life by providing ubiquitous connectivity and unique advantages. As the demand for IoT applications continues to grow, the number of connected devices is increasing at a rapid pace. This growth poses challenges in identifying data sources and managing data in large networks. The graph data structure offers a meaningful way to represent IoT networks, where nodes represent devices and edges represent their connections. In this study, we convert IoT networks into graph representations, considering two approaches: fully connected node graphs and randomly connected node graphs. Graph neural networks (GNNs) are highly effective for processing graph data, as they can capture relationships within graph structures based on their topological properties. We utilize GNNs to perform node classification tasks for IoT networks. Seven different GNN models were investigated to perform node classification tasks on both complete and random graphs. The experimental results indicate that the SAGEConv model achieves high classification accuracy under dense network conditions. 
Additionally, the CHEBYSHEVConv model performs well with fully connected graphs, while the TAGConv model demonstrates strong performance with randomly connected graphs.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104363"},"PeriodicalIF":8.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145314971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0