Pub Date: 2025-12-08 | DOI: 10.1016/j.jnca.2025.104409
Khaoula Hidawi, Sabrine Ennaji, Elena Ferrari
This paper introduces BlackoutADR, a novel adversarial attack exploiting LoRaWAN’s Adaptive Data Rate (ADR) mechanism in cellular-connected UAV networks, with applicability to other IoT systems as well. By subtly manipulating the Received Signal Strength Indicator (RSSI) and Signal-to-Noise Ratio (SNR), BlackoutADR increases UAV transmission power, causing 45% faster battery depletion within 100 s of simulation time and disrupting network operations. Using NS-3 simulations with a 20-UAV FANET, we evaluate its evasion of multiple ML-based IDSs (CNN, LSTM, BiLSTM, FNN, and a LoRaWAN-specific model). Results show that BlackoutADR remains undetected, its subtle manipulations evading even dynamic thresholds, and that it outperforms traditional jamming attacks. To address the identified vulnerability, we outline reactive measures, including dynamic threshold-based IDSs, secure ADR mechanisms, and recommendations for drone manufacturers.
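The attack's lever is easiest to see in a toy ADR control loop. The sketch below is illustrative only: the SNR floors, 3 dB step size, and 2–14 dBm power range are assumed values, not the paper's NS-3 model. Understating the reported SNR drives the server to push transmission power to its ceiling.

```python
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0}  # SF -> assumed demodulation floor (dB)

def adr_decision(reported_snr_db, sf, tx_power_dbm, margin_db=10.0):
    """Simplified ADR step: ample SNR headroom lets the server cut TX power;
    an apparent SNR deficit makes it raise TX power instead."""
    headroom = reported_snr_db - REQUIRED_SNR[sf] - margin_db
    steps = int(headroom // 3)           # one step per 3 dB of headroom
    if steps > 0:                        # link looks good: save energy
        tx_power_dbm = max(2, tx_power_dbm - 2 * steps)
    elif steps < 0:                      # link looks bad: spend energy
        tx_power_dbm = min(14, tx_power_dbm - 2 * steps)
    return tx_power_dbm

honest = adr_decision(reported_snr_db=5.0, sf=7, tx_power_dbm=8)     # stays at 8 dBm
spoofed = adr_decision(reported_snr_db=-16.0, sf=7, tx_power_dbm=8)  # forced up to 14 dBm
```

Because the spoofed SNR stays within plausible channel values, each individual decision looks legitimate to a threshold-based IDS, even though the cumulative effect is maximal power drain.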
Title: BlackoutADR: Exploiting adaptive data rate vulnerabilities in LoRaWAN-based FANETs. Journal of Network and Computer Applications, vol. 246, Article 104409.
Pub Date: 2025-12-02 | DOI: 10.1016/j.jnca.2025.104401
Yong Liu, Guisheng Liu, Tianyi Yu, Qian Meng
Software-Defined Networking (SDN) is a network architecture that separates the control plane and data plane of the traditional data center network, resulting in enhanced network scalability and flexibility. The conventional Equal Cost MultiPath (ECMP) load balancing algorithm, which relies on static hash mapping, has limitations when applied to data center networks, leading to issues such as hash conflicts and congestion between mouse and elephant flows. Therefore, load balancing based on flowlet granularity has been proposed. This approach divides flows into flowlets, leveraging the burstiness of traffic to enhance load balancing capabilities. However, these approaches encounter several challenges, such as the lack of real-time feedback on network load situations, the inability of static flowlet timeouts to adapt to dynamic changes in the network, and inadequate consideration of load distribution. To address these challenges, we propose a novel load balancing strategy called Self-Evolution Load Balancing (SELB) based on Temporal Graph Convolutional Network (T-GCN). SELB utilizes the T-GCN to dynamically predict the network load state for real-time feedback. Meanwhile, the adaptive flow splitting algorithm is employed to dynamically adjust the timeout of flowlets, effectively adapting to changes in network dynamics. Moreover, SELB incorporates a load-aware route planning strategy that considers the overall network load distribution. By doing so, it can intelligently route flowlets along equivalent multipaths, enhancing load balancing capabilities. The simulation results demonstrate that SELB effectively reduces Flow Completion Time (FCT), enhances average throughput, and improves load balancing performance in comparison to existing schemes.
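The flowlet idea the paper builds on can be sketched in a few lines. This fixed-timeout version is illustrative only; SELB itself adapts the timeout dynamically to the predicted network state.

```python
def split_into_flowlets(arrival_times, timeout):
    """A new flowlet starts whenever the gap between consecutive packets
    exceeds the flowlet timeout, so each flowlet can be routed onto a
    different equal-cost path without reordering packets within a burst."""
    flowlets, current = [], [arrival_times[0]]
    for prev, now in zip(arrival_times, arrival_times[1:]):
        if now - prev > timeout:
            flowlets.append(current)
            current = []
        current.append(now)
    flowlets.append(current)
    return flowlets

# With a 0.5 ms timeout, the idle gap before t=2.0 opens a second flowlet.
bursts = split_into_flowlets([0.0, 0.1, 0.2, 2.0, 2.1], timeout=0.5)
# bursts == [[0.0, 0.1, 0.2], [2.0, 2.1]]
```

A timeout that is too small fragments a burst and risks reordering; one that is too large misses rerouting opportunities, which is why SELB adjusts it from the T-GCN's load predictions.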
Title: SELB: Self-Evolution Load Balancing Based on Temporal Graph Convolutional Network in Software-Defined Data Center Networks. Journal of Network and Computer Applications, vol. 246, Article 104401.
Pub Date: 2025-11-26 | DOI: 10.1016/j.jnca.2025.104398
Jithu Vijay V.P., Shahanas I.N., Sabu M. Thampi, Aiswarya S. Nair
The use of drones is rapidly increasing in areas such as surveillance, defense, and emergency services. As a result, ensuring secure communication and proper authentication has become a major concern in the Internet of Drones, where drones must share data and coordinate their actions in real time. One of the biggest challenges in drone networks is maintaining secure and reliable communication between drones. The dynamic and distributed nature of these networks increases the risk of security breaches. Existing systems mostly rely on cryptographic methods like RSA and ECC. These methods will not remain secure in the future because of advancements in quantum computing. These systems also depend on static data storage and centralized credential management, which make them vulnerable to attacks such as impersonation, replay, and man-in-the-middle. To address these issues, we propose a quantum-secure drone-to-drone authentication and secure communication protocol that utilizes Post-Quantum Cryptographic (PQC) algorithms, namely Kyber for encryption and Dilithium for digital signatures. Both are lattice-based lightweight cryptographic algorithms that offer strong resistance against quantum attacks. To avoid storing secret data on drones and to prevent cloning, we use Physical Unclonable Functions (PUFs) to generate device-specific seeds for authentication and key generation in each session. A Hyperledger Fabric blockchain is used at the Ground Control Station (GCS) to store drone credentials securely and avoid a single point of failure. We conducted a formal security analysis using Burrows–Abadi–Needham (BAN) logic for trust validation and the Scyther tool to formally analyze and verify resistance against classical and quantum-era attacks. In addition to formal proofs, informal analysis confirms that the protocol maintains data integrity and authentication even under active network threats.
We implemented the protocol using Raspberry Pi drones and a Linux-based GCS. Performance results show a low computation time of 0.08 s for authentication and 0.12 s for secure communication on Raspberry Pi 5, with minimal memory usage and acceptable communication cost suitable for implementation on resource-constrained drones.
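A conceptual sketch of the PUF-seeded session-key idea, with HMAC and SHA-256 standing in for both the physical PUF and the Kyber/Dilithium machinery. All names and the fingerprint constant are hypothetical; a real PUF derives its response from silicon variation on demand rather than from any stored key.

```python
import hashlib
import hmac
import os

def puf_response(challenge: bytes) -> bytes:
    # Stand-in for the physical PUF: real hardware re-derives this mapping
    # from device-unique silicon variation instead of storing a secret in flash.
    device_entropy = b"hypothetical-device-fingerprint"
    return hmac.new(device_entropy, challenge, hashlib.sha256).digest()

def session_seed(challenge: bytes, nonce: bytes) -> bytes:
    # A fresh nonce per session yields a fresh seed, so a replayed transcript
    # from an earlier session cannot reproduce the session key material.
    return hashlib.sha256(puf_response(challenge) + nonce).digest()

challenge = os.urandom(16)
s1 = session_seed(challenge, os.urandom(16))
s2 = session_seed(challenge, os.urandom(16))
assert s1 != s2  # same device, different sessions -> different key material
```

In the protocol described by the abstract, a seed like this would feed the Kyber key-generation step rather than be used directly, so cloning the drone's storage yields nothing reusable.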
Title: A quantum-secure digital signature-based communication protocol for the Internet of Drones (IoD). Journal of Network and Computer Applications, vol. 245, Article 104398.
Pub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104395
Yadi Wu, Lina Wang, Rongwei Yu, Xiuwen Huang, Jiatong Liu
JointCloud Computing (JCC) supports a collaborative model in which multiple cloud service providers jointly offer users robust performance and adequate services. Reputation is an important aspect of the stable development of a JCC system, affecting both cooperation among service providers and users’ choice of services. Most existing reputation assessment solutions consider only a single factor, user feedback or service quality, and cannot provide an accurate reputation assessment for a system as complex as JCC. In addition, JointCloud services are provided by multiple service providers in cooperation, and existing solutions do not account for this characteristic, making it difficult to accurately measure the reputation of a service. To provide a comprehensive reputation assessment for JCC, we propose a reputation assessment model based on digital twins. A reputation calculation module is embedded in the digital twin, and a hybrid subjective–objective reputation assessment method and a split-integration reputation assessment method are designed for different JointCloud subjects to achieve a comprehensive and accurate assessment. We conducted a series of experiments to evaluate the performance of the proposed model and present the experimental results. The proposed method achieves a reputation assessment bias of 0.0112, reducing the average bias by 0.2184 compared to existing approaches. In real-world scenarios, the proposed model incurs a communication overhead of 93.7735 ms, with a digital twin data acquisition interval of 36.4273 ms. The evaluation results show that our reputation evaluation model is feasible in terms of performance and accuracy.
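One plausible reading of the two scoring ideas, with assumed normalisation and weights; the exact formulas inside the paper's digital twin are not reproduced here.

```python
def hybrid_reputation(user_feedback, qos_metrics, alpha=0.5):
    """Blend subjective user ratings with objective QoS measurements,
    both normalised to [0, 1]; alpha is an assumed mixing weight."""
    subjective = sum(user_feedback) / len(user_feedback)
    objective = sum(qos_metrics) / len(qos_metrics)
    return alpha * subjective + (1 - alpha) * objective

def joint_service_reputation(provider_scores, contributions):
    """Split-integration idea: weight each cooperating provider's score by
    its share of the jointly delivered service, then integrate."""
    total = sum(contributions)
    return sum(s * c / total for s, c in zip(provider_scores, contributions))

r = hybrid_reputation([0.9, 0.8], [0.7, 0.6])         # blends to roughly 0.75
joint = joint_service_reputation([0.9, 0.6], [3, 1])  # majority contributor dominates
```

The split step attributes a joint service's outcome back to individual providers, which is the characteristic missing from single-provider reputation schemes the abstract criticises.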
Title: A digital twin-based reputation assessment model for JointCloud computing. Journal of Network and Computer Applications, vol. 246, Article 104395.
Pub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104397
Xiujun Wang, Wenlong Dong, Wenjie Hu, Juyan Li
With the rapid development of Internet of Things (IoT) technology, smart home systems have significantly enhanced the convenience and automation level of users’ daily lives. However, as sensitive data is transmitted between smart devices over open channels, the security risks associated with data transmission have become increasingly prominent. Authentication and key exchange (AKE) protocols are designed to facilitate identity authentication and confidential communication between smart devices. However, existing AKE protocols often suffer from low efficiency and poor scalability. These limitations make them unsuitable for resource-constrained IoT devices and unable to provide secure mutual authentication. To tackle these challenges, this study introduces a blockchain-assisted lightweight authentication scheme for smart homes. The proposed scheme integrates biometric authentication and device credentials to achieve multi-factor authentication. Meanwhile, blockchain technology is employed to record and protect interactions between users and smart devices, thereby enhancing the security, transparency, and auditability of the communication process. Formal security analysis under the Random Oracle Model (ROM) confirms the scheme’s key confidentiality. Furthermore, informal analysis demonstrates its robustness against common threats, including replay, man-in-the-middle, impersonation, and device capture attacks. Benchmarks against existing protocols demonstrate that our design incurs the least computational, communication, and energy overhead. It achieves this efficiency while preserving robust security and scalability, making it ideal for resource-limited smart-home devices.
Title: A blockchain-assisted lightweight authentication scheme for smart home environments. Journal of Network and Computer Applications, vol. 245, Article 104397.
Pub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104393
José Angel Sánchez Martín, Victor Mitrana, Mihaela Păun, José Ramón Sánchez Couso
We continue the investigation of simulating different network topologies in networks whose nodes host processors inspired by DNA splicing. These networks are called networks of splicing processors. It has previously been shown that every network of splicing processors, regardless of its topology, can be converted, by a direct construction, into an equivalent network with a desired topology, especially a common one such as star, grid, or complete (full-mesh). A short discussion highlights the importance of the wheel graph topology in relation to biology and DNA computing. This work completes that line of study by giving an effective construction of a wheel (ring-star) network of splicing processors that is equivalent to an arbitrary network. The size and time complexity of our construction are evaluated. Finally, we discuss a very preliminary simulation of the networks considered here by means of recent technologies and strategies suited to their massive data and parallel processing requirements.
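For readers unfamiliar with the target topology: a wheel graph W_n is a hub vertex adjacent to every vertex of an n-cycle, which is exactly the "ring-star" shape named above. A minimal constructive sketch (illustrative, not the paper's splicing-processor construction):

```python
def wheel_graph(n):
    """Return adjacency sets for a wheel W_n: hub 0 plus cycle vertices 1..n (n >= 3)."""
    adj = {v: set() for v in range(n + 1)}
    for i in range(1, n + 1):
        j = i % n + 1                  # next vertex along the cycle (wraps n -> 1)
        adj[i].update({0, j})          # rim vertex sees the hub and its successor
        adj[j].add(i)
        adj[0].add(i)                  # hub sees every rim vertex
    return adj

w = wheel_graph(4)
# hub 0 has degree 4; each rim vertex has degree 3 (hub + two cycle neighbours)
```

The hub's adjacency to all rim processors is what lets a wheel network broadcast in one step, mirroring the role the star centre plays in the earlier star-topology construction.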
Title: Networks of splicing processors: Wheel graph topology simulation. Journal of Network and Computer Applications, vol. 245, Article 104393.
Pub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104399
Asgarali Bouyer, Pouya Shahgholi, Bahman Arasteh, Amin Golzari Oskouei, Xiaoyang Liu
Community detection is a vital task in social network analysis, enabling the extraction of hidden structures and relationships. However, existing diffusion-based local community detection algorithms often depend on similarity-based scoring, which frequently fails to identify the influential core nodes needed for label expansion. To address these shortcomings, we propose the local detecting and structuring communities (LDSC) method, which integrates structural and relational insights with graph-based metrics and deep learning for refined community detection. LDSC stands out by combining Local Influence (LI) and Adaptive Absorbing Strength (AAS) metrics with GraphSAGE-based boundary refinement and adaptive community merging, tackling persistent challenges such as scalability, boundary ambiguity, and structural cohesion left unmet by prior methods. The method unfolds in four key phases: (1) Core Node Detection, employing a distinctive metric fusing LI and AAS to identify structurally significant nodes; (2) Label Diffusion, dynamically propagating labels from core nodes to neighbors for precise community formation; (3) Boundary Node Reassignment, using GraphSAGE to resolve ambiguities; and (4) Adaptive Community Merging, using an innovative local merging strategy to enhance cohesion. Evaluations on synthetic LFR benchmarks and real-world networks (e.g., Karate, Dolphins, DBLP, Amazon, LiveJournal, Orkut) demonstrate LDSC's superiority over baseline methods (e.g., LPA, CNM, WalkTrap, Louvain) and state-of-the-art approaches (e.g., Leiden, Infomap, LSMD, CLD_GE, FluidC, LCDR, LS), achieving perfect NMI/ARI (1.0) on Karate and Dolphins, top NMI on LiveJournal (0.92) and Orkut (0.65), average scores of 0.85 NMI and 0.75 ARI, and >15% NMI improvement on large-scale networks such as DBLP, showcasing strong accuracy, stability, and efficiency.
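Phases 1–2 can be caricatured in a few lines, with node degree as a crude stand-in for the paper's LI/AAS influence score (the GraphSAGE refinement and merging phases are omitted):

```python
def diffuse_labels(adj, n_cores=2):
    """Pick the n_cores highest-degree nodes as community seeds, then
    propagate each seed's label breadth-first to unlabeled neighbours."""
    # Phase 1 (stand-in): degree instead of the LI/AAS fusion metric.
    cores = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:n_cores]
    labels = {v: i for i, v in enumerate(cores)}
    frontier = list(cores)
    # Phase 2: label diffusion outward from the cores.
    while frontier:
        v = frontier.pop(0)
        for u in adj[v]:
            if u not in labels:
                labels[u] = labels[v]
                frontier.append(u)
    return labels

# Two triangles {0,1,2} and {3,4,5} joined by the edge (2, 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = diffuse_labels(adj)
# nodes 0-2 share one label, nodes 3-5 the other
```

The point of a good core metric is visible even here: seeding from the two structural hubs (2 and 3) recovers both triangles, whereas a poor seed choice would let one label flood the whole graph.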
Title: Community detection via core node identification and local label diffusion with GraphSAGE boundary refinement in complex networks. Journal of Network and Computer Applications, vol. 245, Article 104399.
Pub Date: 2025-11-24 | DOI: 10.1016/j.jnca.2025.104396
Md Mizanur Rahman, Faycal Bouhafs, Sayed Amir Hoseini, Frank den Hartog
Smart homes are increasingly vulnerable to cyberattacks that lead to network instability, causing homeowners to lodge complaints with their Broadband Service Providers (BSPs). Therefore, effective and timely detection of cyberattacks is crucial for both customers and BSPs. Address Resolution Protocol (ARP) spoofing is one of the most common attacks and can facilitate larger and more severe follow-up attacks. Unfortunately, there are currently no methods that can effectively detect and mitigate ARP spoofing in smart homes from a BSP’s perspective. Current Machine Learning (ML)-based methods often rely on a single dataset from a controlled lab environment designed to mimic a single home, assuming that the results will generalize to all smart homes. Our findings indicate that this assumption is flawed. These methods are also unsuitable for smart homes from a BSP’s perspective, as they require custom applications, introduce additional overhead, and often rely on injecting probing traffic into the network. To address these issues, we developed an algorithm that can detect ARP spoofing in smart home networks regardless of the network structure or connected devices. It uses a cross-protocol strategy, correlating ARP packets with Dynamic Host Configuration Protocol (DHCP) messages to validate address bindings. We evaluated our method using four public datasets and two real-world testbeds, achieving 100% detection accuracy in all scenarios. In addition, the algorithm incurs only minimal computational overhead, confirming its suitability for use by BSPs to detect and mitigate ARP spoofing attacks in smart homes.
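The cross-protocol check reduces to a lease-table lookup. This is a minimal reading of the abstract, not the authors' ARProof code: an ARP reply is flagged when its IP-to-MAC binding contradicts what the DHCP server actually leased (bindings for IPs with no observed lease would need separate policy).

```python
def build_lease_table(dhcp_acks):
    """dhcp_acks: iterable of (ip, mac) pairs observed in DHCPACK messages."""
    return {ip: mac for ip, mac in dhcp_acks}

def is_spoofed(arp_reply, leases):
    """An ARP binding is spoofed if the IP has a DHCP lease bound to a
    different MAC address."""
    ip, mac = arp_reply
    return ip in leases and leases[ip] != mac

leases = build_lease_table([("192.168.1.10", "aa:bb:cc:dd:ee:01"),
                            ("192.168.1.1",  "aa:bb:cc:dd:ee:ff")])
assert not is_spoofed(("192.168.1.10", "aa:bb:cc:dd:ee:01"), leases)
assert is_spoofed(("192.168.1.1", "de:ad:be:ef:00:00"), leases)  # gateway impersonation
```

Because both ARP and DHCP traffic traverse the home gateway, a check of this shape is visible to a BSP without installing agents on customer devices or injecting probe traffic, which is the deployment constraint the abstract emphasises.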
"ARProof: A cross-protocol approach to detect and mitigate ARP-spoofing attacks in smart home networks," Journal of Network and Computer Applications, Volume 246, Article 104396.
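The cross-protocol check described above (validating ARP claims against DHCP-established address bindings) can be sketched roughly as follows. The class, method, and field names are illustrative assumptions, not the paper's implementation, and a real deployment would also have to handle lease expiry, static addresses, and gratuitous ARP:

```python
# Hypothetical sketch of the cross-protocol idea: learn IP-to-MAC bindings
# from DHCP ACKs, then flag ARP replies that contradict a learned binding.
from dataclasses import dataclass

@dataclass
class DhcpAck:
    client_ip: str
    client_mac: str

@dataclass
class ArpReply:
    sender_ip: str
    sender_mac: str

class ArpSpoofDetector:
    def __init__(self):
        self.bindings = {}  # IP address -> MAC address learned from DHCP

    def observe_dhcp(self, ack: DhcpAck) -> None:
        # DHCP is treated as the authoritative source of address
        # assignments, so every ACK refreshes the trusted binding table.
        self.bindings[ack.client_ip] = ack.client_mac

    def check_arp(self, reply: ArpReply) -> bool:
        """Return True if the ARP reply is consistent with DHCP state."""
        expected = self.bindings.get(reply.sender_ip)
        if expected is None:
            return True  # no binding to contradict (a policy choice)
        return expected == reply.sender_mac

det = ArpSpoofDetector()
det.observe_dhcp(DhcpAck("192.168.1.10", "aa:bb:cc:dd:ee:01"))
print(det.check_arp(ArpReply("192.168.1.10", "aa:bb:cc:dd:ee:01")))  # legitimate
print(det.check_arp(ArpReply("192.168.1.10", "de:ad:be:ef:00:01")))  # spoofed
```

Because the check is a dictionary lookup per ARP packet, the low computational overhead reported in the abstract is plausible for this style of design.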
Pub Date : 2025-11-19 DOI: 10.1016/j.jnca.2025.104390
Xiaole Li , Yinghui Jiang , Xing Wang , Jiuru Wang , Lei Gao , Shanwen Yi
After a disaster occurs, rapid data evacuation among cloud data centers is of great importance. Data evacuation optimization is a two-stage process comprising destination selection and flow scheduling. The two stages are interdependent, and evacuation efficiency is simultaneously affected by evacuation distance, bandwidth allocation ratio, and the total amount of evacuation flow. The mutual constraints among these factors make it difficult to find or approximate the optimal solution via single-objective optimization. This paper proposes a new two-stage data evacuation strategy using multi-objective reinforcement learning, with evacuation flow optimization as the central objective across both stages. The first stage simultaneously minimizes total path length and maximizes total available bandwidth to determine the source–destination pair for every evacuation transfer. The second stage simultaneously allocates proportional bandwidth and maximizes the total amount of evacuation flow to find a path and allocate bandwidth for every evacuation transfer. The reward function is designed by classifying candidate sets, searching for the optimal solution while ensuring that feasible solutions are obtained. A Chebyshev scalarization function is used to evaluate action rewards and optimize the action selection process. Performance is compared with state-of-the-art algorithms across different data volumes and network scales. Simulation results demonstrate that the new strategy outperforms other algorithms, achieving higher evacuation efficiency, good convergence, and robustness.
"Data evacuation optimization using multi-objective reinforcement learning," Journal of Network and Computer Applications, Volume 245, Article 104390.
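The Chebyshev scalarization step can be illustrated with a minimal sketch: each candidate action carries a vector of estimated objective values, and the action minimizing the weighted Chebyshev distance to a utopia (ideal) point is selected. All names, weights, and numbers below are assumptions for illustration, not the paper's setup:

```python
# Illustrative weighted-Chebyshev action selection for two objectives,
# e.g. (total evacuation flow, available bandwidth), both to be maximized.

def chebyshev_score(values, utopia, weights):
    # Weighted Chebyshev metric: the worst (largest) weighted deviation
    # from the ideal point dominates the score.
    return max(w * abs(u - v) for v, u, w in zip(values, utopia, weights))

def select_action(q_vectors, utopia, weights):
    """Pick the action whose objective vector is Chebyshev-closest to utopia."""
    return min(q_vectors, key=lambda a: chebyshev_score(q_vectors[a], utopia, weights))

q_vectors = {              # estimated objective values per candidate path
    "path_A": (80.0, 0.6),
    "path_B": (95.0, 0.4),
    "path_C": (70.0, 0.9),
}
utopia = (100.0, 1.0)      # best value observed per objective (assumed)
weights = (0.01, 1.0)      # scale objectives to comparable magnitudes

print(select_action(q_vectors, utopia, weights))  # -> path_C
```

Unlike a weighted sum, the Chebyshev metric penalizes the single worst objective, so it can reach solutions on non-convex parts of the Pareto front, which is one common reason to prefer it in multi-objective reinforcement learning.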
As the application scope of blockchain technology continues to expand, challenges arise in the state verification of blockchain systems based on account models. Traditionally, Merkle Patricia Tries are used to maintain the world state, and a specific data block is verified step by step up to the root node. This guarantees data integrity, but in large-scale systems it still suffers from inefficient verification and updating, insufficient security, and growing storage demand, which degrade the performance of blockchain networks. In this paper, we propose a Verkle-Accumulator-Based Multiple State Verifiable and Updatable (VA-MSVU) scheme for blockchain. The scheme integrates the Verkle tree (VT), Verkle accumulator (VA), KZG polynomial commitments, and aggregated proofs to verify the integrity of multiple account states in batches. By mapping account states to the VT, our approach enhances security, reduces the size of state data, and improves both verification speed and update efficiency. Simulation results show that the VA-MSVU scheme has a smaller proof size and faster verification than existing storage data structures, demonstrating its simplicity and efficiency. For verifying multiple account states, the scheme's aggregated proofs have significant advantages over KZG polynomial commitments and single-point proofs, excelling in proof size and in verification and update rates. In addition, by adjusting the branching factor of the Verkle tree, a trade-off between computational overhead and communication cost is achieved, improving the system's adaptability to different network scenarios.
Pub Date : 2025-11-14 DOI: 10.1016/j.jnca.2025.104392
Shangping Wang, Juanjuan Ma, Qi Huang, Xiaoling Xie
"Verkle-Accumulator-Based Multiple State Verifiable and Updatable (VA-MSVU) scheme for blockchain," Journal of Network and Computer Applications, Volume 245, Article 104392.
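The branching-factor trade-off mentioned above can be made concrete with a back-of-the-envelope sketch: a Merkle proof carries b-1 sibling hashes per level, whereas a Verkle proof carries roughly one constant-size commitment per level, and tree depth falls as the branching factor b grows. The element sizes below are common assumptions (32-byte hashes, 48-byte BLS12-381 G1 commitments), not figures from the paper:

```python
# Rough proof-size comparison for a tree over n leaves with branching factor b.
HASH_BYTES = 32        # assumed Merkle sibling hash size
COMMITMENT_BYTES = 48  # assumed KZG/Verkle commitment size (BLS12-381 G1)

def depth(n_leaves, b):
    # Smallest d with b**d >= n_leaves, computed in exact integer arithmetic.
    d, cap = 0, 1
    while cap < n_leaves:
        cap *= b
        d += 1
    return max(d, 1)

def merkle_proof_bytes(n_leaves, b):
    # One level of a Merkle proof needs the b-1 sibling hashes.
    return depth(n_leaves, b) * (b - 1) * HASH_BYTES

def verkle_proof_bytes(n_leaves, b):
    # One level of a Verkle proof needs roughly one constant-size commitment.
    return depth(n_leaves, b) * COMMITMENT_BYTES

n = 2**20  # about a million accounts
for b in (2, 16, 256):
    print(b, depth(n, b), merkle_proof_bytes(n, b), verkle_proof_bytes(n, b))
```

For a wide Merkle tree the proof grows with b, while the Verkle proof shrinks with depth; the cost is the heavier polynomial-commitment arithmetic at each node, which is exactly the computation-versus-communication trade-off the abstract tunes via the branching factor.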