Pub Date: 2025-10-08 | DOI: 10.1016/j.jnca.2025.104357
Badis Djamaa, Habib Yekhlef, Mohamed Amine Kouda, Abbas Bradai
Federated Learning (FL) empowers Internet-of-Things (IoT) devices to train intelligent models without sharing sensitive data, facilitating the transition to an Artificial Intelligence of Things (AIoT) ecosystem. However, FL demands significant storage, computation, and communication resources, which often exceed the capabilities of resource-constrained IoT devices. In this work, we introduce FedCoRE, an effective and practical FL architecture tailored for IoT environments. FedCoRE leverages standards for constrained RESTful environments, such as the Constrained Application Protocol (CoAP), to optimize communication and applies model quantization to address computation and storage limitations. FedCoRE has been implemented on resource-constrained IoT devices with 256 KB of RAM and evaluated on a human activity recognition task using a deep neural network. Extensive evaluations conducted in a real-world IoT environment, comprising 10 Thunderboard Sense 2 nodes, demonstrate the feasibility and effectiveness of our proposal. Notably, compared to FL, FedCoRE achieves up to a 60% reduction in communication cost, while maintaining model accuracy and requiring only approximately 75 KB of RAM and 438 KB of ROM.
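As a rough illustration of the kind of model quantization described above, the sketch below applies 8-bit symmetric post-training quantization to a model-update vector before transmission and dequantizes it on the receiving side. The bit width, function names, and payload handling are assumptions chosen for illustration, not FedCoRE's exact scheme.

```python
# Illustrative sketch (not the paper's exact scheme): 8-bit symmetric
# post-training quantization of a model-update vector before it is sent
# over CoAP, plus the matching dequantization on the aggregator side.
import numpy as np

def quantize_update(weights: np.ndarray, num_bits: int = 8):
    """Map float32 weights to signed integers plus a single float scale."""
    qmax = 2 ** (num_bits - 1) - 1                # 127 for 8 bits
    scale = float(np.max(np.abs(weights))) / qmax
    if scale == 0.0:                              # avoid division by zero
        scale = 1.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize_update(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A client would transmit `q` (1 byte per weight) plus `scale` instead of
# 4-byte floats, roughly a 4x payload reduction before CoAP block-wise
# transfer; the aggregator reconstructs an approximate update.
update = np.random.randn(1000).astype(np.float32)
q, scale = quantize_update(update)
approx = dequantize_update(q, scale)
print(q.nbytes, update.nbytes, float(np.max(np.abs(update - approx))))
```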
{"title":"FedCoRE: Effective Federated Learning for constrained RESTful environments in the Artificial Intelligence of Things","authors":"Badis Djamaa, Habib Yekhlef, Mohamed Amine Kouda, Abbas Bradai","doi":"10.1016/j.jnca.2025.104357","DOIUrl":"10.1016/j.jnca.2025.104357","url":null,"abstract":"<div><div>Federated Learning (FL) empowers Internet-of-Things (IoT) devices to train intelligent models without sharing sensitive data, facilitating the transition to an Artificial Intelligence of Things (AIoT) ecosystem. However, FL demands significant storage, computation, and communication resources, which often exceed the capabilities of resource-constrained IoT devices. In this work, we introduce FedCoRE, an effective and practical FL architecture tailored for IoT environments. FedCoRE leverages standards for constrained RESTful environments, such as the Constrained Application Protocol (CoAP), to optimize communication and applies model quantization to address computation and storage limitations. FedCoRE has been implemented on resource-constrained IoT devices with 256 KB of RAM and evaluated on a human activity recognition task using a deep neural network. Extensive evaluations conducted in a real-world IoT environment, comprising 10 Thunderboard Sense 2 nodes, demonstrate the feasibility and effectiveness of our proposal. Notably, compared to FL, FedCoRE achieves up to a 60% reduction in communication cost, while maintaining model accuracy and requiring only approximately 75<!--> <!-->KB of RAM and 438<!--> <!-->KB of ROM.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104357"},"PeriodicalIF":8.0,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145311718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-08 | DOI: 10.1016/j.jnca.2025.104355
Abdullah Yousafzai, Muhammad Mohsan Sheeraz, Ganna Pogrebna, Jon Crowcroft, Ibrar Yaqoob
The metaverse is a shared virtual 3D space that combines immersive experiences with applications in gaming, social interactions, commerce, and more. It is rapidly becoming a reality, driven by advances in virtual reality, augmented reality, artificial intelligence, blockchain, and other emerging technologies. Among these, blockchain technology enables secure and decentralized ownership as well as seamless interoperability of virtual assets. Non-fungible tokens ensure verifiable ownership and fraud prevention, while smart contracts facilitate automated peer-to-peer transactions. Blockchain’s security and transparency promote trust and innovation, laying the foundation for a connected and user-driven metaverse ecosystem. In this paper, we explore the role of blockchain technology as a key enabler for the metaverse, providing solutions for decentralization, governance through decentralized autonomous organizations, interoperable mechanisms, digital asset ownership, traceability, auditing, and identity management. We present the key differences between traditional virtual worlds and the metaverse, and explain why blockchain is preferred over other decentralized technologies for the metaverse. We comprehensively review recent advances in metaverse system architectures, focusing on state-of-the-art solutions and lessons learned. We compare the existing literature based on key parameters, namely contributions, advantages, limitations, and applications. We present key challenges, including deepfake threats, identity theft and brand infringement risks, mental health risks, digital safety and gambling risks, virtual world laws and regulations, and privacy and data security concerns. We outline future recommendations for enabling a sustainable and user-friendly metaverse ecosystem.
{"title":"Blockchain for the metaverse: Recent advances, taxonomy, and future challenges","authors":"Abdullah Yousafzai , Muhammad Mohsan Sheeraz , Ganna Pogrebna , Jon Crowcroft , Ibrar Yaqoob","doi":"10.1016/j.jnca.2025.104355","DOIUrl":"10.1016/j.jnca.2025.104355","url":null,"abstract":"<div><div>The metaverse is a shared virtual 3D space that combines immersive experiences with applications in gaming, social interactions, commerce, and more. It is rapidly becoming a reality, driven by advances in virtual reality, augmented reality, artificial intelligence, blockchain, and other emerging technologies. Among these, blockchain technology enables secure and decentralized ownership as well as seamless interoperability of virtual assets. Non-fungible tokens ensure verifiable ownership and fraud prevention, while smart contracts facilitate automated peer-to-peer transactions. Blockchain’s security and transparency promote trust and innovation, laying the foundation for a connected and user-driven metaverse ecosystem. In this paper, we explore the role of blockchain technology as a key enabler for the metaverse, providing solutions for decentralization, governance through decentralized autonomous organizations, interoperable mechanisms, digital asset ownership, traceability, auditing, and identity management. We present the key difference between traditional virtual worlds and the metaverse, and why blockchain is preferred over other decentralized technologies for the metaverse. We comprehensively review recent advances in metaverse system architectures, focusing on state-of-the-art solutions and lessons learned. We compare the existing literature based on key parameters; namely, contributions, advantages, limitations, and applications. We present key challenges, including deepfake threats, identity theft and brand infringement risks, mental health risks, digital safety and gambling risks, virtual world laws and regulations, and privacy and data security concerns. We outline future recommendations for enabling a sustainable and user-friendly metaverse ecosystem.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104355"},"PeriodicalIF":8.0,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145261937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-03 | DOI: 10.1016/j.jnca.2025.104326
Basharat Ali, Guihai Chen
The widespread adoption of DNS over HTTPS (DoH) has inaugurated a new paradigm of network privacy through the encryption of DNS queries; paradoxically, this very mechanism has been weaponized by malicious actors to orchestrate covert cyberattacks ranging from polymorphic malware delivery and data exfiltration to command-and-control (C2) operations. Classic signature-based solutions that rely on static security policies and deep packet inspection are rendered useless in the face of encrypted DoH traffic, and today’s AI-driven defense solutions typically fail to achieve adversarial robustness, explainability, and real-time scalability. Bridging these gaps, this paper proposes an AI framework that integrates best practices in machine learning together with secure execution environments to offer resilient, transparent, and low-latency DoH threat detection. Specifically, Capsule Networks (CapsNets) are used to learn hierarchical traffic flow patterns, Graph Transformers to uncover temporal anomalies, and Contrastive Self-Supervised Learning (CSSL) to leverage massive unlabeled datasets. Adversarial robustness is reinforced through perturbation-aware training and mutation-driven fuzzing simulations, while interpretability is enhanced via SHAP and LIME, rendering AI decision-making processes more intelligible to analysts. A distributed Apache Flink/Kafka pipeline enables real-time processing of DoH streams at scale, reducing detection latency by 50% compared to batch-oriented systems. Furthermore, Trusted Execution Environments (TEEs) safeguard model inference against tampering, mitigating insider threats and runtime exploitation. Empirical evaluation on the doh_real_world_2022 dataset demonstrates 99.1% detection accuracy with CapsNets, 98.8% with Graph Transformers, and an 80% improvement in adversarial resilience. These developments collectively advance the discipline of encrypted traffic analysis and establish a benchmark for safeguarding emerging encrypted protocols such as QUIC and HTTP/3. The findings validate the feasibility of AI-driven, privacy-augmented security systems in an era of escalating cyberattacks and growing demands for algorithmic transparency.
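To make the perturbation-aware training idea concrete, the following sketch trains a simple logistic-regression stand-in on clean and FGSM-style perturbed flow features. The toy features, epsilon, and model are illustrative assumptions and not the framework's actual CapsNet or Graph Transformer components.

```python
# Minimal sketch of perturbation-aware training on flow features, using an
# FGSM-style perturbation around a logistic-regression stand-in classifier.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 16))               # placeholder DoH flow features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b, lr, eps = np.zeros(16), 0.0, 0.1, 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w             # dLoss/dx for each sample
    X_adv = X + eps * np.sign(grad_x)         # FGSM-style worst-case inputs
    X_mix = np.vstack([X, X_adv])             # train on clean + perturbed data
    y_mix = np.concatenate([y, y])
    p_mix = sigmoid(X_mix @ w + b)
    w -= lr * X_mix.T @ (p_mix - y_mix) / len(y_mix)
    b -= lr * float(np.mean(p_mix - y_mix))

print("clean accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```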
{"title":"Next-generation AI for advanced threat detection and security enhancement in DNS over HTTPS","authors":"Basharat Ali, Guihai Chen","doi":"10.1016/j.jnca.2025.104326","DOIUrl":"10.1016/j.jnca.2025.104326","url":null,"abstract":"<div><div>The widespread adoption of DNS over HTTPS(DoH) has inaugurated a new paradigm of network privacy through the encryption of DNS queries; paradoxically, this very mechanism has been weaponized by malicious actors to orchestrate convert cyberattacks ranging from polymorphic malware delivery and data exfiltration to command-and-control (C2) operations. Classic signature-based solutions that rely on static security policies and packet-depth inspection are rendered useless in the face of encrypted DoH traffic, and today’s AI-driven defense solutions typically fail to achieve adversarial robustness, explainability, and real-time scalability. Bridging these gaps, this paper proposes an AI framework that integrates the best practices in machine learning together with secure execution environments to offer resilience, transparency, and low-latency DoH threat detection. Specifically, Capsule Networks (CapsNets) are used to learn hierarchical traffic flow patterns, Graph Transformers to uncover temporal anomalies, and Contrastive Self-Supervised Learning (CSSL) to leverage massive unlabeled datasets. Adversarial robustness is reinforced through perturbation-aware training and mutation-driven fuzzing simulations, while interpretability is enhanced via SHAP and LIME, rendering AI decision-making processes more intelligible to analysts. A distributed Apache Flink/Kafka pipeline enables real-time processing of DoH streams at scale, reducing detection latency by 50% compared to batch-oriented systems. Furthermore, Trusted Execution Environments(TEEs) safeguard model inference against tempering, mitigating insider threats and runtime exploitation. Empirical evaluation on the doh_real_world_2022 dataset demonstrates 99.1% detection accuracy with CapsNets, 98.8% with Graph Transformers, and an 80% improvement in adversarial resilience. These developments collectively propel the discipline of encrypted traffic analysis and establish a benchmark for safeguarding cybersecurity protocols such as QUIC and HTTP/3 that are gaining traction. The findings validate the feasibility of AI-driven, privacy-augmented security systems during an era of escalating cyber-attacks and demands algorithmic transparency.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104326"},"PeriodicalIF":8.0,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145261665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart microgrid (SMG) communication networks face significant challenges in maintaining high Quality of Service (QoS) due to dynamic load variations, fluctuating network conditions, and potential component faults, which can increase latency, reduce throughput, and compromise fault recovery. The growing integration of distributed renewable energy resources demands adaptive and intelligent routing mechanisms capable of operating efficiently under such diverse and fault-prone conditions. This paper presents a Q-Reinforcement Learning-based Multi-Agent Bellman Routing (QRL-MABR) algorithm, which enhances the traditional MABR approach by embedding a Q-learning module within each network agent. Agents dynamically learn optimal routing policies, balance exploration and exploitation in action selection with adaptive temperature scaling, and jointly optimize latency, throughput, jitter, convergence speed, and fault resilience.
Simulations on IEEE 9, 14, 34, 39, and 57 bus SMG testbeds demonstrate that QRL-MABR significantly outperforms conventional routing protocols (MABR, RIP, OLSR, OSPFv2) and advanced RL-based algorithms (SN-MAPPO, DDQL, MDDPG, SARSA-λ, TD3), achieving 16%–28% delay reduction, 14%–16% throughput gains, 17%–21% jitter improvement, and superior fault recovery. Thus, QRL-MABR provides a robust, scalable, and intelligent framework for next-generation smart microgrids.
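The sketch below illustrates the core Q-learning update with temperature-scaled (Boltzmann) action selection that QRL-MABR builds on. The toy single-agent environment, reward values, and temperature decay schedule are assumptions chosen for illustration rather than the paper's multi-agent formulation.

```python
# Minimal single-agent sketch of Q-learning with temperature-scaled
# (Boltzmann) action selection and an annealed exploration temperature.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 3, 3            # e.g. current node, choice of next hop
Q = np.zeros((n_states, n_actions))
alpha, gamma, temp, temp_decay = 0.2, 0.9, 1.0, 0.999

def select_action(q_row, temperature):
    logits = q_row / max(temperature, 1e-3)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(q_row), p=probs)

state = 0
for step in range(5000):
    action = select_action(Q[state], temp)
    # Placeholder environment: action 1 has lower "delay"; next state random.
    reward = -0.2 if action == 1 else -1.0
    next_state = rng.integers(n_states)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
    temp *= temp_decay                 # anneal exploration toward exploitation

print(np.round(Q, 2))
```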
{"title":"Reinforcement learning based multi-agent system for smart microgrid","authors":"Niharika Singh , Kishu Gupta , Ashutosh Kumar Singh , Perumal Nallagownden , Irraivan Elamvazuthi","doi":"10.1016/j.jnca.2025.104339","DOIUrl":"10.1016/j.jnca.2025.104339","url":null,"abstract":"<div><div>Smart microgrid (SMG) communication networks face significant challenges in maintaining high Quality of Service (QoS) due to dynamic load variations, fluctuating network conditions, and potential component faults, which can increase latency, reduce throughput, and compromise fault recovery. The growing integration of distributed renewable energy resources demands adaptive and intelligent routing mechanisms capable of operating efficiently under such diverse and fault-prone conditions. This paper presents a Q-Reinforcement Learning-based Multi-Agent Bellman Routing (QRL-MABR) algorithm, which enhances the traditional MABR approach by embedding a Q-learning module within each network agent. Agents dynamically learn optimal routing policies, balance exploration and exploitation action selection with adaptive temperature scaling, and jointly optimize latency, throughput, jitter, convergence speed, and fault resilience.</div><div>Simulations on IEEE 9, 14, 34, 39, and 57 bus SMG testbeds demonstrate that QRL-MABR significantly outperforms conventional routing protocols (MABR, RIP, OLSR, OSPFv2) and advanced RL-based algorithms (SN-MAPPO, DDQL, MDDPG, SARSA-<span><math><mi>λ</mi></math></span>, TD3), achieving 16%–28% delay reduction, 14%–16% throughput gains, 17%–21% jitter improvement, and superior fault recovery. Thus, QRL-MABR provides a robust, scalable, and intelligent framework for next-generation smart microgrids.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104339"},"PeriodicalIF":8.0,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In invitation-based systems, a new user can register only after obtaining a threshold number of invitations from existing members. The newcomer submits these invitations to the system administrator, who verifies their legitimacy. In doing so, the administrator inevitably learns who invited whom. This inviter–invitee relationship is itself privacy-sensitive information, since knowledge of it can enable inference attacks in which an invitee’s profile (e.g., political views or location) is deduced from the profiles of their inviters. To address this problem, we propose Anonyma, an anonymous invitation-based system in which even a corrupted administrator, colluding with a subset of members, cannot determine inviter–invitee relationships. We formally define the notions of inviter anonymity and invitation unforgeability, and provide formal proofs that Anonyma achieves both against a malicious and adaptive adversary. Our design ensures constant cost for authenticating new registrations, unlike existing approaches where invitation generation and verification incur overhead linear in the total number of members. Moreover, Anonyma scales efficiently: once a user joins, the administrator can immediately issue credentials enabling the newcomer to act as an inviter without re-keying existing members. We also design AnonymaX, a cross-network extension that supports anonymous third-party authentication, allowing invitations issued in one system to be used for registration in another.
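For context, the sketch below shows a plain (non-anonymous) threshold-invitation check of the kind Anonyma improves upon: the administrator verifies a threshold of MAC-based invitation tokens and, in doing so, unavoidably learns every inviter's identity, which is exactly the leakage the paper removes. The key handling and identifiers are illustrative assumptions, not the paper's construction.

```python
# Baseline (non-anonymous) threshold-invitation verification: each inviter
# MACs the newcomer's identity with a key shared with the administrator,
# and registration succeeds only with THRESHOLD valid invitations.
import hmac, hashlib, os

THRESHOLD = 3
member_keys = {f"member{i}": os.urandom(32) for i in range(5)}  # admin-side registry

def issue_invitation(inviter: str, newcomer: str) -> bytes:
    return hmac.new(member_keys[inviter], newcomer.encode(), hashlib.sha256).digest()

def admin_verify(newcomer: str, invitations: dict) -> bool:
    valid = sum(
        hmac.compare_digest(tag, issue_invitation(inviter, newcomer))
        for inviter, tag in invitations.items()
        if inviter in member_keys
    )
    return valid >= THRESHOLD          # admin also learns every inviter's identity

invites = {m: issue_invitation(m, "alice") for m in ("member0", "member2", "member4")}
print(admin_verify("alice", invites))  # True, but inviter-invitee links are exposed
```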
{"title":"Anonyma: Anonymous invitation-only registration in malicious adversarial model","authors":"Sanaz Taheri Boshrooyeh, Alpteki̇n Küpçü, Öznur Özkasap","doi":"10.1016/j.jnca.2025.104337","DOIUrl":"10.1016/j.jnca.2025.104337","url":null,"abstract":"<div><div>In invitation-based systems, a new user can register only after obtaining a threshold number of invitations from existing members. The newcomer submits these invitations to the system administrator, who verifies their legitimacy. In doing so, the administrator inevitably learns who invited whom. This inviter–invitee relationship is itself privacy-sensitive information, since knowledge of it can enable inference attacks in which an invitee’s profile (e.g., political views or location) is deduced from the profiles of their inviters. To address this problem, we propose <span><math><mrow><mi>A</mi><mi>n</mi><mi>o</mi><mi>n</mi><mi>y</mi><mi>m</mi><mi>a</mi></mrow></math></span>, an anonymous invitation-based system in which even a corrupted administrator, colluding with a subset of members, cannot determine inviter–invitee relationships. We formally define the notions of <em>inviter anonymity</em> and <em>invitation unforgeability</em>, and provide formal proofs that <span><math><mrow><mi>A</mi><mi>n</mi><mi>o</mi><mi>n</mi><mi>y</mi><mi>m</mi><mi>a</mi></mrow></math></span> achieves both against a <em>malicious</em> and <em>adaptive adversary</em>. Our design ensures constant cost for authenticating new registrations, unlike existing approaches where invitation generation and verification incur overhead linear in the total number of members. Moreover, <span><math><mrow><mi>A</mi><mi>n</mi><mi>o</mi><mi>n</mi><mi>y</mi><mi>m</mi><mi>a</mi></mrow></math></span> scales efficiently: once a user joins, the administrator can immediately issue credentials enabling the newcomer to act as an inviter without re-keying existing members. We also design <span><math><mrow><mi>A</mi><mi>n</mi><mi>o</mi><mi>n</mi><mi>y</mi><mi>m</mi><mi>a</mi><mi>X</mi></mrow></math></span>, a cross-network extension that supports anonymous third-party authentication, allowing invitations issued in one system to be used for registration in another.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104337"},"PeriodicalIF":8.0,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-28 | DOI: 10.1016/j.jnca.2025.104342
Umar Sa’ad, Demeke Shumeye Lakew, Nhu-Ngoc Dao, Sungrae Cho
The proliferation of encrypted Domain Name System (DNS) traffic through protocols like DNS over Hypertext Transfer Protocol Secure presents significant privacy advantages but creates new challenges for anomaly detection. Traditional security mechanisms that rely on payload inspection become ineffective, necessitating advanced strategies capable of detecting threats in encrypted traffic. This study introduces the Hybrid Ensemble Approach for Robust Anomaly Detection (HERALD), a novel framework designed to detect anomalies in encrypted DNS traffic. HERALD combines unsupervised base detectors, including Isolation Forest (IF), One-Class Support Vector Machine (OCSVM), and Local Outlier Factor (LOF), with a supervised Random Forest meta-model, leveraging the strengths of both paradigms. Our comprehensive evaluation demonstrates HERALD’s exceptional performance, achieving 99.99 percent accuracy, precision, recall, and F1-score on the CIRA-CIC-DoHBrw-2020 dataset, while maintaining competitive computational efficiency with 110s training time and 2.2ms inference time. HERALD also demonstrates superior generalization capabilities on cross-dataset evaluations, exhibiting minimal performance degradation of only 2-4 percent when tested on previously unseen attack patterns, outperforming purely supervised models, which showed 5-8 percent degradation. The interpretability analysis, incorporating feature importance, accumulated local effects, and local interpretable model-agnostic explanations, provides insights into the relative contributions of each base detector, with OCSVM emerging as the most influential component, followed by IF and LOF. This study advances the field of network security by offering a robust, interpretable, and adaptable solution for detecting anomalies in encrypted DNS traffic that balances a high detection rate with a low false-positive rate.
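A minimal sketch of the stacking idea behind HERALD is shown below: unsupervised IF, OCSVM, and LOF detectors produce anomaly scores that a supervised Random Forest meta-model combines. The synthetic features stand in for the CIRA-CIC-DoHBrw-2020 flows, and the hyperparameters are illustrative rather than the paper's tuned pipeline.

```python
# Hybrid stacking sketch: unsupervised base detectors feed anomaly scores
# into a supervised random-forest meta-model.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_benign = rng.normal(0, 1, size=(600, 10))           # stand-in benign DoH flows
X_mix = np.vstack([rng.normal(0, 1, size=(300, 10)),
                   rng.normal(3, 1, size=(60, 10))])   # labeled benign + malicious
y_mix = np.array([0] * 300 + [1] * 60)

# Base detectors are fit on (mostly) benign traffic only.
bases = [IsolationForest(random_state=0).fit(X_benign),
         OneClassSVM(nu=0.05).fit(X_benign),
         LocalOutlierFactor(novelty=True).fit(X_benign)]

def meta_features(X):
    # One score column per base detector; lower scores mean "more anomalous".
    return np.column_stack([m.decision_function(X) for m in bases])

meta = RandomForestClassifier(n_estimators=100, random_state=0)
meta.fit(meta_features(X_mix), y_mix)                  # supervised stacking layer

X_new = rng.normal(3, 1, size=(5, 10))                 # unseen suspicious flows
print(meta.predict(meta_features(X_new)))              # expected: mostly 1s
```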
{"title":"HERALD: Hybrid Ensemble Approach for Robust Anomaly Detection in encrypted DNS traffic","authors":"Umar Sa’ad , Demeke Shumeye Lakew , Nhu-Ngoc Dao , Sungrae Cho","doi":"10.1016/j.jnca.2025.104342","DOIUrl":"10.1016/j.jnca.2025.104342","url":null,"abstract":"<div><div>The proliferation of encrypted Domain Name System (DNS) traffic through protocols like DNS over Hypertext Transfer Protocol Secure presents significant privacy advantages but creates new challenges for anomaly detection. Traditional security mechanisms that rely on payload inspection become ineffective, necessitating advanced strategies capable of detecting threats in encrypted traffic. This study introduces the Hybrid Ensemble Approach for Robust Anomaly Detection (HERALD), a novel framework designed to detect anomalies in encrypted DNS traffic. HERALD combines unsupervised base detectors, including Isolation Forest (IF), One-Class Support Vector Machine (OCSVM), and Local Outlier Factor (LOF), with a supervised Random Forest meta-model, leveraging the strengths of both paradigms. Our comprehensive evaluation demonstrates HERALD’s exceptional performance, achieving 99.99 percent accuracy, precision, recall, and F1-score on the CIRA-CIC-DoHBrw-2020 dataset, while maintaining competitive computational efficiency with 110s training time and 2.2ms inference time. HERALD also demonstrates superior generalization capabilities on cross-dataset evaluations, exhibiting minimal performance degradation of only 2-4 percent when tested on previously unseen attack patterns, outperforming purely supervised models, which showed 5-8 percent degradation. The interpretability analysis, incorporating feature importance, accumulated local effects, and local interpretable model-agnostic explanations, provides insights into the relative contributions of each base detector, with OCSVM emerging as the most influential component, followed by IF and LOF. This study advances the field of network security by offering a robust, interpretable, and adaptable solution for detecting anomalies in encrypted DNS traffic that balances a high detection rate with a low false-positive rate.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104342"},"PeriodicalIF":8.0,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-26 | DOI: 10.1016/j.jnca.2025.104340
S. Sheeja Rani, Oruba Alfawaz, Ahmed M. Khedr
Due to the dynamic nature of cloud computing, maintaining fault-tolerance is essential to ensure the reliability and performance of virtualized environments. Failures in Virtual Machines (VMs) disrupt the seamless operation of cloud-based services, making it vital to implement a strong failure prediction system. As a solution, this work proposes a Segmented Regressive Learning-based Multivariate Raindrop Optimized Lottery Scheduling (SRL-MROLS) for dynamic cloud environments. Initially, the VM failure prediction is carried out using a Segmented Regressive Q-learning algorithm, where a set of VMs is provided as input. Segmented regression analyzes the average failure rate of VMs, while a reward-based framework guides the decision-making process for accurate failure prediction. Once a failure is predicted, a relocation process is triggered, involving the migration of workloads or tasks from the failing VM to an alternate VM. Next, a Multivariate Elitism Raindrop Optimization approach is employed to identify the optimal VM for task migration. Finally, a Deadline-Aware Stochastic Prioritized Lottery Scheduling is employed for efficient allocation of tasks to the selected VMs, maintaining seamless operations even in the event of VM failures. This process significantly improves task scheduling by maximizing throughput and minimizing response time in cloud environments. Experimental results demonstrate the superior performance of SRL-MROLS across different metrics. Specifically, it achieves an average improvement of 6.4% in failure prediction accuracy, 27.4% in throughput, and a 13% reduction in response time. Additionally, it reduces failure prediction time by 15%, migration cost by 14.3%, and makespan by 15%, significantly outperforming conventional techniques.
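The following sketch illustrates the deadline-aware lottery-scheduling idea in isolation: pending tasks receive lottery tickets weighted by deadline urgency, so tight-deadline tasks are dispatched more often without starving the others. The urgency weighting and task fields are assumptions for illustration, not the exact SRL-MROLS formulation.

```python
# Deadline-aware lottery scheduling sketch: ticket counts grow as a task's
# deadline slack shrinks, and a weighted random draw picks the next task.
import random

def lottery_pick(tasks, now):
    """tasks: list of dicts with 'name' and absolute 'deadline' (seconds)."""
    tickets = []
    for t in tasks:
        slack = max(t["deadline"] - now, 0.001)   # less slack -> more tickets
        tickets.append(1.0 / slack)
    return random.choices(tasks, weights=tickets, k=1)[0]

pending = [{"name": "t1", "deadline": 30.0},
           {"name": "t2", "deadline": 5.0},       # tightest deadline
           {"name": "t3", "deadline": 12.0}]
wins = {t["name"]: 0 for t in pending}
for _ in range(10_000):
    wins[lottery_pick(pending, now=0.0)["name"]] += 1
print(wins)   # t2 should win most draws, but no task is starved
```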
{"title":"A robust fault-tolerant framework for VM failure predication and efficient task scheduling in dynamic cloud environments","authors":"S. Sheeja Rani , Oruba Alfawaz , Ahmed M. Khedr","doi":"10.1016/j.jnca.2025.104340","DOIUrl":"10.1016/j.jnca.2025.104340","url":null,"abstract":"<div><div>Due to the dynamic nature of cloud computing, maintaining fault-tolerance is essential to ensure the reliability and performance of virtualized environments. Failures in Virtual Machines (VMs) disrupt the seamless operation of cloud-based services, making it vital to implement a strong failure prediction system. As a solution, this work proposes a Segmented Regressive Learning-based Multivariate Raindrop Optimized Lottery Scheduling (SRL-MROLS) for dynamic cloud environments. Initially, the VM failure prediction is carried out using a Segmented Regressive Q-learning algorithm, where a set of VMs is provided as input. Segmented regression analyzes the average failure rate of VMs, while a reward-based framework guides the decision-making process for accurate failure prediction. Once a failure is predicted, a relocation process is triggered, involving the migration of workloads or tasks from the failing VM to an alternate VM. Next, a Multivariate Elitism Raindrop Optimization approach is employed to identify the optimal VM for task migration. Finally, a Deadline-Aware Stochastic Prioritized Lottery Scheduling is employed for efficient allocation of tasks to the selected VMs, maintaining seamless operations even in the event of VM failures. This process significantly improves task scheduling by maximizing throughput and minimizing response time in cloud environments. Experimental results demonstrate the superior performance of SRL-MROLS across different metrics. Specifically, it achieves an average improvement of 6.4% in failure prediction accuracy, 27.4% in throughput, and a 13% reduction in response time. Additionally, it reduces failure prediction time by 15%, migration cost by 14.3%, and makespan by 15%, significantly outperforming conventional techniques.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104340"},"PeriodicalIF":8.0,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-25 | DOI: 10.1016/j.jnca.2025.104341
Abeer Iftikhar, Faisal Bashir Hussain, Kashif Naseer Qureshi, Muhammad Shiraz, Mehdi Sookhak
Smart cities are rapidly evolving by adopting Internet of Things (IoT) devices, edge and cloud computing, and mobile connectivity. While these advancements enhance urban efficiency and connectivity, they also significantly increase the risk of cyber threats targeting critical infrastructure. Modern interdependent systems require flexible resilience, allowing them to adapt to changing conditions while maintaining core functions. Smart city networks, however, face unique security vulnerabilities due to their scale and heterogeneity, and traditional security models shaped by established industry expectations and requirements are generally too rigid for such environments. With its "never trust, always verify" motto, the Zero Trust (ZT) security model differs starkly from traditional models: it mandates real-time identity verification and grants only the minimum access required, in line with the principle of least privilege. Software-Defined Networking (SDN) goes a step further by offering centralized control over the network, autonomous policy-based enforcement, and immediate response to anomalies. To address these challenges, our proposed Trust-based Resilient Edge Networks (TREN) framework integrates ZT principles to enhance smart city security. Under the umbrella of SDN controllers, the SPP, the underpinning component of TREN, performs real-time trust analysis and autonomous policy enforcement, including high-level threat defense mechanisms. TREN dynamically defends against advanced threats such as DDoS and Sybil attacks by isolating malicious nodes and adapting defense tactics based on real-time trust and traffic analysis. Trust analysis and policy control modules provide dynamic, adaptive coverage, enabling effective proactive defense. Mininet-based simulations demonstrate TREN's efficacy, achieving 95% detection accuracy, a 20% latency reduction, and a 25% increase in data throughput compared to baseline models.
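As a simplified illustration of the trust-analysis and policy-enforcement loop, the sketch below scores each node from recent traffic statistics and flags low-trust nodes for isolation. The scoring weights, feature names, and threshold are assumptions and do not reproduce TREN's actual SPP logic.

```python
# Controller-side sketch: compute a per-node trust score from monitoring
# counters and decide whether to allow or quarantine each edge node.
def trust_score(stats: dict) -> float:
    """stats: per-node counters collected over the last monitoring window."""
    drop_penalty = 0.5 * stats["dropped_ratio"]            # packet-drop behaviour
    flood_penalty = 0.4 * min(stats["new_flows_per_s"] / 100.0, 1.0)
    identity_penalty = 0.6 * stats["failed_auths"] / max(stats["auth_attempts"], 1)
    return max(0.0, 1.0 - drop_penalty - flood_penalty - identity_penalty)

def enforce_policy(nodes: dict, threshold: float = 0.5) -> dict:
    """Return an action per node that an SDN controller could install as flow rules."""
    return {n: ("isolate" if trust_score(s) < threshold else "allow")
            for n, s in nodes.items()}

nodes = {
    "cam-17":  {"dropped_ratio": 0.02, "new_flows_per_s": 4,   "failed_auths": 0,  "auth_attempts": 12},
    "meter-3": {"dropped_ratio": 0.40, "new_flows_per_s": 900, "failed_auths": 25, "auth_attempts": 30},
}
print(enforce_policy(nodes))   # meter-3 is quarantined, cam-17 keeps least-privilege access
```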
{"title":"Securing edge based smart city networks with software defined Networking and zero trust architecture","authors":"Abeer Iftikhar , Faisal Bashir Hussain , Kashif Naseer Qureshi , Muhammad Shiraz , Mehdi Sookhak","doi":"10.1016/j.jnca.2025.104341","DOIUrl":"10.1016/j.jnca.2025.104341","url":null,"abstract":"<div><div>Smart cities are rapidly evolving by adopting Internet of Things (IoT) devices, edge and cloud computing, and mobile connectivity. While these advancements enhance urban efficiency and connectivity, they also significantly increase the risk of cyber threats targeting critical infrastructure. Modern interdependent systems require flexible resilience, allowing them to adapt to changing conditions while maintaining core functions. Smart city networks, however, face unique security vulnerabilities due to their scale and heterogeneity. Altered to industry expectations and requirements, traditional security models are generally restrictive. With its \"never trust, always verify' motto, the Zero Trust (ZT) security model starkly differs from traditional models. ZT builds on network design by mandating real time identity verification, giving minimum access permission and mandating respect for the principle of least privilege. Software Defined Networking (SDN) extends one step further by offering central control over the network, policy based autonomous application and immediate response to anomalies. To address these challenges, our proposed Trust-based Resilient Edge Networks (TREN) framework integrates ZT principles to enhance smart city security. Under the umbrella of SDN controllers, SPP, the underpinning component of TREN, performs real time trust analysis and autonomous policy enforcement, for instance, applying high level threat defense mechanisms. TREN dynamically defends against advanced threats like DDoS and Sybil attacks by isolating malicious nodes and adapting defense tactics based on real-time trust and traffic analysis. Trust analysis and policy control modules provide dynamic adaptive coverage, permitting effective proactive defense. Mininet-based simulations demonstrate TREN's efficacy, achieving 95 % detection accuracy, a 20 % latency reduction, and a 25 % increase in data throughput when compared to baseline models.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104341"},"PeriodicalIF":8.0,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-25 | DOI: 10.1016/j.jnca.2025.104338
Siyuan Liu, Li Pan, Shijun Liu
In recent years, edge computing services have continued to develop and have become better integrated with serverless computing, improving the performance and concurrent request-handling capabilities of edge servers. As a result, an increasing number of IoT devices are willing to pay service processing fees to offload some computing tasks to edge servers for execution, with the aim of meeting their latency requirements. However, the computing capacity and storage space of edge servers at a single base station are still limited. Base stations must therefore decide which task images to cache for future execution and how to price these computing services to control the computation offloading of IoT devices, so as to maximize their expected profit under the constraints of limited computing capacity and memory space. In this paper, we take the perspective of base stations and formulate the caching and pricing of function images at a base station, as well as the function offloading process of IoT devices, as a Markov Decision Process (MDP). We adopt a Proximal Policy Optimization (PPO)-based function service pricing adjustment algorithm to optimize the profit of base stations. Finally, we evaluate our approach through simulation experiments and compare it with baseline methods. The results show that our approach can significantly improve base stations’ expected profit in various scenarios.
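The toy sketch below captures the pricing trade-off that the MDP models: for each candidate price, it estimates how many devices would still offload and the resulting base-station profit, then picks the best price on a grid. The demand model and cost constants are illustrative assumptions; the paper learns this decision with PPO rather than grid search.

```python
# Toy price-vs-profit illustration: devices offload only if the service price
# does not exceed their willingness to pay, subject to edge capacity.
import numpy as np

rng = np.random.default_rng(42)
values = rng.uniform(0.2, 1.0, size=200)       # each device's willingness to pay

def expected_profit(price: float, unit_cost: float = 0.15, capacity: int = 120) -> float:
    demand = int(np.sum(values >= price))       # devices that still offload
    served = min(demand, capacity)              # edge server capacity limit
    return served * (price - unit_cost)

grid = np.linspace(0.2, 1.0, 81)
best = max(grid, key=expected_profit)
print(f"best price {best:.2f}, profit {expected_profit(best):.1f}")
```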
{"title":"A profit-effective function service pricing approach for serverless edge computing function offloading","authors":"Siyuan Liu , Li Pan , Shijun Liu","doi":"10.1016/j.jnca.2025.104338","DOIUrl":"10.1016/j.jnca.2025.104338","url":null,"abstract":"<div><div>In recent years, edge computing services have continued to develop and have been better integrated with serverless computing, leading to the improvement of the performance and concurrent request handling capabilities of edge servers. Therefore, an increasing number of IoT devices are willing to pay a certain amount of service processing fees to offload some computing tasks to edge servers for execution, with the aim of meeting their latency requirements. However, the computing capacity and storage space of edge servers at a single base station are still limited. Therefore, base stations must decide which task images to cache for future execution and price these computing services to control the computing offloading of IoT devices, so as to maximize their expected profit under the constraints of limited computing capacity and memory space. In this paper, we stand from the perspective of base stations and formulate the caching and pricing of function images at a base station, as well as the function offloading process of IoT devices, as a Markov Decision Process (MDP). We adopt a Proximal Policy Optimization (PPO)-based function service pricing adjustment algorithm to optimize the profit of base stations. Finally, we evaluate our approach through simulation experiments and compare it with baseline methods. The results show that our approach can significantly improve base stations’ expected profit in various scenarios.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104338"},"PeriodicalIF":8.0,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-24 | DOI: 10.1016/j.jnca.2025.104330
Yamin Shen, Ping Wang, Chiou-Jye Huang, Shenxu Kuang, Song Li, Zihan Li
Digital transformation brings diverse applications along with varying Quality of Service (QoS) and isolation requirements. Network slicing, a key 5G technology anticipated to persist in 6G, aims to meet these heterogeneous requirements. However, due to conflicting usage of scarce resources among services, especially under multi-timescale Service Level Agreement (SLA) requirements covering both QoS and isolation, implementing slicing in the Radio Access Network (RAN) domain is a significant challenge. Therefore, this paper formulates the radio resource allocation problem posed by the coexistence of multiple URLLC (Ultra-Reliable and Low-Latency Communications) services with varying delay requirements and eMBB (Enhanced Mobile Broadband) services as a multi-timescale optimization problem. Consequently, a novel MPC (Model Predictive Control)-based RAN slicing resource allocation model called MPC-RSS is proposed. Specifically, MPC-RSS ensures elastic QoS through a delay-tracking mechanism and far-sighted schemes, and it maintains elastic isolation by introducing logical and physical isolation constraint terms. Simulation results show that MPC-RSS achieves better and more elastic SLA performance than existing state-of-the-art approaches. Our proposal offers a practical option for 6G RAN to empower vertical industries in achieving digital upgrades.
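The sketch below gives a minimal receding-horizon flavor of the MPC idea: it predicts each slice's queue over a short horizon for every candidate resource-block split, scores candidates with a delay-tracking cost subject to per-slice isolation floors, and applies only the first allocation before re-solving. The arrival and service model, horizon, and weights are illustrative assumptions, not the MPC-RSS formulation.

```python
# Receding-horizon sketch: enumerate resource-block splits between two slices
# (URLLC, eMBB), simulate predicted queues over a short horizon, and pick the
# split that best tracks the per-slice targets while respecting isolation floors.
import numpy as np

HORIZON, TOTAL_RB = 3, 20
arrivals = np.array([6.0, 3.0])              # predicted packets/slot per slice
service_per_rb = 0.5                         # packets served per RB per slot
queue_target = np.array([1.0, 4.0])          # URLLC vs eMBB queue targets
min_rb = np.array([4, 4])                    # physical-isolation floor per slice

def cost(alloc, queues):
    q = queues.copy()
    c = 0.0
    for _ in range(HORIZON):
        q = np.maximum(q + arrivals - service_per_rb * np.asarray(alloc), 0.0)
        c += np.sum((q - queue_target) ** 2)  # delay-tracking (queue) term
    return c

def mpc_step(queues):
    candidates = [(a, TOTAL_RB - a) for a in range(TOTAL_RB + 1)
                  if a >= min_rb[0] and TOTAL_RB - a >= min_rb[1]]
    return min(candidates, key=lambda alloc: cost(alloc, queues))

print(mpc_step(np.array([8.0, 2.0])))        # re-run every slot with fresh queue state
```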
{"title":"Elastic RAN slicing technology with multi-timescale SLA assurances for heterogeneous services provision in 6G","authors":"Yamin Shen , Ping Wang , Chiou-Jye Huang , Shenxu Kuang , Song Li , Zihan Li","doi":"10.1016/j.jnca.2025.104330","DOIUrl":"10.1016/j.jnca.2025.104330","url":null,"abstract":"<div><div>Digital transformation brings diverse applications along with varying Quality of Service (QoS) and isolation requirements. Network slicing, a key 5G technology anticipated to persist in 6G, aims to meet these heterogeneous requirements. However, due to conflicting usage of scarce resources among services, especially with multi-timescale Service Level Agreement (SLA) requirements including QoS and isolation, implementing slicing in the Radio Access Network (RAN) domain is a significant challenge. Therefore, this paper formulates the radio resource allocation problem posed by the coexistence of multiple URLLC (Ultra-Reliable and Low-Latency Communications) with varying delay requirements and eMBB (Enhanced Mobile Broadband) as a multi-timescale optimization problem. Consequently, a novel MPC (Model Predictive Control)-based RAN slicing resource allocation model called MPC-RSS is proposed. Specifically, MPC-RSS ensures elastic QoS through delay-tracking mechanism and far-sighted schemes. Meanwhile, it maintains elastic isolation by introducing logical and physical isolation constraint terms. Compared with the existing state-of-the-art approaches, simulation results show that MPC-RSS can achieve better and more elastic SLA performance. Our proposal provides a choice for 6G RAN to empower vertical industries achieving digital upgrades.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"244 ","pages":"Article 104330"},"PeriodicalIF":8.0,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}