Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101765
Haowen Tan, Max Hashem Eiza, Sangman Moh, Kouichi Sakurai
Cloud–Fog assisted Industrial Internet of Things (IIoT) has emerged as a core enabling technology for Industry 5.0, driving innovations in smart manufacturing by facilitating real-time interactions among industrial devices, fog nodes, and cloud platforms. However, the inherent limitations in computational power and adaptability of IIoT terminals pose significant challenges for data security. This special issue focuses on addressing critical data security issues in Cloud–Fog IIoT systems.
Title: "Data security for cloud-fog-assisted Industrial Internet of Things (IIoT) in future Industry 5.0" (Internet of Things, vol. 34, Article 101765)
Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101811
Bhabendu Kumar Mohanta, Ali Ismail Awad, Tarek Elsaka, Hamza Kheddar, Ezedin Baraka
Intelligent devices with embedded technology have proliferated dramatically over the past decade. The Internet of Things (IoT) has emerged as a transformational force, advancing traditional systems to previously unattainable levels of intelligence. Smart cities, transportation, healthcare, supply-chain management, agriculture, water management, and smart grid (SG) systems are among the industries where the IoT has found applications. These developments are demonstrated by the integration of IoT systems into SG networks, offering significant improvements in sustainability, dependability, and efficiency. Such systems use various IoT devices to continuously monitor the environment and transmit data for processing and analysis. Nonetheless, the growth of the IoT has introduced security vulnerabilities, including concerns about user identification, data integrity, and trust, especially in SG applications. This study aims to resolve several security challenges in IoT-enabled SG applications to support sustainability. The proposed scheme effectively tackles critical security requirements such as data integrity, user anonymity, distributed storage, trust management, and decentralized architecture. The security concerns addressed by blockchain technology include preserving data integrity, fostering trust, providing secure communication, and enabling effective monitoring. Smart contracts automate system processes and are effective in maintaining user trust. The experimental findings support the viability of the proposed system, demonstrating a computational cost of 3.150 ms and a communication overhead of 992 bits, both representing improvements over various existing solutions. Additionally, the deployment cost for the smart contract is found to be 5.64 USD with a writing cost of 2.89 USD, both of which are lower than the costs associated with comparable approaches.
Title: "Smart-contract-based blockchain-enabled decentralized scheme for improving smart-grid security" (Internet of Things, vol. 34, Article 101811)
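The data-integrity guarantee that the blockchain layer provides can be illustrated with a minimal hash-chained ledger in Python. This is a didactic sketch using stdlib hashing only, not the authors' smart-contract code; the record fields and function names (`append_record`, `verify_chain`) are invented for illustration.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a meter reading together with the previous record's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_record(chain: list, record: dict) -> None:
    """Append a record, chaining it to the hash of the previous entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered record breaks all later hashes."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"meter": "SG-01", "kwh": 12.4})
append_record(chain, {"meter": "SG-02", "kwh": 9.1})
assert verify_chain(chain)
chain[0]["record"]["kwh"] = 99.9  # tampering with stored data breaks verification
assert not verify_chain(chain)
```

A real deployment would put such hashes on-chain via a smart contract; the sketch only shows why modifying any stored reading is detectable.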
The rapid proliferation of connected vehicles has transformed modern transportation while introducing critical security and performance challenges in dynamic vehicular networks. To address sophisticated threats such as DDoS, Sybil, and routing-manipulation attacks without compromising operational efficiency, we propose ROADS-VN, a novel Routing Optimization and Adaptive Defense System for Vehicular Networks. ROADS-VN integrates machine-learning-based anomaly detection, mobility-aware route adaptation, and historical threat intelligence into a modular, context-aware mechanism that enables real-time, adaptive decision-making under dynamic vehicular conditions. The architecture of ROADS-VN is designed to be compatible with federated learning, although federated learning is not implemented in the current experiments. Extensive simulations demonstrate that ROADS-VN achieves a Packet Delivery Ratio (PDR) of 98.5% in low-mobility, low-traffic scenarios and 94.0% under high-mobility, high-traffic conditions, while maintaining average communication latency as low as 45 ms and detection accuracy up to 96.0%. The protocol exhibits strong scalability, energy efficiency, and resilience against evolving cyber threats.
Title: "Routing optimization and adaptive defense system for vehicular networks (ROADS-VN)" by Sofiane Hamrioui, Angela Voinea Ciocan, Redouane Djelouah, Camil Adam Mohamed Hamrioui, Pascal Lorenz (Internet of Things, vol. 34, Article 101816; Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101816)
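The context-aware route selection idea, weighing latency, anomaly risk, and link stability together, can be sketched as a simple scoring function. The weights, normalisation constants, and route names below are hypothetical illustrations, not values from ROADS-VN.

```python
def route_score(latency_ms, anomaly_score, rel_speed_ms,
                w_lat=0.4, w_sec=0.4, w_mob=0.2):
    """Combine latency, anomaly risk, and relative mobility into one score.
    Lower is better; weights and normalisation constants are illustrative."""
    lat = latency_ms / 100.0            # normalise against a 100 ms budget
    mob = min(rel_speed_ms / 30.0, 1.0) # faster relative motion -> less stable link
    return w_lat * lat + w_sec * anomaly_score + w_mob * mob

routes = {
    "via_rsu_a": route_score(45, 0.05, 10),
    "via_rsu_b": route_score(30, 0.60, 5),  # faster route, but flagged as anomalous
}
best = min(routes, key=routes.get)  # -> "via_rsu_a": security outweighs raw latency
```

The point of such a score is that a lower-latency route can still lose to a safer one, which is the adaptive-defense behaviour the abstract describes.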
Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101807
George P. Pinto, Nilson R. Sousa, Claudio N. Da Silva, Maycon L.M. Peixoto Jr., Gustavo B. Figueiredo, Cássio V.S. Prazeres
The Internet of Things has amplified the pervasive and ubiquitous collection and processing of personal data, introducing significant challenges to data privacy. Existing architectures often lack mechanisms that enable users to control data sharing or anticipate privacy risks, such as profiling. This work introduces an AI-assisted consent mechanism integrated into a Personal Data Store-based privacy architecture to support users’ decision-making. The mechanism assesses the potential of profiling prior to data disclosure by applying clustering algorithms and computing a profiling risk metric using Silhouette, Davies-Bouldin, and Calinski-Harabasz indices. We evaluated the mechanism in a simulated smart building environment, analyzing clustering quality and computational performance. Results indicate that the approach is computationally efficient and capable of identifying meaningful profile patterns, thereby offering practical feasibility for mitigating profiling risks.
Title: "Enhancing IoT data privacy: AI-assisted consent mechanism in a PDS-based solution" (Internet of Things, vol. 34, Article 101807)
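The profiling-risk idea, using clustering quality as a proxy for how easily users can be profiled, can be sketched with a pure-Python silhouette computation. The `profiling_risk` mapping and its [0, 1] rescaling are assumptions for illustration, not the authors' metric, which also combines the Davies-Bouldin and Calinski-Harabasz indices.

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points (pure-Python sketch)."""
    n = len(points)
    scores = []
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same) if same else 0.0   # mean intra-cluster distance
        b = min(                                      # nearest other-cluster mean distance
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == l)
            / sum(1 for j in range(n) if labels[j] == l)
            for l in set(labels) if l != labels[i]
        )
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / n

def profiling_risk(points, labels):
    """Map cluster separability to a [0, 1] risk: well-separated behaviour
    clusters mean users are easy to profile (rescaling is an assumption)."""
    s = silhouette(points, labels)          # silhouette lies in [-1, 1]
    return max(0.0, min(1.0, (s + 1) / 2))  # rescale to [0, 1]

# Two well-separated "behaviour" clusters -> high profiling risk.
pts = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
labs = [0, 0, 0, 1, 1, 1]
risk = profiling_risk(pts, labs)  # close to 1.0 for this toy data
```

A consent mechanism could then warn the user, or withhold disclosure, whenever the computed risk exceeds a policy threshold.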
Distributed Ledger Technologies (DLTs) underpin Digital Circular Economy (DCE) systems that rely on efficient IoT data flows. Shimmer, a DAG-based DLT optimized for IoT, enables feeless transactions with parallel validation through its tip-selection mechanism. On such ledgers, message fragmentation induces a latency-throughput tradeoff, as per-block cost rises with parallel validation. Efficient fragmentation lowers energy use and congestion, supporting DCE objectives. Yet end-users cannot control payload size or network load, leading to unpredictable latency and high CPU use on submitting devices, which increases energy consumption. Existing approaches mostly modify ledger internals, overlooking adaptivity and end-user policies. We introduce ABS-TD3, an offline-to-online TD3 agent that receives the total message size and outputs the optimal per-block size for balancing latency against energy-efficient CPU utilization. The agent is pre-trained offline on real data with Retrieval Augmentation and adaptive weights for improved decision making, then transitioned online with prioritized replay and a novelty bonus that balances exploration and exploitation, yielding stable adaptivity compared with standard RL approaches. ABS-TD3 is implemented on Shimmer and can integrate with future Tangle-based forks of pre-IOTA-Rebased frameworks, exposing the same client-side controls. ABS-TD3 is evaluated on Shimmer by submitting 8 message sizes ranging from 5 KB to 100 KB, under the 32 KB block-size limit, with 250 iterations per size via the IOTA SDK.
Title: "ABS-TD3: Efficient IoT data submission in DAG-based DLTs for digital circular economy" by Konstantinos Voulgaridis, Dimitris Karampatzakis, Panagiotis Sarigiannidis, Thomas Lagkas (Internet of Things, vol. 34, Article 101814; Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101814)
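The latency/CPU tradeoff that ABS-TD3 learns to navigate can be sketched with a toy cost model over per-block sizes. The cost function, weights, and candidate sizes below are assumptions for illustration, not the paper's reward; only the 32 KB block limit comes from the abstract.

```python
import math

BLOCK_LIMIT_KB = 32  # Shimmer's per-block payload limit cited in the abstract

def fragmentation_cost(total_kb, block_kb, alpha=0.5):
    """Toy tradeoff: many small blocks raise per-block CPU overhead,
    large blocks raise validation latency (cost model is an assumption)."""
    block_kb = min(block_kb, BLOCK_LIMIT_KB)
    n_blocks = math.ceil(total_kb / block_kb)
    cpu = n_blocks * 1.0                  # fixed serialisation/PoW cost per block
    latency = block_kb / BLOCK_LIMIT_KB   # larger blocks take longer to validate
    return alpha * cpu + (1 - alpha) * latency

def best_block_size(total_kb, candidates=(4, 8, 16, 32)):
    """Brute-force what the RL agent learns: the cost-minimising block size."""
    return min(candidates, key=lambda b: fragmentation_cost(total_kb, b))

assert best_block_size(100) == 32  # large messages favour fewer, bigger blocks
assert best_block_size(1) == 4     # tiny messages favour small blocks
```

Where this sketch brute-forces a fixed cost model, ABS-TD3 instead learns the mapping from total message size to block size online, adapting as real latency and CPU measurements drift.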
Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101820
Pietro Fusco, Francesco Palmieri, Massimo Ficco
The rapid expansion of the Internet of Things (IoT) has introduced significant cybersecurity risks, creating a need for lightweight edge-level intrusion detection systems (IDS). On-device training and federated learning (FL) enable the collaborative construction of unified models from field data while preserving data privacy. However, repeated global model updates and parameter transmissions, particularly over IoT architectures with limited bandwidth or intermittent connectivity, can cause significant network latency and unsustainable energy consumption on resource-constrained IoT-edge devices. Moreover, collecting a sufficient set of realistic attack samples in situ is often difficult, resulting in highly imbalanced datasets that limit distributed training. To overcome these limitations, we combine Siamese Neural Networks (SNNs) with gradient sparsification: the SNNs enable privacy-preserving few-shot FL, so a shared IDS model can be trained collaboratively from very few samples, while sparsification compresses model updates to reduce communication overhead. The percentage of gradient sparsification is selected dynamically at each training round through an epsilon-greedy exploration-exploitation strategy, allowing the system to adaptively balance the trade-off between communication savings and detection performance. To accommodate this sparsified few-shot learning strategy in IoT environments, a distributed IDS based on federated SNNs is proposed and tested on constrained microcontroller units.
Title: "Combining epsilon-greedy reinforcement learning based gradient sparsification and siamese neural networks for few-shot federated tinyML intrusion detection in IoT" (Internet of Things, vol. 34, Article 101820)
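The two mechanisms named above, epsilon-greedy selection of a sparsification level and top-k gradient sparsification, can be sketched in a few lines. The Q-values, candidate keep-ratios, and reward definition here are hypothetical, not the paper's.

```python
import random

def sparsify(grad, keep_ratio):
    """Top-k magnitude sparsification: zero all but the largest gradients."""
    k = max(1, int(len(grad) * keep_ratio))
    thresh = sorted((abs(g) for g in grad), reverse=True)[k - 1]
    return [g if abs(g) >= thresh else 0.0 for g in grad]

def pick_keep_ratio(q_values, epsilon, rng=random):
    """Epsilon-greedy choice over candidate sparsification levels."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))   # explore a random level
    return max(q_values, key=q_values.get)  # exploit the best-known level

# Hypothetical running rewards per keep-ratio (e.g. detection F1 minus comms cost).
q = {0.1: 0.62, 0.25: 0.71, 0.5: 0.69}
ratio = pick_keep_ratio(q, epsilon=0.0)        # epsilon=0 -> pure exploitation -> 0.25
sparse = sparsify([0.9, -0.1, 0.05, -0.7], ratio)  # only the largest gradient survives
```

In the paper's setting each training round would update `q` from the observed detection/communication outcome; the sparsified vector is what gets transmitted in the FL update.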
Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101800
Ali Alssaiari, Maher Alharby, Qasim Jan, Shahid Hussain, Sana Ullah
Urbanisation and digital transformation have led to the development of smart city applications that rely on the efficiency of interconnected Internet of Things devices, which are often resource-constrained. This situation presents challenges in energy efficiency and cybersecurity. Although current AI-based solutions enhance cybersecurity, they may consume significant resources, potentially worsening energy efficiency. To address these challenges, there is a need for advanced mechanisms that balance resource utilisation and energy consumption while maintaining cybersecurity. This paper introduces an integrated approach of Deep Learning and the Black Hole Algorithm (BHA) to optimise energy use without compromising security within the smart city ecosystem. Our methodology employs Long Short-Term Memory networks for deep learning to capture IoT energy consumption patterns and incorporate contextual markers for effective anomaly detection. Simultaneously, BHA serves as a metaheuristic optimisation technique to find optimal control decisions. This dual strategy aims to reduce anomalies in IoT networks while improving energy efficiency, resulting in enhanced smart city applications. The effectiveness of this approach is demonstrated using an IoT-based smart city dataset, achieving anomaly detection with accuracy (99.60 %), precision (99.53 %), recall (99.40 %), and an F-measure (99.80 %). In addition, energy efficiency of 66.67 %, 71.43 %, 73.33 %, 77.78 %, and 63.64 % was achieved compared to the state-of-the-art methods in smart city applications.
Title: "Balancing anomaly detection and energy efficiency in smart city IoT networks using hybrid deep learning and black hole algorithm" (Internet of Things, vol. 34, Article 101800)
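The Black Hole Algorithm used as the metaheuristic component can be sketched as follows. This is a generic BHA on a toy sphere objective; the parameters and event-horizon formula follow the common textbook variant, not necessarily the paper's exact configuration.

```python
import math
import random

def black_hole_optimise(objective, dim, n_stars=20, iters=200,
                        lo=-5.0, hi=5.0, seed=0):
    """Minimal BHA sketch: candidate solutions ("stars") drift toward the best
    one (the "black hole"); stars crossing the event horizon are respawned."""
    rng = random.Random(seed)
    stars = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_stars)]
    bh = min(stars, key=objective)[:]               # best initial star
    for _ in range(iters):
        for s in stars:
            for d in range(dim):
                s[d] += rng.random() * (bh[d] - s[d])  # move toward the black hole
            if objective(s) < objective(bh):
                bh = s[:]                              # star becomes the new black hole
        f_sum = sum(objective(s) for s in stars) + 1e-12
        radius = objective(bh) / f_sum                 # event-horizon radius
        for i, s in enumerate(stars):
            if math.dist(s, bh) < radius:              # absorbed: respawn randomly
                stars[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return bh

sphere = lambda x: sum(v * v for v in x)  # toy objective; minimum at the origin
best = black_hole_optimise(sphere, dim=2)
```

In the paper's pipeline the objective would score control decisions against energy use and anomaly rate rather than a sphere function.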
Pub Date: 2025-11-01, DOI: 10.1016/j.iot.2025.101813
Abdelghani Dahou, Syed Tariq Shah, Insaf Ullah, Tahira Mahboob, Ahmed Gamal Abdellatif, Mohamed Abd Elaziz, Ahmad Almogren, Mahmoud A. Shawky
Accurate localisation is a critical component in modern wireless communication systems, especially in complex environments with a very low signal-to-noise ratio (SNR). Reconfigurable intelligent surfaces (RIS) have emerged as a promising solution to enhance localisation accuracy by dynamically controlling signal reflection patterns. Motivated by the need for precise localisation solutions, this study introduces the RIS-enhanced hybrid localisation network (RHL-Net), a novel framework that integrates RIS with advanced deep learning techniques. RHL-Net employs long short-term memory (LSTM) networks for temporal data processing and Kolmogorov-Arnold networks (KAN) for spatial feature extraction. The key innovation of using KAN lies in its superior ability to learn complex spatial structures compared to traditional Multi-Layer Perceptrons (MLPs); KANs achieve higher accuracy with significantly fewer parameters and offer greater interpretability through their spline-based activation functions, which are learnable and adaptable. This makes KAN uniquely suited to distilling the intricate spatial fingerprints from the RIS-enhanced channel for precise location estimation. For performance evaluation, RHL-Net uses a dataset acquired from a dual-channel universal software radio peripheral (USRP) system, which records received signal strength (RSS) and channel phase response within a single-input multiple-output (SIMO) orthogonal frequency division multiplexing (OFDM) system. A dual-channel USRP with two antennas at the receiver (Rx) side is deployed at a grid of positions with an interspacing distance (x) to assess the RHL-Net localisation performance. Experimental results show that for x = 0.5 metres with Directive and Monopole Rx antenna configurations, RHL-Net achieves average accuracies of 69.00 % and 74.19 %, respectively, with RIS activated, significantly outperforming the deactivated configuration. Similarly, for x = 1 metre, Directive and Monopole setups achieve average accuracies of 85.58 % and 73.88 %, respectively, with RIS activation. These results demonstrate the effectiveness of RHL-Net in harnessing RIS technology and the advanced spatial modeling of KAN for precise localisation, outperforming state-of-the-art methods on the evaluated dataset.
Title: "Reconfigurable intelligent surfaces for enhanced localisation: Advancing performance with KAN-based deep learning models" (Internet of Things, vol. 34, Article 101813)
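The spline-based learnable activations that distinguish KAN edges from fixed MLP activations can be illustrated with a piecewise-linear (order-1 B-spline) edge function. This is a simplification: real KANs typically use higher-order B-splines, and `pwl_edge` is a hypothetical name.

```python
def pwl_edge(x, grid, coeffs):
    """KAN-style edge: a univariate piecewise-linear spline whose values at
    the grid knots (coeffs) are the learnable parameters of this edge."""
    if x <= grid[0]:
        return coeffs[0]
    if x >= grid[-1]:
        return coeffs[-1]
    for i in range(len(grid) - 1):
        if grid[i] <= x <= grid[i + 1]:
            t = (x - grid[i]) / (grid[i + 1] - grid[i])
            return (1 - t) * coeffs[i] + t * coeffs[i + 1]  # linear interpolation

grid = [-1.0, 0.0, 1.0]
coeffs = [1.0, 0.0, 1.0]          # these knot values approximate |x| on [-1, 1]
y = pwl_edge(0.5, grid, coeffs)   # -> 0.5, halfway between the 0.0 and 1.0 knots
```

Training a KAN adjusts `coeffs` per edge, so each connection learns its own activation shape; inspecting those shapes is what gives the interpretability the abstract mentions.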
Pub Date : 2025-11-01 DOI: 10.1016/j.iot.2025.101818
A. Villafranca, Maria-Dolores Cano
This paper delivers three core innovations for Internet of Things (IoT) intrusion detection in sustainable agriculture: (1) a unified preprocessing pipeline integrating StandardScaler, undersampling, SMOTE, Tomek Links, and 10-fold cross-validation, (2) a lightweight, dataset-agnostic DNN architecture (256–128–64–Softmax) achieving ≥97 % accuracy without per-dataset tuning, and (3) a curated benchmark of 18 IoT-IDS datasets including the Farm-Flow greenhouse trace with full metadata. Our model achieved 99.14 % average accuracy across 18 datasets, including 99.25 % precision on BoT-IoT, 99.99 % on CICIDS2017, and perfect 100 % scores on N-BaIoT, Car-Hacking, and CIC-IoT2022, demonstrating robust intrusion detection while maintaining only ∼1.2 M parameters for resource-constrained deployment. Experimental results demonstrate that our Deep Neural Network (DNN) model, through automatic hierarchical feature extraction, outperforms specialized architectures in heterogeneous scenarios while reducing reliance on manual feature engineering. Although Machine Learning (ML)-based methods and distributed approaches offer advantages in privacy and local processing, they face computational constraints and synchronization challenges that limit scalability. These findings confirm the effectiveness and adaptability of the proposed model, establishing it as a reliable and scalable solution for enhancing IoT network security in real-world deployments. Modern greenhouses, dairy farms, and cold-chain facilities, where cyber-attacks threaten water and energy efficiency gains, benefit from this edge-deployable approach that restores security and trustworthiness to smart-agriculture IoT networks.
{"title":"A lightweight edge-DL intrusion detection system for IoT sustainable smart-agriculture","authors":"A. Villafranca, Maria-Dolores Cano","doi":"10.1016/j.iot.2025.101818","DOIUrl":"10.1016/j.iot.2025.101818","url":null,"abstract":"<div><div>This paper delivers three core innovations for Internet of Things (IoT) intrusion detection in sustainable agriculture: (1) a unified preprocessing pipeline integrating StandardScaler, undersampling, SMOTE, Tomek Links, and 10-fold cross-validation, (2) a lightweight, dataset-agnostic DNN architecture (256–128–64–Softmax) achieving ≥97 % accuracy without per-dataset tuning, and (3) a curated benchmark of 18 IoT-IDS datasets including the Farm-Flow greenhouse trace with full metadata. Our model achieved 99.14 % average accuracy across 18 datasets, including 99.25 % precision on BoT-IoT, 99.99 % on CICIDS2017, and perfect 100 % scores on N-BaIoT, Car-Hacking, and CIC-IoT2022, demonstrating robust intrusion detection while maintaining only ∼1.2 M parameters for resource-constrained deployment. Experimental results demonstrate that our Deep Neural Network (DNN) model, through automatic hierarchical feature extraction, outperforms specialized architectures in heterogeneous scenarios while reducing reliance on manual feature engineering. Although Machine Learning (ML)-based methods and distributed approaches offer advantages in privacy and local processing, they face computational constraints and synchronization challenges that limit scalability. These findings confirm the effectiveness and adaptability of the proposed model, establishing it as a reliable and scalable solution for enhancing IoT network security in real-world deployments. 
Modern greenhouses, dairy farms, and cold-chain facilities, where cyber-attacks threaten water and energy efficiency gains, benefit from this edge-deployable approach that restores security and trustworthiness to smart-agriculture IoT networks.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"34 ","pages":"Article 101818"},"PeriodicalIF":7.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
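The 256–128–64–Softmax stack named in the abstract above can be sketched as a plain feed-forward pass; the numpy version below is a minimal illustration, with the input width (78 flow features), class count, and weight initialisation chosen as placeholders rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def init_dnn(n_features, n_classes, hidden=(256, 128, 64)):
    """Weight/bias pairs for the 256-128-64-Softmax stack; n_features and
    n_classes are dataset-dependent placeholders."""
    sizes = (n_features, *hidden, n_classes)
    return [(rng.normal(0.0, 0.05, (a, b)), np.zeros(b))
            for a, b in zip(sizes, sizes[1:])]

def forward(params, x):
    """ReLU hidden layers followed by a softmax output layer."""
    for W, b in params[:-1]:
        x = relu(x @ W + b)
    W, b = params[-1]
    return softmax(x @ W + b)

params = init_dnn(n_features=78, n_classes=5)       # e.g. flow features -> attack classes
probs = forward(params, rng.normal(size=(4, 78)))   # one probability row per sample
```

In the pipeline described above, the inputs to this network would already be standardised and class-rebalanced (StandardScaler, undersampling, SMOTE, Tomek Links) before training.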
Pub Date : 2025-11-01 DOI: 10.1016/j.iot.2025.101802
Ogobuchi Daniel Okey , Sajjad Dadkhah , Heather Molyneaux , Demóstenes Zegarra Rodríguez , João Henrique Kleinschmidt
The widespread integration of Internet of Things (IoT) devices has enhanced the intelligence of homes, industries, and offices, yet it introduces critical security challenges due to their susceptibility to dynamic threats and behavioral heterogeneity, necessitating identification via communication patterns rather than mere physical recognition. This paper addresses the demand for a unified security framework in IoT ecosystems, where devices, limited by diverse protocols and constrained computational resources, face attacks such as DNS tunneling, MAC spoofing, and several other threats. Existing approaches, which rely on coarse-grained signatures or segregated machine learning for device identification and intrusion detection, exhibit limited resilience, increased operational overhead, poor cross-network adaptability, and scalability constraints in real-time dynamic settings. We propose iPASecIoT, a single-model framework that concurrently identifies IoT devices and detects intrusions using fine-grained behavioral fingerprints. Our methodology combines machine and deep learning algorithms with a modified firefly algorithm employing a kappa score-based voting mechanism for adaptive feature selection, yielding a lightweight, resource-efficient model by optimizing agreement beyond chance across network traffic, inter-arrival times, and protocol-specific features. Evaluated on the CICIoMT2024, CICIoT2023, and UNSW2019 datasets, iPASecIoT achieves mean F1 scores of 99.99 %, 99.88 %, and 98.35 % for device identification and 99.96 %, 99.38 %, and 98.79 % for threat classification, respectively. With a mean inference time of 0.0005 seconds per sample and a mean Hamming loss of approximately 0.001, iPASecIoT provides a pioneering, efficient, and scalable solution to counter evolving security threats in heterogeneous IoT environments.
{"title":"iPASecIoT: An intelligent pipeline for automatic and adaptive feature extraction for secure IoT device identification and intrusion detection","authors":"Ogobuchi Daniel Okey , Sajjad Dadkhah , Heather Molyneaux , Demóstenes Zegarra Rodríguez , João Henrique Kleinschmidt","doi":"10.1016/j.iot.2025.101802","DOIUrl":"10.1016/j.iot.2025.101802","url":null,"abstract":"<div><div>The widespread integration of Internet of Things (IoT) devices has enhanced the intelligence of homes, industries, and offices, yet it introduces critical security challenges due to their susceptibility to dynamic threats and behavioral heterogeneity, necessitating identification via communication patterns rather than mere physical recognition. This paper addresses the demand for a unified security framework in IoT ecosystems, where devices, limited by diverse protocols and constrained computational resources, face attacks such as DNS tunneling, MAC spoofing, and several other threats. Existing approaches, which rely on coarse-grained signatures or segregated machine learning for device identification and intrusion detection, exhibit limited resilience, increased operational overhead, poor cross-network adaptability, and scalability constraints in real-time dynamic settings. We propose iPASecIoT, a single-model framework that concurrently identifies IoT devices and detects intrusions using fine-grained behavioral fingerprints. Our methodology combines machine and deep learning algorithms with a modified firefly algorithm employing a kappa score-based voting mechanism for adaptive feature selection, yielding a lightweight, resource-efficient model by optimizing agreement beyond chance across network traffic, inter-arrival times, and protocol-specific features. 
Evaluated on the CICIoMT2024, CICIoT2023, and UNSW2019 datasets, iPASecIoT achieves mean F1 scores of 99.99 %, 99.88 %, and 98.35 % for device identification and 99.96 %, 99.38 %, and 98.79 % for threat classification, respectively. With a mean inference time of 0.0005 seconds per sample and a mean Hamming loss of <span><math><mo>≈</mo></math></span> 0.001, iPASecIoT provides a pioneering, efficient, and scalable solution to counter evolving security threats in heterogeneous IoT environments.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"34 ","pages":"Article 101802"},"PeriodicalIF":7.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
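The kappa score-based voting described in the iPASecIoT abstract rests on Cohen's kappa, which measures agreement corrected for chance. The sketch below implements only that statistic; the voting scheme built on top of it is the paper's own, and the feature-selection framing in the comments is illustrative:

```python
import numpy as np

def cohen_kappa(confusion):
    """Chance-corrected agreement from a square confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                          # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Two base learners voting keep/drop on 50 candidate features:
# rows = learner A's calls, cols = learner B's calls.
cm = [[20, 5],
      [10, 15]]
kappa = cohen_kappa(cm)   # 0.4: moderate agreement beyond chance
```

Raw accuracy here would be 0.7, but kappa discounts the 0.5 agreement expected by chance alone, which is what "optimizing agreement beyond chance" refers to in the abstract.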