Advanced security frameworks for UAV and IoT: A deep learning approach
Pub Date: 2025-04-22 | DOI: 10.1016/j.iot.2025.101594
Nordine Quadar , Abdellah Chehri , Benoit Debaque
The integration of unmanned aerial vehicles (UAVs) into the Internet of Things (IoT) has opened new avenues for enhanced security and functionality. Securing UAVs through the detection and analysis of unique signal patterns is a critical aspect of this technological advancement: intrinsic signal characteristics can distinguish between UAVs of identical models, providing a robust layer of security at the communication level. The application of artificial intelligence to UAV signal analysis has shown significant potential for improving UAV identification and authentication, and recent advances use deep learning on raw In-phase and Quadrature (I/Q) data to achieve high-precision UAV signal recognition. However, existing deep learning models struggle to generalize to unfamiliar I/Q data scenarios. This work explores alternative transformations of I/Q data and investigates the integration of statistical features such as mean, median, and mode across these transformations. It also evaluates the generalization capability of the proposed methods in various environments and examines the impact of signal-to-noise ratio (SNR) on recognition accuracy. Experimental results underscore the promise of our approach, establishing a solid foundation for practical deep-learning-based UAV security solutions and contributing to the field of IoT.
{"title":"Advanced security frameworks for UAV and IoT: A deep learning approach","authors":"Nordine Quadar , Abdellah Chehri , Benoit Debaque","doi":"10.1016/j.iot.2025.101594","DOIUrl":"10.1016/j.iot.2025.101594","url":null,"abstract":"<div><div>The integration of unmanned aerial vehicles (UAVs) has opened new avenues for enhanced security and functionality. The security of UAVs through the detection and analysis of unique signal patterns is a critical aspect of this technological advancement. This approach leverages intrinsic signal characteristics to distinguish between UAVs of identical models, providing a robust layer of security at the communication level. The application of artificial intelligence in UAV signal analysis has shown significant potential in improving UAV identification and authentication. Recent advancements utilize deep learning techniques with raw In-phase and Quadrature (I/Q) data to achieve high-precision UAV signal recognition. However, existing deep learning models face challenges with unfamiliar data scenarios involving I/Q data. This work explores alternative transformations of I/Q data and investigates the integration of statistical features such as mean, median, and mode across these transformations. It also evaluates the generalization capability of the proposed methods in various environments and examines the impact of signal-to-noise ratio (SNR) on recognition accuracy. Experimental results underscore the promise of our approach, establishing a solid foundation for practical deep-learning-based UAV security solutions and contributing to the field of IoT.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101594"},"PeriodicalIF":6.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NIDS-CNNRF integrating CNN and random forest for efficient network intrusion detection model
Pub Date: 2025-04-17 | DOI: 10.1016/j.iot.2025.101607
Kai Yang , JiaMing Wang , GeGe Zhao , XuAn Wang , Wei Cong , ManZheng Yuan , JiaXiong Luo , XiaoFang Dong , JiaRui Wang , Jing Tao
Network intrusion detection is crucial for enhancing network security; however, existing models face three prominent challenges. First, many models place too much emphasis on overall accuracy, often neglecting the accurate distinction between different types of attacks. Second, due to feature redundancy in complex high-dimensional attack traffic, these models struggle to extract key information from large feature sets. Lastly, when dealing with imbalanced datasets, models tend to focus on learning from classes with larger sample sizes, thus overlooking those with fewer instances. To address these issues, this paper proposes a novel network intrusion detection model, NIDS-CNNRF. This model integrates Convolutional Neural Networks (CNN) for feature extraction and Random Forest (RF) for classifying attack traffic, enabling precise identification of various attack types. The Adaptive Synthetic Sampling (ADASYN) algorithm is employed to mitigate the bias toward classes with larger sample sizes, while Principal Component Analysis (PCA) is used to address feature redundancy, allowing the model to effectively extract key information. Experimental results demonstrate that the NIDS-CNNRF model significantly outperforms traditional intrusion detection models in enhancing network security, with superior performance observed on the KDD CUP99, NSL_KDD, CIC-IDS2017, and CIC-IDS2018 datasets.
{"title":"NIDS-CNNRF integrating CNN and random forest for efficient network intrusion detection model","authors":"Kai Yang , JiaMing Wang , GeGe Zhao , XuAn Wang , Wei Cong , ManZheng Yuan , JiaXiong Luo , XiaoFang Dong , JiaRui Wang , Jing Tao","doi":"10.1016/j.iot.2025.101607","DOIUrl":"10.1016/j.iot.2025.101607","url":null,"abstract":"<div><div>Network intrusion detection is crucial for enhancing network security; however, existing models face three prominent challenges. First, many models place too much emphasis on overall accuracy, often neglecting the accurate distinction between different types of attacks. Second, due to feature redundancy in complex high-dimensional attack traffic, these models struggle to extract key information from large feature sets. Lastly, when dealing with imbalanced datasets, models tend to focus on learning from classes with larger sample sizes, thus overlooking those with fewer instances. To address these issues, this paper proposes a novel network intrusion detection model, NIDS-CNNRF. This model integrates Convolutional Neural Networks (CNN) for feature extraction and Random Forest (RF) for classifying attack traffic, enabling precise identification of various attack types. The Adaptive Synthetic Sampling (ADASYN) algorithm is employed to mitigate the bias toward classes with larger sample sizes, while Principal Component Analysis (PCA) is used to address feature redundancy, allowing the model to effectively extract key information. Experimental results demonstrate that the NIDS-CNNRF model significantly outperforms traditional intrusion detection models in enhancing network security, with superior performance observed on the KDD CUP99, NSL_KDD, CIC-IDS2017, and CIC-IDS2018 datasets.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101607"},"PeriodicalIF":6.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H-TERF: A hybrid approach combining fuzzy multi-criteria decision-making techniques and enhanced random forest to improve WBAN-IoT
Pub Date: 2025-04-17 | DOI: 10.1016/j.iot.2025.101613
Parisa Khoshvaght , Jawad Tanveer , Amir Masoud Rahmani , Mohammad Mohammadi , Amin Mehranzadeh , Jan Lansky , Mehdi Hosseinzadeh
Internet of Things (IoT) technology has grown rapidly in recent years, and its use has steadily improved the quality of service delivered to users. Its diverse applications have drawn increasing attention from organizations of all kinds. One of the key challenges in the IoT is routing, which directly affects network stability. This research proposes a hybrid approach called H-TERF (Hybrid TOPSIS and Enhanced Random Forest) for efficient routing in IoT networks, specifically in Wireless Body Area Networks (WBANs). The method first clusters nodes using the DBSCAN algorithm to optimize intra-cluster communication. For routing, nodes are then ranked using Fuzzy TOPSIS and Fuzzy AHP against several criteria, including remaining node energy, node memory, and throughput. Additionally, to handle more complex criteria such as node history and traffic rate, the initial TOPSIS ranking, together with the other criteria, is fed into an enhanced random forest model to identify the optimal path. This hybrid method improves network performance in terms of lifespan, efficiency, delay, and packet delivery ratio. Simulation results show that the proposed method surpasses existing approaches and is highly effective for IoT and WBAN networks. For example, it reduced energy consumption by 20.62%, 25.85%, and 32.57% relative to the F-EVM, DECR, and DHH-EFO approaches, respectively.
{"title":"H-TERF: A hybrid approach combining fuzzy multi-criteria decision-making techniques and enhanced random forest to improve WBAN-IoT","authors":"Parisa Khoshvaght , Jawad Tanveer , Amir Masoud Rahmani , Mohammad Mohammadi , Amin Mehranzadeh , Jan Lansky , Mehdi Hosseinzadeh","doi":"10.1016/j.iot.2025.101613","DOIUrl":"10.1016/j.iot.2025.101613","url":null,"abstract":"<div><div>The Internet of Things (IoT) technology today has grown rapidly compared to the last few years, and the use of this technology has increased the quality of service to users day by day. The various applications of IoT have caused the attention of this innovation to enhance among different organizations. One of the important challenges of the IoT is routing, which can affect having a stable network. In this research, a hybrid approach called H-TERF (Hybrid TOPSIS and Enhanced Random Forest) is proposed for achieving efficient routing in IoT networks, specifically in Wireless Body Area Networks (WBAN). This method initially cluster nodes by using the DBSCAN clustering algorithm to optimize intra-cluster communication. Then, for routing, the nodes are ranked using the Fuzzy TOPSIS and Fuzzy AHP. This ranking is determined by several criteria, including the remaining energy of nodes, node memory, and throughput. Additionally, to manage more complex criteria such as node historical records and traffic rate, the initial ranking by the TOPSIS approach, along with the other mentioned criteria, is fed into an enhanced random forest model to identify the optimal path. This hybrid method enhances network performance in terms of lifespan, efficiency, delay, and packet delivery ratio. The outcomes of the simulation show that the suggested method surpasses existing approaches and is highly effective for application in IoT and WBAN networks. For example, the performance improvement of the proposed approach over the F-EVM, DECR, and DHH-EFO approaches in energy consumption was 20.62%, 25.85%, and 32.57%, respectively.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101613"},"PeriodicalIF":6.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Invertible generative speech hiding with normalizing flow for secure IoT voice
Pub Date: 2025-04-17 | DOI: 10.1016/j.iot.2025.101606
Xiaoyi Ge, Xiongwei Zhang, Meng Sun, Kunkun SongGong, Xia Zou
Speech-based control is widely used for remotely operating Internet of Things (IoT) devices, but it risks eavesdropping and cyberattacks. Speech hiding enhances security by embedding secret speech in a cover speech to conceal the communication behavior itself. However, existing methods are limited by the poor intelligibility of the extracted secret speech and the insufficient security of the stego speech. To address these challenges, we propose a novel invertible generative speech hiding framework that integrates the embedding process into the speech synthesis pipeline. Our method establishes a bijective mapping between secret speech inputs and stego speech outputs, conditioned on text-derived Mel-spectrograms. The embedding process employs a normalizing-flow-based SecFlow module to map secret speech into Gaussian-distributed latent codes, which are then synthesized into stego speech by a flow-based vocoder. Crucially, the invertibility of both SecFlow and the vocoder enables precise recovery of the secret speech at extraction time. Extensive evaluation demonstrates that the generated stego speech achieves high quality, with a Perceptual Evaluation of Speech Quality (PESQ) score of 3.40 and a Short-Term Objective Intelligibility (STOI) score of 0.96. The extracted secret speech exhibits high quality and intelligibility, with a character error rate (CER) of 0.021. In addition, the latent codes mapped from secret speech are statistically very close to randomly sampled Gaussian noise, effectively guaranteeing security. The framework achieves real-time performance, with 1.28 s of generation latency for embedding a 2.22 s speech segment (a real-time factor (RTF) of 0.577), ensuring efficient covert communication for latency-sensitive IoT applications.
{"title":"Invertible generative speech hiding with normalizing flow for secure IoT voice","authors":"Xiaoyi Ge, Xiongwei Zhang, Meng Sun, Kunkun SongGong, Xia Zou","doi":"10.1016/j.iot.2025.101606","DOIUrl":"10.1016/j.iot.2025.101606","url":null,"abstract":"<div><div>Speech-based control is widely used for remotely operating the Internet of Things (IoT) devices, but it risks eavesdropping and cyberattacks. Speech hiding enhances security by embedding secret speech in a cover speech to conceal communication behavior. However, existing methods are limited by the extracted secret speech’s poor intelligibility and the stego speech’s insufficient security. To address these challenges, we propose a novel invertible generative speech hiding framework that integrates the embedding process into the speech synthesis pipeline. Our method establishes a bijective mapping between secret speech inputs and stego speech outputs, conditioned on text-derived Mel-spectrograms. The embedding process employs a normalizing flow-based SecFlow module to map secret speech into Gaussian-distributed latent codes, which are subsequently synthesized into stego speech through a flow-based vocoder. Crucially, the invertibility of both SecFlow and the vocoder enables precise secret speech extraction during extraction. Extensive evaluation demonstrated the generated stego speech achieves high quality with a Perceived Evaluation of Speech Quality (PESQ) score of 3.40 and a Short-Term Objective Intelligibility (STOI) score of 0.96. Extracted secret speech exhibits high quality and intelligibility with a character error rate (CER) of 0.021. In addition, the latent codes of secret speech mapped and randomly sampled Gaussian noise are very close to each other, effectively guaranteeing security. The framework achieves real-time performance with 1.28s generation latency for 2.22s speech segment embedding(achieving a real-time factor (RTF) of 0.577), which ensures efficient covert communication for latency-sensitive IoT applications.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101606"},"PeriodicalIF":6.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143847310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
THE-TAFL: Transforming Healthcare Edge with Transformer-based Adaptive Federated Learning and Learning Rate Optimization
Pub Date: 2025-04-17 | DOI: 10.1016/j.iot.2025.101605
Farhan Ullah , Nazeeruddin Mohammad , Leonardo Mostarda , Diletta Cacciagrano , Shamsher Ullah , Yue Zhao
The healthcare industry is becoming more vulnerable to privacy violations and cybercrime due to the pervasive dissemination and sensitivity of medical data. As confidentiality breaches increase across industries, advanced data security systems are needed to protect privacy, data integrity, and dependability. Decentralized healthcare networks face challenges in feature extraction during local training, hindering effective federated averaging and learning rate optimization and degrading data processing and model training efficiency. This paper introduces THE-TAFL, a novel approach for Transforming Healthcare Edge with Transformer-based Adaptive Federated Learning and learning rate optimization. We combine Transformer-based Adaptive Federated Learning (TAFL) with learning rate optimization to improve the privacy and security of healthcare information on edge devices. We use data augmentation approaches that generate robust and generalized input datasets for deep learning models, then train a Vision Transformer (ViT) model locally, generating Local Model Updates (LMUs) that enhance feature extraction and learning. A training optimization method improves model performance and stability by combining a loss function with weight decay for regularization, learning rate scheduling, and gradient clipping, ensuring effective training across decentralized clients in a Federated Learning (FL) framework. The FL server receives LMUs from many clients and aggregates them using adaptive federated averaging, weighting each client's LMU by its performance. This adaptive method ensures that high-performing clients contribute more to the Global Model Update (GMU). After aggregation, clients receive the GMU and continue training with the updated parameters, ensuring collaborative and dynamic learning. The proposed method delivers better performance on two standard datasets with varying numbers of clients.
{"title":"THE-TAFL: Transforming Healthcare Edge with Transformer-based Adaptive Federated Learning and Learning Rate Optimization","authors":"Farhan Ullah , Nazeeruddin Mohammad , Leonardo Mostarda , Diletta Cacciagrano , Shamsher Ullah , Yue Zhao","doi":"10.1016/j.iot.2025.101605","DOIUrl":"10.1016/j.iot.2025.101605","url":null,"abstract":"<div><div>The healthcare industry is becoming more vulnerable to privacy violations and cybercrime due to the pervasive dissemination and sensitivity of medical data. Advanced data security systems are needed to protect privacy, data integrity, and dependability as confidentiality breaches increase across industries. Decentralized healthcare networks face challenges in feature extraction during local training, hindering effective federated averaging and learning rate optimization, which affects data processing and model training efficiency. This paper introduces a novel approach of Transforming Healthcare Edge with Transformer-based Adaptive Federated Learning (THE-TAFL) and Learning Rate Optimization. In this paper, we combine Transformer-based Adaptive Federated Learning (TAFL) with learning rate optimization to improve the privacy and security of healthcare information on edge devices. We used data augmentation approaches that generate robust and generalized input datasets for deep learning models. Next, we use the Vision Transformer (ViT) model for local training, generating Local Model Weights (LMUs) that enhance feature extraction and learning. We designed a training optimization method that improves model performance and stability by combining a loss function with weight decay for regularization, learning rate scheduling, and gradient clipping. This ensures effective training across decentralized clients in a Federated Learning (FL) framework. The FL server receives LMUs from many clients and aggregates them. The aggregation procedure utilizes adaptive federated averaging to aggregate the LMUs based on the performance of each client. This adaptive method ensures that high-performing clients contribute more to the Global Model Update (GMU). Following aggregation, clients receive the GMU to continue training with the updated parameters, ensuring collaborative and dynamic learning. The proposed method provides better performance on two standard datasets using various numbers of clients.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101605"},"PeriodicalIF":6.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DI4IoT: A comprehensive framework for IoT device-type identification through network flow analysis
Pub Date: 2025-04-12 | DOI: 10.1016/j.iot.2025.101599
Saurav Kumar, Manoj Das, Sukumar Nandi, Diganta Goswami
The rapid growth of the Internet of Things (IoT) necessitates an effective device-type identification system to monitor resource-constrained devices and mitigate potential security risks. Most Machine Learning (ML) based approaches to IoT device-type identification use behavior-based, packet-based, or flow-based characteristics, or a combination of these. Packet- and behavior-based characteristics require analysis of individual packets, and behavior-based characteristics additionally require analysis of application-layer data (payloads), which may not be practical for encrypted traffic. Moreover, existing approaches do not handle mixed traffic (IoT and non-IoT) appropriately, suffer from frequent misclassification of closely related devices, and do not maintain performance when tested in different network environments. In contrast, flow-based characteristics require neither per-packet analysis nor payload inspection; however, existing flow-based approaches underperform because they consider a limited set of appropriate characteristics. To address these challenges, we propose DI4IoT, a two-stage flow-based device-type identification framework using ML. The first stage categorizes traffic into IoT and non-IoT, and the second stage identifies the device type from the categorized traffic. We create labeled flow-based characteristics, provide a methodology to select a minimal set of appropriate flow characteristics, and evaluate different ML algorithms to identify the most suitable model for the framework. The results demonstrate that our framework outperforms state-of-the-art flow-based methods by over 10%. Furthermore, we evaluate and validate the generalizability gains on complex network traffic against not only flow-based but also combined feature-type approaches.
{"title":"DI4IoT: A comprehensive framework for IoT device-type identification through network flow analysis","authors":"Saurav Kumar, Manoj Das, Sukumar Nandi, Diganta Goswami","doi":"10.1016/j.iot.2025.101599","DOIUrl":"10.1016/j.iot.2025.101599","url":null,"abstract":"<div><div>The rapid growth of the Internet of Things (IoT) necessitates an effective Device-Type Identification System to monitor resource-constrained devices and mitigate potential security risks. Most Machine Learning (ML) based approaches for IoT Device-Type Identification utilize behavior-based, packet-based, flow-based characteristics, or a combination of these. Packet and behavior-based characteristics require analysis of individual packets. Furthermore, behavior-based characteristics need the analysis of application layer data (payloads), which may not be practical in case of encrypted traffic. Moreover, the existing approaches do not handle the mixed traffic (IoT and non-IoT) in an appropriate manner, suffer from frequent misclassification of closely related devices, and do not maintain performance when tested in different network environments. In contrast, flow-based characteristics neither require per-packet analysis nor the inspection of payloads. However, the existing flow-based approaches underperform as they consider a limited set of appropriate characteristics. To address these challenges, we propose DI4IoT, a two-stage flow-based Device-Type Identification framework using ML. The first stage categorizes the traffic into IoT and non-IoT, and the second stage identifies the device type from the categorized traffic. We create labeled flow-based characteristics and provide a methodology to select a minimal set of appropriate flow characteristics. We evaluate different ML algorithms to identify the suitable model for our proposed framework. The results demonstrate that our framework outperforms the state-of-the-art flow-based methods by over 10%. Furthermore, we evaluate and validate the performance gains in terms of Generalizability with complex network traffic compared to not only flow-based but also combined feature-type approaches.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"31 ","pages":"Article 101599"},"PeriodicalIF":6.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143834383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intelligent plant watering decision support system for drought monitoring & analysis based on AIoT and an LSTM time-series framework
Pub Date: 2025-04-12 | DOI: 10.1016/j.iot.2025.101617
Yao-Cheng Lin , Tin-Yu Wu , Chu-Fu Wang , Jheng-Yang Ou , Te-Chang Hsu , Shiyang Lyu , Ling Cheng , Yu-Xiu Lin , David Taniar
Climate change has increased the severity of droughts, threatening global agricultural productivity. Information technology has proven its great potential for supporting precision agriculture, giving crops the ability to withstand environmental threats. Rice, a staple food crop in tropical and subtropical regions, is particularly sensitive to water stress during its critical growth stages. This study therefore focused on Tainung No. 67 rice, known for its drought resistance, to develop an intelligent AIoT-based plant watering decision support system. The proposed system aims to optimise water use and enhance agricultural resilience by integrating real-time monitoring, AI-driven analysis, and automated irrigation. Data were collected using hyperspectral imaging, point cloud analysis, and physiological indicators (measured with the LI-600 device), providing a comprehensive time-series dataset for model training. Principal component analysis (PCA) was used to reduce data dimensionality, and an LSTM-based AI framework was used to predict water-stress severity. Experimental results showed high accuracy across all datasets, with the AI model achieving 97 % accuracy on point cloud data and 98 % accuracy on hyperspectral imagery. Scenarios with mixed missing data further validated the practicality and robustness of the system. This research highlights the potential of integrating IoT, AI, and advanced sensing technologies to address drought-related challenges in agriculture. The system not only optimises irrigation strategies but also contributes to sustainable farming practices through the preservation of water resources.
{"title":"An intelligent plant watering decision support system for drought monitoring & analysis based on AIoT and an LSTM time-series framework","authors":"Yao-Cheng Lin , Tin-Yu Wu , Chu-Fu Wang , Jheng-Yang Ou , Te-Chang Hsu , Shiyang Lyu , Ling Cheng , Yu-Xiu Lin , David Taniar","doi":"10.1016/j.iot.2025.101617","DOIUrl":"10.1016/j.iot.2025.101617","url":null,"abstract":"<div><div>Climate change has increased the severity of droughts, threatening global agricultural productivity. The implementation of information technology for enhancing smart agriculture has proven its great potential for supporting precision agriculture that can provide crops with the ability to defend themselves against environmental threats. Rice, which is a staple food crop in tropical and subtropical regions, is particularly sensitive to water stress during its critical growth stages. This study therefore focused on Tainung No. 67 rice, known for its drought resistance, to develop an intelligent AIoT-based plant watering decision support system. The proposed system aims to optimise water use and enhance agricultural resilience by integrating real-time monitoring, AI-driven analysis, and automated irrigation. Data were collected using hyperspectral imaging, point cloud analysis, and physiological indicators (measured by the LI-600 device), providing a comprehensive time-series dataset for model training. Principal component analysis (PCA) was used to reduce data dimensionality, and an LSTM-based AI framework was used to predict water stress severity. Experimental results showed high accuracy for all datasets, with the AI model achieving 97 % accuracy for point cloud data and 98 % accuracy for hyperspectral imagery. Scenarios with mixed missing data further validated the practicality and robustness of the system. This research highlights the potential to address drought-related challenges in agriculture through the integration of IoT, AI and advanced sensing technologies. The system not only optimises irrigation strategies but also contributes to sustainable farming practices through the preservation of water resources.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101617"},"PeriodicalIF":6.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143868931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Invisible eyes: Real-time activity detection through encrypted Wi-Fi traffic without machine learning
Pub Date: 2025-04-11 | DOI: 10.1016/j.iot.2025.101602
Muhammad Bilal Rasool , Uzair Muzamil Shah , Mohammad Imran , Daud Mustafa Minhas , Georg Frey
Wi-Fi camera-based home monitoring systems are increasingly popular for improving security and real-time observation. However, reliance on Wi-Fi introduces privacy vulnerabilities, as sensitive activities within monitored areas can be inferred from encrypted traffic. This paper presents a lightweight, non-ML attack model that analyzes Wi-Fi traffic metadata—such as packet size variations, serial number sequences, and transmission timings—to detect live streaming, motion-detection events, and person-detection events. Unlike machine-learning-based approaches, our method requires no training data or feature extraction, making it computationally efficient and easily scalable. Empirical testing at varying distances (10 m, 20 m, and 30 m) and under different environmental conditions shows accuracy rates of up to 90% at close range and 72% at greater distances, demonstrating its robustness. Compared to existing ML-based techniques, which require extensive retraining for different camera manufacturers, our approach provides a universal and adaptable attack model. This research underscores significant privacy risks in Wi-Fi surveillance systems and emphasizes the urgent need for stronger encryption mechanisms and obfuscation techniques to mitigate unauthorized activity inference.
{"title":"Invisible eyes: Real-time activity detection through encrypted Wi-Fi traffic without machine learning","authors":"Muhammad Bilal Rasool , Uzair Muzamil Shah , Mohammad Imran , Daud Mustafa Minhas , Georg Frey","doi":"10.1016/j.iot.2025.101602","DOIUrl":"10.1016/j.iot.2025.101602","url":null,"abstract":"<div><div>Wi-Fi camera-based home monitoring systems are increasingly popular for improving security and real-time observation. However, reliance on Wi-Fi introduces privacy vulnerabilities, as sensitive activities within monitored areas can be inferred from encrypted traffic. This paper presents a lightweight, non-ML attack model that analyzes Wi-Fi traffic metadata—such as packet size variations, serial number sequences, and transmission timings—to detect live streaming, motion detection, and person detection. Unlike machine learning-based approaches, our method requires no training data or feature extraction, making it computationally efficient and easily scalable. Empirical testing at varying distances (10 m, 20 m, and 30 m) and under different environmental conditions shows accuracy rates of up to 90% at close range and 72% at greater distances, demonstrating its robustness. Compared to existing ML-based techniques, which require extensive retraining for different camera manufacturers, our approach provides a universal and adaptable attack model. This research underscores significant privacy risks in Wi-Fi surveillance systems and emphasizes the urgent need for stronger encryption mechanisms and obfuscation techniques to mitigate unauthorized activity inference.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"31 ","pages":"Article 101602"},"PeriodicalIF":6.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143838497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FogScheduler: A resource optimization framework for energy-efficient computing in fog environments
Pub Date: 2025-04-10 | DOI: 10.1016/j.iot.2025.101609
Eyhab Al-Masri, Sri Vibhu Paruchuri
The rapid growth of Internet of Things (IoT) devices has created a pressing demand for fog computing, which offers an effective way around the inherent constraints of traditional cloud computing. Efficient resource management in fog environments remains challenging due to device heterogeneity, dynamic workloads, and conflicting performance objectives. This paper introduces FogScheduler, an innovative resource allocation algorithm that optimizes performance and energy efficiency in IoT-fog ecosystems, using the TOPSIS method to rank resources on attributes such as MIPS, Thermal Design Power (TDP), memory bandwidth, and network latency. Experiments highlight FogScheduler's notable achievements, including a 46.1 % best-case reduction in energy consumption compared to the Greedy Algorithm (GA) and a 45.6 % reduction in makespan compared to the First-Fit Algorithm (FFA). On average, FogScheduler achieves a 27 % reduction in energy consumption compared to FFA, demonstrating its consistent ability to optimize resource allocation. Even in worst-case scenarios, FogScheduler outperforms traditional algorithms, underscoring its robustness across varying resource contention levels. Our experimental results demonstrate that FogScheduler is a highly effective solution for energy-aware, performance-optimized resource management, with significant potential for IoT-fog-cloud ecosystems.
{"title":"FogScheduler: A resource optimization framework for energy-efficient computing in fog environments","authors":"Eyhab Al-Masri, Sri Vibhu Paruchuri","doi":"10.1016/j.iot.2025.101609","DOIUrl":"10.1016/j.iot.2025.101609","url":null,"abstract":"<div><div>The rapid growth of Internet of Things (IoT) devices has created a pressing demand for fog computing, offering an effective alternative to the inherent constraints imposed by traditional cloud computing. Efficient resource management in fog environments remains challenging due to device heterogeneity, dynamic workloads, and conflicting performance objectives. This paper introduces FogScheduler, an innovative resource allocation algorithm that optimizes performance and energy efficiency in IoT-fog ecosystems using the TOPSIS method to rank resources based on attributes like MIPS, Thermal Design Power (TDP), memory bandwidth, and network latency. Experiments highlight FogScheduler's notable achievements, including a 46.1 % reduction in energy consumption in the best case compared to the Greedy Algorithm (GA) and a 45.6 % reduction in makespan compared to the First-Fit Algorithm (FFA). On average, FogScheduler achieves a 27 % reduction in energy consumption compared to FFA, demonstrating its consistent ability to optimize resource allocation. Even in worst-case scenarios, FogScheduler outperforms traditional algorithms, underscoring its robustness across varying resource contention levels. Results from our experiments demonstrate that FogScheduler is a highly effective solution for energy-aware and performance-optimized resource management, offering significant potential for IoT-fog-cloud ecosystems.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"32 ","pages":"Article 101609"},"PeriodicalIF":6.0,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Belief-MobileNet1D: A novel deep learning approach for anomaly detection in industrial big data
Pub Date: 2025-04-08 | DOI: 10.1016/j.iot.2025.101593
Tzu-Chia Chen
Early detection of faults or unusual behavior can reduce the risk of equipment failure, improve performance, and increase safety. Anomaly detection in industrial big data involves identifying deviations from normal patterns in large-scale datasets. By identifying anomalous behaviors or outliers, it helps prevent equipment failures, optimize maintenance schedules, and raise overall operational efficiency in industrial settings. This investigation applies a refined deep-learning procedure for anomaly detection in industrial big data. The process comprises three steps: pre-processing, feature selection, and anomaly detection. The input data is first fed into a MapReduce framework, where it is partitioned and pre-processed. Missing-data imputation and the Yeo-Johnson transformation are then applied to remove noise from the data. The pre-processed data then passes through a feature selection phase using the Serial Exponential Lotus Effect Optimization Algorithm (SELOA), a new algorithm created by combining the Lotus Effect Optimization Algorithm (LOA) with the Exponential Weighted Moving Average (EWMA). Finally, anomaly detection is performed on the selected features using Deep Belief-MobileNet1D, which combines MobileNet1D with a Deep Belief Network (DBN). With a recall of 96.2 %, precision of 92.8 %, F1 score of 94.5 %, and accuracy of 95.9 %, results show that the proposed strategy surpasses standard approaches. These findings demonstrate the Deep Belief-MobileNet1D model's ability to detect anomalies in industrial big data.
{"title":"Deep Belief-MobileNet1D: A novel deep learning approach for anomaly detection in industrial big data","authors":"Tzu-Chia Chen","doi":"10.1016/j.iot.2025.101593","DOIUrl":"10.1016/j.iot.2025.101593","url":null,"abstract":"<div><div>Early fault or unusual behavior detection can reduce the risk of equipment failure improve performance and increase safety. Anomaly detection in industrial big data involves identifying deviations from normal patterns in large-scale datasets. This method assists in preventing equipment failures optimizing maintenance schedules and raising overall operational efficiency in industrial settings by identifying anomalous behaviors or outliers. Through the utilization of deep learning procedures, this investigation endeavours to apply are fined procedure for anomaly detection in industrial big data. Pre-processing, feature selection and Anomaly detection are three steps of a process that are used. The input data is first fed into MapReduce framework where it is divided and pre-processed. Imputation of missing data and Yeo-Jhonson transformation are then applied to eliminate noise from data. After pre-processed data is generated, it is put through a feature selection phase using Serial Exponential Lotus Effect Optimization Algorithm (SELOA). The algorithm is created newly by combining Lotus Effect Optimization Algorithm (LOA) with Exponential Weighted Moving Average (EWMA). Finally, anomaly detection is done using the features that are selected by means of Deep Belief-MobileNet1D, which combines MobileNet1D and Deep Belief Network (DBN). With a recall of 96.2 %, precision of 92.8 %, F1 score of 94.5 % and accuracy of 95.9 %, results show that the proposed strategy surpasses standard approaches. These findings demonstrate Deep Belief-MobileNet1D model's ability to detect anomalies in industrial big data.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"31 ","pages":"Article 101593"},"PeriodicalIF":6.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}