Stability analysis for fog computing via Lyapunov function
Pub Date : 2026-01-12 DOI: 10.1016/j.iot.2026.101871
Shih-Yu Chang, Mayank Kapadia, Peishun Yan, Wei Duan
Fog computing is an emerging paradigm for the Internet of Things (IoT), where system stability directly impacts reliability, performance, and user experience. Existing stability models often ignore application-level service completion or fail to capture dynamic interactions among sensors and fog nodes. This study addresses these gaps by establishing necessary conditions for fog nodes to process application services within finite time. First, we introduce a fluid model based on a partial differential equation (PDE) to quantify the dynamics of service counts for each sensor when fog nodes are shared. Second, we design a Lyapunov function derived from the PDE solution to analyze system stability and convergence. Third, we apply this Lyapunov function to derive conditions that guarantee timely service completion. Finally, numerical experiments validate the fluid model, investigate PDE solution behavior, and assess the convergence speed of the Lyapunov function under various system parameters. These results provide actionable insights for ensuring stability and efficiency in fog computing systems for IoT applications.
A domain-specific language and architecture for detecting process activities from sensor streams in IoT
Pub Date : 2026-01-07 DOI: 10.1016/j.iot.2025.101870
Ronny Seiger, Daniel Locher, Marco Kaufmann, Aaron F. Kurz
Modern Internet of Things (IoT) systems are equipped with a large number of sensors that provide real-time data about the current operations of their components, data that are crucial for the systems' internal control systems and processes. However, these data are often too fine-grained to derive useful insights into the execution of the larger processes an IoT system might be part of. Process mining has developed advanced approaches for the analysis of business processes that may also be used in the context of IoT. Bringing process mining to IoT requires an event abstraction step to lift the low-level sensor data to the business process level. In this work, we aim to enable domain experts to perform this step using a newly developed domain-specific language (DSL) called Radiant. Radiant supports the specification of patterns within the sensor data that indicate the execution of higher-level process activities. These patterns are translated into complex event processing (CEP) applications used to detect activity executions at runtime. We propose a corresponding software architecture that enables online event abstraction from IoT sensor streams using the CEP applications. We evaluate these applications by monitoring activity executions in smart manufacturing and smart healthcare. These evaluations inform the domain expert about the quality of activity detections based on the specified patterns and about the potential for improvement via additional or modified patterns and sensors.
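As a rough illustration of the pattern-to-CEP idea, the sketch below compiles a made-up activity pattern into a tiny stream matcher. Radiant's actual syntax and the CEP applications it generates are not shown in the abstract, so the Pattern structure, field names, and matching rule here are invented for illustration.

```python
# Illustrative only: a hypothetical pattern specification and a minimal
# stream matcher, sketching how low-level sensor readings can be lifted
# to activity-level events at runtime.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Pattern:
    """Hypothetical pattern: an activity starts when `sensor` exceeds
    `threshold` and ends once it stays below it for `min_gap` events."""
    activity: str
    sensor: str
    threshold: float
    min_gap: int

def detect(events: Iterable[dict], p: Pattern) -> Iterator[tuple[str, int, int]]:
    """Yield (activity, start_index, end_index) for each detected execution."""
    start, gap = None, 0
    for i, e in enumerate(events):
        if e["sensor"] != p.sensor:
            continue
        if e["value"] > p.threshold:
            if start is None:
                start = i          # low-level readings lifted to an activity start
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= p.min_gap:   # activity considered finished
                yield (p.activity, start, i)
                start, gap = None, 0

# Usage: spindle power above 1.5 kW indicates a "milling" activity.
stream = [{"sensor": "spindle_power", "value": v}
          for v in [0.2, 1.8, 2.1, 1.9, 0.3, 0.1, 0.2]]
print(list(detect(stream, Pattern("milling", "spindle_power", 1.5, 2))))
```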
{"title":"A domain-specific language and architecture for detecting process activities from sensor streams in IoT","authors":"Ronny Seiger, Daniel Locher, Marco Kaufmann, Aaron F. Kurz","doi":"10.1016/j.iot.2025.101870","DOIUrl":"10.1016/j.iot.2025.101870","url":null,"abstract":"<div><div>Modern Internet of Things (IoT) systems are equipped with a large quantity of sensors providing real-time data about the current operations of their components, which is crucial for the systems’ internal control systems and processes. However, these data are often too fine-grained to derive useful insights into the execution of the larger processes an IoT system might be part of. Process mining has developed advanced approaches for the analysis of business processes that may also be used in the context of IoT. Bringing process mining to IoT requires an event abstraction step to lift the low-level sensor data to the business process level. In this work, we aim to enable domain experts to perform this step using a newly developed domain-specific language (DSL) called Radiant. Radiant supports the specification of patterns within the sensor data that indicate the execution of higher level process activities. These patterns are translated to complex event processing (CEP) applications to be used for detecting activity executions at runtime. We propose a corresponding software architecture that enables online event abstraction from IoT sensor streams using the CEP applications. We evaluate these applications to monitor activity executions in smart manufacturing and smart healthcare. These evaluations are useful to inform the domain expert about the quality of activity detections based on the specified patterns and potential for improvement via additional or modified patterns and sensors.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101870"},"PeriodicalIF":7.6,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of microservices-based IoT systems: deployment challenges, industry practices, and performance insights
Pub Date : 2026-01-07 DOI: 10.1016/j.iot.2025.101867
Yahia El Fellah, Jean Baptiste Minani, Naouel Moha, Julien Gascon-Samson, Yann-Gaël Guéhéneuc
As the adoption of microservices in Internet of Things (IoT) systems grows, deploying them on the Edge remains a significant challenge for practitioners. While Edge Computing offers improved latency and resource efficiency by processing data near the source, it also introduces complexity in managing microservices. Despite increasing academic interest, few comprehensive studies have investigated the specific challenges and effective software engineering (SE) practices for deploying microservices-based IoT systems on the Edge. Therefore, we conducted a multi-method study to bridge this gap. We used three methods: (1) a systematic literature review (SLR) to identify known challenges and approaches, (2) a gray literature review (GLR) to extract SE practices used in the field, and (3) an empirical evaluation using two versions of a case study, one with and one without selected SE practices. The findings show that (1) the most reported challenges relate to resource utilization and performance optimization, (2) containerized microservices, API gateways, and database-per-service are among the most commonly recommended practices, and (3) implementing these practices led to a 132% throughput improvement, 49% reduction in latency, and memory savings of up to 13% in Edge-based IoT systems. However, increased architectural complexity also led to higher CPU usage. This study offers a catalog of best practices and empirical evidence to support IoT developers aiming to optimize microservices-based deployments on the Edge, particularly in resource-constrained environments.
{"title":"Analysis of microservices-based IoT systems: deployment challenges, industry practices, and performance insights","authors":"Yahia El Fellah , Jean Baptiste Minani , Naouel Moha , Julien Gascon-Samson , Yann-Gaël Guéhéneuc","doi":"10.1016/j.iot.2025.101867","DOIUrl":"10.1016/j.iot.2025.101867","url":null,"abstract":"<div><div>As the adoption of microservices in Internet of Things (IoT) systems grows, deploying them on the Edge remains a significant challenge for practitioners. While Edge Computing offers improved latency and resource efficiency by processing data near the source, it also introduces complexity in managing microservices. Despite increasing academic interest, few comprehensive studies have investigated the specific challenges and effective software engineering (SE) practices for deploying microservices-based IoT systems on the Edge. Therefore, we conducted a multi-method study to bridge this gap. We used three methods: (1) a systematic literature review (SLR) to identify known challenges and approaches, (2) a gray literature review (GLR) to extract SE practices used in the field, and (3) an empirical evaluation using two versions of a case study, one with and one without selected SE practices. The findings show that (1) the most reported challenges relate to resource utilization and performance optimization, (2) containerized microservices, API gateways, and database-per-service are among the most commonly recommended practices, and (3) implementing these practices led to a 132% throughput improvement, 49% reduction in latency, and memory savings of up to 13% in Edge-based IoT systems. However, increased architectural complexity also led to higher CPU usage. This study offers a catalog of best practices and empirical evidence to support IoT developers aiming to optimize microservices-based deployments on the Edge, particularly in resource-constrained environments.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101867"},"PeriodicalIF":7.6,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI for IoMT security: a comprehensive survey of intrusion detection and system architectures
Pub Date : 2026-01-06 DOI: 10.1016/j.iot.2025.101869
Mohammed Yacoubi, Omar Moussaoui, Cyril Drocourt
Recent advances in the Internet of Medical Things (IoMT) have significantly improved data processing and patient care within Smart Healthcare systems. However, these developments have also expanded the attack surface for cyber threats targeting sensitive medical infrastructures. To address these challenges, a variety of security approaches, both traditional and Artificial Intelligence (AI)-based, have been proposed to strengthen the resilience of IoMT environments. In particular, Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated strong capabilities in detecting and mitigating abnormal behaviors and malicious activities. This paper provides a comprehensive survey of recent AI-driven methods applied to IoMT security, with a particular focus on intrusion detection systems (IDS), the availability and characteristics of public datasets, and architectural considerations for deploying security solutions across Cloud, Fog, and Edge computing layers. The paper also discusses legal and ethical concerns related to data protection in healthcare contexts. Finally, the study outlines open challenges and future research directions for developing robust, adaptive, and trustworthy security frameworks in the IoMT ecosystem.
{"title":"AI for IoMT security: a comprehensive survey of intrusion detection and system architectures","authors":"Mohammed Yacoubi , Omar Moussaoui , Cyril Drocourt","doi":"10.1016/j.iot.2025.101869","DOIUrl":"10.1016/j.iot.2025.101869","url":null,"abstract":"<div><div>Recent advances in the Internet of Medical Things (IoMT) have significantly improved data processing and patient care within Smart Healthcare systems. However, these developments have also expanded the surface of potential cyber threats targeting sensitive medical infrastructures. To address these challenges, a variety of security approaches both traditional and Artificial Intelligence (AI)-based have been proposed to strengthen the resilience of IoMT environments. In particular, Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated strong capabilities in detecting and mitigating abnormal behaviors and malicious activities. This paper provides a comprehensive survey of recent AI-driven methods applied to IoMT security, with a particular focus on intrusion detection systems (IDS), the availability and characteristics of public datasets, and architectural considerations for deploying security solutions across Cloud, Fog, and Edge computing layers. The paper also discusses legal and ethical concerns related to data protection in healthcare contexts. Finally, the study outlines open challenges and future research directions for developing robust, adaptive, and trustworthy security frameworks in the IoMT ecosystem.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101869"},"PeriodicalIF":7.6,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HiFEL-OCKT: Hierarchical federated edge learning with objective congruence and multi-level knowledge transfer for IoT ecosystems
Pub Date : 2026-01-03 DOI: 10.1016/j.iot.2025.101868
Ahmed-Rafik Baahmed, Jean-François Dollinger, Mohamed El Amine Brahmia, Mourad Zghal
The explosive growth of Internet of Things (IoT) data and the demand for real-time decisions necessitate edge intelligence to overcome the latency and bandwidth limitations of cloud-only processing. Real-world IoT ecosystems are characterized by high heterogeneity, arising from a wide variety of devices, sensors, environments, data, tasks, and resources; this heterogeneity poses significant communication and computation efficiency challenges, scalability issues, and privacy concerns for edge intelligence. We propose HiFEL-OCKT, a novel hierarchical federated edge learning methodology that addresses the realistic high heterogeneity of IoT ecosystems while enabling efficient edge intelligence. The key novelty of HiFEL-OCKT is the efficient and scalable deployment of temporal intelligence at the edge: it exploits the valuable knowledge flowing at this level, which we define as the learning-objective evolution, to ensure robust edge personalization through objective-congruent collaboration and multi-level knowledge transfer between IoT devices. Through extensive experiments on multiple IoT domains, including smart buildings and industrial IoT with heterogeneous real-world datasets, HiFEL-OCKT demonstrates a novel ability to support collaboration among highly heterogeneous IoT devices from different ecosystem settings. It delivers superior performance and efficiency compared to state-of-the-art approaches, improving edge knowledge personalization by up to 87.57% while achieving local-training speedups of up to 4.38×.
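For orientation, here is a minimal hierarchical federated averaging sketch in NumPy, showing only the generic device-to-edge-to-cloud aggregation structure; HiFEL-OCKT's objective-congruent grouping and multi-level knowledge transfer mechanisms are not reproduced here.

```python
# A minimal hierarchical FedAvg sketch (devices -> edge -> cloud) in NumPy.
# This illustrates the generic two-level aggregation structure only.
import numpy as np

def fedavg(models: list[np.ndarray], weights: list[int]) -> np.ndarray:
    """Sample-size weighted average of model parameter vectors."""
    total = sum(weights)
    return sum(w / total * m for m, w in zip(models, weights))

def hierarchical_round(edges: dict[str, list[tuple[np.ndarray, int]]]) -> np.ndarray:
    """One round: aggregate per edge node, then aggregate edge models globally."""
    edge_models, edge_sizes = [], []
    for edge_id, devices in edges.items():
        models, sizes = zip(*devices)
        edge_models.append(fedavg(list(models), list(sizes)))  # edge-level FedAvg
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)                     # cloud-level FedAvg

# Toy usage: two edge nodes, each with two devices holding 4-parameter models.
rng = np.random.default_rng(0)
edges = {f"edge{e}": [(rng.normal(size=4), 50 + 10 * d) for d in range(2)]
         for e in range(2)}
print(hierarchical_round(edges))
```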
{"title":"HiFEL-OCKT: Hierarchical federated edge learning with objective congruence and multi-level knowledge transfer for IoT ecosystems","authors":"Ahmed-Rafik Baahmed, Jean-François Dollinger, Mohamed El Amine Brahmia, Mourad Zghal","doi":"10.1016/j.iot.2025.101868","DOIUrl":"10.1016/j.iot.2025.101868","url":null,"abstract":"<div><div>The explosive growth of Internet of Things (IoT) data and the demand for real-time decisions necessitate edge intelligence to overcome the latency and bandwidth limitations of cloud-only processing. Real-world IoT ecosystems are characterized by their high heterogeneity, which results from a wide variety of devices, sensors, environments, data, tasks, and resources, posing significant communication and computation efficiency challenges, scalability issues, and privacy concerns for edge intelligence. We propose HiFEL-OCKT, a novel hierarchical federated edge learning methodology for addressing the realistic high heterogeneity of IoT ecosystems, while enabling efficient edge intelligence. The key novelty of our proposed HiFEL-OCKT methodology is the efficient and scalable deployment of temporal intelligence at the edge by exploiting the valuable knowledge flowing at this level, which we define with the learning objective evolution, to ensure robust edge personalization through objective congruent collaboration and multi-level knowledge transfer between IoT devices. Through extensive experiments on multiple IoT domains, including smart buildings and industrial IoT with heterogeneous real-world datasets, our HiFEL-OCKT approach uncovered the novel ability in collaborating various highly heterogeneous IoT devices from different ecosystem settings. Our approach demonstrates superior performance and efficiency compared to the state-of-the-art approaches, with an improvement rate as high as 87.57 % in the edge knowledge personalization, while achieving significant speedups as high as 4.38 × in local training.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101868"},"PeriodicalIF":7.6,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An enhanced group matching method with transformed fingerprints, similarity filtering, and adaptive selection for indoor localization
Pub Date : 2026-01-01 DOI: 10.1016/j.iot.2025.101866
Huang Lin, Shuo Li, Wei Peng, Arthur Peng
WiFi fingerprinting-based indoor localization faces persistent challenges from signal instability and environmental interference. To address these issues, this study introduces an enhanced Group Matching Method with Transformed Fingerprints, Similarity Filtering, and Adaptive Selection (GMM-TSFA) that improves localization accuracy, interpretability, and robustness. GMM-TSFA integrates three core mechanisms: instance-wise transformation to normalize signal variations and strengthen similarity metrics; similarity filtering to dynamically refine the candidate pool based on signal relevance; and adaptive selection to flexibly determine the most informative reference points. Together, these components improve both pattern recognition and spatial precision. Experiments on four benchmark datasets demonstrate the effectiveness of the proposed method, with a mean localization error (MLE) of 7.30 m on UJIIndoorLoc and 77.86% of errors under 10 m, consistently outperforming baseline methods, including the original GMM. Unlike deep learning-based models, GMM-TSFA offers a transparent and adaptable framework that supports explainable decision-making. These advantages make it a practical and scalable solution for real-world indoor localization systems deployed in complex environments.
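A toy sketch of the three named mechanisms on a WiFi RSS fingerprint database follows; the concrete transform, similarity metric, and selection rule used by GMM-TSFA are assumptions here, not the authors' definitions.

```python
# Illustrative sketch of instance-wise transformation, similarity filtering,
# and adaptive selection for fingerprint localization, using plain NumPy.
import numpy as np

def transform(rss: np.ndarray) -> np.ndarray:
    """Instance-wise transformation: zero-mean scaling of one RSS reading
    to damp device- and time-dependent signal offsets (assumed form)."""
    return (rss - rss.mean()) / (rss.std() + 1e-9)

def localize(query: np.ndarray, fp_rss: np.ndarray, fp_xy: np.ndarray,
             sim_floor: float = 0.5) -> np.ndarray:
    q = transform(query)
    refs = np.apply_along_axis(transform, 1, fp_rss)
    # Cosine similarity between the query and every reference fingerprint.
    sims = refs @ q / (np.linalg.norm(refs, axis=1) * np.linalg.norm(q) + 1e-9)
    # Similarity filtering: drop weakly related reference points.
    keep = sims >= sim_floor * sims.max()
    # Adaptive selection: the candidate pool size now depends on the query.
    w = sims[keep] / sims[keep].sum()
    return w @ fp_xy[keep]        # similarity-weighted position estimate

# Toy usage: 3 access points, 4 reference points along a corridor.
fp_rss = np.array([[-40., -70, -80], [-50, -60, -80],
                   [-70, -50, -60], [-80, -70, -45]])
fp_xy = np.array([[0., 0], [2, 0], [4, 0], [6, 0]])
print(localize(np.array([-48., -62, -79]), fp_rss, fp_xy))
```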
{"title":"An enhanced group matching method with transformed fingerprints, similarity filtering, and adaptive selection for indoor localization","authors":"Huang Lin , Shuo Li , Wei Peng , Arthur Peng","doi":"10.1016/j.iot.2025.101866","DOIUrl":"10.1016/j.iot.2025.101866","url":null,"abstract":"<div><div>WiFi fingerprinting-based indoor localization faces persistent challenges from signal instability and environmental interference. To address these issues, this study introduces an enhanced Group Matching Method with Transformed Fingerprints, Similarity Filtering, and Adaptive Selection, (GMM-TSFA), that improves localization accuracy, interpretability, and robustness. GMM-TSFA integrates three core mechanisms: instance-wise transformation to normalize signal variations and strengthen similarity metrics; similarity filtering to dynamically refine the candidate pool based on signal relevance; and adaptive selection to flexibly determine the most informative reference points. Together, these components improve both pattern recognition and spatial precision. Experiments on four benchmark datasets demonstrate the effectiveness of the proposed method, with a mean localization error (MLE) of 7.30 m on UJIIndoorLoc and 77.86 % of errors under 10 m, consistently outperforming baseline methods including the original GMM. Unlike deep learning-based models, GMM-TSFA offers a transparent and adaptable framework that supports explainable decision-making. These advantages make it a practical and scalable solution for real-world indoor localization systems deployed in complex environments.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101866"},"PeriodicalIF":7.6,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time scalable UAV condition monitoring framework with hardware-level acceleration for IoT applications
Pub Date : 2025-12-30 DOI: 10.1016/j.iot.2025.101865
Antonios Ntib, Dimitrios Michael Manias, Abdallah Shami
Unmanned Aerial Vehicles (UAVs) are a critical component of emerging Smart Cities, supporting applications such as emergency response, transportation, environmental monitoring, and infrastructure inspection. Ensuring their reliability requires condition monitoring frameworks capable of real-time defect detection despite the UAVs' resource-constrained nature. This work presents a real-time, scalable UAV condition monitoring framework that efficiently distributes computation between core servers and edge nodes. The novelty of this work lies in two main contributions: (i) the design and realization of a deployable framework tailored for real-time UAV monitoring with offloading capabilities to both edge and core resources, and (ii) the integration of hardware-level acceleration strategies, including OpenMP-based parallelization and AVX2 SIMD vectorization, to substantially enhance computational efficiency, scalability, and real-time feasibility. Together, these contributions position the framework as a practical solution ready for large-scale UAV swarm deployments. The overall improvements include significant reductions in processing time and enhanced resource utilization while maintaining predictive performance. A comparative evaluation of three implementations (a baseline state-of-the-art Python framework, an intermediate C++/Cython translation, and the proposed fully optimized OpenMP/AVX2-based framework) demonstrates the framework's readiness for integration into critical UAV-enabled IoT systems.
{"title":"Real-time scalable UAV condition monitoring framework with hardware-level acceleration for IoT applications","authors":"Antonios Ntib , Dimitrios Michael Manias , Abdallah Shami","doi":"10.1016/j.iot.2025.101865","DOIUrl":"10.1016/j.iot.2025.101865","url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs) are a critical component of emerging Smart Cities, supporting applications such as emergency response, transportation, environmental monitoring, and infrastructure inspection. Ensuring their reliability requires condition monitoring frameworks capable of real-time defect detection despite the UAVs’ resource-constrained nature. This work presents a real-time, scalable UAV condition monitoring framework that efficiently distributes computation between core servers and edge nodes. The novelty of this work lies in two main contributions: (i) the design and realization of a deployable framework tailored for real-time UAV monitoring with offloading capabilities to both edge and core resources, and (ii) the integration of hardware-level acceleration strategies, including OpenMP-based parallelization and AVX2 SIMD vectorization, to substantially enhance computational efficiency, scalability, and real-time feasibility. Together, these contributions position the framework as a practical solution ready for large-scale UAV swarm deployments. The overall improvements include significant reductions in processing time and enhanced resource utilization while maintaining predictive performance. A comparative evaluation across three frameworks, a baseline state-of-the-art Python framework, an intermediate C++/Cython translation, and the proposed fully optimized OpenMP/AVX2-based framework, demonstrates the framework’s readiness for integration into critical UAV-enabled IoT systems.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101865"},"PeriodicalIF":7.6,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An edge-intelligent three-tier framework for real-time forest fire detection, integrating WSNs, WMSNs, and UAVs
Pub Date : 2025-12-29 DOI: 10.1016/j.iot.2025.101861
Hamzeh Abu Ali, Enver Ever, Burak Kizilkaya, Muhammad Toaha Raza Khan, Masood Ur Rehman, Shuja Ansari, Muhammad Ali Imran, Adnan Yazici
Forest fires are becoming increasingly prevalent, threatening ecosystems, economies, and public safety while creating an urgent demand for rapid and reliable detection systems. Conventional approaches such as watchtowers, manual patrols, and satellite imaging suffer from limited coverage, delays, and inadequate precision. To address these challenges, we propose a three-tier, edge-centric framework that integrates wireless sensor networks (WSNs), wireless multimedia sensor networks (WMSNs), unmanned aerial vehicles (UAVs), and lightweight machine learning (ML) and deep learning (DL) models for efficient detection. In the first tier, scalar sensors provide early hazard identification; in the second, smart sensors execute a lightweight ML model for intermediate verification, achieving a 94% F1-score with a minimal feature set; and in the third, UAVs equipped with sensors, cameras, and a compact convolutional neural network (CNN) deliver final confirmation. The CNN achieves state-of-the-art results with a 100% F1-score on the FireMan-UAV-RGBT dataset and 99.5% on UAV-FFDB while remaining compact (1.6 MB) and efficient (157 ms inference on a Raspberry Pi 5), enabling real-time edge deployment. Simulations show reduced end-to-end delay (813.59 ms) compared to WSN-only (865.84 ms) and WMSN (1066.18 ms) baselines, improved throughput (7.05 kbps vs. 3.80 kbps and 3.06 kbps), and a 100% delivery ratio. Real-world WSN testbed experiments further validate the framework, achieving a 97% delivery ratio, 144.39 ms latency (vs. 258.37 ms in simulations), and energy consumption of 0.0559 J/s (closely matching 0.0442 J/s in simulations). These results collectively demonstrate the practicality and effectiveness of the framework for real-time forest fire monitoring and rapid emergency response.
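The escalation logic of the three tiers can be pictured as below; the thresholds, the scoring stub, and the UAV dispatch placeholder are invented for illustration and stand in for the paper's trained models.

```python
# Schematic of the tiered escalation described in the abstract:
# scalar sensing -> lightweight ML verification -> UAV/CNN confirmation.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float
    humidity_pct: float
    smoke_ppm: float

def tier1_scalar_alarm(r: Reading) -> bool:
    """Tier 1: cheap threshold check on scalar WSN readings (toy thresholds)."""
    return r.temperature_c > 55 or r.smoke_ppm > 200

def tier2_ml_verify(r: Reading) -> float:
    """Tier 2: stand-in for the lightweight ML model on a smart sensor;
    returns a fire probability from a minimal feature set."""
    score = (0.5 * (r.temperature_c / 100) + 0.4 * (r.smoke_ppm / 500)
             + 0.1 * (1 - r.humidity_pct / 100))
    return min(score, 1.0)

def tier3_uav_confirm(location: tuple[float, float]) -> bool:
    """Tier 3: placeholder for dispatching a UAV whose onboard CNN
    confirms fire in RGB/thermal imagery at `location`."""
    print(f"dispatching UAV to {location} for CNN confirmation")
    return True  # assume the CNN confirms in this sketch

def monitor(r: Reading, location=(41.0, 29.0)) -> str:
    if not tier1_scalar_alarm(r):
        return "normal"
    if tier2_ml_verify(r) < 0.6:
        return "false alarm (filtered at tier 2)"
    return "FIRE CONFIRMED" if tier3_uav_confirm(location) else "unconfirmed"

print(monitor(Reading(temperature_c=72, humidity_pct=18, smoke_ppm=340)))
```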
{"title":"An edge-intelligent three-tier framework for real-time forest fire detection, integrating WSNs, WMSNs, and UAVs","authors":"Hamzeh Abu Ali , Enver Ever , Burak Kizilkaya , Muhammad Toaha Raza Khan , Masood Ur Rehman , Shuja Ansari , Muhammad Ali Imran , Adnan Yazici","doi":"10.1016/j.iot.2025.101861","DOIUrl":"10.1016/j.iot.2025.101861","url":null,"abstract":"<div><div>Forest fires are becoming prevalent, threatening ecosystems, economies, and public safety while creating an urgent demand for rapid and reliable detection systems. Conventional approaches such as watchtowers, manual patrols, and satellite imaging suffer from limited coverage, delays, and inadequate precision. To address these challenges, we propose a three-tier, edge-centric framework that integrates wireless sensor networks (WSNs), wireless multimedia sensor networks (WMSNs), unmanned aerial vehicles (UAVs), and lightweight machine learning (ML) and deep learning (DL) models for efficient detection. In the first tier, scalar sensors provide early hazard identification; in the second, smart sensors execute a lightweight ML model for intermediate verification, achieving a 94% F1-score with a minimal feature set; and in the third, UAVs equipped with sensors, cameras, and a compact convolutional neural network (CNN) deliver final confirmation. The CNN achieves state-of-the-art results with a 100% F1 score on the FireMan-UAV-RGBT dataset and 99.5% on UAV-FFDB while remaining compact (1.6 MB) and efficient (157 ms inference on Raspberry Pi 5), enabling real-time edge deployment. Simulations show reduced end-to-end delay (813.59 ms) compared to WSN-only (865.84 ms) and WMSN (1066.18 ms) baselines, improved throughput (7.05 kbps vs 3.80 kbps and 3.06 kbps), and a 100% delivery ratio. Real-world WSN testbed experiments further validate the framework, achieving a 97% delivery ratio, 144.39 ms latency (vs. 258.37 ms in simulations), and energy consumption of 0.0559 J/s (closely matching 0.0442 J/s in simulations). These results collectively demonstrate the practicality and effectiveness of the framework for real-time forest fire monitoring and rapid emergency response.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101861"},"PeriodicalIF":7.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neuromorphic solar edge AI for sustainable wildfire detection
Pub Date : 2025-12-29 DOI: 10.1016/j.iot.2025.101862
Raúl Parada
This paper presents a feasibility study of a solar-autonomous wildfire detection system using neuromorphic edge AI on fixed-wing drones. Through a comprehensive year-long simulation over Parc del Garraf (Catalonia), we evaluate three edge computing platforms (Raspberry Pi 4, Google Coral TPU, and BrainChip Akida) integrated into solar-optimized eBee X drones. Results show that the BrainChip Akida achieves 4200 patrol hours per year, nearly three times that of traditional CPU systems, while maintaining 87% solar energy autonomy. The Google Coral TPU and Raspberry Pi 4 reach 66% and 52% autonomy, respectively. Fleet scaling analysis demonstrates that increasing the drone count from one to eight reduces the median wildfire detection time from 18 to 2.2 hours, surpassing critical response thresholds. Seasonal analysis reveals that Akida-based systems can operate fully on solar energy during summer and most of spring and fall, minimizing grid dependency. These findings establish neuromorphic computing as a foundational technology for sustainable, perpetual environmental monitoring within the Internet of Robotic Things (IoRT).
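The reported fleet-scaling effect is consistent with a simple staggered-patrol model, sketched below; the 36-hour circuit time and the uniform fire-start assumption are illustrative choices, not parameters from the paper.

```python
# Back-of-the-envelope model: with n drones evenly staggered on a fixed
# patrol circuit, the wait until some drone overflies a new fire is
# uniform on (0, circuit_time / n), so the median shrinks as 1/n.
import random

def median_detection_hours(n_drones: int, circuit_hours: float = 36.0,
                           trials: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    waits = []
    for _ in range(trials):
        fire_t = rng.uniform(0, circuit_hours)
        # drones start their circuits evenly staggered across the loop
        passes = [(k * circuit_hours / n_drones - fire_t) % circuit_hours
                  for k in range(n_drones)]
        waits.append(min(passes))
    waits.sort()
    return waits[trials // 2]

for n in (1, 2, 4, 8):
    print(n, round(median_detection_hours(n), 1))
```

With these choices the medians come out near 18 hours for one drone and roughly 2.2 hours for eight, in line with the abstract's figures; the agreement only shows the numbers are consistent with even patrol staggering, not that the paper used this model.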
{"title":"Neuromorphic solar edge AI for sustainable wildfire detection","authors":"Raúl Parada","doi":"10.1016/j.iot.2025.101862","DOIUrl":"10.1016/j.iot.2025.101862","url":null,"abstract":"<div><div>This paper presents a feasibility study of a solar-autonomous wildfire detection system using neuromorphic edge AI on fixed-wing drones. Through a comprehensive year-long simulation over Parc del Garraf (Catalonia), we evaluate three edge computing platforms, Raspberry Pi 4, Google Coral TPU, and BrainChip Akida, integrated into solar-optimized eBee X drones. Results show that the BrainChip Akida achieves 4200 patrol hrs per yr, nearly three times that of traditional CPU systems, while maintaining 87 % solar energy autonomy. The Google Coral TPU and Raspberry Pi 4 reach 66 % and 52 % autonomy, respectively. Fleet scaling analysis demonstrates that increasing drone count from one to eight reduces median wildfire detection time from 18 to 2.2 hrs, surpassing critical response thresholds. Seasonal analysis reveals Akida-based systems can operate fully on solar energy during summer and most of spring and fall, minimizing grid dependency. These findings establish neuromorphic computing as a foundational technology for sustainable, perpetual environmental monitoring within the Internet of Robotic Things (IoRT).</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101862"},"PeriodicalIF":7.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spatio-temporal deep learning-based decision support system for energy awareness in IoT-based smart buildings
Pub Date : 2025-12-29 DOI: 10.1016/j.iot.2025.101856
Berna Cengiz, Resul Das
The increasing demand for energy and rising expectations for user comfort necessitate more accurate and efficient management of climate control systems in smart buildings. A crucial step in this process is reliably predicting indoor temperature. In this study, multivariate time series data, including environmental parameters such as temperature, Relative Humidity (RH), light, and Heating, Ventilating and Air Conditioning (HVAC) consumption, were used to evaluate the performance of various deep learning models. Hybrid approaches integrating Recurrent Neural Network (RNN) architectures, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, with Graph Convolutional Networks (GCN), yielding GCN-RNN, GCN-LSTM, and GCN-GRU variants, were systematically compared. Furthermore, the Transformer architecture and the Extreme Gradient Boosting (XGBoost) algorithm were included as baseline references. The results show that the GCN-GRU model achieved superior accuracy compared to the other models in the analyzed regions and throughout the test period, reaching an R² score of 0.9976 with low error rates and providing consistent accuracy. Beyond model performance, a user-friendly interface built on a modular software architecture was developed, enabling the selection of alternative models, interactive visualization of prediction results, examination of the current control strategy's impact on energy efficiency, and dynamic integration of new algorithms. These findings emphasize the importance of jointly processing temporal and spatial patterns and provide a practical foundation for decision support systems aimed at enhancing energy awareness and operational sustainability in IoT-enabled smart buildings.
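To make the spatio-temporal pairing concrete, a minimal GCN-GRU sketch in PyTorch is given below; the layer sizes, adjacency normalization, and single-target regression head are illustrative assumptions, not the paper's architecture.

```python
# Minimal GCN-GRU sketch: a graph convolution mixes readings across sensor
# nodes at each time step, and a GRU models the temporal dynamics of the
# resulting node embeddings.
import torch
import torch.nn as nn

class GCNGRU(nn.Module):
    def __init__(self, n_nodes: int, in_feats: int, hid: int = 32):
        super().__init__()
        self.w = nn.Linear(in_feats, hid)      # shared per-node projection
        self.gru = nn.GRU(n_nodes * hid, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)          # e.g. next indoor temperature

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, feats); a_hat: normalized adjacency (nodes, nodes)
        b, t, n, _ = x.shape
        h = torch.relu(a_hat @ self.w(x))      # spatial mixing per time step
        out, _ = self.gru(h.reshape(b, t, -1)) # temporal modeling of node embeddings
        return self.head(out[:, -1])           # predict from the last hidden state

# Toy usage: 4 sensor locations (graph nodes), 3 features each, 12 time steps.
n = 4
adj = torch.eye(n) + (torch.rand(n, n) > 0.5).float()  # toy graph with self-loops
a_hat = adj / adj.sum(dim=1, keepdim=True)             # row-normalized adjacency
model = GCNGRU(n_nodes=n, in_feats=3)
x = torch.randn(2, 12, n, 3)                           # (batch, time, nodes, feats)
print(model(x, a_hat).shape)                           # -> torch.Size([2, 1])
```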
{"title":"A spatio-temporal deep learning-based decision support system for energy awareness in IoT-based smart buildings","authors":"Berna Cengiz, Resul Das","doi":"10.1016/j.iot.2025.101856","DOIUrl":"10.1016/j.iot.2025.101856","url":null,"abstract":"<div><div>The increasing demand for energy and rising expectations for user comfort necessitate the more accurate and efficient management of climate control systems in smart buildings. A crucial step in this process is reliably predicting indoor temperature. In this study, multivariate time series data, including environmental parameters such as temperature, Relative Humidity (RH), light, and Heating, Ventilating and Air Conditioning (HVAC) consumption, were used to evaluate the performance of various deep learning models. Hybrid approaches integrating Recurrent Neural Networks (RNN) architectures, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, with Graph Convolutional Networks (GCN)(GCN-RNN, GCN-LSTM, GCN-GRU) were systematically compared. Furthermore, the Transformer architecture and the Extreme Gradient Boosting (XGBoost) algorithm were included in the comparison as a baseline reference. The results show that the GCN-GRU model achieved superior accuracy compared to other models in the analyzed regions and throughout the test period, reaching an <em>R</em><sup>2</sup> score of 0.9976 with low error rates and providing consistent accuracy. Beyond model performance, a user-friendly interface has been developed that enables the selection of alternative models, interactive visualization of prediction results, examination of the impact of the current control strategy on energy efficiency, and dynamic integration of new algorithms, thanks to a modular software architecture. These findings emphasize the importance of jointly processing temporal and spatial patterns and provide a practical foundation for decision support systems aimed at enhancing energy awareness and operational sustainability in IoT-enabled smart buildings.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"36 ","pages":"Article 101856"},"PeriodicalIF":7.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}