
Internet of Things — Latest Publications

An e-health environment conceived with the support of a self-adaptive IoT architecture
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-03 | DOI: 10.1016/j.iot.2025.101844
Mateus G. do Nascimento, José Maria N. David, Mario A.R. Dantas, Regina Braga, Victor Ströele
IoT has gradually exposed society to intelligent environments. Software developed for these environments requires efficient data processing, low response times, and the proper functioning of sensors, devices, and systems. To meet these requirements, we can leverage edge, fog, and cloud computing. However, the use of these computational resources presents challenges for software engineering, such as determining which architectures to employ for developing software in intelligent environments. Considering these challenges, this work addresses the research question: How can a self-adaptive architecture support automated computational resource allocation in e-health environments? To answer it, we propose a self-adaptive IoT architecture that uses artificial intelligence to manage computational resource usage in intelligent environments, enabling the management of physical spaces and ensuring the correct functioning of applications. A case study was conducted in an e-health environment to support our arguments. The Design Science Research methodology guided the work; its execution cycles, carried out through a case study in a real e-health corporate environment, enabled the incremental construction of the architecture. The results demonstrate that the proposed architecture enhances the efficiency of allocating computational resources - encompassing edge, fog, and cloud computing - while ensuring the functioning of applications and supporting the management of the physical environment using artificial intelligence. As contributions, the study shows: (i) the construction phases of the self-adaptive architecture; (ii) how the architecture adapts to the demands of the IoT intelligent environment; (iii) how artificial intelligence can support the allocation of computational resources.
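The abstract describes allocating work across edge, fog, and cloud tiers without giving implementation details. As a rough illustration of the idea only, the minimal MAPE-K-style control loop below assigns a task to the closest tier that meets a latency budget; the tier latencies, capacities, and the `plan` heuristic are hypothetical, not taken from the paper.

```python
# Minimal MAPE-K-style sketch: pick edge/fog/cloud for a task based on
# monitored load and a latency budget. All numbers are illustrative.

TIERS = {  # hypothetical per-tier base latency (ms) and capacity (tasks)
    "edge": {"latency_ms": 5, "capacity": 4},
    "fog": {"latency_ms": 20, "capacity": 16},
    "cloud": {"latency_ms": 120, "capacity": 1000},
}

def plan(load, latency_budget_ms):
    """Return the closest tier with spare capacity that meets the budget."""
    for name in ("edge", "fog", "cloud"):  # prefer tiers nearest the device
        tier = TIERS[name]
        if load[name] < tier["capacity"] and tier["latency_ms"] <= latency_budget_ms:
            return name
    return "cloud"  # fallback: cloud always admits the task

load = {"edge": 4, "fog": 3, "cloud": 10}  # monitored state (edge saturated)
print(plan(load, latency_budget_ms=50))    # -> "fog"
```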
Citations: 0
Enhancing precision irrigation with TinyML: Advanced NDVI anomaly detection and model optimization
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-03 | DOI: 10.1016/j.iot.2025.101839
Carlos Hernandez-Hidalgo, Aurora González-Vidal, Antonio F. Skarmeta
Agriculture accounts for over 70 % of global freshwater use, yet water availability is increasingly threatened by climate change, overuse, and poor management. Smallholder farmers are particularly vulnerable, often lacking access to advanced technologies. Sustainable agriculture demands innovative, energy-efficient solutions for smarter water management. This work proposes a low-power precision irrigation approach using Tiny Machine Learning (TinyML) to operate without cloud connectivity in resource-constrained environments and investigates how anomalies in the Normalized Difference Vegetation Index (NDVI), combined with environmental data such as temperature and humidity, can drive adaptive, data-driven irrigation. Different percentile thresholds (e.g., 25th–75th) were evaluated to optimize detection. Models were trained in Keras and quantized from 32-bit to 8-bit using TensorFlow Lite for deployment on microcontrollers, enabling real-time inference without internet access. Three models were compared: Linear Regression (CVRMSE = 30.16 %), Random Forest Regression (RMSE = 0.062, CVRMSE = 27.42 %), and a Neural Network (RMSE = 0.0589, CVRMSE = 36.88 %) designed for TinyML deployment. The Percentile-based NDVI Anomaly Index (PNAI) improved predictive performance by up to 56.84 % in CVRMSE over standard methods, with the 25th–75th percentile range yielding the most accurate results. After quantization, the TinyML neural network achieved an RMSE of 0.0421 and a CVRMSE of 33.41 %, with only a 1.2 % accuracy drop and a model size of 6280 bytes, confirming its feasibility for on-device execution. These results demonstrate that TinyML-based NDVI anomaly detection is a viable, low-cost, and scalable approach for precision irrigation, with future work focusing on multi-crop validation and real-world field deployment.
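The abstract specifies PNAI only at the level of percentile bands, with the 25th–75th band working best. A minimal sketch of a percentile-band anomaly flagger on synthetic NDVI data is shown below; the band edges and the synthetic signal are illustrative, and the paper's exact PNAI formulation may differ.

```python
import numpy as np

def pnai_flags(ndvi, low_pct=25, high_pct=75):
    """Flag NDVI samples outside the [low_pct, high_pct] percentile band."""
    lo, hi = np.percentile(ndvi, [low_pct, high_pct])
    return (ndvi < lo) | (ndvi > hi)

rng = np.random.default_rng(0)
ndvi = np.clip(rng.normal(0.6, 0.08, 500), 0, 1)  # synthetic healthy-canopy NDVI
ndvi[::50] -= 0.3                                  # inject water-stress dips
flags = pnai_flags(ndvi)
print(f"{flags.mean():.1%} of samples flagged as anomalous")
```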
Citations: 0
Multi-hypervisor-based authorization and DoS attack mitigation framework using LC-WTRNN technique
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-03 | DOI: 10.1016/j.iot.2025.101843
Kalyan Gattupalli , Poovendran Alagarsundaram , Harikumar Nagarajan , Venkata Surya Bhavana Harish Gollavilli , Surendar Rama Sitaraman , Pushpakumar R
Hypervisors allow the management of several Virtual Machines (VMs) on a single device but are highly susceptible to DoS attacks, which deplete resources and disrupt cloud services. Techniques currently in use fail to establish proper authorization between multi-hypervisors, thereby exposing VMs to security threats. To ameliorate this situation, we developed a LeCun Wave Tanh Recurrent Neural Network (LC-WTRNN)-based multi-hypervisor authorization framework integrated with Hamming Code Quantum Cryptography (HC-QC), Kullback-Leibler De-Swinging K-Anonymity (KLDS-KAnonymity), and the Hell Bhatt Tiger Hashing Algorithm (HB-THA). The system thereby efficiently detects DoS attacks, secures VM registration, and ensures data integrity. Experimental results on the CICDDoS2019 dataset show that the method achieves an accuracy of 98.62 %, a recall of 98.45 %, and a specificity of 98.65 % on average, outperforming traditional RNN, DBN, RBM, and DNN methods by 5.3 %. Additionally, the newly proposed framework reduces the time needed for anonymization by 56.1 % while providing 8.5 % better encryption security and 44.5 % less tree-generation time than the traditional methods. These results thus validate LC-WTRNN as a scalable and secure solution for mitigating DoS attacks in cloud environments.
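The LC-WTRNN architecture itself is not specified in the abstract. For orientation only, the sketch below builds a generic tanh recurrent classifier over per-flow feature vectors in Keras; the feature count, class count, and layer sizes are assumptions, and this is not the paper's LC-WTRNN.

```python
import tensorflow as tf

NUM_FEATURES, NUM_CLASSES = 77, 2  # assumed flow-feature width; benign vs. attack

# Generic recurrent classifier sketch (NOT the paper's LC-WTRNN): each flow is
# fed as a length-1 sequence of features into a tanh-activated SimpleRNN.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, NUM_FEATURES)),
    tf.keras.layers.SimpleRNN(64, activation="tanh"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```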
Citations: 0
Intelligent latency optimization in hyperledger fabric for seamless metaverse integration
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-02 | DOI: 10.1016/j.iot.2025.101835
Jummai Enare Abang , Rabab Al-Zaidi , Haifa Takruri , Mohammed Al-Khalidi
Blockchain technology underpins secure, decentralized digital ecosystems and supports applications ranging from finance and supply chains to the emerging Metaverse. However, latency remains a key challenge, particularly for real-time applications. Hyperledger Fabric (HLF), a leading enterprise blockchain, suffers from transaction delays due to its endorsement policies, which enhance security but introduce computational and communication overhead. This paper addresses the latency challenge in HLF by proposing a reinforcement learning (RL)-based dynamic endorsement mechanism. The model learns from past transaction patterns and system states to predict the optimal number of endorsers needed for each transaction. By dynamically adjusting the “AND” endorsement policy based on whether the observed latency meets a defined threshold, the approach balances security with performance, which is critical for low-latency applications like the Metaverse. Experimental evaluations across diverse HLF configurations, using both mathematical and empirical methods, show that the proposed RL model reduces transaction latency by up to 37.54 % compared to static policies and outperforms other RL models (SARSA, Dueling DQN, Double Q-learning) by 6.81 % to 16.04 %. Results confirm the model’s adaptability and superior performance, particularly in single-client environments. In terms of throughput, the proposed RL model consistently surpasses the static configuration across all workloads, demonstrating strong adaptability to varying transaction loads with the most notable improvement of 27.61 % under single-client conditions, underscoring the model’s capability to optimise light workloads. This research contributes to the development of scalable, responsive, and secure blockchain infrastructures, offering an intelligent solution for real-time latency optimisation in digital applications such as the Metaverse.
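The core trade-off (more endorsers means more security but higher latency against an SLO) can be pictured with a toy bandit-style learner. In the sketch below the latency model, reward weights, and SLO are invented for illustration; the paper evaluates DQN-family agents against real HLF measurements.

```python
import random

ACTIONS = [1, 2, 3, 4]          # candidate endorser counts ("AND" policy size)
LATENCY_SLO_MS = 200.0
Q = {a: 0.0 for a in ACTIONS}   # single-state tabular values, for illustration
alpha, eps = 0.1, 0.2

def simulate_latency_ms(n_endorsers):
    # toy model: each extra endorser adds delay plus jitter
    return 60.0 * n_endorsers + random.gauss(0, 15)

def reward(n, latency):
    security_bonus = 10.0 * n                       # more endorsers -> more security
    slo_penalty = 50.0 if latency > LATENCY_SLO_MS else 0.0
    return security_bonus - 0.1 * latency - slo_penalty

for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
    r = reward(a, simulate_latency_ms(a))
    Q[a] += alpha * (r - Q[a])                      # bandit-style update

# converges to the largest endorser set that still meets the SLO
print("learned endorser count:", max(Q, key=Q.get))
```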
Citations: 0
A privacy-aware and sustainable joint optimization for resource-constrained internet of things using deep reinforcement learning
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-12-01 | DOI: 10.1016/j.iot.2025.101837
Mehdi Hosseinzadeh , Parisa Khoshvaght , Amir Masoud Rahmani , Farhad Soleimanian Gharehchopogh , Shakiba Rajabi , Aso Darwesh , Omed Hassan Ahmed , Thantrira Porntaveetus , Sang-Woong Lee
The rise of battery-powered Internet of Things (IoT) fleets in buildings and campuses requires policies that manage sensing, communication, and edge–cloud offloading while considering energy, carbon, privacy, and cost limits. In this paper, we frame this challenge as a Markov Decision Process (MDP) and design a controller using Deep Reinforcement Learning (DRL). We present a Rainbow-based IoT controller that retains distributional value learning, dueling networks, NoisyNets, n-step returns, Prioritized Experience Replay (PER), and double selection, and contributes four novelties: dual-budget Lagrangian control with warm-up, connectivity-robust distributional targets reweighted by outage/queue risk, federated sketch-guided replay for underrepresented regimes, and realistic ISAC-aware macro-actions with integrated DP/CO₂ accounting and budget-aware training/logging. Simulations show that the proposed algorithm achieves ≈88 % higher anomaly detection, ≈39 % higher packet success, ≈52 % less energy consumption, and ≈74 % lower cloud cost than the best baseline, demonstrating superior utility, reliability, and sustainability in IoT workloads.
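Of the listed novelties, dual-budget Lagrangian control has a compact generic form: constraint costs are folded into the reward via multipliers that rise when a budget is exceeded. The sketch below shows that pattern with hypothetical energy and carbon budgets; the step size and units are illustrative, not the paper's.

```python
# Dual-budget Lagrangian shaping sketch: two multipliers (energy, carbon) are
# raised by dual ascent when running costs exceed their budgets, else decayed.
lam = {"energy": 0.0, "carbon": 0.0}
BUDGET = {"energy": 5.0, "carbon": 2.0}   # per-episode budgets (illustrative units)
ETA = 0.01                                # dual ascent step size

def shaped_reward(task_reward, step_costs):
    penalty = sum(lam[k] * step_costs[k] for k in lam)
    return task_reward - penalty

def dual_update(episode_costs):
    for k in lam:
        lam[k] = max(0.0, lam[k] + ETA * (episode_costs[k] - BUDGET[k]))

# one illustrative step and episode-end update
print(shaped_reward(10.0, {"energy": 0.3, "carbon": 0.1}))  # 10.0 while lam = 0
dual_update({"energy": 6.2, "carbon": 1.4})                 # energy over budget
print(lam)  # energy multiplier rises, carbon stays clamped at 0
```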
Citations: 0
Bluetooth 5 power consumption for an opportunistic edge computing system based on low-power IoT devices
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-26 | DOI: 10.1016/j.iot.2025.101834
Ángel Niebla-Montero , Iván Froiz-Míguez , Paula Fraga-Lamas , Tiago M. Fernández-Caramés
The rapid expansion of the Internet of Things (IoT) has created a growing demand for efficient and reliable wireless communication, particularly in environments with limited network coverage. Opportunistic Edge Computing (OEC) has emerged as a viable solution by leveraging smart IoT gateways to provide Edge Computing services, route communications and store data in a distributed way, thus reducing reliance on Cloud infrastructure. This article explores the potential of Bluetooth 5 as a low-power communications protocol for OEC systems based on Single Board Computers (SBCs). For such a purpose, a novel OEC architecture and protocol stack are proposed that integrate a version of Bluetooth 5 adapted to enable opportunistic data exchanges in resource-constrained IoT environments. To evaluate the proposed solution, a testbed was built and experiments were carried out to measure system latency and power consumption. The obtained results demonstrate the differences between using Bluetooth Legacy and LE Coded modulations in four different OEC scenarios. The findings show Bluetooth 5's potential for enhancing decentralized IoT networks while maintaining low power consumption, making it a suitable choice for developing OEC IoT applications. Thus, this article provides useful guidelines for selecting the most appropriate Bluetooth 5 mode for researchers and developers of next-generation OEC solutions.
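One reason Legacy and LE Coded modes differ in consumption is airtime: the S=8 coded PHY stretches each packet roughly eightfold. A back-of-the-envelope sketch of per-event energy (E = V·I·t) is given below; the voltage, currents, and airtimes are illustrative placeholders, not the paper's measurements.

```python
# Rough per-event energy comparison for BLE advertising, assuming illustrative
# current draws and airtimes (real values depend on the SoC, PHY, and payload).
V = 3.0  # supply voltage (volts)

profiles = {
    # name: (tx_current_mA, airtime_ms per advertising event)
    "legacy_1M": (5.0, 0.4),     # short packet on the 1M PHY
    "le_coded_S8": (5.0, 2.9),   # S=8 coding stretches airtime ~8x
}

for name, (i_ma, t_ms) in profiles.items():
    energy_uj = V * (i_ma / 1e3) * (t_ms / 1e3) * 1e6  # E = V * I * t, in uJ
    print(f"{name}: {energy_uj:.1f} uJ per advertising event")
```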
Citations: 0
Collective intelligence-based service migration enabling zoom-in functionality within industry 5.0
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-26 | DOI: 10.1016/j.iot.2025.101830
Riccardo Venanzi , Lorenzo Colombi , Davide Tazzioli , Simon Dahdal , Mauro Tortonesi , Luca Foschini
The rapid evolution of Industry 5.0 emphasizes the integration of human expertise with machine intelligence to create resilient, adaptive, and human-centric industrial systems. This paper introduces a novel Collective Intelligence (CI)-based service migration framework designed for Industry 5.0 environments, enabling dynamic orchestration of stateful services across heterogeneous edge-to-cloud infrastructures. At its core, the framework leverages Kubernetes (K8s) enhanced with AI-driven decision-making and human-in-the-loop collaboration to address the limitations of traditional orchestration in industrial settings. A key innovation of this work is the Zoom-In functionality, which empowers human operators to escalate anomaly detection and analysis by deploying advanced machine learning models on demand, seamlessly migrating services to resource-rich nodes when deeper investigation is warranted. The proposed framework integrates Large Language Models (LLMs) to translate operator intent into actionable policies, ensuring context-aware and explainable decision-making. Experimental validation in real industrial scenarios demonstrates high anomaly detection accuracy (F1-scores up to 1.0), reliable operator intent translation (over 70 % correct JSON generations with lightweight LLMs), and efficient multi-criteria scheduling with millisecond-level decision times. Moreover, the proposed migration mechanism reduces downtime by more than 50 % compared to vanilla Kubernetes, ensuring service continuity in mission-critical tasks. This work advances the vision of collaborative intelligence in IoT systems, bridging the gap between human judgment and automated orchestration for Industry 5.0 applications.
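The abstract mentions multi-criteria scheduling and migrating the zoom-in analysis to resource-rich nodes. A minimal weighted-scoring sketch is shown below; the node names, metrics, and weights are hypothetical, and a real implementation would also normalize each metric before weighting.

```python
# Multi-criteria scheduling sketch: score candidate nodes with a weighted sum
# and migrate the "zoom-in" analysis service to the highest-scoring one.
# Metrics and weights are illustrative, not from the paper.
WEIGHTS = {"free_cpu": 0.5, "free_mem_gb": 0.3, "neg_latency_ms": 0.2}

nodes = {
    "edge-01":  {"free_cpu": 0.2,  "free_mem_gb": 1.0,  "neg_latency_ms": -5},
    "fog-01":   {"free_cpu": 2.0,  "free_mem_gb": 8.0,  "neg_latency_ms": -20},
    "cloud-01": {"free_cpu": 16.0, "free_mem_gb": 64.0, "neg_latency_ms": -120},
}

def score(metrics):
    # latency is stored negated so that higher is always better
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

target = max(nodes, key=lambda n: score(nodes[n]))
print("migrate zoom-in service to:", target)  # resource-rich node wins
```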
Citations: 0
PENSIL: Programmable network stack for low-power lossy IoT networks using lightweight-virtualization
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-24 | DOI: 10.1016/j.iot.2025.101829
Ahmad Mahmod , Julien Montavont , Thomas Noel
Low-Power and Lossy Wireless Networks (LLWNs) form the foundation of the Internet of Things (IoT), connecting billions of constrained devices across diverse domains. Despite their critical role, the design of LLWN devices is strongly constrained by limited memory, processing power, and energy supply. These limitations have historically led to the adoption of monolithic network stacks, where protocol logic is tightly integrated and bound at compile time. As a result, even minor changes require a full firmware update, making protocol evolution costly and impractical. Because LLWN deployments face diverse and evolving conditions, a single static stack design or fixed configuration is insufficient. In this paper, we propose PENSIL, a network architecture featuring a programmable and modular network stack for LLWN that enables selective updates of protocol functions, combined with a central orchestrator that manages device stacks. PENSIL enables dynamic and semantic reconfiguration, from parameter tuning to network configuration swapping, allowing networks to adapt without downtime. A proof-of-concept implementation on real hardware demonstrates that our architecture enhances performance through fast, lightweight and secure updates while respecting the stringent memory, energy, and processing constraints of LLWN devices, ultimately bridging the gap between programmability and efficiency.
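PENSIL's central idea, replacing individual protocol functions without reflashing the firmware, can be pictured as a dispatch table whose entries an orchestrator swaps at run time. The Python sketch below illustrates that pattern only; the function names are hypothetical, and the real system targets constrained firmware with lightweight virtualization rather than Python.

```python
# Sketch of a dispatch-table stack whose protocol functions can be hot-swapped,
# in the spirit of PENSIL's selective updates (names are hypothetical).

stack = {}  # layer name -> callable handling a packet

def forward_shortest_path(pkt):
    return f"fwd[v1,shortest-path] {pkt}"

def forward_energy_aware(pkt):
    return f"fwd[v2,energy-aware] {pkt}"

stack["routing"] = forward_shortest_path
print(stack["routing"]("hello"))   # v1 behaviour

# The orchestrator pushes an update: only the routing function is replaced;
# the rest of the stack keeps running, with no reboot or full reflash.
stack["routing"] = forward_energy_aware
print(stack["routing"]("hello"))   # v2 behaviour, swapped without downtime
```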
Citations: 0
FTL-TSLP: A federated transfer learning approach with a two-stage LSTM pipeline for fault-tolerant and privacy-preserving intrusion detection in IoMT networks
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-23 | DOI: 10.1016/j.iot.2025.101832
Abdelhammid Bouazza , Hichem Debbi , Hicham Lakhlef
The rapid proliferation of the Internet of Medical Things (IoMT) has transformed healthcare delivery by enabling continuous patient monitoring, intelligent clinical decision-making, and efficient remote care. However, these advancements have also introduced substantial cybersecurity risks that threaten patient privacy, safety, and the operational resilience of healthcare systems. These challenges are further compounded by stringent regulatory requirements and the inherent complexity of heterogeneous, non-independent and identically distributed (non-IID) data. To address these challenges, we propose FTL-TSLP, a novel federated intrusion detection framework that integrates federated learning (FL) with targeted transfer learning (TL) through a two-stage LSTM-based pipeline. The framework is explicitly designed to operate effectively under both IID and non-IID data distributions while preserving data privacy. On the client side, temporal aggregation techniques efficiently compress sequential data, reducing computational costs without compromising detection accuracy. Additionally, the framework enhances fault tolerance by incorporating a Multi-Criteria Decision Analysis (MCDA) module combined with a Naïve Bayes classifier for real-time, probabilistic device-level classification. The proposed model demonstrates superior performance across the NF-UNSW-NB15-v2, WUSTL-EHMS-2020, and CICIoMT-2024 benchmark datasets. Even under extreme Dirichlet-based non-IID conditions (α=0.1), FTL-TSLP achieves 99.72 % accuracy and a 98.07 % F1-score on the CICIoMT-2024 dataset, confirming its robustness in heterogeneous IoMT traffic environments. These results highlight that FTL-TSLP offers a reliable, privacy-preserving, and computationally efficient solution for securing IoMT healthcare ecosystems.
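The two-stage pipeline (a binary detector that hands suspicious traffic windows to a multi-class attack classifier) has a straightforward generic shape. The Keras sketch below assumes illustrative window, feature, and class counts; the paper's actual layer sizes, features, and federated training loop are not reproduced here.

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES, ATTACK_TYPES = 10, 39, 6  # illustrative shapes

def lstm_head(units, out_dim, out_act):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(out_dim, activation=out_act),
    ])

stage1 = lstm_head(32, 1, "sigmoid")             # benign vs. malicious
stage2 = lstm_head(64, ATTACK_TYPES, "softmax")  # attack family, if stage 1 fires

def classify(window, threshold=0.5):
    """Run stage 1; escalate to stage 2 only for suspicious windows."""
    if float(stage1(window[None, ...])[0, 0]) < threshold:
        return "benign"
    return int(tf.argmax(stage2(window[None, ...]), axis=-1)[0])

demo = np.random.rand(TIMESTEPS, FEATURES).astype("float32")
print(classify(demo))  # untrained weights: output is illustrative only
```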
Citations: 0
An efficient real-time apple disease classification approach using a novel lightweight neural network on an Arduino edge device
IF 7.6 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-22 | DOI: 10.1016/j.iot.2025.101833
Milan Grujev , Dragan Stojanovic , Aleksandar Milosavljevic , Milos Ilic , Veljko Prodanovic
Advances in miniaturization, battery technology, and software enable edge devices to capture high-resolution images and videos, perform real-time processing, and support faster decision-making in precision agriculture. Automated apple disease detection is crucial for maintaining crop health and preventing large-scale loss. Conventional methods rely on large neural networks that require high-performance computing resources, making them impractical for real-time on-site deployment. These computational demands limit their usability in agricultural settings, where power and connectivity constraints are common. This research aims to develop a novel, optimized machine learning model for apple leaf disease classification, designed specifically for low-power edge devices. Based on core principles of the MobileNet architecture, the model employs depthwise convolution blocks, a layer reduction process, and memory-efficient optimization techniques to achieve high accuracy while significantly reducing computational overhead. Trained on a combination of readily available datasets, the model achieved a classification accuracy of 94.80 %, with a model size of just 36.2 KB and runtime memory usage of 162.2 KB, representing a reduction of over 90 % compared to the standard MobileNetV2. The model performed inference in 463.8 milliseconds while consuming only 14 milliwatts of power. These results demonstrate the potential of lightweight neural networks for real-time disease detection on resource-constrained devices such as Arduino microcontrollers. By enabling localized, on-device inference without cloud dependency, Edge AI offers a scalable and cost-effective solution for precision agriculture, improving crop monitoring and sustainability.
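The model is described as MobileNet-style, built from depthwise convolution blocks with a reduced layer count. As a rough illustration, the sketch below assembles a tiny classifier from depthwise separable blocks in Keras; the input size, channel widths, and four-class head are assumptions, and the paper's exact topology (36.2 KB after conversion) is not reproduced.

```python
import tensorflow as tf

def ds_block(x, filters, stride=1):
    """Depthwise separable convolution block (MobileNet-style)."""
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 1, padding="same")(x)  # pointwise mix
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

inp = tf.keras.layers.Input(shape=(96, 96, 3))  # small input for MCU deployment
x = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(inp)
x = ds_block(x, 16, stride=2)
x = ds_block(x, 32, stride=2)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(4, activation="softmax")(x)  # e.g., 4 leaf classes
model = tf.keras.Model(inp, out)
model.summary()  # tiny parameter count, suited to int8 conversion for an MCU
```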
Citations: 0