
Latest articles in Internet of Things

AtomicVAD: A tiny voice activity detection model for efficient inference in intelligent IoT systems
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-11-20 DOI: 10.1016/j.iot.2025.101822
Angelo J. Soto-Vergel , Prashant Sankaran , Juan C. Velez , Rene Amaya-Mier , Diana Ramirez-Rios
This paper introduces AtomicVAD, an ultra-lightweight, end-to-end voice activity detection (VAD) model designed for inference on resource-constrained microcontrollers at the extreme edge. Existing VAD models often rely on large architectures with thousands of trainable parameters, making them impractical for deployment on low-power microcontrollers commonly used in Internet of Things (IoT) systems. Even with compression methods such as quantization or pruning, these models typically fail to achieve low-latency performance under strict power and memory limits. AtomicVAD overcomes these limitations through the introduction of the General Growing Cosine Unit, a trainable oscillatory activation function that embeds feature learning within periodic modulations. This design enables remarkable efficiency with approximately 0.3k trainable parameters, a 99.7 % reduction compared to commonly used baselines such as MarbleNet, while maintaining competitive accuracy. Evaluated on the challenging AVA-Speech benchmark, AtomicVAD achieves an AUROC of 0.903 and an F2-score of 0.891, outperforming larger state-of-the-art systems and demonstrating robustness to background noise and music. Optimized for extreme efficiency, AtomicVAD enables ultra-low-latency inference (as low as 26 ms on a 240 MHz Cortex-M7 and 1.22 s on a 64 MHz Cortex-M4F), facilitated by INT8 quantization. Its memory footprint remains below 75 kB Flash and 65 kB SRAM. A real-world LoRaWAN field trial further validated its practicality, showing that on-device speech gating eliminates unnecessary, bandwidth-intensive audio uploads, reducing over-the-air delays from minutes to milliseconds. Key use cases include remote monitoring, smart-home control, disaster-response sensor networks, and other long-range, low-power systems requiring efficient, always-on audio processing.
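The abstract does not give the exact form of the General Growing Cosine Unit. As an illustrative sketch only, the previously published Growing Cosine Unit f(z) = z·cos(z) can be generalized with trainable frequency and amplitude parameters; `alpha` and `beta` below are assumptions, not the paper's definition:

```python
import numpy as np

def general_gcu(z, alpha=1.0, beta=1.0):
    """Hypothetical trainable oscillatory activation: f(z) = beta * z * cos(alpha * z).

    alpha (frequency) and beta (amplitude) stand in for the trainable
    parameters; the plain Growing Cosine Unit is the special case
    alpha = beta = 1.
    """
    return beta * z * np.cos(alpha * z)

# The unit passes through the origin and oscillates for larger inputs.
x = np.array([0.0, np.pi])
y = general_gcu(x)  # [0.0, pi * cos(pi)] = [0.0, -pi]
```

Because the activation oscillates, a single neuron can carve out non-convex decision regions that a monotone activation would need several neurons for, which is consistent with the extreme parameter reduction the abstract reports.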
Citations: 0
A privacy-aware and sustainable joint optimization for resource-constrained internet of things using deep reinforcement learning
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-12-01 DOI: 10.1016/j.iot.2025.101837
Mehdi Hosseinzadeh , Parisa Khoshvaght , Amir Masoud Rahmani , Farhad Soleimanian Gharehchopogh , Shakiba Rajabi , Aso Darwesh , Omed Hassan Ahmed , Thantrira Porntaveetus , Sang-Woong Lee
The rise of battery-powered Internet of Things (IoT) fleets in buildings and campuses requires policies that manage sensing, communication, and edge–cloud offloading while considering energy, carbon, privacy, and cost limits. In this paper, we frame this challenge as a Markov Decision Process (MDP) and design a controller using Deep Reinforcement Learning (DRL). We present a Rainbow-based IoT controller that retains distributional value learning, dueling networks, NoisyNets, n-step returns, Prioritized Experience Replay (PER), and double selection, and contributes four novelties: dual-budget Lagrangian control with warm-up, connectivity-robust distributional targets reweighted by outage/queue risk, federated sketch-guided replay for underrepresented regimes, and realistic ISAC-aware macro-actions with integrated DP/CO₂ accounting and budget-aware training/logging. Simulations show that the proposed algorithm achieves ≈88 % higher anomaly detection, ≈39 % higher packet success, ≈52 % less energy consumption, and ≈74 % lower cloud cost than the best baseline, demonstrating superior utility, reliability, and sustainability in IoT workloads.
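The dual-budget Lagrangian control mentioned above can be sketched as projected dual ascent on two cost budgets: each constraint gets a multiplier that rises when its running cost exceeds budget and is clipped at zero otherwise. The update rule, variable names, and numbers below are illustrative assumptions, not the paper's formulation:

```python
def dual_ascent_step(lmbda, cost, budget, lr=0.1):
    """Projected dual ascent: raise the multiplier when the observed cost
    exceeds its budget, relax it otherwise, and clip at zero."""
    return max(0.0, lmbda + lr * (cost - budget))

def shaped_reward(reward, energy_cost, privacy_cost, lam_energy, lam_privacy):
    """Lagrangian-penalized reward the unconstrained RL agent maximizes."""
    return reward - lam_energy * energy_cost - lam_privacy * privacy_cost

# Two budgets (energy and privacy), each with its own multiplier.
lam_e, lam_p = 0.0, 0.0
for _ in range(50):
    energy, privacy = 1.4, 0.3  # stand-in per-episode costs
    lam_e = dual_ascent_step(lam_e, energy, budget=1.0)   # violated -> grows
    lam_p = dual_ascent_step(lam_p, privacy, budget=0.5)  # satisfied -> stays 0
r = shaped_reward(10.0, 1.4, 0.3, lam_e, lam_p)
```

The effect is that only the violated budget (energy, here) ends up penalized in the shaped reward, steering the policy back toward feasibility without hand-tuned penalty weights.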
Citations: 0
IoT-enabled CCTV monitoring and deep learning for automated water body segmentation in agricultural reservoirs
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-11-12 DOI: 10.1016/j.iot.2025.101823
Soon Ho Kwon , Youcan Feng , Seungyub Lee
This study presents an IoT-enabled deep learning framework for automated water body segmentation in agricultural reservoirs using CCTV imagery. The framework integrates IoT-based long-term monitoring imagery with public benchmark water datasets to improve segmentation robustness under diverse visual conditions encountered in real-world agricultural settings. Three training strategies were defined—(I) benchmark-only, (II) CCTV-only, and (III) integrated—to systematically evaluate generalization across heterogeneous data sources. We also evaluate three U-Net–based architectures: Model 1 (baseline with binary cross-entropy loss), Model 2 (the same architecture trained with a differentiable Jaccard loss), and Model 3 (a parameter-reduced architecture with weighted cross-entropy, designed for edge inference on IoT hardware). Each configuration was trained and validated over ten independent runs to ensure statistical reliability. Model 2 consistently achieved the highest and most stable performance across all training strategies, demonstrating that loss-function optimization, rather than architectural expansion alone, is the primary driver of performance improvement. The integrated training strategy (Strategy III), which combines benchmark and CCTV imagery, yielded the strongest generalization, improving mean IoU by roughly 15–20% and reducing variability compared to single-source training. Performance differences between test sites were attributable mainly to environmental variability—fog, reflections, shadows, surface ripples, and vegetation occlusions—rather than model instability. Validated on two independent reservoirs that were fully held out from training, the framework generalized to new sites without requiring additional site-specific annotation. 
Model 3's parameter-efficient design (approximately 60% fewer trainable parameters) supports near-edge inference on embedded IoT devices, enabling continuous, unattended, IoT-based monitoring of agricultural reservoirs for smart irrigation management.
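Model 2's differentiable Jaccard loss can be sketched as a soft IoU computed directly on sigmoid probabilities, which makes the intersection-over-union objective optimizable by gradient descent. This minimal NumPy version is a sketch; the smoothing constant `eps` and exact reduction are assumptions, not the paper's implementation:

```python
import numpy as np

def soft_jaccard_loss(y_true, y_pred, eps=1e-7):
    """Differentiable Jaccard (soft IoU) loss for binary segmentation.

    y_true: binary ground-truth mask; y_pred: sigmoid probabilities.
    Intersection and union are computed on soft values, so the loss is
    smooth in y_pred and can be minimized with standard backprop.
    """
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - (intersection + eps) / (union + eps)

mask = np.array([1.0, 1.0, 0.0, 0.0])
perfect = soft_jaccard_loss(mask, mask)          # ~0.0 (perfect overlap)
disjoint = soft_jaccard_loss(mask, 1.0 - mask)   # ~1.0 (no overlap)
```

Unlike per-pixel cross-entropy, this loss directly targets the IoU metric being reported, which is one plausible reason the abstract finds loss choice, not architecture size, to be the main performance driver.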
Citations: 0
An efficient real-time apple disease classification approach using a novel lightweight neural network on an Arduino edge device
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-11-22 DOI: 10.1016/j.iot.2025.101833
Milan Grujev , Dragan Stojanovic , Aleksandar Milosavljevic , Milos Ilic , Veljko Prodanovic
Advances in miniaturization, battery technology, and software enable edge devices to capture high-resolution images and videos, perform real-time processing, and support faster decision-making in precision agriculture. Automated apple disease detection is crucial for maintaining crop health and preventing large-scale loss. Conventional methods rely on large neural networks that require high-performance computing resources, making them impractical for real-time on-site deployment. These computational demands limit their usability in agricultural settings, where power and connectivity constraints are common. This research aims to develop a novel, optimized machine learning model for apple leaf disease classification, designed specifically for low-power edge devices. Based on core principles of the MobileNet architecture, the model employs depthwise convolution blocks, a layer reduction process, and memory-efficient optimization techniques to achieve high accuracy while significantly reducing computational overhead. Trained on a combination of readily available datasets, the model achieved a classification accuracy of 94.80 %, with a model size of just 36.2 KB and runtime memory usage of 162.2 KB, representing a reduction of over 90 % compared to the standard MobileNetV2. The model performed inference in 463.8 milliseconds while consuming only 14 milliwatts of power. These results demonstrate the potential of lightweight neural networks for real-time disease detection on resource-constrained devices such as Arduino microcontrollers. By enabling localized, on-device inference without cloud dependency, Edge AI offers a scalable and cost-effective solution for precision agriculture, improving crop monitoring and sustainability.
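The parameter saving from the depthwise convolution blocks the abstract credits can be made concrete by counting weights. A standard k×k convolution mixes channels and spatial positions in one step; a MobileNet-style block splits this into a depthwise k×k convolution plus a 1×1 pointwise convolution (bias terms omitted for simplicity):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution: one k x k x c_in filter
    per output channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k convolution (one filter per input
    channel) followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 -> 64 channels.
std = standard_conv_params(3, 32, 64)        # 18432
sep = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336
ratio = sep / std                            # ~0.127, i.e. ~87% fewer weights
```

Stacking many such blocks is how a whole classifier can fit in tens of kilobytes, in line with the 36.2 KB model size reported above.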
Citations: 0
Enhancing precision irrigation with TinyML: Advanced NDVI anomaly detection and model optimization
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-12-03 DOI: 10.1016/j.iot.2025.101839
Carlos Hernandez-Hidalgo, Aurora González-Vidal, Antonio F. Skarmeta
Agriculture accounts for over 70 % of global freshwater use, yet water availability is increasingly threatened by climate change, overuse, and poor management. Smallholder farmers are particularly vulnerable, often lacking access to advanced technologies. Sustainable agriculture demands innovative, energy-efficient solutions for smarter water management. This work proposes a low-power precision irrigation approach using Tiny Machine Learning (TinyML) to operate without cloud connectivity in resource-constrained environments and investigates how anomalies in the Normalized Difference Vegetation Index (NDVI), combined with environmental data such as temperature and humidity, can drive adaptive, data-driven irrigation. Different percentile thresholds (e.g., 25th–75th) were evaluated to optimize detection. Models were trained in Keras and quantized from 32-bit to 8-bit using TensorFlow Lite for deployment on microcontrollers, enabling real-time inference without internet access. Three models were compared: Linear Regression (CVRMSE = 30.16 %), Random Forest Regression (RMSE = 0.062, CVRMSE = 27.42 %), and a Neural Network (RMSE = 0.0589, CVRMSE = 36.88 %) designed for TinyML deployment. The Percentile-based NDVI Anomaly Index (PNAI) improved predictive performance by up to 56.84 % in CVRMSE over standard methods, with the 25th–75th percentile range yielding the most accurate results. After quantization, the TinyML neural network achieved an RMSE of 0.0421 and a CVRMSE of 33.41 %, with only a 1.2 % accuracy drop and a model size of 6280 bytes, confirming its feasibility for on-device execution. These results demonstrate that TinyML-based NDVI anomaly detection is a viable, low-cost, and scalable approach for precision irrigation, with future work focusing on multi-crop validation and real-world field deployment.
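The Percentile-based NDVI Anomaly Index itself is not defined in the abstract. As a hedged sketch of the underlying idea, samples can be flagged when they fall outside a percentile band fitted to the series (the 25th–75th band is the pair the abstract reports as best); the data values below are purely illustrative:

```python
import numpy as np

def percentile_anomalies(ndvi, lower_pct=25, upper_pct=75):
    """Flag NDVI samples outside the [lower_pct, upper_pct] percentile band.

    The band edges are fitted from the series itself, so no absolute
    NDVI thresholds need to be chosen per field or per crop.
    """
    lo, hi = np.percentile(ndvi, [lower_pct, upper_pct])
    return (ndvi < lo) | (ndvi > hi)

# Illustrative series: a stable canopy with one dip and one spike.
ndvi = np.array([0.60, 0.62, 0.61, 0.63, 0.10, 0.59, 0.95])
flags = percentile_anomalies(ndvi)
# The drought-like dip (0.10) and the spike (0.95) fall outside the band.
```

Such flags, joined with temperature and humidity readings, are the kind of features the abstract describes feeding into the regression models.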
Citations: 0
Acheron: a market-based multi-relay architecture for adaptive and secure cross-chain communication
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-12-04 DOI: 10.1016/j.iot.2025.101836
Tuan-Dung Tran , Quang Vu , Bao Huynh , Van-Hau Pham
The growing fragmentation of the blockchain ecosystem has intensified the demand for secure and scalable interoperability protocols. Existing cross-chain solutions face an ‘interoperability trilemma’, struggling to simultaneously achieve decentralization, security, and scalability amidst a fragmented blockchain ecosystem. This paper introduces Acheron, a market-based multi-relay architecture designed to navigate this trilemma by decoupling security from a single monolithic entity and parallelizing message transport across independent, sovereign Relay Chains. Each Relay Chain operates as a self-contained Proof-of-Stake (PoS) network with slashing-based cryptoeconomic security, eliminating monolithic bottlenecks. Routing decisions are governed by a Multi-Attribute Utility Theory (MAUT) model that dynamically optimizes for security, latency, and cost, while the Acheron DAO and Watchtower Network ensure verifiable governance and continuous relay telemetry. Experimental validation using the Hardhat framework and the Base Sepolia testnet demonstrates that throughput scales linearly from 0.13 to 1.27 transactions per second as relays increase from one to ten, while latency variance decreases by over 90 % and average transaction costs remain stable. Compared with established baselines, Acheron achieves a 7 % reduction in mean latency and over 2.2× higher throughput under identical workloads. These results demonstrate that Acheron’s market-based paradigm presents a viable and quantitatively superior path toward achieving secure, scalable, and decentralized interoperability for both financial and IoT-driven ecosystems.
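A MAUT routing decision of the kind described above can be sketched as a weighted sum over normalized relay attributes, where higher security is better and lower latency and cost are better. The attribute names, weights, and normalization below are illustrative assumptions, not Acheron's actual utility model:

```python
def maut_score(relay, weights):
    """Multi-attribute utility over attributes normalized to [0, 1].
    Latency and cost are 'smaller is better', so they enter as
    (1 - value); security enters directly."""
    return (weights["security"] * relay["security"]
            + weights["latency"] * (1.0 - relay["latency"])
            + weights["cost"] * (1.0 - relay["cost"]))

def select_relay(relays, weights):
    """Route the message over the relay chain with the highest utility."""
    return max(relays, key=lambda r: maut_score(r, weights))

relays = [
    {"name": "relay-a", "security": 0.9, "latency": 0.7, "cost": 0.4},
    {"name": "relay-b", "security": 0.6, "latency": 0.2, "cost": 0.3},
]
weights = {"security": 0.5, "latency": 0.3, "cost": 0.2}
best = select_relay(relays, weights)  # relay-b: lower latency/cost outweigh security here
```

Changing the weights shifts the trade-off per message, which is how a single routing rule can serve both latency-sensitive IoT traffic and security-sensitive financial transfers.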
Citations: 0
Decentralized proximity-aware clustering for collective self-federated learning
IF 7.6 CAS Region 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-01 Epub Date: 2025-12-05 DOI: 10.1016/j.iot.2025.101841
Davide Domini, Nicolas Farabegoli, Gianluca Aguzzi, Mirko Viroli, Lukas Esterle
In recent years, Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaborative model training in IoT systems, enabling clients to learn a global model for tasks like classification, prediction, or anomaly detection without sharing raw data. However, traditional centralized FL architectures suffer from bottlenecks and single points of failure, and struggle with non-IID data. These limitations hinder effective Collective Intelligence in large-scale IoT systems where numerous devices operate across diverse and dynamic environments. Existing clustered FL approaches often retain centralization or overlook how the spatial distribution inherent in IoT deployments directly influences data heterogeneity, challenging both the integration of spatially correlated devices and the establishment of intelligence distributed across the entire system. Creating such intelligence demands both decentralized architectures for scalability and effective integration of devices with similar data distributions. For these reasons, this article introduces Proximity-Aware Self-Federated Learning (PSFL), a novel decentralized approach embodying collective intelligence principles. PSFL leverages field-based coordination to enable IoT devices to form self-federations, dynamically clustered groups that train specialized models based on both spatial proximity and local model characteristics. These self-federations reflect underlying data distributions, creating a distributed ecosystem of specialized models across the network. This approach overcomes global model limitations in non-IID settings through specialized federations based on local data distributions, enhancing performance while maintaining decentralization. We evaluate our approach using the Extended MNIST and CIFAR-100 datasets against state-of-the-art baselines, demonstrating its effectiveness in forming coherent, localized models under non-IID conditions.
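A minimal sketch of the kind of decentralized grouping the abstract describes: devices join the same self-federation when they are both spatially close and have similar local models. The thresholded union-find below is a toy stand-in for the paper's field-based coordination; all thresholds and data shapes are illustrative assumptions.

```python
import math


def cosine(u, v):
    """Cosine similarity between two model vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def form_federations(devices, radius, sim_threshold):
    """Group devices into federations via union-find.
    devices: list of (device_id, (x, y), model_vector).
    Two devices are linked when they are within `radius` of each other
    and their model vectors are cosine-similar enough."""
    parent = {d[0]: d[0] for d in devices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i, (id_i, pos_i, vec_i) in enumerate(devices):
        for id_j, pos_j, vec_j in devices[i + 1:]:
            if math.dist(pos_i, pos_j) <= radius and cosine(vec_i, vec_j) >= sim_threshold:
                union(id_i, id_j)

    groups = {}
    for d_id, _, _ in devices:
        groups.setdefault(find(d_id), []).append(d_id)
    return list(groups.values())
```

Nearby devices with similar models merge into one federation while a distant or dissimilar device stays in its own, mirroring how self-federations track underlying data distributions.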
Citations: 0
Human-centered and context-aware smart ML-based IoT framework for online fatigue detection: A real-world study of football training
IF 7.6 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-01-01 Epub Date: 2025-12-08 DOI: 10.1016/j.iot.2025.101847
Abdelkarim Mamen, Elisabetta De Giovanni, Teodoro Montanaro, Ilaria Sergi, Luigi Patrono
Fatigue is one of the factors that most influences competitive athletes’ performance, leading to injuries and overtraining. To effectively monitor and predict fatigue levels during real-world training, it is necessary to integrate Internet of Things (IoT) technology with machine learning (ML). In this context, the paper presents three main contributions: a) a smart IoT framework that integrates edge and cloud-based modules to collect physiological parameters, monitor fatigue during real-world sessions, and assist coaches in optimizing exercise strategies; b) a dataset collected through the proposed framework in a real pilot study with eight futsal players over five training sessions, each lasting between 35 and 50 min depending on the performed exercises, using ECG and PPG-based sensors; c) an online ML-based fatigue detection module and on-cloud analysis of various ML models, traditional and deep learning, including CNN+GRU, XGBoost, and Transformer architectures, and context-aware feature sets. We evaluated the accuracy of our fatigue detection method using standard metrics, achieving an F1-score of up to 95 % with pilot study data. Our framework incorporates a context-aware design, where contextual information (exercise type) and sensing modality (ECG- or PPG-based) are explicitly integrated with physiological features (HRV and HR) in the fatigue prediction model to adapt it to different settings, improving robustness and interpretability.
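The HRV and HR features that a fatigue model like this consumes can be derived from RR intervals; the sketch below shows two standard textbook definitions (RMSSD and mean heart rate). These are generic formulas, not necessarily the exact feature set the paper uses.

```python
import math


def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms), a
    standard short-term HRV feature often associated with recovery state."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


def mean_hr(rr_intervals_ms):
    """Mean heart rate in beats per minute from RR intervals in milliseconds."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60000.0 / mean_rr
```

Computed over sliding windows during a session, features like these form the physiological inputs that the context flags (exercise type, ECG vs. PPG) then condition.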
Citations: 0
A TinyML device for risk identification for people with hearing loss
IF 7.6 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-01-01 Epub Date: 2025-12-08 DOI: 10.1016/j.iot.2025.101840
Cristian Bautista-Villalpando, Victor Lomas-Barrie
This paper presents an innovative approach to enhancing the quality of life for people with hearing impairment by implementing a portable and discreet TinyML device. The Support System for Identifying Emergency Sounds (SSIES) is designed to recognize four characteristic emergency sounds: car horns, screams, ambulance sirens, and crying babies. Through a unique vibration pattern for each sound, the device provides a haptic response that allows the user to be aware of their surroundings and react if necessary. In addition, the device provides information on the direction of arrival (DOA) of the sound. Various supervised machine learning techniques have been explored in the state of the art to achieve this behavior. In this work, we focus primarily on artificial neural network (ANN) algorithms and their optimization for execution on devices with limited computational resources, a trend known as Machine Learning at the Edge.
The methodology used in this project is based on a combination of the HW/SW co-design and development lifecycle model for embedded systems and the lifecycle of ML-based solutions.
The results obtained indicate that the proposed TinyML device is feasible and has the potential to significantly improve environmental awareness for people with hearing impairment.
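A hypothetical sketch of the class-to-haptics mapping a device like SSIES needs: each detected sound class maps to a vibration pattern, and the DOA estimate selects which of several equally spaced motors to drive. The specific patterns and the four-motor layout are assumptions; the abstract does not specify them.

```python
# Hypothetical patterns: alternating on/off durations in milliseconds.
VIBRATION_PATTERNS = {
    "car_horn": [200, 100, 200],               # two short pulses
    "scream": [500, 100, 500, 100, 500],       # three long pulses
    "ambulance_siren": [1000],                 # one sustained pulse
    "crying_baby": [100, 100, 100, 100, 100],  # rapid burst
}


def pattern_duration_ms(label):
    """Total time the pattern occupies, pauses included."""
    return sum(VIBRATION_PATTERNS[label])


def motor_for_doa(angle_deg, n_motors=4):
    """Quantize a direction-of-arrival estimate (degrees, 0 = front,
    clockwise) to the index of the nearest of n equally spaced motors."""
    sector = 360.0 / n_motors
    return int(((angle_deg % 360) + sector / 2) // sector) % n_motors
```

Distinct pulse rhythms let the wearer distinguish sound classes without looking at a display, while the motor index conveys the rough direction to turn toward.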
Citations: 0
LoRaGeo-PSW: a prompt-aligned large language model for few-shot fingerprint geolocation in urban LoRaWAN networks
IF 7.6 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-01-01 Epub Date: 2025-11-10 DOI: 10.1016/j.iot.2025.101821
Wenbin Shi, Zhongxu Zhan, Jingsheng Lei, Xingli Gan
Accurate and efficient geolocation remains a critical need for low-power IoT networks, particularly in large urban environments where GNSS-based positioning is often infeasible. Fingerprint-based localization using LoRaWAN signals offers a scalable alternative, but conventional methods depend on rigid matching algorithms and static radio maps, leading to poor performance in complex cityscapes. This work proposes LoRaGeo-PSW, a novel geolocation framework based on large language models (LLMs) that aligns structured wireless signal features with prompt-driven reasoning. Built upon a GPT-2 foundation, the model encodes LoRaWAN fingerprints—including RSSI and SNR from multiple gateways—as token sequences, and interprets them in context using a structured workflow-aware architecture. By integrating signal processing workflows via lightweight adapter modules and employing low-rank adaptation (LoRA), LoRaGeo-PSW offers both interpretability and parameter efficiency. Crucially, the model enables few-shot localization by conditioning on a handful of example fingerprints, thereby adapting to new environments without retraining. Evaluated on a public LoRaWAN dataset from urban Antwerp with over 130,000 transmissions, the model achieves a median localization error of approximately 150 m, substantially surpassing classical fingerprinting and deep learning baselines. This work introduces a new paradigm for wireless localization, demonstrating that LLMs can effectively bridge structured signal reasoning and geospatial inference through prompt-driven alignment.
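For context, the classical fingerprinting baseline that approaches like this are compared against can be as simple as k-nearest-neighbors over a radio map of per-gateway RSSI vectors; the sketch below averages the positions of the k closest reference fingerprints. This is a generic baseline, not LoRaGeo-PSW itself, and the data layout is an illustrative assumption.

```python
import math


def knn_locate(fingerprint, radio_map, k=3):
    """Classical RSSI-fingerprint baseline: average the positions of the k
    reference points whose gateway RSSI vectors are closest in Euclidean
    distance. radio_map is a list of (rssi_vector, (lat, lon)) pairs; all
    vectors must cover the same gateways in the same order."""
    ranked = sorted(
        radio_map,
        key=lambda entry: math.dist(fingerprint, entry[0]),
    )[:k]
    lat = sum(p[1][0] for p in ranked) / k
    lon = sum(p[1][1] for p in ranked) / k
    return lat, lon
```

Rigid vector matching like this is exactly what degrades in dense cityscapes, which motivates the prompt-driven, context-aware matching the paper proposes instead.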
Citations: 0