Self-trainable and adaptive sensor intelligence for selective data generation.

Frontiers in Artificial Intelligence (Impact Factor 4.7, JCR Q2, Computer Science, Artificial Intelligence) · Published: 2025-01-22 · eCollection date: 2024-01-01 · DOI: 10.3389/frai.2024.1403187
Arghavan Rezvani, Wenjun Huang, Hanning Chen, Yang Ni, Mohsen Imani
Journal: Frontiers in Artificial Intelligence, vol. 7, article 1403187 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794785/pdf/
Citations: 0

Abstract

With the increasing integration of machine learning into IoT devices, managing energy consumption and data transmission has become a critical challenge. Many IoT applications depend on complex computations performed on server-side infrastructure, necessitating efficient methods to reduce unnecessary data transmission. One promising solution involves deploying compact machine learning models near sensors, enabling intelligent identification and transmission of only relevant data frames. However, existing near-sensor models lack adaptability, as they require extensive pre-training and are often rigidly configured prior to deployment. This paper proposes a novel framework that fuses online learning, active learning, and knowledge distillation to enable adaptive, resource-efficient near-sensor intelligence. Our approach allows near-sensor models to dynamically fine-tune their parameters post-deployment using online learning, eliminating the need for extensive pre-labeling and training. Through a sequential training and execution process, the framework achieves continuous adaptability without prior knowledge of the deployment environment. To enhance performance while preserving model efficiency, we integrate knowledge distillation, enabling the transfer of critical insights from a larger teacher model to a compact student model. Additionally, active learning reduces the required training data while maintaining competitive performance. We validated our framework on both benchmark data from the MS COCO dataset and in a simulated IoT environment. The results demonstrate significant improvements in energy efficiency and data transmission optimization, highlighting the practical applicability of our method in real-world IoT scenarios.
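The paper itself provides no code here; as an illustrative sketch only, two of the techniques named in the abstract — temperature-scaled knowledge distillation from a teacher to a compact student model, and active learning via uncertainty-based frame selection — can be expressed roughly as follows. The function names, the temperature value, and the least-confidence selection criterion are our own assumptions, not the authors' implementation:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = [v / T for v in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs, scaled by T^2
    # (the standard distillation formulation). Zero when the student already
    # matches the teacher; positive otherwise.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

def select_uncertain(frame_logits, k=2):
    # Active-learning step (least-confidence sampling, a common heuristic):
    # return the indices of the k frames whose top softmax probability is
    # lowest, i.e. the frames the near-sensor model is least sure about.
    conf = [max(softmax(l)) for l in frame_logits]
    return sorted(range(len(conf)), key=lambda i: conf[i])[:k]
```

In the setting the abstract describes, a loop like this would compute the distillation loss against teacher outputs for the frames that `select_uncertain` flags, so the compact student fine-tunes post-deployment on only the most informative data.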


Source journal: Frontiers in Artificial Intelligence · CiteScore 6.10 · Self-citation rate: 2.50% · Articles per year: 272 · Review time: 13 weeks