TinyWolf — Efficient on-device TinyML training for IoT using enhanced Grey Wolf Optimization

Impact Factor: 6.0 · CAS Tier 3 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Journal: Internet of Things · Publication date: 2024-09-05 · DOI: 10.1016/j.iot.2024.101365
Subhrangshu Adhikary, Subhayu Dutta, Ashutosh Dhar Dwivedi
{"title":"TinyWolf - 利用增强型灰狼优化技术为物联网提供高效的设备上 TinyML 训练","authors":"Subhrangshu Adhikary ,&nbsp;Subhayu Dutta ,&nbsp;Ashutosh Dhar Dwivedi","doi":"10.1016/j.iot.2024.101365","DOIUrl":null,"url":null,"abstract":"<div><p>Training a deep learning model generally requires a huge amount of memory and processing power. Once trained, the learned model can make predictions very fast with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems which is also known as TinyML. Although few attempts have been made, the limits of the state-of-the-art training of a deep learning model within a microcontroller can be pushed further. Generally deep learning models are trained with gradient optimizers which predict with high accuracy but require a very high amount of resources. On the other hand, nature-inspired meta-heuristic optimizers can be used to build a fast approximation of the model’s optimal solution with low resources. After a rigorous test, we have found that Grey Wolf Optimizer can be modified for enhanced uses of main memory, paging and swap space among <span><math><mrow><mi>α</mi><mo>,</mo><mspace></mspace><mi>β</mi><mo>,</mo><mspace></mspace><mi>δ</mi></mrow></math></span> and <span><math><mi>ω</mi></math></span> wolves. This modification saved up to 71% memory requirements compared to gradient optimizers. We have used this modification to train the TinyML model within a microcontroller of 256KB RAM. The performances of the proposed framework have been meticulously benchmarked on 13 open-sourced datasets.</p></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"28 ","pages":"Article 101365"},"PeriodicalIF":6.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2542660524003068/pdfft?md5=ab42e32e095597b7bee6c567498b913a&pid=1-s2.0-S2542660524003068-main.pdf","citationCount":"0","resultStr":"{\"title\":\"TinyWolf — Efficient on-device TinyML training for IoT using enhanced Grey Wolf Optimization\",\"authors\":\"Subhrangshu Adhikary ,&nbsp;Subhayu Dutta ,&nbsp;Ashutosh Dhar Dwivedi\",\"doi\":\"10.1016/j.iot.2024.101365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Training a deep learning model generally requires a huge amount of memory and processing power. Once trained, the learned model can make predictions very fast with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems which is also known as TinyML. Although few attempts have been made, the limits of the state-of-the-art training of a deep learning model within a microcontroller can be pushed further. Generally deep learning models are trained with gradient optimizers which predict with high accuracy but require a very high amount of resources. On the other hand, nature-inspired meta-heuristic optimizers can be used to build a fast approximation of the model’s optimal solution with low resources. After a rigorous test, we have found that Grey Wolf Optimizer can be modified for enhanced uses of main memory, paging and swap space among <span><math><mrow><mi>α</mi><mo>,</mo><mspace></mspace><mi>β</mi><mo>,</mo><mspace></mspace><mi>δ</mi></mrow></math></span> and <span><math><mi>ω</mi></math></span> wolves. This modification saved up to 71% memory requirements compared to gradient optimizers. 
We have used this modification to train the TinyML model within a microcontroller of 256KB RAM. The performances of the proposed framework have been meticulously benchmarked on 13 open-sourced datasets.</p></div>\",\"PeriodicalId\":29968,\"journal\":{\"name\":\"Internet of Things\",\"volume\":\"28 \",\"pages\":\"Article 101365\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2542660524003068/pdfft?md5=ab42e32e095597b7bee6c567498b913a&pid=1-s2.0-S2542660524003068-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet of Things\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2542660524003068\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet of Things","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2542660524003068","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Training a deep learning model generally requires a huge amount of memory and processing power. Once trained, the learned model can make predictions very quickly with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems, an approach known as TinyML. Although a few attempts have been made, the limits of state-of-the-art training of deep learning models within a microcontroller can be pushed further. Deep learning models are generally trained with gradient optimizers, which predict with high accuracy but require a very large amount of resources. Nature-inspired meta-heuristic optimizers, on the other hand, can build a fast approximation of the model's optimal solution with few resources. After rigorous testing, we found that the Grey Wolf Optimizer can be modified to make better use of main memory, paging and swap space among the α, β, δ and ω wolves. This modification reduces memory requirements by up to 71% compared to gradient optimizers. We used this modification to train a TinyML model within a microcontroller with 256 KB of RAM. The performance of the proposed framework has been benchmarked on 13 open-source datasets.
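The abstract describes the pack hierarchy (α, β, δ leaders and ω followers) that the modified optimizer builds on, but it does not reproduce the paper's memory-aware paging and swap scheme. As a rough, host-side illustration only, the sketch below implements standard Grey Wolf Optimization in Python/NumPy over a flat weight vector; all names (grey_wolf_optimize, loss_fn, n_wolves, n_iters) are illustrative assumptions, not taken from the TinyWolf code. The working set here is just the pack of candidate weight vectors, with no per-layer gradient or activation buffers, which is the property that makes a memory-frugal, microcontroller-friendly variant plausible; the specific α/β/δ/ω memory management that yields the reported 71% saving is the paper's contribution and is not shown.

```python
import numpy as np

def grey_wolf_optimize(loss_fn, dim, n_wolves=8, n_iters=100, bounds=(-1.0, 1.0), seed=0):
    """Standard Grey Wolf Optimization over a flat vector of model weights.

    loss_fn maps a weight vector of shape (dim,) to a scalar loss; the best
    (alpha) weight vector found is returned. Illustrative sketch, not TinyWolf.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Each wolf is one candidate set of model weights.
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

    for t in range(n_iters):
        # Rank the pack by loss: alpha, beta and delta lead; the rest are omega wolves.
        order = np.argsort([loss_fn(w) for w in wolves])
        leaders = [wolves[j].copy() for j in order[:3]]  # alpha, beta, delta

        a = 2.0 - 2.0 * t / n_iters  # exploration factor decays linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a          # controls attraction vs. repulsion
                C = 2.0 * r2                  # random emphasis on the leader
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            # Each omega wolf moves toward the average of the three leaders' pulls.
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)

    return min(wolves, key=loss_fn)

# Hypothetical usage: fit a tiny logistic classifier (9 weights) without gradients.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def logistic_loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w[:8] + w[8])))
    return -np.mean(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))

best_w = grey_wolf_optimize(logistic_loss, dim=9, n_wolves=10, n_iters=200)
print("final loss:", logistic_loss(best_w))
```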

Source journal: Internet of Things
CiteScore: 3.60
Self-citation rate: 5.10%
Articles published: 115
Review time: 37 days
Journal description: Internet of Things; Engineering Cyber Physical Human Systems is a comprehensive journal encouraging cross-collaboration between researchers, engineers and practitioners in the field of IoT and Cyber Physical Human Systems. The journal offers a unique platform to exchange scientific information on the entire breadth of technology, science, and societal applications of the IoT. The journal places a high priority on timely publication and provides a home for high-quality work. Furthermore, it is interested in publishing topical Special Issues on any aspect of the IoT.