Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation

Marcus Rüb, Axel Sikora, Daniel Mueller-Gritschneder
{"title":"Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation","authors":"Marcus Rüb, Axel Sikora, Daniel Mueller-Gritschneder","doi":"arxiv-2409.07109","DOIUrl":null,"url":null,"abstract":"This study introduces TinyPropv2, an innovative algorithm optimized for\non-device learning in deep neural networks, specifically designed for low-power\nmicrocontroller units. TinyPropv2 refines sparse backpropagation by dynamically\nadjusting the level of sparsity, including the ability to selectively skip\ntraining steps. This feature significantly lowers computational effort without\nsubstantially compromising accuracy. Our comprehensive evaluation across\ndiverse datasets CIFAR 10, CIFAR100, Flower, Food, Speech Command, MNIST, HAR,\nand DCASE2020 reveals that TinyPropv2 achieves near-parity with full training\nmethods, with an average accuracy drop of only around 1 percent in most cases.\nFor instance, against full training, TinyPropv2's accuracy drop is minimal, for\nexample, only 0.82 percent on CIFAR 10 and 1.07 percent on CIFAR100. In terms\nof computational effort, TinyPropv2 shows a marked reduction, requiring as\nlittle as 10 percent of the computational effort needed for full training in\nsome scenarios, and consistently outperforms other sparse training\nmethodologies. These findings underscore TinyPropv2's capacity to efficiently\nmanage computational resources while maintaining high accuracy, positioning it\nas an advantageous solution for advanced embedded device applications in the\nIoT ecosystem.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07109","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This study introduces TinyPropv2, an innovative algorithm optimized for on-device learning in deep neural networks, specifically designed for low-power microcontroller units. TinyPropv2 refines sparse backpropagation by dynamically adjusting the level of sparsity, including the ability to selectively skip training steps. This significantly lowers computational effort without substantially compromising accuracy. Our comprehensive evaluation across eight diverse datasets (CIFAR-10, CIFAR-100, Flower, Food, Speech Command, MNIST, HAR, and DCASE2020) reveals that TinyPropv2 achieves near-parity with full training, with an average accuracy drop of only around 1 percent in most cases. Relative to full training, the drop is minimal: only 0.82 percent on CIFAR-10 and 1.07 percent on CIFAR-100. In terms of computational effort, TinyPropv2 shows a marked reduction, requiring as little as 10 percent of the effort needed for full training in some scenarios, and it consistently outperforms other sparse training methodologies. These findings underscore TinyPropv2's capacity to manage computational resources efficiently while maintaining high accuracy, positioning it as an advantageous solution for advanced embedded device applications in the IoT ecosystem.
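
The abstract does not specify how the sparse gradient components are selected or when a step is skipped, so the sketch below is only a hypothetical illustration of the general idea: propagate the error through a layer using only its k largest-magnitude components, with k derived from a dynamic sparsity level, and skip the step outright when the error signal is already negligible. The function name, the top-k rule, and the thresholds are illustrative assumptions, not TinyPropv2's actual formulation.

```python
# Hypothetical sketch of dynamic sparse backpropagation for one dense
# layer (y = W @ x). Not the paper's actual algorithm: the top-k rule,
# the skip threshold, and all names here are illustrative assumptions.
import numpy as np

def sparse_backprop_step(W, x, grad_out, lr=0.01, sparsity=0.1, skip_thresh=1e-3):
    """Apply one sparse training step; return (updated W, upstream gradient)."""
    # Dynamically skip the whole step when the error signal is negligible,
    # saving the backward pass and weight update for this layer entirely.
    if np.max(np.abs(grad_out)) < skip_thresh:
        return W, np.zeros_like(x)

    # Propagate only the k largest-magnitude error components, where k
    # follows the current sparsity level (10% sparsity -> roughly 10% of
    # the multiply-accumulate operations of a full backward pass).
    k = max(1, int(sparsity * grad_out.size))
    top = np.argpartition(np.abs(grad_out), -k)[-k:]

    # Upstream gradient from the selected components only, computed with
    # the pre-update weights as in standard backpropagation.
    grad_in = W[top].T @ grad_out[top]

    # Update only the k selected rows of W.
    W[top] -= lr * np.outer(grad_out[top], x)
    return W, grad_in
```

In a scheme like this, `sparsity` itself could be adapted per layer or per epoch, for example raised as training converges; that kind of runtime adjustment of the sparsity level is what the abstract's "dynamically adjusting" plausibly refers to.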