µ-FF: On-Device Forward-Forward Training Algorithm for Microcontrollers

Fabrizio De Vita, Rawan M. A. Nawaiseh, Dario Bruneo, Valeria Tomaselli, Marco Lattuada, M. Falchetto
{"title":"µ-FF: On-Device Forward-Forward Training Algorithm for Microcontrollers","authors":"Fabrizio De Vita, Rawan M. A. Nawaiseh, Dario Bruneo, Valeria Tomaselli, Marco Lattuada, M. Falchetto","doi":"10.1109/SMARTCOMP58114.2023.00024","DOIUrl":null,"url":null,"abstract":"Deliver intelligence into low-cost hardware e.g., Microcontroller Units (MCUs) for the realization of low-power tailored applications nowadays is an emerging research area. However, the training of deep learning models on embedded systems is still challenging mainly due to their low amount of memory, available energy, and computing power which significantly limit the complexity of the tasks that can be executed, thus making impossible use of traditional training algorithms such as backpropagation (BP). During these years techniques such as weights compression and quantization have emerged as solutions, but they only address the inference phase. Forward-Forward (FF) is a novel training algorithm that has been recently proposed as a possible alternative to BP when the available resources are limited. This is achieved by training the layers of a neural network separately, thus reducing the required energy and memory. In this paper, we propose µ-FF, a variation of the original FF which tackles the training process with a multivariate Ridge regression approach and allows to find closed-form solution by using the Mean Squared Error (MSE) as loss function. Such an approach does not use BP and does not need to compute gradients, thus saving memory and computing resources to enable the on-device training directly on MCUs of the STM32 family. Experimental results conducted on the Fashion-MNIST dataset demonstrate the effectiveness of the proposed approach in terms of memory and accuracy.","PeriodicalId":163556,"journal":{"name":"2023 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP58114.2023.00024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Delivering intelligence to low-cost hardware, e.g., Microcontroller Units (MCUs), to realize low-power tailored applications is an emerging research area. However, training deep learning models on embedded systems is still challenging, mainly because their limited memory, available energy, and computing power significantly restrict the complexity of the tasks that can be executed, making it impossible to use traditional training algorithms such as backpropagation (BP). In recent years, techniques such as weight compression and quantization have emerged as solutions, but they only address the inference phase. Forward-Forward (FF) is a novel training algorithm that has recently been proposed as a possible alternative to BP when the available resources are limited. It achieves this by training the layers of a neural network separately, thus reducing the required energy and memory. In this paper, we propose µ-FF, a variation of the original FF that tackles the training process with a multivariate Ridge regression approach and allows a closed-form solution to be found by using the Mean Squared Error (MSE) as the loss function. Such an approach does not use BP and does not need to compute gradients, thus saving memory and computing resources and enabling on-device training directly on MCUs of the STM32 family. Experimental results on the Fashion-MNIST dataset demonstrate the effectiveness of the proposed approach in terms of memory and accuracy.
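To make the idea concrete, the sketch below shows what layer-wise training with a closed-form ridge-regression solve can look like in NumPy. It is an illustration under assumptions, not the paper's implementation: the helper names (ridge_fit, train_layerwise, predict), the random label-projection targets, and the ReLU activation are hypothetical choices, while the paper defines its own per-layer formulation for STM32-class MCUs.

```python
# Minimal sketch (not the authors' code) of layer-wise training via
# closed-form multivariate ridge regression, in the spirit of the µ-FF
# idea summarized in the abstract: no backpropagation, no gradients.
# Per-layer targets, activation, and any FF-style positive/negative
# sample handling are assumptions for illustration only.
import numpy as np

def ridge_fit(X, Y, lam=1e-2):
    """Closed-form ridge regression: solve (X^T X + lam*I) W = X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def train_layerwise(X, Y_onehot, hidden_sizes=(128, 64), lam=1e-2, seed=0):
    """Train each layer independently with a single closed-form solve."""
    rng = np.random.default_rng(seed)
    weights, A = [], X
    for h in hidden_sizes:
        # Hypothetical per-layer target: the one-hot labels projected to the
        # layer width by a fixed random matrix (the paper's choice may differ).
        T = Y_onehot @ rng.standard_normal((Y_onehot.shape[1], h))
        W = ridge_fit(A, T, lam)          # MSE-optimal weights for this layer
        weights.append(W)
        A = np.maximum(A @ W, 0.0)        # ReLU, then feed the next layer
    weights.append(ridge_fit(A, Y_onehot, lam))  # closed-form linear readout
    return weights

def predict(weights, X):
    """Forward pass through the trained layers; argmax over the readout."""
    A = X
    for W in weights[:-1]:
        A = np.maximum(A @ W, 0.0)
    return np.argmax(A @ weights[-1], axis=1)
```

In this sketch each layer is fit with one small linear solve over its input activations, which is the memory- and compute-saving point the abstract makes about avoiding gradient computation and backpropagated errors on the device.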