PRIOT: Pruning-Based Integer-Only Transfer Learning for Embedded Systems

IEEE Embedded Systems Letters · Pub Date: 2024-10-23 · DOI: 10.1109/LES.2024.3485003 · IF 2.0 · JCR Q3, Computer Science, Hardware & Architecture
Honoka Anada;Sefutsu Ryu;Masayuki Usui;Tatsuya Kaneko;Shinya Takamaeda-Yamazaki
{"title":"PRIOT: Pruning-Based Integer-Only Transfer Learning for Embedded Systems","authors":"Honoka Anada;Sefutsu Ryu;Masayuki Usui;Tatsuya Kaneko;Shinya Takamaeda-Yamazaki","doi":"10.1109/LES.2024.3485003","DOIUrl":null,"url":null,"abstract":"On-device transfer learning is crucial for adapting a common backbone model to the unique environment of each edge device. Tiny microcontrollers, such as the Raspberry Pi Pico, are key targets for on-device learning but often lack floating-point units, necessitating integer-only training. Dynamic computation of quantization scale factors, which is adopted in former studies, incurs high computational costs. Therefore, this letter focuses on integer-only training with static-scale factors, which is challenging with existing training methods. We propose a new training method named PRIOT, which optimizes the network by pruning selected edges rather than updating weights, allowing effective training with static-scale factors. The pruning pattern is determined by the edge-popup algorithm, which trains a parameter named score assigned to each edge instead of the original parameters and prunes the edges with low scores before inference. Additionally, we introduce a memory-efficient variant, PRIOT-S, which only assigns scores to a small fraction of edges. We implement PRIOT and PRIOT-S on the Raspberry Pi Pico and evaluate their accuracy and computational costs using a tiny CNN model on the rotated MNIST dataset and the VGG11 model on the rotated CIFAR-10 dataset. 
Our results demonstrate that PRIOT improves accuracy by 8.08–33.75 percentage points over existing methods, while PRIOT-S reduces memory footprint with minimal accuracy loss.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"17 2","pages":"87-90"},"PeriodicalIF":2.0000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Embedded Systems Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10729874/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

On-device transfer learning is crucial for adapting a common backbone model to the unique environment of each edge device. Tiny microcontrollers, such as the Raspberry Pi Pico, are key targets for on-device learning but often lack floating-point units, necessitating integer-only training. Dynamic computation of quantization scale factors, adopted in prior studies, incurs high computational costs. This letter therefore focuses on integer-only training with static scale factors, which is challenging with existing training methods. We propose a new training method named PRIOT, which optimizes the network by pruning selected edges rather than updating weights, allowing effective training with static scale factors. The pruning pattern is determined by the edge-popup algorithm, which trains a score parameter assigned to each edge in place of the original weights and prunes the edges with low scores before inference. Additionally, we introduce a memory-efficient variant, PRIOT-S, which assigns scores to only a small fraction of edges. We implement PRIOT and PRIOT-S on the Raspberry Pi Pico and evaluate their accuracy and computational costs using a tiny CNN model on the rotated MNIST dataset and the VGG11 model on the rotated CIFAR-10 dataset. Our results demonstrate that PRIOT improves accuracy by 8.08–33.75 percentage points over existing methods, while PRIOT-S reduces memory footprint with minimal accuracy loss.
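The score-based pruning idea behind the edge-popup algorithm can be sketched in a few lines: each edge gets a trainable score, the backbone weights stay frozen, and inference uses only the top-scoring fraction of edges. The sketch below is a minimal NumPy illustration under assumed shapes and an assumed keep ratio; it is not the paper's implementation (which is integer-only and runs on a microcontroller), and the names `popup_mask` and `keep_ratio` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; the paper's models (a tiny CNN, VGG11) differ.
w = rng.standard_normal((8, 4))       # frozen backbone weights (never updated)
scores = rng.standard_normal((8, 4))  # trainable "score" per edge
keep_ratio = 0.5                      # assumed fraction of edges to keep

def popup_mask(scores, keep_ratio):
    """Binary mask keeping the top `keep_ratio` fraction of edges by score."""
    k = int(round(keep_ratio * scores.size))
    threshold = np.sort(scores, axis=None)[-k]  # k-th largest score
    return (scores >= threshold).astype(scores.dtype)

def forward(x, w, scores, keep_ratio):
    # Inference uses the frozen weights gated by the score-derived mask.
    # Training would update `scores` (e.g., via a straight-through
    # estimator on the mask), never `w` itself.
    return x @ (w * popup_mask(scores, keep_ratio)).T

x = rng.standard_normal((1, 4))
y = forward(x, w, scores, keep_ratio)
print(y.shape)  # (1, 8)
```

Because only the scores are trained and the weights are fixed, no weight update ever changes the range of activations, which is what makes static (precomputed) quantization scale factors viable in this setting.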
Source journal: IEEE Embedded Systems Letters (Engineering: Control and Systems Engineering)
CiteScore: 3.30
Self-citation rate: 0.00%
Articles per year: 65
About the journal: The IEEE Embedded Systems Letters (ESL) provides a forum for rapid dissemination of the latest technical advances in embedded systems and related areas of embedded software. The emphasis is on models, methods, and tools that ensure secure, correct, efficient, and robust design of embedded systems and their applications.