Exploring Quantization in Few-Shot Learning

Meiqi Wang, Ruixin Xue, Jun Lin, Zhongfeng Wang
{"title":"探讨量化在短时学习中的应用","authors":"Meiqi Wang, Ruixin Xue, Jun Lin, Zhongfeng Wang","doi":"10.1109/NEWCAS49341.2020.9159767","DOIUrl":null,"url":null,"abstract":"Training the neural networks on chip, which enables the local privacy data to be stored and processed at edge platforms, is earning vital importance with the explosive growth of Internet of Things (IoT). Although the on-chip training has been widely investigated in previous arts, there are few works related to the on-chip learning of Few-Shot Learning (FSL), an emerging topic which explores effective learning with only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that helps reduce the memory footprint and computational resource requirements of a full-precision neural network to enable the on-chip deployment of FSL. We first perform extensive experiments on quantization of three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, for both training and testing stages. Experimental results show that the 16-bit quantized training and testing models can be achieved with negligible losses on MAML and Meta-SGD. Then a comprehensive analysis is presented which demonstrates that a most favorable trade-off between accuracy, computational complexity, and model size can be achieved using the Meta-SGD model. This paves the way for the deployment of FSL system on the resource-constrained platforms.","PeriodicalId":135163,"journal":{"name":"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Exploring Quantization in Few-Shot Learning\",\"authors\":\"Meiqi Wang, Ruixin Xue, Jun Lin, Zhongfeng Wang\",\"doi\":\"10.1109/NEWCAS49341.2020.9159767\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training the neural networks on chip, which enables the local privacy data to be stored and processed at edge platforms, is earning vital importance with the explosive growth of Internet of Things (IoT). Although the on-chip training has been widely investigated in previous arts, there are few works related to the on-chip learning of Few-Shot Learning (FSL), an emerging topic which explores effective learning with only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that helps reduce the memory footprint and computational resource requirements of a full-precision neural network to enable the on-chip deployment of FSL. We first perform extensive experiments on quantization of three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, for both training and testing stages. Experimental results show that the 16-bit quantized training and testing models can be achieved with negligible losses on MAML and Meta-SGD. Then a comprehensive analysis is presented which demonstrates that a most favorable trade-off between accuracy, computational complexity, and model size can be achieved using the Meta-SGD model. 
This paves the way for the deployment of FSL system on the resource-constrained platforms.\",\"PeriodicalId\":135163,\"journal\":{\"name\":\"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NEWCAS49341.2020.9159767\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 18th IEEE International New Circuits and Systems Conference (NEWCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NEWCAS49341.2020.9159767","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Training neural networks on chip, which enables local private data to be stored and processed at edge platforms, is gaining vital importance with the explosive growth of the Internet of Things (IoT). Although on-chip training has been widely investigated in prior work, few studies address on-chip learning for Few-Shot Learning (FSL), an emerging topic that explores effective learning from only a small number of samples. In this paper, we explore the effectiveness of quantization, a mainstream compression method that reduces the memory footprint and computational resource requirements of a full-precision neural network, to enable the on-chip deployment of FSL. We first perform extensive experiments on the quantization of three mainstream meta-learning-based FSL networks, MAML, Meta-SGD, and Reptile, for both the training and testing stages. Experimental results show that 16-bit quantized training and testing models can be achieved with negligible losses on MAML and Meta-SGD. A comprehensive analysis then demonstrates that the most favorable trade-off among accuracy, computational complexity, and model size is achieved with the Meta-SGD model. This paves the way for the deployment of FSL systems on resource-constrained platforms.
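
To make the idea concrete, the 16-bit quantized training described in the abstract can be illustrated with fake quantization (quantize-dequantize) applied inside a meta-learning inner loop. The following PyTorch sketch is not the authors' implementation: the toy linear model, the data, the straight-through estimator, and the per-parameter learning rate alpha are illustrative assumptions; only the general idea, 16-bit uniform quantization of weights, activations, and gradients in a Meta-SGD-style adaptation step, follows the abstract.

import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 16) -> torch.Tensor:
    # Symmetric uniform quantize-dequantize with a per-tensor scale.
    # The straight-through estimator keeps gradients flowing through
    # the non-differentiable round() during training.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax, qmax) * scale
    return x + (q - x).detach()

# Toy 5-shot regression task with a linear model (illustrative only).
torch.manual_seed(0)
w = torch.randn(3, 1, requires_grad=True)   # model weights
alpha = torch.full_like(w, 0.1)             # Meta-SGD per-parameter LR (learned in the outer loop)
support_x = torch.randn(5, 3)
support_y = torch.randn(5, 1)

# Inner-loop adaptation with 16-bit quantized weights and activations.
w_q = fake_quantize(w, num_bits=16)
pred = fake_quantize(support_x @ w_q, num_bits=16)
loss = ((pred - support_y) ** 2).mean()
(grad_w,) = torch.autograd.grad(loss, w, create_graph=True)

# Meta-SGD update: learned per-parameter step size, quantized gradient.
w_adapted = w - alpha * fake_quantize(grad_w, num_bits=16)
print("adapted weights:", w_adapted.detach().squeeze())

A symmetric uniform grid with a per-tensor scale is one common design choice for 16-bit quantization; the paper's exact quantization scheme may differ. Passing create_graph=True preserves the second-order computation graph that MAML-style outer-loop meta-updates require.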