Mixed-Method Quantization of Convolutional Neural Networks on an Adaptive FPGA Accelerator

Hadee Madadum, Y. Becerikli
DOI: 10.1109/UBMK55850.2022.9919597
Published in: 2022 7th International Conference on Computer Science and Engineering (UBMK), 2022-09-14
Citations: 1

Abstract

Research on quantization of Convolutional Neural Networks (ConNN) has gained attention recently due to the increasing demand for deploying ConNN models on embedded devices. Quantization compresses the ConNN model to simplify complex computation and reduce resource requirements. However, naively mapping a 32-bit ConNN model to a lower bit-width can degrade accuracy. One limitation of quantization is the diversity of parameters in a ConNN model: different layers have different structures, so applying the same quantization method to all layers can yield sub-optimal performance. We therefore propose mixed-method quantization, a compression technique that applies different quantization approaches within a single ConNN model. We also propose an adaptive accelerator for quantized ConNNs whose architecture is reconfigured at runtime using the FPGA's partial reconfiguration capability. Experimental results show that the proposed design achieves accuracy close to the 32-bit models when quantizing ConNN models to 4 bits without retraining. In addition, the adaptive accelerator achieves peak resource efficiencies of 1.11 GOP/s/DSP and 1.49 GOP/s/kLUT.
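The abstract does not specify the quantization scheme used for each layer. As a point of reference, the following is a minimal sketch (not the paper's method) of symmetric uniform quantization with a per-layer scale, applied at different bit-widths per layer — the kind of baseline that mixed-method, per-layer approaches typically build on. The function names and the 4-bit/8-bit split are illustrative assumptions.

```python
import numpy as np

def quantize_symmetric(w, bits):
    # Symmetric uniform quantization: map floats to signed integers in
    # [-(2^(b-1)-1), 2^(b-1)-1] using one scale per tensor (per layer).
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit, 127 for 8-bit
    scale = float(np.max(np.abs(w))) / qmax  # per-layer scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float tensor for accuracy evaluation.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Hypothetical two-layer model: a "sensitive" layer kept at 8 bits,
# a robust layer pushed down to 4 bits (mixed bit-widths per layer).
layers = {"conv1": (rng.standard_normal((3, 3)).astype(np.float32), 8),
          "conv2": (rng.standard_normal((3, 3)).astype(np.float32), 4)}

errors = {}
for name, (w, bits) in layers.items():
    q, s = quantize_symmetric(w, bits)
    # Round-to-nearest error is bounded by half a quantization step.
    errors[name] = (float(np.max(np.abs(w - dequantize(q, s)))), s)
```

Because the integer range grows with the bit-width, the 8-bit layer's quantization step (and hence its worst-case error) is roughly 16x smaller than the 4-bit layer's for tensors of similar magnitude, which is why mixed per-layer bit-widths can preserve accuracy where uniform 4-bit quantization cannot.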