Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access

Minkyu Kim, Jae-sun Seo
{"title":"具有条件计算和低外部存储器访问的深度卷积神经网络加速器","authors":"Minkyu Kim, Jae-sun Seo","doi":"10.1109/CICC48029.2020.9075931","DOIUrl":null,"url":null,"abstract":"This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce many redundant convolution operations that are followed by max-pooling operations, we propose precision-cascading, where the input features are divided into a number of low-precision groups and approximate convolutions with only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is performed only on the maximum pooling output that is found. This way, the total number of bit-wise convolutions can be reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity per low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory access that involve zero inputs. By jointly optimizing the conditional computing scheme and hardware architecture, the 40nm prototype chip demonstrates a peak energy-efficiency of 8.85 TOPS/W at 0.9V supply and low external memory access of 55.31 MB (or 0.0018 access/MAC) for ImageNet classification with VGG-16 CNN.","PeriodicalId":409525,"journal":{"name":"2020 IEEE Custom Integrated Circuits Conference (CICC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access\",\"authors\":\"Minkyu Kim, Jae-sun Seo\",\"doi\":\"10.1109/CICC48029.2020.9075931\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce many redundant convolution operations that are followed by max-pooling operations, we propose precision-cascading, where the input features are divided into a number of low-precision groups and approximate convolutions with only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is performed only on the maximum pooling output that is found. This way, the total number of bit-wise convolutions can be reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity per low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory access that involve zero inputs. 
By jointly optimizing the conditional computing scheme and hardware architecture, the 40nm prototype chip demonstrates a peak energy-efficiency of 8.85 TOPS/W at 0.9V supply and low external memory access of 55.31 MB (or 0.0018 access/MAC) for ImageNet classification with VGG-16 CNN.\",\"PeriodicalId\":409525,\"journal\":{\"name\":\"2020 IEEE Custom Integrated Circuits Conference (CICC)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Custom Integrated Circuits Conference (CICC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CICC48029.2020.9075931\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Custom Integrated Circuits Conference (CICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CICC48029.2020.9075931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce many redundant convolution operations that are followed by max-pooling operations, we propose precision-cascading, where the input features are divided into a number of low-precision groups and approximate convolutions with only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is performed only on the maximum pooling output that is found. This way, the total number of bit-wise convolutions can be reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity per low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory access that involve zero inputs. By jointly optimizing the conditional computing scheme and hardware architecture, the 40nm prototype chip demonstrates a peak energy-efficiency of 8.85 TOPS/W at 0.9V supply and low external memory access of 55.31 MB (or 0.0018 access/MAC) for ImageNet classification with VGG-16 CNN.
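To make the conditional-computing flow concrete, below is a minimal Python sketch of how precision-cascading and zero-skipping could interact for a single max-pooling window. This is an illustration under assumptions, not the chip's actual dataflow: the helper names (msb_group, zero_skipping_dot, precision_cascaded_maxpool), the 8-bit/4-bit split, and the one-shot MSB screening are invented for readability, whereas the paper cascades through a number of low-precision groups.

```python
TOTAL_BITS = 8   # assumed activation precision (illustrative only)
MSB_BITS = 4     # assumed size of the most-significant group (illustrative only)

def msb_group(acts, msb_bits=MSB_BITS, total_bits=TOTAL_BITS):
    """Keep only the top `msb_bits` of each activation, zeroing the LSBs."""
    shift = total_bits - msb_bits
    return [(a >> shift) << shift for a in acts]

def zero_skipping_dot(acts, weights):
    """Zero-skipping MAC loop: a zero activation (common after ReLU, and
    even more common within an MSB-only group) costs no multiply here,
    modeling how the chip skips the clock cycle and the weight fetch."""
    acc = 0
    for a, w in zip(acts, weights):
        if a != 0:  # skip all work for zero inputs
            acc += a * w
    return acc

def precision_cascaded_maxpool(patches, weights):
    """One max-pooling window: screen every candidate position with a
    cheap MSB-only convolution, then spend the single full-precision
    convolution only on the winner.

    patches: list of P integer activation vectors, one per pooling candidate.
    weights: flattened integer kernel of the same length as each patch.
    """
    approx = [zero_skipping_dot(msb_group(p), weights) for p in patches]
    winner = max(range(len(approx)), key=approx.__getitem__)
    return zero_skipping_dot(patches[winner], weights)
```

With a 2x2 pooling window, three of the four candidates stop at MSB cost and only one pays for the full-precision pass, which is the intuition behind the roughly 2x reduction in bit-wise convolutions the abstract reports; the real scheme refines through several low-precision groups rather than the single MSB/LSB split sketched here.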