Extreme Partial-Sum Quantization for Analog Computing-In-Memory Neural Network Accelerators

IF 2.1 | CAS Zone 4, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | ACM Journal on Emerging Technologies in Computing Systems | Pub Date: 2022-10-13 | DOI: https://dl.acm.org/doi/10.1145/3528104
Yulhwa Kim, Hyungjun Kim, Jae-Joon Kim
{"title":"Extreme Partial-Sum Quantization for Analog Computing-In-Memory Neural Network Accelerators","authors":"Yulhwa Kim, Hyungjun Kim, Jae-Joon Kim","doi":"https://dl.acm.org/doi/10.1145/3528104","DOIUrl":null,"url":null,"abstract":"<p>In Analog Computing-in-Memory (CIM) neural network accelerators, analog-to-digital converters (ADCs) are required to convert the analog partial sums generated from a CIM array to digital values. The overhead from ADCs substantially degrades the energy efficiency of CIM accelerators so that previous works attempted to lower the ADC resolution considering the distribution of the partial sums. Despite the efforts, the required ADC resolution still remains relatively high. In this article, we propose the data-driven partial sum quantization scheme, which exhaustively searches for the optimal quantization range with little computational burden. We also report that analyzing the characteristics of the partial sum distributions at each layer gives an additional information to further reduce the ADC resolution compared to previous works that mostly used the characteristics of the partial sum distributions of the entire network. Based on the finer-level data-driven approach combined with retraining, we present a methodology for extreme partial-sum quantization. Experimental results show that the proposed method can reduce the ADC resolution to 2 to 3 bits for CIFAR-10 dataset, which is the smaller ADC bit resolution than any previous CIM-based NN accelerators.</p>","PeriodicalId":50924,"journal":{"name":"ACM Journal on Emerging Technologies in Computing Systems","volume":"62 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2022-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal on Emerging Technologies in Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3528104","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

In Analog Computing-in-Memory (CIM) neural network accelerators, analog-to-digital converters (ADCs) are required to convert the analog partial sums generated by a CIM array into digital values. The overhead of the ADCs substantially degrades the energy efficiency of CIM accelerators, so previous works attempted to lower the ADC resolution by considering the distribution of the partial sums. Despite these efforts, the required ADC resolution remains relatively high. In this article, we propose a data-driven partial-sum quantization scheme that exhaustively searches for the optimal quantization range with little computational burden. We also report that analyzing the characteristics of the partial-sum distributions at each layer provides additional information that allows the ADC resolution to be reduced further than in previous works, which mostly used the characteristics of the partial-sum distributions of the entire network. Based on this finer-grained data-driven approach combined with retraining, we present a methodology for extreme partial-sum quantization. Experimental results show that the proposed method can reduce the ADC resolution to 2 to 3 bits on the CIFAR-10 dataset, a smaller ADC bit resolution than that of any previous CIM-based NN accelerator.
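
To make the idea concrete, the following is a minimal NumPy sketch of a per-layer, data-driven search over candidate clipping ranges for a low-bit uniform quantizer, keeping the range that minimizes the mean-squared quantization error on sampled partial sums. This is not the authors' implementation; the function names, the candidate-range sweep, and the synthetic calibration data are illustrative assumptions only.

import numpy as np

def quantize(x, lo, hi, bits):
    """Uniform quantization of x into 2**bits levels on the range [lo, hi]."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    q = np.clip(np.round((x - lo) / step), 0, levels)
    return q * step + lo

def search_range(psums, bits, num_candidates=64):
    """Exhaustively try symmetric shrinkages of the observed range and keep the
    one with the smallest mean-squared quantization error on the sampled partial sums."""
    lo0, hi0 = psums.min(), psums.max()
    mid = 0.5 * (lo0 + hi0)
    best_err, best = np.inf, (lo0, hi0)
    for k in range(1, num_candidates + 1):
        half = 0.5 * (hi0 - lo0) * (k / num_candidates)
        lo, hi = mid - half, mid + half
        err = np.mean((psums - quantize(psums, lo, hi, bits)) ** 2)
        if err < best_err:
            best_err, best = err, (lo, hi)
    return best

# Example: per-layer range search at an aggressive 3-bit ADC resolution,
# using synthetic partial sums in place of values sampled from a calibration set.
rng = np.random.default_rng(0)
layer_psums = rng.normal(0.0, 4.0, size=10_000)
lo, hi = search_range(layer_psums, bits=3)
print(f"chosen clipping range: [{lo:.2f}, {hi:.2f}]")

In the methodology described in the abstract, such a search would be run per layer, and the network would then be retrained with the chosen ranges to recover the accuracy lost at 2 to 3 bits.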

Source Journal
ACM Journal on Emerging Technologies in Computing Systems
Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 4.80
Self-citation rate: 4.50%
Articles published: 86
Review time: 3 months
Journal Description: The Journal of Emerging Technologies in Computing Systems invites submissions of original technical papers describing research and development in emerging technologies in computing systems. Major economic and technical challenges are expected to impede the continued scaling of semiconductor devices. This has resulted in the search for alternate mechanical, biological/biochemical, nanoscale electronic, asynchronous and quantum computing and sensor technologies. As the underlying nanotechnologies continue to evolve in the labs of chemists, physicists, and biologists, it has become imperative for computer scientists and engineers to translate the potential of the basic building blocks (analogous to the transistor) emerging from these labs into information systems. Their design will face multiple challenges, ranging from the inherent (un)reliability due to the self-assembly nature of the fabrication processes for nanotechnologies, to the complexity due to the sheer volume of nanodevices that will have to be integrated for complex functionality, to the need to integrate these new nanotechnologies with silicon devices in the same system. The journal provides comprehensive coverage of innovative work in the specification, design, analysis, simulation, verification, testing, and evaluation of computing systems constructed out of emerging technologies and advanced semiconductors.
Latest Articles in This Journal
PUF-Based Digital Money with Propagation-of-Provenance and Offline Transfers Between Two Parties
SAT-based Exact Modulo Scheduling Mapping for Resource-Constrained CGRAs
Towards practical superconducting accelerators for machine learning using U-SFQ
Towards Energy-Efficient Spiking Neural Networks: A Robust Hybrid CMOS-Memristive Accelerator
An Analysis of Various Design Pathways Towards Multi-Terabit Photonic On-Interposer Interconnects