Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator

Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang
{"title":"Bit-Transformer: Transforming Bit-level Sparsity into Higher Preformance in ReRAM-based Accelerator","authors":"Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang","doi":"10.1109/ICCAD51958.2021.9643569","DOIUrl":null,"url":null,"abstract":"Resistive Random-Access-Memory (ReRAM) crossbar is one of the most promising neural network accelerators, thanks to its in-memory and in-situ analog computing abilities for Matrix Multiplication-and-Accumulations (MACs). Nevertheless, the number of rows and columns of ReRAM cells for concurrent execution of MACs is constrained, resulting in limited in-memory computing throughput. Moreover, it is challenging to deploy Deep Neural Network(DNN) models with large model size in the crossbar, since the sparsity of DNNs cannot be effectively exploited in the crossbar structure. As the countermeasure, we develop a novel ReRAM-based DNN accelerator, named Bit-Transformer, which pays attention to the correlation between the bit-level sparsity and the performance of the ReRAM-based crossbar. We propose a superior bit-flip scheme combined with the exponent-based quantization, which can adaptively flip the bits of the mapped DNNs to release redundant space without sacrificing the accuracy much or incurring much hardware overhead. Meanwhile, we design an architecture that can integrate the techniques to massively shrink the crossbar footprint to be used. In this way, It efficiently leverages the bit-level sparsity for performance gains while reducing the energy consumption of computation. The comprehensive experiments indicate that our Bit-Transformer outperforms prior state-of-the-art designs up to 13 x, 35 x, and 67 x, in terms of energy-efficiency, area-efficiency, and throughput, respectively. Code will be open-source in the camera-ready version.","PeriodicalId":370791,"journal":{"name":"2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCAD51958.2021.9643569","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Resistive Random-Access Memory (ReRAM) crossbars are among the most promising neural network accelerators, thanks to their in-memory, in-situ analog computation of Matrix Multiplication-and-Accumulation (MAC) operations. Nevertheless, the number of rows and columns of ReRAM cells that can execute MACs concurrently is constrained, which limits in-memory computing throughput. Moreover, it is challenging to deploy Deep Neural Network (DNN) models with large model sizes on the crossbar, since the sparsity of DNNs cannot be exploited effectively by the crossbar structure. As a countermeasure, we develop a novel ReRAM-based DNN accelerator, named Bit-Transformer, which focuses on the correlation between bit-level sparsity and the performance of the ReRAM-based crossbar. We propose a bit-flip scheme combined with exponent-based quantization, which adaptively flips the bits of the mapped DNNs to release redundant space without sacrificing much accuracy or incurring much hardware overhead. Meanwhile, we design an architecture that integrates these techniques to massively shrink the required crossbar footprint. In this way, Bit-Transformer efficiently leverages bit-level sparsity for performance gains while reducing the energy consumption of computation. Comprehensive experiments indicate that Bit-Transformer outperforms prior state-of-the-art designs by up to 13×, 35×, and 67× in terms of energy efficiency, area efficiency, and throughput, respectively. The code will be open-sourced with the camera-ready version.
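The abstract names two ingredients, exponent-based quantization and an adaptive bit-flip scheme, both aimed at increasing bit-level sparsity of the weights mapped onto the crossbar. The Python sketch below is only a minimal illustration of those general ideas under stated assumptions (an 8-bit unsigned magnitude format, sign handling omitted; the function names `exponent_quantize` and `flip_encode` are hypothetical); it is not the paper's implementation or architecture.

```python
# Illustrative sketch of bit-level sparsity tricks (not the paper's algorithm).
# Assumption: weights are mapped bit-serially, so every '1' bit corresponds to a
# ReRAM cell programmed to the high-conductance state.
import numpy as np

BITS = 8  # assumed weight bit-width for this illustration

def popcount(x: int) -> int:
    """Count '1' bits; each one maps to a high-conductance ReRAM cell."""
    return bin(x & ((1 << BITS) - 1)).count("1")

def exponent_quantize(w: float) -> int:
    """Round a weight magnitude to the nearest power of two.

    The result has a single '1' bit, i.e. maximal bit-level sparsity.
    Sign handling is omitted for brevity.
    """
    if w == 0:
        return 0
    e = int(np.clip(round(np.log2(abs(w))), -(BITS - 1), 0))
    return 1 << (e + BITS - 1)

def flip_encode(q: int):
    """Store the bitwise complement plus a 1-bit flag when that has fewer '1's.

    This caps the number of '1' bits at BITS // 2, so fewer cells need to be
    programmed to the high-conductance state.
    """
    comp = (~q) & ((1 << BITS) - 1)
    if popcount(comp) < popcount(q):
        return comp, 1   # flag = 1: stored complemented
    return q, 0          # flag = 0: stored as-is

if __name__ == "__main__":
    q = exponent_quantize(0.23)          # -> 0b00100000, one '1' bit
    dense = 0b11101101                   # a '1'-heavy bit pattern
    enc, flag = flip_encode(dense)       # -> 0b00010010 with flip flag set
    print(f"{q:08b}", f"{enc:08b}", flag)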