Ternary Compute-Enabled Memory using Ferroelectric Transistors for Accelerating Deep Neural Networks

S. Thirumala, Shubham Jain, S. Gupta, A. Raghunathan
{"title":"Ternary Compute-Enabled Memory using Ferroelectric Transistors for Accelerating Deep Neural Networks","authors":"S. Thirumala, Shubham Jain, S. Gupta, A. Raghunathan","doi":"10.23919/DATE48585.2020.9116495","DOIUrl":null,"url":null,"abstract":"Ternary Deep Neural Networks (DNNs), which employ ternary precision for weights and activations, have recently been shown to attain accuracies close to full-precision DNNs, raising interest in their efficient hardware realization. In this work we propose a Non-Volatile Ternary Compute-Enabled memory cell (TeC-Cell) based on ferroelectric transistors (FEFETs) for inmemory computing in the signed ternary regime. In particular, the proposed cell enables storage of ternary weights and employs multi-word-line assertion to perform massively parallel signed dot-product computations between ternary weights and ternary inputs. We evaluate the proposed design at the array level and show 72% and 74% higher energy efficiency for multiply-andaccumulate (MAC) operations compared to standard nearmemory computing designs based on SRAM and FEFET, respectively. Furthermore, we evaluate the proposed TeC-Cell in an existing ternary in-memory DNN accelerator. Our results show 3.3X-3.4X reduction in system energy and 4.3X-7X improvement in system performance over SRAM and FEFET based nearmemory accelerators, across a wide range of DNN benchmarks including both deep convolutional and recurrent neural networks.","PeriodicalId":289525,"journal":{"name":"2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/DATE48585.2020.9116495","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 6

Abstract

Ternary Deep Neural Networks (DNNs), which employ ternary precision for weights and activations, have recently been shown to attain accuracies close to full-precision DNNs, raising interest in their efficient hardware realization. In this work we propose a Non-Volatile Ternary Compute-Enabled memory cell (TeC-Cell) based on ferroelectric transistors (FEFETs) for in-memory computing in the signed ternary regime. In particular, the proposed cell enables storage of ternary weights and employs multi-word-line assertion to perform massively parallel signed dot-product computations between ternary weights and ternary inputs. We evaluate the proposed design at the array level and show 72% and 74% higher energy efficiency for multiply-and-accumulate (MAC) operations compared to standard near-memory computing designs based on SRAM and FEFET, respectively. Furthermore, we evaluate the proposed TeC-Cell in an existing ternary in-memory DNN accelerator. Our results show 3.3X-3.4X reduction in system energy and 4.3X-7X improvement in system performance over SRAM and FEFET based near-memory accelerators, across a wide range of DNN benchmarks including both deep convolutional and recurrent neural networks.
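
The abstract describes the computation performed in the TeC-Cell array: signed dot products between ternary weights stored in the array and ternary inputs applied via multi-word-line assertion. The paper itself provides no code; the NumPy sketch below is only a functional model of that MAC operation under assumed conventions. The function names (`ternary_quantize`, `tec_array_mac`) and the threshold-based quantizer are illustrative assumptions, not the authors' implementation or circuit behavior.

```python
# Functional sketch (not from the paper) of the signed ternary dot-product a
# TeC-Cell array computes in-memory: weights and inputs are restricted to
# {-1, 0, +1}, and many word lines are asserted in parallel so each bit line
# accumulates one multiply-and-accumulate (MAC) result.
import numpy as np


def ternary_quantize(x, threshold=0.05):
    """Map real values to {-1, 0, +1} with a symmetric threshold (assumed scheme)."""
    q = np.zeros_like(x, dtype=np.int8)
    q[x > threshold] = 1
    q[x < -threshold] = -1
    return q


def tec_array_mac(weights_t, inputs_t):
    """Model of one in-memory MAC pass.

    weights_t: (rows, cols) ternary weight matrix stored in the array.
    inputs_t:  (rows,) ternary input vector driven on the word lines.
    Returns the per-column signed dot products, i.e. what the bit-line
    peripherals would digitize after a multi-word-line assertion.
    """
    assert set(np.unique(weights_t)) <= {-1, 0, 1}
    assert set(np.unique(inputs_t)) <= {-1, 0, 1}
    return inputs_t.astype(np.int32) @ weights_t.astype(np.int32)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = ternary_quantize(rng.standard_normal((64, 16)))  # 64 word lines x 16 bit lines
    x = ternary_quantize(rng.standard_normal(64))
    print(tec_array_mac(w, x))  # 16 signed MAC results, one per column
```

In the hardware described by the abstract, the accumulation happens on the bit lines in a single array access rather than as a software loop; the sketch only mirrors the arithmetic, not the energy or latency characteristics reported in the evaluation.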