CRUS: A Hardware-Efficient Algorithm Mitigating Highly Nonlinear Weight Update in CIM Crossbar Arrays for Artificial Neural Networks

IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, vol. 8, no. 2, pp. 145-154
Pub Date: 2022-11-04 | DOI: 10.1109/JXCDC.2022.3220032 | JCR: Q3 (Computer Science, Hardware & Architecture) | Impact Factor: 2.0
Junmo Lee; Joon Hwang; Youngwoon Cho; Min-Kyu Park; Woo Young Choi; Sangbum Kim; Jong-Ho Lee
PDF: https://ieeexplore.ieee.org/iel7/6570653/9969523/09940271.pdf
Citations: 0

Abstract

Mitigating the nonlinear weight update of synaptic devices is one of the main challenges in designing compute-in-memory (CIM) crossbar arrays for artificial neural networks (ANNs). While various nonlinearity mitigation schemes have been proposed so far, only a few of them have dealt with high weight-update nonlinearity. This article presents a hardware-efficient on-chip weight update scheme named the conditional reverse update scheme (CRUS), which algorithmically mitigates highly nonlinear weight change in synaptic devices. For hardware efficiency, CRUS is implemented on-chip using low-precision (1-bit) and infrequent circuit operations. To gain algorithmic insight, the impact of the nonlinear weight update on training is investigated. We first introduce a metric called update noise (UN), which quantifies the deviation of the actual weight update in synaptic devices from the expected weight update calculated from the stochastic gradient descent (SGD) algorithm. Based on UN analysis, we aim to reduce the average UN (AUN) over the entire training process. The key principle to reducing AUN is to conditionally skip long-term depression (LTD) pulses during training. The trends of AUN and accuracy under various LTD skip conditions are investigated to find the conditions that maximize accuracy. By properly tuning LTD skip conditions, CRUS achieves >90% accuracy on the Modified National Institute of Standards and Technology (MNIST) dataset even under high weight-update nonlinearity. Furthermore, it shows better accuracy than previous nonlinearity mitigation techniques under similar hardware conditions. It also exhibits robustness to cycle-to-cycle variations (CCVs) in conductance updates. The results suggest that CRUS can be an effective solution to relieve the algorithm-hardware tradeoff in CIM crossbar array design.
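As a concrete illustration of the quantities named in the abstract, the sketch below models a nonlinear analog synapse, computes the update noise (UN) of each programming step against the weight change requested by SGD, and averages it into an AUN with and without conditional LTD skipping. This is a minimal sketch, not the authors' code: the exponential-saturation device model, the skip condition (a threshold on the requested depression magnitude), and all constants are illustrative assumptions rather than the paper's actual scheme or parameters.

```python
# Minimal sketch (not the authors' implementation) of UN, AUN, and conditional LTD skipping.
# Device model, ltd_skip_threshold, and all constants are illustrative assumptions.
import numpy as np

G_MIN, G_MAX = 0.0, 1.0      # normalized conductance range (assumed)
NL = 5.0                     # nonlinearity factor (assumed; larger = stronger saturation)

def delta_g(g, potentiate):
    """One 1-bit programming pulse under an assumed exponential-saturation update model."""
    if potentiate:   # LTP: large steps near G_MIN, small steps near G_MAX
        return (G_MAX - g) * (1.0 - np.exp(-1.0 / NL))
    else:            # LTD: large steps near G_MAX, small steps near G_MIN
        return -(g - G_MIN) * (1.0 - np.exp(-1.0 / NL))

def apply_update(g, dw_sgd, ltd_skip_threshold=None):
    """Apply one SGD-requested update with single-pulse granularity.
    If ltd_skip_threshold is set, small depression requests are skipped (CRUS-like behavior;
    the threshold is a placeholder for the paper's tunable skip condition)."""
    if dw_sgd > 0:
        dg = delta_g(g, potentiate=True)
    elif ltd_skip_threshold is not None and abs(dw_sgd) < ltd_skip_threshold:
        dg = 0.0                                  # conditionally skip the LTD pulse
    else:
        dg = delta_g(g, potentiate=False)
    g_new = np.clip(g + dg, G_MIN, G_MAX)
    update_noise = abs((g_new - g) - dw_sgd)      # UN: actual vs. SGD-expected weight change
    return g_new, update_noise

# Toy run: AUN over a stream of random SGD updates, without and with LTD skipping.
rng = np.random.default_rng(0)
for skip in (None, 0.02):
    g, noises = 0.5, []
    for dw in rng.normal(0.0, 0.02, size=1000):
        g, un = apply_update(g, dw, ltd_skip_threshold=skip)
        noises.append(un)
    print(f"LTD skip threshold={skip}: AUN={np.mean(noises):.4f}")
```

In the paper the skip condition is tuned to trade AUN against accuracy; the fixed threshold here merely stands in for that tunable condition.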
Source journal
CiteScore: 5.00
Self-citation rate: 4.20%
Articles published: 11
Review time: 13 weeks