PIPECIM: Energy-Efficient Pipelined Computing-in-Memory Computation Engine With Sparsity-Aware Technique

IEEE Transactions on Very Large Scale Integration (VLSI) Systems · Vol. 33, No. 2, pp. 525-536 · Pub Date: 2024-10-01 · DOI: 10.1109/TVLSI.2024.3462507
Impact Factor 2.8 · JCR Q2 (Computer Science, Hardware & Architecture) · CAS Tier 2 (Engineering & Technology) · Citations: 0

Yuanbo Wang; Liang Chang; Jingke Wang; Pan Zhao; Jiahao Zeng; Xin Zhao; Wuyang Hao; Liang Zhou; Haining Tan; Yinhe Han; Jun Zhou

Abstract

Computing-in-memory (CIM) architecture has become a promising solution for improving the parallelism of multiply-and-accumulate (MAC) operations in artificial intelligence (AI) processors. Recently revived CIM engines partly relieve the memory-wall issue by integrating computation in or with the memory. However, current CIM solutions still require large data movements as practical neural network models and their input data grow. Previous CIM works considered only computation, without regard for the memory attribute, leading to a low memory-computing ratio. This article presents a static random-access memory (SRAM)-based digital CIM macro that supports a pipeline mode and a computation-memory-aware technique to improve the memory-computing ratio. We develop a novel weight driver with fine-grained ping-pong operation, avoiding the computation stall caused by weight updates. Based on our evaluation, the peak energy efficiency is 19.78 TOPS/W at the 22-nm technology node, 8-bit width, and 50% sparsity of the input feature map.
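The abstract names two mechanisms: a ping-pong weight driver that overlaps weight updates with computation, and sparsity awareness that exploits zeros in the input feature map. The following is a minimal behavioral sketch, not taken from the paper: the class name, bank layout, and MAC-counting fields are all illustrative assumptions, intended only to show how double-buffered weight banks and zero-input gating interact.

```python
# Hypothetical behavioral model (NOT the paper's design): illustrates
# (1) ping-pong weight banks, so the next weights load while the active
# bank computes, and (2) sparsity-aware MAC, where zero input activations
# gate off an entire weight row instead of multiplying.

import numpy as np

class PingPongCIMMacro:
    def __init__(self, rows, cols):
        # Two weight banks: one active for compute, one filling in background.
        self.banks = [np.zeros((rows, cols)), np.zeros((rows, cols))]
        self.active = 0
        self.macs_done = 0       # MACs actually performed
        self.macs_skipped = 0    # MACs avoided by input sparsity

    def preload(self, weights):
        """Load the inactive bank while the active bank keeps computing."""
        self.banks[1 - self.active][...] = weights

    def swap(self):
        """Switch banks; compute continues without a weight-update stall."""
        self.active = 1 - self.active

    def mac(self, x):
        """Sparsity-aware MAC: skip rows whose input activation is zero."""
        w = self.banks[self.active]
        acc = np.zeros(w.shape[1])
        for i, xi in enumerate(x):
            if xi == 0:                       # zero input -> gate the row off
                self.macs_skipped += w.shape[1]
                continue
            acc += xi * w[i]
            self.macs_done += w.shape[1]
        return acc

macro = PingPongCIMMacro(rows=4, cols=2)
macro.preload(np.ones((4, 2)))                # background weight load
macro.swap()                                  # make the loaded bank active
out = macro.mac(np.array([1.0, 0.0, 2.0, 0.0]))  # 50% input sparsity
```

With 50% of the inputs zero, half of the MACs are skipped, which is the mechanism behind quoting peak energy efficiency at a stated input sparsity level.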
Source journal metrics: CiteScore 6.40 · Self-citation rate 7.10% · Articles per year: 187 · Review time: 3.6 months
About the journal: The IEEE Transactions on VLSI Systems is published as a monthly journal under the co-sponsorship of the IEEE Circuits and Systems Society, the IEEE Computer Society, and the IEEE Solid-State Circuits Society. Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing, and systems applications. Generation of specifications, design, and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor, and process levels. To address this critical area through a common forum, the IEEE Transactions on VLSI Systems was founded. The editorial board, consisting of international experts, invites original papers that emphasize and merit the novel systems-integration aspects of microelectronic systems, including interactions among systems design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chip and wafer fabrication, testing and packaging, and system-level qualification. Thus, the coverage of these Transactions focuses on VLSI/ULSI microelectronic systems integration.