SADIMM: Accelerating Sparse Attention Using DIMM-Based Near-Memory Processing

IEEE Transactions on Computers, vol. 74, no. 2, pp. 542-554. Published: 2024-11-15. DOI: 10.1109/TC.2024.3500362. Impact Factor: 3.8, JCR Q2 (Computer Science, Hardware & Architecture), CAS Tier 2.
Huize Li, Dan Chen, Tulika Mitra

Abstract

The self-attention mechanism is the performance bottleneck of Transformer-based language models. In response, researchers have proposed sparse attention to expedite Transformer execution. However, sparse attention involves massive random access, rendering it a memory-intensive kernel. Memory-based architectures, such as near-memory processing (NMP), demonstrate notable performance improvements on memory-intensive applications. Nonetheless, existing NMP-based sparse attention accelerators deliver suboptimal performance due to both hardware and software challenges. On the hardware front, current solutions employ homogeneous logic integration and struggle to support the diverse operations in sparse attention. On the software side, token-based dataflow is commonly adopted, leading to load imbalance after weakly connected tokens are pruned. To address these challenges, this paper introduces SADIMM, a hardware-software co-designed NMP-based sparse attention accelerator. In hardware, we propose a heterogeneous integration approach that efficiently supports the various operations within the attention mechanism, employing different logic units for different operations and thereby improving hardware efficiency. In software, we implement a dimension-based dataflow that divides input sequences along the model dimensions, achieving load balance after the pruning of weakly connected tokens. Compared to an NVIDIA RTX A6000 GPU, experimental results on the BERT, BART, and GPT-2 models demonstrate that SADIMM achieves 48×, 35×, and 37× speedups and 194×, 202×, and 191× energy-efficiency improvements, respectively.
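The abstract's two key ideas can be illustrated with a minimal NumPy sketch. The threshold-based pruning, the two-worker partitioning, and all function names below are illustrative assumptions for exposition, not SADIMM's actual implementation: attention probabilities below a cutoff are zeroed (pruning weakly connected tokens), and the surviving work is then counted under a token-based split (whole rows per worker, uneven after pruning) versus a dimension-based split (a slice of the model dimension per worker, equal by construction).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sparse_attention(Q, K, V, threshold):
    """Dense attention scores, then prune weakly connected tokens
    whose probability falls below `threshold` (illustrative scheme)."""
    probs = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    mask = probs >= threshold              # survivors after pruning
    pruned = np.where(mask, probs, 0.0)
    pruned /= pruned.sum(axis=-1, keepdims=True)  # renormalize survivors
    return pruned @ V, mask

rng = np.random.default_rng(0)
n, d = 8, 4                                # toy sequence length, model dim
Q, K, V = rng.standard_normal((3, n, d))
out, mask = sparse_attention(Q, K, V, threshold=0.05)

# Token-based dataflow: worker w owns whole tokens (alternate rows here).
# Its work tracks the surviving entries of its own rows, which pruning
# makes uneven across workers.
token_work = [int(mask[w::2].sum()) * d for w in range(2)]

# Dimension-based dataflow: worker w owns a d/2-wide slice of *every*
# token, so every worker sees the same surviving entries -> balanced load.
dim_work = [int(mask.sum()) * (d // 2) for _ in range(2)]
```

Here `dim_work` is identical for both workers regardless of which tokens survive pruning, while `token_work` depends on which rows each worker happens to own, which is the load-imbalance problem the dimension-based dataflow avoids.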
Source Journal
IEEE Transactions on Computers (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 6.60
Self-citation rate: 5.40%
Articles per year: 199
Review time: 6.0 months
Journal description: The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.