{"title":"Efficient SpMM Accelerator for Deep Learning: Sparkle and Its Automated Generator","authors":"Shiyao Xu, Jingfei Jiang, Jinwei Xu, Xifu Qian","doi":"10.1145/3665896","DOIUrl":null,"url":null,"abstract":"Deep learning (DL) technology has made breakthroughs in a wide range of intelligent tasks such as vision, language, recommendation systems, etc. Sparse matrix multiplication (SpMM) is the key computation kernel of most sparse models. Conventional computing platforms such as CPUs, GPUs, and AI chips with regular processing units are unable to effectively support sparse computation due to their fixed structure and instruction sets. This work extends Sparkle, an accelerator architecture, which is developed specifically for processing SpMM in DL. During the balanced data loading process, some modifications are implemented to enhance the flexibility of the Sparkle architecture. Additionally, a Sparkle generator is proposed to accommodate diverse resource constraints and facilitate adaptable deployment. Leveraging Sparkle’s structural parameters and template-based design methods, the generator enables automatic Sparkle circuit generation under varying parameters. An instantiated Sparkle accelerator is implemented on the Xilinx xqvu11p FPGA platform with a specific configuration. Compared to the state-of-the-art SpMM accelerator SIGMA, the Sparkle accelerator instance improves the sparse computing efficiency by about 10 to 20 \\(\\%\\) . Furthermore, the Sparkle instance achieved 7.76 \\(\\times\\) higher performance over the Nvidia Orin NX GPU. More instances of accelerators with different parameters were evaluated, demonstrating that the Sparkle architecture can effectively accelerate SpMM.","PeriodicalId":49248,"journal":{"name":"ACM Transactions on Reconfigurable Technology and Systems","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Reconfigurable Technology and Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3665896","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning (DL) technology has made breakthroughs in a wide range of intelligent tasks such as vision, language, and recommendation systems. Sparse matrix multiplication (SpMM) is the key computation kernel of most sparse models. Conventional computing platforms such as CPUs, GPUs, and AI chips with regular processing units cannot effectively support sparse computation because of their fixed structures and instruction sets. This work extends Sparkle, an accelerator architecture developed specifically for processing SpMM in DL. The balanced data-loading process is modified to enhance the flexibility of the Sparkle architecture. Additionally, a Sparkle generator is proposed to accommodate diverse resource constraints and facilitate adaptable deployment. Leveraging Sparkle's structural parameters and template-based design methods, the generator automatically produces Sparkle circuits under varying parameters. An instantiated Sparkle accelerator with a specific configuration is implemented on the Xilinx xqvu11p FPGA platform. Compared to the state-of-the-art SpMM accelerator SIGMA, the Sparkle instance improves sparse computing efficiency by about 10-20%. Furthermore, the Sparkle instance achieves 7.76x higher performance than the Nvidia Orin NX GPU. Additional accelerator instances with different parameters were evaluated, demonstrating that the Sparkle architecture can effectively accelerate SpMM.
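For context, SpMM multiplies a sparse matrix (commonly stored in compressed sparse row, CSR, form) by a dense matrix. The minimal NumPy sketch below is written for this summary and is not taken from the paper; the spmm_csr helper and its toy data are illustrative only. It shows the reference computation that accelerators such as Sparkle implement in hardware.

import numpy as np

def spmm_csr(values, col_idx, row_ptr, B):
    # Reference SpMM: C = A @ B, where A is sparse in CSR form
    # (values / col_idx / row_ptr) and B is a dense (K x N) matrix.
    M = len(row_ptr) - 1
    C = np.zeros((M, B.shape[1]), dtype=B.dtype)
    for i in range(M):                          # one output row per sparse row
        for p in range(row_ptr[i], row_ptr[i + 1]):
            C[i] += values[p] * B[col_idx[p]]   # scale-and-accumulate rows of B
    return C

# Toy example: A = [[1, 0, 2], [0, 0, 3]] stored in CSR form.
values  = np.array([1.0, 2.0, 3.0])
col_idx = np.array([0, 2, 2])
row_ptr = np.array([0, 2, 3])
B = np.ones((3, 4))
print(spmm_csr(values, col_idx, row_ptr, B))    # [[3. 3. 3. 3.] [3. 3. 3. 3.]]

Note that the loop touches only the nonzero entries of A. Hardware accelerators gain efficiency by skipping zeros entirely and by distributing the nonzeros evenly across processing elements, which is presumably what the balanced data loading mentioned in the abstract addresses.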
Journal introduction:
TRETS is the top journal focusing on research in, on, and with reconfigurable systems and on their underlying technology. The scope, rationale, and coverage by other journals are often limited to particular aspects of reconfigurable technology or reconfigurable systems. TRETS is a journal that covers reconfigurability in its own right.
Appropriate topics for TRETS include all levels of reconfigurable system abstraction and all aspects of reconfigurable technology, including platforms, programming environments, and application successes that build on these systems for computing or other applications:
- The board and system architectures of a reconfigurable platform.
- Programming environments for reconfigurable systems, especially those designed to increase programmer productivity.
- Languages and compilers for reconfigurable systems.
- Logic synthesis and related tools, as they relate to reconfigurable systems.
- Applications on which success can be demonstrated.
- The underlying technology from which reconfigurable systems are developed. (Currently this technology is that of FPGAs, but research on the nature and use of follow-on technologies is also appropriate for TRETS.)
In considering whether a paper is suitable for TRETS, the foremost question should be whether reconfigurability has been essential to success. Topics such as architecture, programming languages, compilers and environments, logic synthesis, and high-performance applications are all suitable if the context is appropriate. For example, an architecture for an embedded application that happens to use FPGAs is not necessarily suitable for TRETS, but an architecture using FPGAs whose reconfigurability is an inherent part of the specification (perhaps due to a need for reuse across multiple applications) would be appropriate for TRETS.