{"title":"具有三维计算阵列的高能效非结构化稀疏感知深度SNN加速器","authors":"Chaoming Fang;Ziyang Shen;Zongsheng Wang;Chuanqing Wang;Shiqi Zhao;Fengshi Tian;Jie Yang;Mohamad Sawan","doi":"10.1109/JSSC.2024.3507095","DOIUrl":null,"url":null,"abstract":"Deep spiking neural networks (DSNNs), such as spiking transformers, have demonstrated comparable performance to artificial neural networks (ANNs). With higher spike input sparsity and the utilization of accumulation (AC)-only operations, DSNNs have great potential for achieving high energy efficiency. Many researchers have proposed neuromorphic processors to accelerate spiking neural networks (SNNs) with dedicated architectures. However, three problems still exist when processing DSNNs, including redundant memory access among timesteps, inefficiency in exploiting unstructured sparsity in spikes, and the lack of optimizations for new operators involved in DSNNs. In this work, an accelerator for deep and sparse SNNs is proposed with three design features: a 3-D computation array that allows parallel computation of multiple timesteps to maximize weight data reuse and reduce external memory access; a parallel non-zero data fetcher that efficiently searches non-zero spike positions and fetches corresponding weights to reduce computation latency; and a multimode unified computation scheduler that can be configured to maximize energy efficiency for spiking convolution (SCONV), spiking <inline-formula> <tex-math>$Q, K,~\\text {and}~V$ </tex-math></inline-formula> matrix generation, and spiking self-attention (SSA). The accelerator is implemented and fabricated using 40-nm CMOS technology. When compared with state-of-the-art sparse processors, it achieves the best energy efficiency of 0.078 pJ/SOP and the highest recognition accuracy of 77.6% on ImageNet using the spiking transformer algorithm.","PeriodicalId":13129,"journal":{"name":"IEEE Journal of Solid-state Circuits","volume":"60 3","pages":"977-989"},"PeriodicalIF":4.6000,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Energy-Efficient Unstructured Sparsity-Aware Deep SNN Accelerator With 3-D Computation Array\",\"authors\":\"Chaoming Fang;Ziyang Shen;Zongsheng Wang;Chuanqing Wang;Shiqi Zhao;Fengshi Tian;Jie Yang;Mohamad Sawan\",\"doi\":\"10.1109/JSSC.2024.3507095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep spiking neural networks (DSNNs), such as spiking transformers, have demonstrated comparable performance to artificial neural networks (ANNs). With higher spike input sparsity and the utilization of accumulation (AC)-only operations, DSNNs have great potential for achieving high energy efficiency. Many researchers have proposed neuromorphic processors to accelerate spiking neural networks (SNNs) with dedicated architectures. However, three problems still exist when processing DSNNs, including redundant memory access among timesteps, inefficiency in exploiting unstructured sparsity in spikes, and the lack of optimizations for new operators involved in DSNNs. 
In this work, an accelerator for deep and sparse SNNs is proposed with three design features: a 3-D computation array that allows parallel computation of multiple timesteps to maximize weight data reuse and reduce external memory access; a parallel non-zero data fetcher that efficiently searches non-zero spike positions and fetches corresponding weights to reduce computation latency; and a multimode unified computation scheduler that can be configured to maximize energy efficiency for spiking convolution (SCONV), spiking <inline-formula> <tex-math>$Q, K,~\\\\text {and}~V$ </tex-math></inline-formula> matrix generation, and spiking self-attention (SSA). The accelerator is implemented and fabricated using 40-nm CMOS technology. When compared with state-of-the-art sparse processors, it achieves the best energy efficiency of 0.078 pJ/SOP and the highest recognition accuracy of 77.6% on ImageNet using the spiking transformer algorithm.\",\"PeriodicalId\":13129,\"journal\":{\"name\":\"IEEE Journal of Solid-state Circuits\",\"volume\":\"60 3\",\"pages\":\"977-989\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Solid-state Circuits\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10777513/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Solid-state Circuits","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10777513/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
An Energy-Efficient Unstructured Sparsity-Aware Deep SNN Accelerator With 3-D Computation Array
Deep spiking neural networks (DSNNs), such as spiking transformers, have demonstrated comparable performance to artificial neural networks (ANNs). With higher spike input sparsity and the use of accumulation (AC)-only operations, DSNNs have great potential for achieving high energy efficiency. Many researchers have proposed neuromorphic processors with dedicated architectures to accelerate spiking neural networks (SNNs). However, three problems remain when processing DSNNs: redundant memory access across timesteps, inefficiency in exploiting the unstructured sparsity of spikes, and the lack of optimizations for the new operators involved in DSNNs. In this work, an accelerator for deep and sparse SNNs is proposed with three design features: a 3-D computation array that allows parallel computation of multiple timesteps to maximize weight data reuse and reduce external memory access; a parallel non-zero data fetcher that efficiently searches non-zero spike positions and fetches the corresponding weights to reduce computation latency; and a multimode unified computation scheduler that can be configured to maximize energy efficiency for spiking convolution (SCONV), spiking $Q$, $K$, and $V$ matrix generation, and spiking self-attention (SSA). The accelerator is implemented and fabricated in 40-nm CMOS technology. Compared with state-of-the-art sparse processors, it achieves the best energy efficiency of 0.078 pJ/SOP and the highest recognition accuracy of 77.6% on ImageNet using the spiking transformer algorithm.
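To make the sparsity and data-reuse points in the abstract concrete, below is a minimal NumPy sketch, not the paper's hardware or its actual dataflow, of AC-only spiking computation: binary spikes select which weight rows are accumulated, an all-zero input channel skips the weight fetch entirely, and a weight row fetched once is reused across all timesteps, which is the reuse pattern the 3-D computation array is designed to exploit. All shapes, the 10% spike rate, and the firing threshold are illustrative assumptions.

```python
# Minimal NumPy sketch of sparsity-aware, accumulation-only SNN computation.
# NOT the paper's implementation; shapes, spike rate, and threshold are assumptions.
import numpy as np

T, C_IN, C_OUT = 4, 64, 32                  # timesteps, input/output channels (assumed)
rng = np.random.default_rng(0)

spikes = (rng.random((T, C_IN)) < 0.1).astype(np.uint8)      # sparse binary spike map
weights = rng.standard_normal((C_IN, C_OUT)).astype(np.float32)

# Dense reference: one matrix multiply per timestep.
dense_out = spikes.astype(np.float32) @ weights

# Sparsity-aware schedule: visit each input channel once, fetch its weight row
# once, and accumulate it into every timestep whose spike bit is set.
sparse_out = np.zeros((T, C_OUT), dtype=np.float32)
for c in range(C_IN):
    active_t = np.nonzero(spikes[:, c])[0]  # non-zero spike positions for channel c
    if active_t.size == 0:
        continue                            # all-zero channel: weight fetch skipped
    w_row = weights[c]                      # fetched once, reused across timesteps
    sparse_out[active_t] += w_row           # accumulation only, no multiplication

assert np.allclose(dense_out, sparse_out)

# A simple thresholding step turns accumulated potentials back into binary spikes
# for the next layer (threshold value is an illustrative assumption).
v_th = 1.0
out_spikes = (sparse_out >= v_th).astype(np.uint8)
print("output spikes:", int(out_spikes.sum()), "over", T, "timesteps")
```

In hardware, the per-channel search for set spike bits corresponds loosely to what the abstract calls the parallel non-zero data fetcher, and the reuse of one fetched weight row across timesteps corresponds to the third dimension of the computation array; the sketch only illustrates why no multiplications are required, not how the accelerator schedules SCONV, QKV generation, or SSA.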
Journal Introduction:
The IEEE Journal of Solid-State Circuits publishes papers each month in the broad area of solid-state circuits, with particular emphasis on transistor-level design of integrated circuits. It also provides coverage of topics such as circuit modeling, technology, systems design, layout, and testing that relate directly to IC design. Integrated circuits and VLSI are of principal interest; material related to discrete circuit design is seldom published. Experimental verification is strongly encouraged.