Mingqi Yin , Xiaole Cui , Feng Wei , Hanqing Liu , Yuanyuan Jiang , Xiaoxin Cui
Microelectronics Journal (Q3, Engineering, Electrical & Electronic), DOI: 10.1016/j.mejo.2024.106377, published 2024-08-23. Available at: https://www.sciencedirect.com/science/article/pii/S187923912400081X
A reconfigurable FPGA-based spiking neural network accelerator
The spiking neural network (SNN) is well suited to intelligent edge-computing applications because of its low power consumption. This work presents a reconfigurable SNN accelerator that supports the spatiotemporal backpropagation (STBP) training method. A reconfigurable architecture is proposed between the spatial convolution module and the temporal accumulation module of the accelerator. A sparse zero-hopping mechanism exploits the input sparsity of SNN datasets, and a mask mechanism introduced between the forward inference computation and the backward training computation exploits the output sparsity. During training, the accelerator achieves peak and average performances of 5.57 TOPS and 4.96 TOPS respectively, with a power consumption of 6.124 W and an energy efficiency (average performance divided by power) of 0.81 TOPS/W. During inference, it achieves peak and average performances of 5.98 TOPS and 5.14 TOPS respectively, with a power consumption of 6.943 W and an energy efficiency of 0.74 TOPS/W.
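The abstract's two sparsity mechanisms can be illustrated in software. The sketch below is not the paper's hardware design; it is a minimal NumPy model of one leaky-integrate-and-fire (LIF) timestep, where an all-zero input spike vector skips the synaptic accumulation entirely (the "zero-hopping" idea) and the output spike vector doubles as a mask that a backward pass could reuse. The threshold, leak factor, and layer sizes are assumed values for illustration.

```python
import numpy as np

# Assumed constants (not from the paper):
V_TH = 1.0    # firing threshold
DECAY = 0.5   # membrane leak factor

def lif_step(weights, in_spikes, v_mem):
    """One LIF timestep: spatial accumulation, then temporal leak-integrate-fire.

    weights:   (out_dim, in_dim) synaptic weight matrix
    in_spikes: (in_dim,) binary spike vector for this timestep
    v_mem:     (out_dim,) membrane potentials carried across timesteps
    """
    if not in_spikes.any():
        # Zero-hopping: an all-zero input contributes nothing, so the
        # multiply-accumulate work is skipped entirely.
        syn = np.zeros(weights.shape[0])
    else:
        # Accumulate only the weight columns selected by active inputs;
        # for a binary spike vector this equals weights @ in_spikes.
        active = np.nonzero(in_spikes)[0]
        syn = weights[:, active].sum(axis=1)
    v_mem = DECAY * v_mem + syn                      # temporal accumulation
    out_spikes = (v_mem >= V_TH).astype(np.float32)  # fire on threshold
    v_mem = v_mem * (1.0 - out_spikes)               # hard reset on firing
    # out_spikes also serves as the output-sparsity mask for backprop.
    return out_spikes, v_mem

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8))
spikes = (rng.random(8) < 0.2).astype(np.float32)
out, v = lif_step(w, spikes, np.zeros(4))
```

In the accelerator this skip decision is made per input vector in hardware rather than with a Python branch, but the payoff is the same: sparse spike activity means most synaptic accumulation work can be bypassed.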
Journal Description:
Published since 1969, the Microelectronics Journal is an international forum for the dissemination of research on, and applications of, microelectronic systems, circuits, and emerging technologies. Papers published in the Microelectronics Journal undergo peer review to ensure originality, relevance, and timeliness. The journal thus provides a worldwide, regular, and comprehensive update on microelectronic circuits and systems.
The Microelectronics Journal invites papers describing significant research and applications in all of the areas listed below. Comprehensive review/survey papers covering recent developments will also be considered. The Microelectronics Journal covers circuits and systems. This topic includes but is not limited to: Analog, digital, mixed, and RF circuits and related design methodologies; Logic, architectural, and system level synthesis; Testing, design for testability, built-in self-test; Area, power, and thermal analysis and design; Mixed-domain simulation and design; Embedded systems; Non-von Neumann computing and related technologies and circuits; Design and test of high complexity systems integration; SoC, NoC, SIP, and NIP design and test; 3-D integration design and analysis; Emerging device technologies and circuits, such as FinFETs, SETs, spintronics, SFQ, MTJ, etc.
Application aspects are also welcome, such as signal and image processing (including circuits for cryptography), sensors and actuators (including sensor networks), reliability and quality issues, and economic models.