Fangxin Liu, Wenbo Zhao, Zongwu Wang, Xiaokang Yang, Li Jiang
SIMSnn: A Weight-Agnostic ReRAM-based Search-In-Memory Engine for SNN Acceleration
2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Published: 2023-04-01
DOI: 10.23919/DATE56975.2023.10136973
Citations: 0
Abstract
Bio-plausible spiking neural networks (SNNs) have gained great momentum due to their inherent efficiency in processing event-driven information. The dominant computation in SNNs, bit-wise AND-Add matrix operations, is a natural fit for processing-in-memory (PIM) architectures. However, the long input spike trains of SNNs and the bit-serial processing mechanism of PIM incur considerable latency and frequent analog-to-digital conversions, offsetting the performance and energy-efficiency gains. In this paper, we propose a novel Search-in-Memory (SIM) architecture, named SIMSnn, to accelerate SNN inference. Rather than processing the input bit-by-bit over multiple time steps, SIMSnn takes in a sequence of spikes and searches for the result by parallel associative matches in the CAM crossbar. As a weight-agnostic SNN accelerator, SIMSnn can adapt to various evolving SNNs without rewriting the crossbar array.
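The core idea, matching an incoming spike sequence against stored patterns in parallel rather than accumulating bit-serially over time steps, can be illustrated with a minimal functional sketch. This is a hypothetical software model, not the paper's actual circuit: `cam_search` and the stored patterns are illustrative assumptions standing in for a CAM crossbar's row-parallel associative match.

```python
import numpy as np

def cam_search(cam_rows: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Return indices of CAM rows whose stored spike pattern matches the query.

    cam_rows: (num_rows, pattern_len) binary matrix of stored spike patterns
    query:    (pattern_len,) binary spike segment presented to the CAM

    In hardware every row is compared simultaneously (one search cycle);
    here the row-wise comparison models that parallel associative match.
    """
    matches = np.all(cam_rows == query, axis=1)
    return np.flatnonzero(matches)

# Example: a tiny CAM holding four 4-spike patterns (illustrative values).
cam = np.array([[1, 0, 1, 0],
                [0, 1, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 0]], dtype=np.uint8)

hits = cam_search(cam, np.array([1, 0, 1, 0], dtype=np.uint8))
print(hits.tolist())  # rows 0 and 3 store the queried spike segment -> [0, 3]
```

Because the search key is the spike pattern itself and the stored entries are independent of any particular synaptic weights, a lookup table like this can be repointed to new result values without rewriting the match array, which is the sense in which the engine is weight-agnostic.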