{"title":"Training SNNs Low Latency Utilizing Batch Normalization Through Time and Iterative Initialization Retraining","authors":"Thi Diem Tran, Huu-Hanh Hoang","doi":"10.1109/ICAIIC57133.2023.10067096","DOIUrl":null,"url":null,"abstract":"Spiking Neural Network (SNN), developing on neuromorphic hardware, is a promising energy-efficient AI paradigm. However, processing over several timesteps reduces the energy benefits of SNNs due to high latency, the number of operations, and memory access costs from acquiring membrane potentials. Furthermore, their non-derivative nature makes SNNs difficult to train properly. To overcome these issues and leverage the full potential of SNNs, in this research, we offer a novel way for training deep SNNs utilizing Batch Normalization Through Time and Iterative Initialization and Retraining techniques. First, the BNTT improves low-latency and low-energy training in SNNs by allowing neurons to handle the spike rate over many timesteps. Second, we can obtain SNNs with up to unit latency pass during inference when applying the Iterative Initialization and Retraining technique during training SNNs. On the CIFAR-10, CIFAR-100, and ImageNet, we achieve cutting-edge SNN performance using a deep neural network with just one timestep. We achieve top-1 accuracy of 91.01%, 71.88%, and 69.8% on CIFAR-10, CIFAR-100, and ImageNet, respectively, using the VGG 16 architecture.","PeriodicalId":105769,"journal":{"name":"2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIIC57133.2023.10067096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Spiking Neural Networks (SNNs), which run on neuromorphic hardware, are a promising energy-efficient AI paradigm. However, processing inputs over many timesteps erodes the energy benefits of SNNs through high latency, a larger number of operations, and the memory-access cost of fetching membrane potentials. Furthermore, the non-differentiable nature of spiking neurons makes SNNs difficult to train directly. To overcome these issues and leverage the full potential of SNNs, this work presents a novel method for training deep SNNs that combines Batch Normalization Through Time (BNTT) with an Iterative Initialization and Retraining (IIR) technique. First, BNTT enables low-latency, low-energy training by letting neurons adapt their normalization to the spike rate at each timestep. Second, applying IIR during training yields SNNs that can run at unit latency (a single timestep) during inference. On CIFAR-10, CIFAR-100, and ImageNet, we achieve state-of-the-art SNN performance with a deep network using just one timestep: top-1 accuracies of 91.01%, 71.88%, and 69.8%, respectively, with the VGG16 architecture.
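The abstract names BNTT but gives no implementation details. Below is a minimal PyTorch-style sketch of what a BNTT convolutional layer could look like, assuming the usual formulation of Batch Normalization Through Time: one BatchNorm instance, with its own learnable parameters and running statistics, per timestep. The layer structure, threshold, leak factor, and leaky integrate-and-fire dynamics are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BNTTConv(nn.Module):
    """Convolution followed by Batch Normalization Through Time (BNTT):
    a separate BatchNorm2d is learned for every timestep, so normalization
    can track how the spike rate evolves over time."""
    def __init__(self, in_ch, out_ch, num_timesteps, threshold=1.0, leak=0.99):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # One BN layer per timestep -- the core idea of BNTT.
        self.bntt = nn.ModuleList(
            [nn.BatchNorm2d(out_ch) for _ in range(num_timesteps)]
        )
        self.threshold = threshold
        self.leak = leak

    def forward(self, x, mem, t):
        # Normalize the pre-activation with the BN layer assigned to
        # timestep t, then integrate into the leaky membrane potential.
        mem = self.leak * mem + self.bntt[t](self.conv(x))
        # Hard threshold for spike generation; actual training would use
        # a surrogate gradient here, since this step has zero gradient.
        spikes = (mem >= self.threshold).float()
        mem = mem - spikes * self.threshold  # soft reset after a spike
        return spikes, mem
```

In use, the caller would initialize `mem` to zeros and loop `t` from 0 to `num_timesteps - 1`, feeding each timestep's input through the layer while the per-timestep BN parameters shape the spike statistics independently at every step.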
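The Iterative Initialization and Retraining technique is likewise only named in the abstract. A plausible reading is a latency-reduction schedule: train at a larger number of timesteps, then reuse those weights to initialize a lower-latency model and retrain, repeating until a single timestep remains. The sketch below follows that reading under stated assumptions; `make_model` and `train_fn` are hypothetical helpers, and the one-step decrement schedule is a guess.

```python
def iterative_init_and_retrain(make_model, train_fn, max_T):
    """Hypothetical IIR schedule: train at latency max_T, then repeatedly
    transfer the learned weights into a model with one fewer timestep and
    retrain, until a unit-latency (T = 1) SNN is obtained."""
    model = make_model(max_T)
    train_fn(model, num_timesteps=max_T)
    for T in range(max_T - 1, 0, -1):
        next_model = make_model(T)
        # Weight transfer as initialization; strict=False tolerates the
        # per-timestep BNTT parameters that no longer exist at lower T.
        next_model.load_state_dict(model.state_dict(), strict=False)
        train_fn(next_model, num_timesteps=T)
        model = next_model
    return model  # single-timestep SNN for inference
```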