{"title":"SNN-BERT: Training-efficient Spiking Neural Networks for energy-efficient BERT","authors":"","doi":"10.1016/j.neunet.2024.106630","DOIUrl":null,"url":null,"abstract":"<div><p>Spiking Neural Networks (SNNs) are naturally suited to process sequence tasks such as NLP with low power, due to its brain-inspired spatio-temporal dynamics and spike-driven nature. Current SNNs employ ”repeat coding” that re-enter all input tokens at each timestep, which fails to fully exploit temporal relationships between the tokens and introduces memory overhead. In this work, we align the number of input tokens with the timestep and refer to this input coding as ”individual coding”. To cope with the increase in training time for individual encoded SNNs due to the dramatic increase in timesteps, we design a Bidirectional Parallel Spiking Neuron (BPSN) with following features: First, BPSN supports spike parallel computing and effectively avoids the issue of uninterrupted firing; Second, BPSN excels in handling adaptive sequence length tasks, which is a capability that existing work does not have; Third, the fusion of bidirectional information enhances the temporal information modeling capabilities of SNNs; To validate the effectiveness of our BPSN, we present the SNN-BERT, a deep direct training SNN architecture based on the BERT model in NLP. Compared to prior repeat 4-timestep coding baseline, our method achieves a 6.46<span><math><mo>×</mo></math></span> reduction in energy consumption and a significant 16.1% improvement, raising the performance upper bound of the SNN domain on the GLUE dataset to 74.4%. Additionally, our method achieves 3.5<span><math><mo>×</mo></math></span> training acceleration and 3.8<span><math><mo>×</mo></math></span> training memory optimization. Compared with artificial neural networks of similar architecture, we obtain comparable performance but up to 22.5<span><math><mo>×</mo></math></span> energy efficiency. We would provide the codes.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024005549","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Spiking Neural Networks (SNNs) are naturally suited to processing sequence tasks such as NLP with low power, owing to their brain-inspired spatio-temporal dynamics and spike-driven nature. Current SNNs employ "repeat coding", which re-enters all input tokens at every timestep; this fails to fully exploit the temporal relationships between tokens and introduces memory overhead. In this work, we align the number of input tokens with the number of timesteps and refer to this input coding as "individual coding". To cope with the increase in training time that individual coding incurs through its dramatic increase in timesteps, we design a Bidirectional Parallel Spiking Neuron (BPSN) with the following features: first, BPSN supports parallel spike computation and effectively avoids the issue of uninterrupted firing; second, BPSN excels at handling tasks with adaptive sequence lengths, a capability that existing work lacks; third, the fusion of bidirectional information enhances the temporal modeling capability of SNNs. To validate the effectiveness of BPSN, we present SNN-BERT, a deep, directly trained SNN architecture based on the BERT model in NLP. Compared to the prior repeat 4-timestep coding baseline, our method achieves a 6.46× reduction in energy consumption and a significant 16.1% performance improvement, raising the performance upper bound of the SNN domain on the GLUE benchmark to 74.4%. Additionally, our method achieves 3.5× training acceleration and a 3.8× reduction in training memory. Compared with artificial neural networks of similar architecture, we obtain comparable performance with up to 22.5× higher energy efficiency. The code will be made available.
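To make the coding contrast concrete, the following is a minimal, hypothetical sketch (not the authors' released code; the tensor names, shapes, and timestep count are illustrative assumptions): "repeat coding" duplicates the whole embedded token sequence at every timestep, whereas "individual coding" assigns one token to each timestep, so no duplication is needed.

```python
# Illustrative sketch only, assuming embedded inputs of shape (batch, seq_len, d_model).
import torch

batch, seq_len, d_model, T_repeat = 2, 8, 16, 4
tokens = torch.randn(batch, seq_len, d_model)  # embedded input tokens

# Repeat coding: the full sequence is re-entered at each of T_repeat timesteps,
# so activation memory scales with T_repeat * seq_len.
repeat_coded = tokens.unsqueeze(0).expand(T_repeat, batch, seq_len, d_model)
print(repeat_coded.shape)  # torch.Size([4, 2, 8, 16])

# Individual coding: the timestep axis is the token axis (T == seq_len),
# so memory grows only with seq_len and temporal order matches token order.
individual_coded = tokens.permute(1, 0, 2)  # (T=seq_len, batch, d_model)
print(individual_coded.shape)  # torch.Size([8, 2, 16])
```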
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.