BitSNNs: Revisiting Energy-Efficient Spiking Neural Networks
Authors: Yangfan Hu; Qian Zheng; Gang Pan
DOI: 10.1109/TCDS.2024.3383428
Journal: IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 5, pp. 1736-1747
Published: 2024-04-01 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10486935/
To address the energy bottleneck in deep neural networks (DNNs), the research community has developed binary neural networks (BNNs) and spiking neural networks (SNNs) from different perspectives. To combine the advantages of both BNNs and SNNs for better energy efficiency, this article proposes BitSNNs, which leverage binary weights, single-step inference, and activation sparsity. During the development of BitSNNs, we observed performance degradation in deep ResNets due to the gradient approximation error. To mitigate this issue, we delve into the learning process and propose the utilization of a hardtanh function before activation binarization. Additionally, this article investigates the critical role of activation sparsity in BitSNNs for energy efficiency, a topic often overlooked in the existing literature. Our study reveals strategies to strike a balance between accuracy and energy consumption during the training/testing stage, potentially benefiting applications in edge computing. Notably, our proposed method achieves state-of-the-art performance while significantly reducing energy consumption.
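To make the abstract's key idea concrete, the sketch below illustrates the general mechanism of applying a hardtanh clipping function before activation binarization, with a straight-through-style surrogate gradient that passes only through the non-saturated region. This is a minimal NumPy illustration of the generic technique, not the paper's exact formulation; the function names and the choice of a {0, 1} spike code are assumptions for the example.

```python
import numpy as np

def hardtanh(x, lo=-1.0, hi=1.0):
    """Clip pre-activations to [lo, hi] before binarization."""
    return np.clip(x, lo, hi)

def binarize_activation(x):
    """Binarize to {0, 1} spikes: fire when the clipped pre-activation is positive.
    (The spike code here is an assumption for illustration.)"""
    return (x > 0).astype(x.dtype)

def surrogate_grad_mask(x, lo=-1.0, hi=1.0):
    """Straight-through-style surrogate: gradients pass only where
    hardtanh is not saturated, reducing gradient approximation error
    from pre-activations far outside the clipping range."""
    return ((x > lo) & (x < hi)).astype(x.dtype)

# Toy forward pass on a small pre-activation vector
pre = np.array([-2.0, -0.5, 0.3, 1.7])
clipped = hardtanh(pre)                 # [-1.0, -0.5, 0.3, 1.0]
spikes = binarize_activation(clipped)   # [0., 0., 1., 1.]
grad_mask = surrogate_grad_mask(pre)    # [0., 1., 1., 0.]
```

In this sketch, the saturated entries (-2.0 and 1.7) produce a zero gradient mask, so during backpropagation they would not contribute to the weight update; this is one common way clipping before binarization is argued to stabilize training in deep binarized networks.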
Journal Introduction:
The IEEE Transactions on Cognitive and Developmental Systems (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.