Training Low-Latency Deep Spiking Neural Networks with Knowledge Distillation and Batch Normalization Through Time

Thi Diem Tran, K. Le, An Luong Truong Nguyen
{"title":"Training Low-Latency Deep Spiking Neural Networks with Knowledge Distillation and Batch Normalization Through Time","authors":"Thi Diem Tran, K. Le, An Luong Truong Nguyen","doi":"10.1109/CINE56307.2022.10037455","DOIUrl":null,"url":null,"abstract":"Spiking Neural Networks (SNNs) can significantly enhance energy efficiency on neuromorphic hardware due their sparse, biological plausibility and binary event (or spike) driven processing. However, from the non-differentiable nature of a spiking neuron, training high-accuracy and low-latency SNNs is challenging. Recent researches continue to look for ways to improve accuracy and latency. To address these issues in SNNs, we propose a technique that concatenates Knowledge Distillation (KD) and Batch Normalization Through Time (BNTT) method in this study. The BNTT boosts low-latency and low-energy training in SNNs by allowing a neuron to handle the spike rate through various timesteps. The KD approach effectively transfers hidden information from the teacher model to the student network, which converts artificial neural network parameters to SNN weights. This concept allows enriching the performance of SNNs better than the prior technique. Experiments are carried out on the Tiny-ImageNet, CIFAR-10, and CIFAR-100 datasets. on various VGG architectures. We reach top-1 accuracy of 55.67% for ImageNet on VGG-11 and 73.11% for the CIFAR-100 dataset on VGG-16. These results demonstrate that our proposal outperforms earlier converted SNNs in accuracy with only 5 timesteps.","PeriodicalId":336238,"journal":{"name":"2022 5th International Conference on Computational Intelligence and Networks (CINE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th International Conference on Computational Intelligence and Networks (CINE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CINE56307.2022.10037455","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Spiking Neural Networks (SNNs) can significantly enhance energy efficiency on neuromorphic hardware thanks to their sparsity, biological plausibility, and binary event- (or spike-) driven processing. However, because of the non-differentiable nature of spiking neurons, training high-accuracy, low-latency SNNs is challenging, and recent research continues to look for ways to improve both accuracy and latency. To address these issues, we propose a technique that combines Knowledge Distillation (KD) with the Batch Normalization Through Time (BNTT) method. BNTT enables low-latency, low-energy training in SNNs by letting each neuron regulate its spike rate across timesteps. The KD approach effectively transfers hidden information from a teacher model to the student network, converting artificial neural network parameters into SNN weights. This combination improves SNN performance beyond prior techniques. Experiments are carried out on the Tiny-ImageNet, CIFAR-10, and CIFAR-100 datasets with various VGG architectures. We reach a top-1 accuracy of 55.67% on Tiny-ImageNet with VGG-11 and 73.11% on CIFAR-100 with VGG-16. These results demonstrate that our proposal outperforms earlier converted SNNs in accuracy while using only 5 timesteps.
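
The paper does not include code; the following is a minimal, illustrative sketch of the two ingredients named in the abstract, written in PyTorch-style Python. The class name BNTT, the function distillation_loss, and the temperature/alpha values are assumptions made here for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BNTT(nn.Module):
    """Batch Normalization Through Time: a separate BatchNorm per timestep,
    so spike statistics can be normalized differently at each step."""
    def __init__(self, num_features: int, timesteps: int):
        super().__init__()
        self.bn = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(timesteps)]
        )

    def forward(self, x: torch.Tensor, t: int) -> torch.Tensor:
        # x: pre-activation (membrane input) at timestep t, shape (B, C, H, W)
        return self.bn[t](x)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.9):
    """Hinton-style KD loss: soft targets from the ANN teacher plus
    hard-label cross-entropy for the SNN student."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In a training loop, the student SNN would be unrolled for T timesteps, applying the t-th BatchNorm inside each layer at step t, accumulating output spikes into logits, and back-propagating distillation_loss through a surrogate gradient for the spiking non-linearity.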