A lightweight Max-Pooling method and architecture for Deep Spiking Convolutional Neural Networks

Duy-Anh Nguyen, Xuan-Tu Tran, K. Dang, F. Iacopi
{"title":"A lightweight Max-Pooling method and architecture for Deep Spiking Convolutional Neural Networks","authors":"Duy-Anh Nguyen, Xuan-Tu Tran, K. Dang, F. Iacopi","doi":"10.1109/APCCAS50809.2020.9301703","DOIUrl":null,"url":null,"abstract":"The training of Deep Spiking Neural Networks (DSNNs) is facing many challenges due to the non-differentiable nature of spikes. The conversion of a traditional Deep Neural Networks (DNNs) to its DSNNs counterpart is currently one of the prominent solutions, as it leverages many state-of-the-art pre-trained models and training techniques. However, the conversion of max-pooling layer is a non-trivia task. The state-of-the-art conversion methods either replace the max-pooling layer with other pooling mechanisms or use a max-pooling method based on the cumulative number of output spikes. This incurs both memory storage overhead and increases computational complexity, as one inference in DSNNs requires many timesteps, and the number of output spikes after each layer needs to be accumulated. In this paper1, we propose a novel max-pooling mechanism that is not based on the number of output spikes but is based on the membrane potential of the spiking neurons. Simulation results show that our approach still preserves classification accuracies on MNIST and CIFARIO dataset. Hardware implementation results show that our proposed hardware block is lightweight with an area cost of 15.3kGEs, at a maximum frequency of 300 MHz.","PeriodicalId":127075,"journal":{"name":"2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APCCAS50809.2020.9301703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The training of Deep Spiking Neural Networks (DSNNs) faces many challenges due to the non-differentiable nature of spikes. Converting a traditional Deep Neural Network (DNN) into its DSNN counterpart is currently one of the prominent solutions, as it leverages many state-of-the-art pre-trained models and training techniques. However, the conversion of the max-pooling layer is a non-trivial task. State-of-the-art conversion methods either replace the max-pooling layer with another pooling mechanism or use a max-pooling method based on the cumulative number of output spikes. This incurs both memory storage overhead and increased computational complexity, as one inference pass in a DSNN spans many timesteps, and the number of output spikes after each layer must be accumulated. In this paper, we propose a novel max-pooling mechanism that is based not on the number of output spikes but on the membrane potential of the spiking neurons. Simulation results show that our approach preserves classification accuracy on the MNIST and CIFAR-10 datasets. Hardware implementation results show that the proposed hardware block is lightweight, with an area cost of 15.3 kGEs at a maximum frequency of 300 MHz.
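Since the abstract only outlines the mechanism, the Python sketch below illustrates the contrast it draws: count-based max pooling, which must accumulate each neuron's output spikes across all inference timesteps, versus potential-based max pooling, which selects the winner from the neurons' membrane potentials, i.e., state the spiking layer already maintains. All shapes, parameter values, and the integrate-and-fire dynamics here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# A minimal sketch (assumed IF-neuron model, illustrative shapes) contrasting
# the two max-pooling strategies mentioned in the abstract. Not the paper's
# actual algorithm or hardware behavior.

rng = np.random.default_rng(0)
T = 8            # timesteps per inference pass
POOL = 2         # 2x2 pooling window
H = W = 4        # spatial size of the spiking feature map
V_TH = 1.0       # firing threshold of the IF neurons

v = np.zeros((H, W))             # membrane potentials (state that already exists)
spike_counts = np.zeros((H, W))  # extra per-neuron state the count-based method must keep

for t in range(T):
    # Stand-in for the weighted input current arriving at this layer.
    v += rng.uniform(0.0, 0.5, size=(H, W))
    spikes = (v >= V_TH).astype(float)

    for i in range(0, H, POOL):
        for j in range(0, W, POOL):
            win = np.s_[i:i + POOL, j:j + POOL]
            # Count-based pooling: forward the spike of the neuron with the
            # largest accumulated spike count (requires storing spike_counts).
            k = np.unravel_index(np.argmax(spike_counts[win] + spikes[win]),
                                 (POOL, POOL))
            out_count_based = spikes[win][k]
            # Potential-based pooling: forward the spike of the neuron with
            # the highest membrane potential, reusing existing neuron state.
            m = np.unravel_index(np.argmax(v[win]), (POOL, POOL))
            out_potential_based = spikes[win][m]

    spike_counts += spikes       # per-timestep accumulation, count-based method only
    v[spikes > 0] -= V_TH        # soft reset of fired neurons
```

The sketch makes the storage argument concrete: the count-based path needs an extra accumulator per neuron for the entire multi-timestep inference, while the potential-based path adds no new state, which is the memory and complexity saving the abstract claims.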