Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks

Roel Koopman, Amirreza Yousefzadeh, Mahyar Shahsavari, Guangzhi Tang, Manolis Sifalakis
{"title":"Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks","authors":"Roel Koopman, Amirreza Yousefzadeh, Mahyar Shahsavari, Guangzhi Tang, Manolis Sifalakis","doi":"arxiv-2408.05098","DOIUrl":null,"url":null,"abstract":"Currently, neural-network processing in machine learning applications relies\non layer synchronization, whereby neurons in a layer aggregate incoming\ncurrents from all neurons in the preceding layer, before evaluating their\nactivation function. This is practiced even in artificial Spiking Neural\nNetworks (SNNs), which are touted as consistent with neurobiology, in spite of\nprocessing in the brain being, in fact asynchronous. A truly asynchronous\nsystem however would allow all neurons to evaluate concurrently their threshold\nand emit spikes upon receiving any presynaptic current. Omitting layer\nsynchronization is potentially beneficial, for latency and energy efficiency,\nbut asynchronous execution of models previously trained with layer\nsynchronization may entail a mismatch in network dynamics and performance. We\npresent a study that documents and quantifies this problem in three datasets on\nour simulation environment that implements network asynchrony, and we show that\nmodels trained with layer synchronization either perform sub-optimally in\nabsence of the synchronization, or they will fail to benefit from any energy\nand latency reduction, when such a mechanism is in place. We then \"make ends\nmeet\" and address the problem with unlayered backprop, a novel\nbackpropagation-based training method, for learning models suitable for\nasynchronous processing. We train with it models that use different neuron\nexecution scheduling strategies, and we show that although their neurons are\nmore reactive, these models consistently exhibit lower overall spike density\n(up to 50%), reach a correct decision faster (up to 2x) without integrating all\nspikes, and achieve superior accuracy (up to 10% higher). Our findings suggest\nthat asynchronous event-based (neuromorphic) AI computing is indeed more\nefficient, but we need to seriously rethink how we train our SNN models, to\nbenefit from it.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"93 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Currently, neural-network processing in machine learning applications relies on layer synchronization, whereby neurons in a layer aggregate incoming currents from all neurons in the preceding layer before evaluating their activation function. This is practiced even in artificial Spiking Neural Networks (SNNs), which are touted as consistent with neurobiology, in spite of processing in the brain being, in fact, asynchronous. A truly asynchronous system, however, would allow all neurons to concurrently evaluate their thresholds and emit spikes upon receiving any presynaptic current. Omitting layer synchronization is potentially beneficial for latency and energy efficiency, but asynchronous execution of models previously trained with layer synchronization may entail a mismatch in network dynamics and performance. We present a study that documents and quantifies this problem on three datasets in our simulation environment, which implements network asynchrony, and we show that models trained with layer synchronization either perform sub-optimally in the absence of synchronization, or fail to benefit from any energy and latency reduction when such a mechanism is in place. We then "make ends meet" and address the problem with unlayered backprop, a novel backpropagation-based training method for learning models suitable for asynchronous processing. We use it to train models with different neuron execution scheduling strategies, and we show that although their neurons are more reactive, these models consistently exhibit lower overall spike density (up to 50% lower), reach a correct decision faster (up to 2x) without integrating all spikes, and achieve superior accuracy (up to 10% higher). Our findings suggest that asynchronous event-based (neuromorphic) AI computing is indeed more efficient, but we need to seriously rethink how we train our SNN models to benefit from it.
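To make the contrast concrete, the following minimal sketch (an illustration only, not the paper's implementation; the threshold value, function names, and scheduling are assumptions) shows the two execution regimes for a single layer of integrate-and-fire neurons. In the layer-synchronized case every neuron aggregates all presynaptic currents of a timestep before thresholding; in the asynchronous case each incoming spike is delivered as an event, and a neuron may fire as soon as its membrane potential crosses threshold.

```python
# Minimal sketch (not the authors' code): layer-synchronized vs.
# asynchronous evaluation of one layer of threshold (IF) neurons.
# THRESHOLD and all names here are illustrative assumptions.
import numpy as np

THRESHOLD = 1.0

def layer_synchronized_step(weights, in_spikes, v):
    """Neurons aggregate currents from ALL presynaptic spikes of the
    current timestep, then evaluate their threshold once, in lockstep."""
    v = v + weights.T @ in_spikes            # full aggregation first
    out_spikes = (v >= THRESHOLD).astype(float)
    v = np.where(out_spikes > 0, 0.0, v)     # reset neurons that fired
    return out_spikes, v

def asynchronous_run(weights, spike_events, v):
    """Each presynaptic spike is delivered as an individual event; a
    neuron emits a spike as soon as its potential crosses threshold,
    without waiting for the rest of the layer's input."""
    out_events = []
    for pre in spike_events:                 # one event at a time
        v = v + weights[pre]                 # current from this spike only
        for post in np.flatnonzero(v >= THRESHOLD):
            out_events.append(post)          # emit immediately
            v[post] = 0.0
    return out_events, v
```

With identical weights, the two regimes can produce different spike trains: an asynchronous neuron may fire early on a partial input sum that later-arriving inhibitory currents would have cancelled. This is exactly the kind of dynamics mismatch the study quantifies for models trained under layer synchronization.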