{"title":"克服尖峰神经网络中层同步的局限性","authors":"Roel Koopman, Amirreza Yousefzadeh, Mahyar Shahsavari, Guangzhi Tang, Manolis Sifalakis","doi":"arxiv-2408.05098","DOIUrl":null,"url":null,"abstract":"Currently, neural-network processing in machine learning applications relies\non layer synchronization, whereby neurons in a layer aggregate incoming\ncurrents from all neurons in the preceding layer, before evaluating their\nactivation function. This is practiced even in artificial Spiking Neural\nNetworks (SNNs), which are touted as consistent with neurobiology, in spite of\nprocessing in the brain being, in fact asynchronous. A truly asynchronous\nsystem however would allow all neurons to evaluate concurrently their threshold\nand emit spikes upon receiving any presynaptic current. Omitting layer\nsynchronization is potentially beneficial, for latency and energy efficiency,\nbut asynchronous execution of models previously trained with layer\nsynchronization may entail a mismatch in network dynamics and performance. We\npresent a study that documents and quantifies this problem in three datasets on\nour simulation environment that implements network asynchrony, and we show that\nmodels trained with layer synchronization either perform sub-optimally in\nabsence of the synchronization, or they will fail to benefit from any energy\nand latency reduction, when such a mechanism is in place. We then \"make ends\nmeet\" and address the problem with unlayered backprop, a novel\nbackpropagation-based training method, for learning models suitable for\nasynchronous processing. We train with it models that use different neuron\nexecution scheduling strategies, and we show that although their neurons are\nmore reactive, these models consistently exhibit lower overall spike density\n(up to 50%), reach a correct decision faster (up to 2x) without integrating all\nspikes, and achieve superior accuracy (up to 10% higher). Our findings suggest\nthat asynchronous event-based (neuromorphic) AI computing is indeed more\nefficient, but we need to seriously rethink how we train our SNN models, to\nbenefit from it.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"93 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks\",\"authors\":\"Roel Koopman, Amirreza Yousefzadeh, Mahyar Shahsavari, Guangzhi Tang, Manolis Sifalakis\",\"doi\":\"arxiv-2408.05098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Currently, neural-network processing in machine learning applications relies\\non layer synchronization, whereby neurons in a layer aggregate incoming\\ncurrents from all neurons in the preceding layer, before evaluating their\\nactivation function. This is practiced even in artificial Spiking Neural\\nNetworks (SNNs), which are touted as consistent with neurobiology, in spite of\\nprocessing in the brain being, in fact asynchronous. A truly asynchronous\\nsystem however would allow all neurons to evaluate concurrently their threshold\\nand emit spikes upon receiving any presynaptic current. Omitting layer\\nsynchronization is potentially beneficial, for latency and energy efficiency,\\nbut asynchronous execution of models previously trained with layer\\nsynchronization may entail a mismatch in network dynamics and performance. 
We\\npresent a study that documents and quantifies this problem in three datasets on\\nour simulation environment that implements network asynchrony, and we show that\\nmodels trained with layer synchronization either perform sub-optimally in\\nabsence of the synchronization, or they will fail to benefit from any energy\\nand latency reduction, when such a mechanism is in place. We then \\\"make ends\\nmeet\\\" and address the problem with unlayered backprop, a novel\\nbackpropagation-based training method, for learning models suitable for\\nasynchronous processing. We train with it models that use different neuron\\nexecution scheduling strategies, and we show that although their neurons are\\nmore reactive, these models consistently exhibit lower overall spike density\\n(up to 50%), reach a correct decision faster (up to 2x) without integrating all\\nspikes, and achieve superior accuracy (up to 10% higher). Our findings suggest\\nthat asynchronous event-based (neuromorphic) AI computing is indeed more\\nefficient, but we need to seriously rethink how we train our SNN models, to\\nbenefit from it.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":\"93 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.05098\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks
Currently, neural-network processing in machine learning applications relies on layer synchronization, whereby neurons in a layer aggregate the incoming currents from all neurons in the preceding layer before evaluating their activation function. This is practiced even in artificial Spiking Neural Networks (SNNs), which are touted as consistent with neurobiology, even though processing in the brain is, in fact, asynchronous. A truly asynchronous system, by contrast, would allow all neurons to evaluate their thresholds concurrently and to emit spikes upon receiving any presynaptic current. Omitting layer synchronization is potentially beneficial for latency and energy efficiency, but asynchronous execution of models that were trained with layer synchronization may entail a mismatch in network dynamics and performance.
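To make the contrast concrete, here is a minimal sketch of the two execution modes for simple non-leaky integrate-and-fire neurons. This is our illustration, not the paper's code: the function names, the THRESHOLD constant, and the absence of leak and refractory dynamics are all simplifying assumptions.

```python
import numpy as np

THRESHOLD = 1.0  # illustrative firing threshold (assumption)

def forward_layer_synchronized(spikes_in, weights):
    """Layer-synchronized step: every neuron first aggregates the incoming
    currents from ALL presynaptic neurons, then evaluates its threshold once."""
    currents = weights.T @ spikes_in              # barrier: full aggregation
    return (currents >= THRESHOLD).astype(float)  # one thresholding pass

def forward_asynchronous(spike_events, weights, potentials):
    """Asynchronous evaluation: each incoming spike event immediately updates
    its targets' membrane potentials, and any neuron may fire as soon as its
    threshold is crossed, without waiting for the rest of the layer."""
    out_events = []
    for pre in spike_events:                      # events arrive one at a time
        for post in range(weights.shape[1]):
            potentials[post] += weights[pre, post]
            if potentials[post] >= THRESHOLD:
                out_events.append(post)           # fire immediately
                potentials[post] = 0.0            # reset membrane after spike
    return out_events
```

In the synchronized version no neuron can fire until the whole presynaptic layer has been integrated, whereas in the asynchronous version a sufficiently strong early input can trigger an output spike immediately.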
We present a study that documents and quantifies this problem on three datasets in our simulation environment, which implements network asynchrony, and we show that models trained with layer synchronization either perform sub-optimally in its absence or fail to benefit from the available energy and latency reductions when such a mechanism is in place.
meet" and address the problem with unlayered backprop, a novel
backpropagation-based training method, for learning models suitable for
asynchronous processing. We train with it models that use different neuron
execution scheduling strategies, and we show that although their neurons are
more reactive, these models consistently exhibit lower overall spike density
(up to 50%), reach a correct decision faster (up to 2x) without integrating all
spikes, and achieve superior accuracy (up to 10% higher). Our findings suggest
that asynchronous event-based (neuromorphic) AI computing is indeed more
efficient, but we need to seriously rethink how we train our SNN models, to
benefit from it.
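As a rough illustration of what deciding "without integrating all spikes" could look like, an event-driven readout can commit to a class as soon as the first output neuron crosses its threshold. Again, this is a hedged sketch under our own assumptions, not the paper's method; `classify_early` and its parameters are hypothetical names.

```python
import numpy as np

def classify_early(spike_events, weights, threshold=1.0):
    """Consume input spike events until some output neuron crosses threshold;
    return that neuron's index and the number of events actually consumed."""
    potentials = np.zeros(weights.shape[1])
    for consumed, pre in enumerate(spike_events, start=1):
        potentials += weights[pre]         # update all targets of event `pre`
        winner = int(np.argmax(potentials))
        if potentials[winner] >= threshold:
            return winner, consumed        # early decision: stop here
    return None, len(spike_events)         # no neuron reached threshold
```

Under such a scheme, decision latency scales with how early the decisive evidence arrives rather than with the length of the input spike train.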