Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation

Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, Robert Legenstein
arXiv:2408.07517 · arXiv - CS - Neural and Evolutionary Computing · Published 2024-08-14 · Journal Article
Cited by: 0

Abstract

Efficient implementations of spiking neural networks on neuromorphic hardware promise orders of magnitude less power consumption than their non-spiking counterparts. The standard neuron model for spike-based computation on such neuromorphic systems has long been the leaky integrate-and-fire (LIF) neuron. As a promising advancement, a computationally light augmentation of the LIF neuron model with an adaptation mechanism experienced a recent upswing in popularity, driven by demonstrations of its superior performance on spatio-temporal processing tasks. The root of the superiority of these so-called adaptive LIF neurons, however, is not well understood. In this article, we thoroughly analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof. We find that the frequently observed stability problems during training of such networks can be overcome by applying an alternative discretization method that results in provably better stability properties than the commonly used Euler-Forward method. With this discretization, we achieved new state-of-the-art performance on common event-based benchmark datasets. We also show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series. Our further analysis of the computational properties of networks of adaptive LIF neurons shows that they are particularly well suited to exploit the spatio-temporal structure of input sequences. Furthermore, these networks are surprisingly robust to shifts in the mean input strength and input spike rate, even when these shifts were not observed during training. As a consequence, high-performance networks can be obtained without any normalization techniques such as batch normalization or batch normalization through time.
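The abstract does not spell out the alternative discretization, but the stability issue it raises can be illustrated on the standard adaptive LIF subthreshold equations. The sketch below (parameter names and values are illustrative, not taken from the paper) contrasts a plain Euler-Forward step with an exponential-Euler step, one common alternative whose per-step decay factor is bounded in (0, 1):

```python
import numpy as np

def euler_forward_step(u, w, x, dt, tau_u, tau_w, a):
    """One Euler-Forward step of the adaptive-LIF subthreshold dynamics
        tau_u du/dt = -u + x - w,   tau_w dw/dt = -w + a*u.
    The decay factor (1 - dt/tau) leaves the interval (0, 1) for large dt,
    which is the usual source of training instability."""
    u_new = u + dt * (-u + x - w) / tau_u
    w_new = w + dt * (-w + a * u) / tau_w
    return u_new, w_new

def exp_euler_step(u, w, x, dt, tau_u, tau_w, a):
    """Exponential-Euler step: the leak of each variable is integrated
    exactly, so the decay factor exp(-dt/tau) stays in (0, 1) for any
    positive dt and the linear update remains contractive."""
    fu, fw = np.exp(-dt / tau_u), np.exp(-dt / tau_w)
    u_new = fu * u + (1.0 - fu) * (x - w)
    w_new = fw * w + (1.0 - fw) * a * u
    return u_new, w_new

def fire_and_reset(u, w, theta=1.0, b=0.1):
    """Threshold crossing, membrane reset, and spike-triggered
    adaptation increment."""
    spike = u >= theta
    return np.where(spike, 0.0, u), np.where(spike, w + b, w), spike
```

With tau_u = 1 and dt = 3, repeated Euler-Forward steps amplify the state by |1 - dt/tau_u| = 2 per step, while the exponential-Euler state decays monotonically. The paper's provably stable scheme may well differ from exponential-Euler, but this is the kind of contrast the abstract's stability claim is about.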