Neuromorphic on-chip reservoir computing with spiking neural network architectures

Samip Karki, Diego Chavez Arana, Andrew Sornborger, Francesco Caravelli
{"title":"Neuromorphic on-chip reservoir computing with spiking neural network architectures","authors":"Samip Karki, Diego Chavez Arana, Andrew Sornborger, Francesco Caravelli","doi":"arxiv-2407.20547","DOIUrl":null,"url":null,"abstract":"Reservoir computing is a promising approach for harnessing the computational\npower of recurrent neural networks while dramatically simplifying training.\nThis paper investigates the application of integrate-and-fire neurons within\nreservoir computing frameworks for two distinct tasks: capturing chaotic\ndynamics of the H\\'enon map and forecasting the Mackey-Glass time series.\nIntegrate-and-fire neurons can be implemented in low-power neuromorphic\narchitectures such as Intel Loihi. We explore the impact of network topologies\ncreated through random interactions on the reservoir's performance. Our study\nreveals task-specific variations in network effectiveness, highlighting the\nimportance of tailored architectures for distinct computational tasks. To\nidentify optimal network configurations, we employ a meta-learning approach\ncombined with simulated annealing. This method efficiently explores the space\nof possible network structures, identifying architectures that excel in\ndifferent scenarios. The resulting networks demonstrate a range of behaviors,\nshowcasing how inherent architectural features influence task-specific\ncapabilities. We study the reservoir computing performance using a custom\nintegrate-and-fire code, Intel's Lava neuromorphic computing software\nframework, and via an on-chip implementation in Loihi. We conclude with an\nanalysis of the energy performance of the Loihi architecture.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"75 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.20547","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Reservoir computing is a promising approach for harnessing the computational power of recurrent neural networks while dramatically simplifying training. This paper investigates the application of integrate-and-fire neurons within reservoir computing frameworks for two distinct tasks: capturing the chaotic dynamics of the Hénon map and forecasting the Mackey-Glass time series. Integrate-and-fire neurons can be implemented in low-power neuromorphic architectures such as Intel Loihi. We explore the impact of network topologies created through random interactions on the reservoir's performance. Our study reveals task-specific variations in network effectiveness, highlighting the importance of tailored architectures for distinct computational tasks. To identify optimal network configurations, we employ a meta-learning approach combined with simulated annealing. This method efficiently explores the space of possible network structures, identifying architectures that excel in different scenarios. The resulting networks demonstrate a range of behaviors, showcasing how inherent architectural features influence task-specific capabilities. We study reservoir computing performance using a custom integrate-and-fire code, Intel's Lava neuromorphic computing software framework, and an on-chip implementation on Loihi. We conclude with an analysis of the energy performance of the Loihi architecture.
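
As a rough illustration of the pipeline the abstract describes, the sketch below builds a small reservoir of leaky integrate-and-fire neurons with a randomly generated recurrent topology and trains a ridge-regression readout to predict the Hénon map one step ahead. It is a minimal sketch under assumed parameters (reservoir size, connection probability, leak, threshold, spike-trace decay, regularization) written in plain NumPy; it is not the authors' custom code and does not use Intel's Lava framework or the Loihi chip.

```python
# Minimal sketch (not the authors' code): a leaky integrate-and-fire (LIF)
# reservoir with a random recurrent topology and a ridge-regression readout,
# trained for one-step-ahead prediction of the Hénon map. All parameters are
# illustrative assumptions; the paper's Lava/Loihi implementations are not used.
import numpy as np

rng = np.random.default_rng(0)
N = 300        # assumed reservoir size
P_CONN = 0.1   # assumed connection probability

def henon_series(n, a=1.4, b=0.3):
    """Standard Hénon map: x_{t+1} = 1 - a*x_t^2 + y_t, y_{t+1} = b*x_t."""
    x, y = 0.1, 0.0
    xs = np.empty(n)
    for t in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[t] = x
    return xs

# Random sparse recurrent weights, crudely rescaled by their spectral radius.
W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < P_CONN)
W *= 0.5 / max(1e-9, np.max(np.abs(np.linalg.eigvals(W))))
W_in = rng.uniform(-1.0, 1.0, N)   # input weights

def run_reservoir(u, leak=0.9, v_th=1.0, trace_decay=0.8):
    """Drive the LIF reservoir with input u; return low-pass spike traces."""
    v = np.zeros(N)                 # membrane potentials
    s = np.zeros(N)                 # spikes from the previous step
    r = np.zeros(N)                 # filtered spike trace used as readout feature
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        v = leak * v + W @ s + W_in * ut
        s = (v >= v_th).astype(float)
        v = np.where(s > 0, 0.0, v)            # reset neurons that spiked
        r = trace_decay * r + s
        states[t] = r
    return states

# One-step-ahead prediction with a ridge-regression readout.
series = henon_series(3000)
X, y = run_reservoir(series[:-1]), series[1:]
washout, split, lam = 100, 2000, 1e-3
W_out = np.linalg.solve(X[washout:split].T @ X[washout:split] + lam * np.eye(N),
                        X[washout:split].T @ y[washout:split])
pred = X[split:] @ W_out
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
print(f"held-out one-step NRMSE on the Hénon map: {nrmse:.3f}")
```

The low-pass-filtered spike trace stands in for an analog reservoir state that a linear readout can regress against; on neuromorphic hardware such as Loihi, that role would typically be played by on-chip spike counters or synaptic traces.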
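
The abstract also describes a meta-learning search over network topologies driven by simulated annealing. The loop below is a generic, hedged version of that idea, not the paper's procedure: it flips single entries of a binary connection mask, re-scores the candidate with a caller-supplied `evaluate` function (a hypothetical helper that would rebuild the reservoir weights from the mask, rerun the task, and return held-out error), and accepts worse candidates with a temperature-dependent probability under a geometric cooling schedule. The move proposal, initial temperature, and cooling rate are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's procedure): simulated annealing
# over a binary connection mask, minimizing the held-out prediction error
# returned by a caller-supplied, hypothetical `evaluate(mask)` helper.
import math
import numpy as np

def anneal_topology(evaluate, n_neurons=300, p_init=0.1,
                    steps=200, T0=0.1, cooling=0.98, seed=0):
    rng = np.random.default_rng(seed)
    cur = (rng.random((n_neurons, n_neurons)) < p_init).astype(float)
    cur_err = evaluate(cur)
    best, best_err, T = cur.copy(), cur_err, T0
    for _ in range(steps):
        cand = cur.copy()
        i, j = rng.integers(0, n_neurons, size=2)
        cand[i, j] = 1.0 - cand[i, j]          # flip one potential connection
        err = evaluate(cand)
        # Always accept improvements; accept worse masks with Boltzmann probability.
        if err < cur_err or rng.random() < math.exp((cur_err - err) / T):
            cur, cur_err = cand, err
            if err < best_err:
                best, best_err = cand.copy(), err
        T *= cooling                           # geometric cooling schedule
    return best, best_err
```

In practice, `evaluate` would wrap something like the reservoir sketch above: rebuild the recurrent weights from the candidate mask, rerun the Hénon or Mackey-Glass task, and return the error on a validation split.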