Low-latency hierarchical routing of reconfigurable neuromorphic systems.

IF 3.2 | CAS Tier 3 (Medicine) | JCR Q2 (Neurosciences) | Frontiers in Neuroscience | Pub Date: 2025-02-04 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnins.2025.1493623
Samalika Perera, Ying Xu, André van Schaik, Runchun Wang
Abstract

A reconfigurable hardware accelerator for spiking neural network (SNN) simulation on field-programmable gate arrays (FPGAs) is a promising and attractive research direction because massive parallelism yields high execution speed. Large-scale SNN simulations require a large number of FPGAs. However, inter-FPGA communication bottlenecks cause congestion, data losses, and latency inefficiencies. In this work, we employ a hierarchical tree-based interconnection architecture for multiple FPGAs. This architecture is scalable: new branches can be added to the tree while maintaining a constant local bandwidth. The tree-based approach contrasts with a linear Network on Chip (NoC), where congestion can arise from the large number of connections. We propose a routing architecture whose arbiter mechanism uses stochastic arbitration based on the data-level queues of First In, First Out (FIFO) buffers. This mechanism effectively reduces the bottleneck caused by FIFO congestion, improving overall latency. We present measurement data collected for latency performance analysis, comparing the design under our proposed stochastic routing scheme against a traditional round-robin architecture. The results demonstrate that the stochastic arbiters achieve lower worst-case latency and better overall performance than the round-robin arbiters.
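The core idea of the abstract's arbitration scheme — granting access stochastically, weighted by how full each input FIFO is, rather than cycling in a fixed round-robin order — can be illustrated with a small software model. This is a sketch of the concept only: the function names, the proportional-to-occupancy weighting, and the port indexing are illustrative assumptions, not the paper's exact hardware implementation.

```python
import random

def stochastic_arbiter(queue_depths, rng=random):
    """Grant an input port with probability proportional to its FIFO occupancy.

    Fuller FIFOs are more likely to be granted, so congested buffers drain
    faster than under a fixed round-robin order. Returns None when all
    queues are empty (nothing to route this cycle).
    """
    total = sum(queue_depths)
    if total == 0:
        return None
    r = rng.uniform(0, total)
    acc = 0.0
    for port, depth in enumerate(queue_depths):
        acc += depth
        if r < acc:
            return port
    return len(queue_depths) - 1  # guard against float rounding at the edge

def round_robin_arbiter(queue_depths, last_grant):
    """Baseline for comparison: grant the next non-empty port after last_grant."""
    n = len(queue_depths)
    for i in range(1, n + 1):
        port = (last_grant + i) % n
        if queue_depths[port] > 0:
            return port
    return None
```

In this model, a port holding most of the queued data wins arbitration most of the time, which is the intuition behind the lower worst-case latency reported for the stochastic arbiters; a hardware realization would replace the floating-point draw with an LFSR-style pseudo-random comparison against registered queue counts.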

Source journal

Frontiers in Neuroscience (NEUROSCIENCES)
CiteScore: 6.20
Self-citation rate: 4.70%
Articles per year: 2070
Review time: 14 weeks
Journal introduction: Neural Technology is devoted to the convergence between neurobiology and quantum-, nano- and micro-sciences. In our vision, this interdisciplinary approach should go beyond the technological development of sophisticated methods and should contribute in generating a genuine change in our discipline.
Latest articles from this journal

- Adaptation of visual responses in degenerating rd10 and healthy mouse retinas during ongoing electrical stimulation.
- Toward fMRI-based hyperscanning in the study of the cocktail-party effect.
- Evaluating post-concussion symptom profiles using the convergence insufficiency symptom survey in a pediatric and adolescent cohort.
- Identifying potential inflammatory therapeutic targets and drug candidates in small fiber neuropathy: integrating Mendelian randomization, experimental validation, and deep learning.
- Editorial: Impact of acoustic environments and noise on auditory perception.