Spiking neural networks on FPGA: A survey of methodologies and recent advancements

Neural Networks · Published: 2025-06-01 (Epub: 2025-02-14) · Impact Factor 6.3 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 1 · DOI: 10.1016/j.neunet.2025.107256
Mehrzad Karamimanesh , Ebrahim Abiri , Mahyar Shahsavari , Kourosh Hassanli , André van Schaik , Jason Eshraghian
Neural Networks, Volume 186, Article 107256. Available at: https://www.sciencedirect.com/science/article/pii/S0893608025001352
Citations: 0

Abstract

By mimicking the information-processing structure of the biological brain, spiking neural networks (SNNs) can achieve significantly lower power consumption than conventional systems. Consequently, these networks have attracted growing attention and spurred extensive research in recent years, with various architectures proposed to achieve low power consumption, high speed, and improved recognition ability. However, researchers are still in the early stages of developing more efficient neural networks that more closely resemble the biological brain. This research requires suitable hardware with appropriate capabilities, and the field-programmable gate array (FPGA) is a strong candidate compared with existing hardware such as the central processing unit (CPU) and graphics processing unit (GPU). With brain-like parallel processing, lower latency and power consumption, and higher throughput, the FPGA is well suited to supporting the development of spiking neural networks. This review aims to ease researchers' path toward further developing the field by collecting and examining recent works and the challenges that hinder the implementation of these networks on FPGA.
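To make the abstract's central notion concrete: the "spiking" behavior that distinguishes SNNs from conventional networks is often modeled with a leaky integrate-and-fire (LIF) neuron. The sketch below is illustrative only and is not taken from the survey; the parameter names (`beta` for the leak factor, `threshold`) and the soft-reset-by-subtraction scheme are assumptions, chosen because they map naturally onto fixed-point FPGA arithmetic.

```python
def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete-time LIF update: leak, integrate, fire, reset.

    v              -- membrane potential carried over from the last step
    input_current  -- weighted input arriving at this time step
    beta           -- leak factor (fraction of potential retained per step)
    threshold      -- firing threshold; crossing it emits a binary spike
    """
    v = beta * v + input_current   # leaky integration of the input
    spike = v >= threshold         # fire when the membrane crosses threshold
    if spike:
        v -= threshold             # soft reset by subtracting the threshold
    return v, spike

# Drive the neuron with a constant input and count emitted spikes.
v, spikes = 0.0, 0
for _ in range(20):
    v, s = lif_step(v, 0.3)
    spikes += int(s)
```

Because each step is a multiply, an add, a compare, and a conditional subtract on a single state variable, one such neuron maps to a small fixed-point datapath, which is why large populations of them parallelize well on FPGA fabric.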
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.