Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network

IF 7.5 | CAS Zone 2, Computer Science | Q1 AUTOMATION & CONTROL SYSTEMS | Engineering Applications of Artificial Intelligence | Pub Date: 2024-09-05 | DOI: 10.1016/j.engappai.2024.109255
{"title":"大规模模式分类网络中尖峰神经元和无监督学习模块的高效硬件设计","authors":"","doi":"10.1016/j.engappai.2024.109255","DOIUrl":null,"url":null,"abstract":"<div><p>The main interest of high-precision, low-energy computing in machines with superior intelligence capabilities is to improve the performance of biologically spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in a SNN model with multiplier-less neuron modules. Propelled by the request for accurate and fast computations that diminishes costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs based on linear approximation of non-linear terms in learning module (exponential and fractional power) and neuron blocks (second power) are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. The architectures developed for Exp and Power implementations are illustrated and evaluated, leading to the presentation of digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital modules of learning mechanism and neuron was used to construct large scale event-based spiking neural network comprising of three layers, enabling unsupervised training with variable learning rate utilizing excitatory and inhibitory neural connections. As a results, the proposed bio-inspired SNN as a spiking pattern classification network with the proposed Pow-BTDP learning approach, by training on MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with respectively 6, 2, 2 and 6 training epochs, achieved superior accuracy 97.9%, 97.8%, 94.2%, and 93.3% which indicate higher accuracy and convergence speed compare to previous works.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network\",\"authors\":\"\",\"doi\":\"10.1016/j.engappai.2024.109255\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The main interest of high-precision, low-energy computing in machines with superior intelligence capabilities is to improve the performance of biologically spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in a SNN model with multiplier-less neuron modules. Propelled by the request for accurate and fast computations that diminishes costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs based on linear approximation of non-linear terms in learning module (exponential and fractional power) and neuron blocks (second power) are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. 
The architectures developed for Exp and Power implementations are illustrated and evaluated, leading to the presentation of digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital modules of learning mechanism and neuron was used to construct large scale event-based spiking neural network comprising of three layers, enabling unsupervised training with variable learning rate utilizing excitatory and inhibitory neural connections. As a results, the proposed bio-inspired SNN as a spiking pattern classification network with the proposed Pow-BTDP learning approach, by training on MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with respectively 6, 2, 2 and 6 training epochs, achieved superior accuracy 97.9%, 97.8%, 94.2%, and 93.3% which indicate higher accuracy and convergence speed compare to previous works.</p></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624014131\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624014131","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


The main interest of high-precision, low-energy computing in machines with advanced intelligence capabilities is to improve the performance of biologically inspired spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in an SNN model with multiplier-less neuron modules. Driven by the demand for accurate and fast computation that reduces costly hardware resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs, based on linear approximation of the non-linear terms in the learning module (exponential and fractional powers) and in the neuron blocks (second power), are carefully elaborated to guarantee high speedup, low resource consumption, and accuracy. The architectures developed for the exponential (Exp) and power implementations are illustrated and evaluated, leading to a digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital learning and neuron modules were used to construct a large-scale, event-based spiking neural network comprising three layers, enabling unsupervised training with a variable learning rate that utilizes excitatory and inhibitory neural connections. As a result, the proposed bio-inspired SNN, used as a spiking pattern classification network with the Pow-BTDP learning approach and trained on the MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets for 6, 2, 2, and 6 training epochs respectively, achieved accuracies of 97.9%, 97.8%, 94.2%, and 93.3%, indicating higher accuracy and faster convergence compared to previous works.
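The abstract describes two ideas: a power-law, burst-timing-dependent weight update (Pow-BTDP), and the replacement of the costly non-linear terms in the learning module (exponential and fractional powers) with piecewise-linear approximations that are cheap in digital hardware. The sketch below illustrates that combination in software only; the segment count, time constant, learning rate, exponent, and the exact form of the update rule are illustrative assumptions, not the coefficients or equations from the paper.

```python
import numpy as np

def pwl_exp(x, n_segments=8, x_min=-4.0, x_max=0.0):
    """Piecewise-linear (chord) approximation of exp(x) on [x_min, x_max].

    Each segment stores one slope and one intercept, so evaluation costs a
    compare, one constant multiply, and one add -- operations that map onto
    a small fixed-point datapath instead of a full exponential unit.
    """
    edges = np.linspace(x_min, x_max, n_segments + 1)
    # Chord slopes/intercepts are precomputed offline and stored in a small LUT.
    slopes = (np.exp(edges[1:]) - np.exp(edges[:-1])) / (edges[1:] - edges[:-1])
    intercepts = np.exp(edges[:-1]) - slopes * edges[:-1]
    x = np.clip(x, x_min, x_max)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_segments - 1)
    return slopes[idx] * x + intercepts[idx]

def pow_btdp_update(w, dt, lr=0.01, tau=20.0, mu=0.5, w_max=1.0):
    """Illustrative power-law, burst-timing-dependent weight update.

    `dt` is the pre/post burst timing difference (ms). Potentiation scales with
    (w_max - w)**mu and a decaying timing window evaluated with the piecewise-
    linear exp above. In hardware the fractional power would be linearly
    approximated as well; here it is computed directly for brevity. All
    constants are placeholders, not the paper's parameters.
    """
    window = pwl_exp(-np.abs(dt) / tau)                 # timing-dependence term
    dw = np.where(dt >= 0,
                  lr * (w_max - w) ** mu * window,      # potentiation
                  -lr * w ** mu * window)               # depression
    return np.clip(w + dw, 0.0, w_max)

if __name__ == "__main__":
    x = np.linspace(-4.0, 0.0, 101)
    print("max |exp - pwl_exp|:", np.max(np.abs(np.exp(x) - pwl_exp(x))))
    print("updated weight:", pow_btdp_update(np.array([0.5]), dt=np.array([5.0])))
```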

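The "second power" in the neuron block points to a quadratic membrane term, as in Izhikevich-type neurons; the abstract does not name the model, so the sketch below is only a hedged illustration of how such a term can be evaluated without a general-purpose multiplier. The chord constants, segment layout, neuron model, and all numerical parameters are assumptions, not the paper's neuron block.

```python
import numpy as np

def pwl_square(v, n_segments=8, v_min=-80.0, v_max=40.0):
    """Chord (piecewise-linear) approximation of v**2 on [v_min, v_max].

    On segment [a, b] the chord is (a + b) * v - a * b; the per-segment
    constant (a + b) can be quantized to a few powers of two so the
    'multiply' becomes shifts and adds. Range and segment count are
    illustrative assumptions.
    """
    edges = np.linspace(v_min, v_max, n_segments + 1)
    slopes = edges[:-1] + edges[1:]        # chord slope of x**2 on [a, b] is a + b
    intercepts = -edges[:-1] * edges[1:]   # chord intercept is -a * b
    v = np.clip(v, v_min, v_max)
    idx = np.clip(np.searchsorted(edges, v, side="right") - 1, 0, n_segments - 1)
    return slopes[idx] * v + intercepts[idx]

def neuron_step(v, u, i_in, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of an Izhikevich-style neuron with the quadratic term
    evaluated through pwl_square. Model choice and constants are placeholders."""
    dv = 0.04 * pwl_square(v) + 5.0 * v + 140.0 - u + i_in
    du = a * (b * v - u)
    v, u = v + dt * dv, u + dt * du
    spiked = v >= 30.0                      # threshold crossing triggers reset
    v = np.where(spiked, c, v)
    u = np.where(spiked, u + d, u)
    return v, u, spiked

if __name__ == "__main__":
    # Drive one neuron with a constant input current and count spikes.
    v, u = np.array([-65.0]), np.array([-13.0])
    n_spikes = 0
    for _ in range(200):
        v, u, s = neuron_step(v, u, i_in=10.0)
        n_spikes += int(s[0])
    print("spikes in 200 steps:", n_spikes)
```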
Source journal
Engineering Applications of Artificial Intelligence (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
Journal description: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.