Improvement on the learning performance of multiplierless multilayer neural network

H. Hikawa
{"title":"无乘数多层神经网络学习性能的改进","authors":"H. Hikawa","doi":"10.1109/ISCAS.1997.608907","DOIUrl":null,"url":null,"abstract":"In this paper, improved multiplierless multilayer neural network (MNN) with on-chip learning is proposed. Using three-state function as the activating function, multipliers are replaced by much simpler circuit. The back-propagation algorithm is modified to have no multiplier and the algorithm is implemented with pulse mode operation. This learning circuit is modified to improve the rate of successful learning. The derivative function of neurons which is used in the learning algorithm is changed for the higher learning rate. The modification is very simple, and the additional circuit for this modification is very small. To verify the feasibility of the proposed method, the modified MNN is implemented on FPGAs and tested by experiment, and the detail of the learning performance is tested by computer simulations. These results show that the learning rate can be greatly improved by using the proposed MNN architecture. Also, the experimental result shows that the proposed MNN has a very fast operation of 17.9/spl times/10/sup 6/ connections per second (CPS) and 11.7/spl times/10/sup 6/ connection updates per second (CUPS).","PeriodicalId":68559,"journal":{"name":"电路与系统学报","volume":"125 1","pages":"641-644 vol.1"},"PeriodicalIF":0.0000,"publicationDate":"1997-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Improvement on the learning performance of multiplierless multilayer neural network\",\"authors\":\"H. Hikawa\",\"doi\":\"10.1109/ISCAS.1997.608907\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, improved multiplierless multilayer neural network (MNN) with on-chip learning is proposed. Using three-state function as the activating function, multipliers are replaced by much simpler circuit. The back-propagation algorithm is modified to have no multiplier and the algorithm is implemented with pulse mode operation. This learning circuit is modified to improve the rate of successful learning. The derivative function of neurons which is used in the learning algorithm is changed for the higher learning rate. The modification is very simple, and the additional circuit for this modification is very small. To verify the feasibility of the proposed method, the modified MNN is implemented on FPGAs and tested by experiment, and the detail of the learning performance is tested by computer simulations. These results show that the learning rate can be greatly improved by using the proposed MNN architecture. 
Also, the experimental result shows that the proposed MNN has a very fast operation of 17.9/spl times/10/sup 6/ connections per second (CPS) and 11.7/spl times/10/sup 6/ connection updates per second (CUPS).\",\"PeriodicalId\":68559,\"journal\":{\"name\":\"电路与系统学报\",\"volume\":\"125 1\",\"pages\":\"641-644 vol.1\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1997-06-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"电路与系统学报\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCAS.1997.608907\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"电路与系统学报","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.1109/ISCAS.1997.608907","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

In this paper, an improved multiplierless multilayer neural network (MNN) with on-chip learning is proposed. Using a three-state function as the activation function, the multipliers are replaced by a much simpler circuit. The back-propagation algorithm is modified so that it requires no multiplier, and the algorithm is implemented with pulse-mode operation. The learning circuit is further modified to improve the rate of successful learning: the derivative function of the neurons used in the learning algorithm is changed to raise the learning success rate. The modification is very simple, and the additional circuitry it requires is very small. To verify the feasibility of the proposed method, the modified MNN is implemented on FPGAs and tested by experiment, and the details of the learning performance are examined by computer simulations. The results show that the rate of successful learning can be greatly improved with the proposed MNN architecture. The experimental results also show that the proposed MNN operates very fast, at 17.9×10^6 connections per second (CPS) and 11.7×10^6 connection updates per second (CUPS).
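The abstract only names the ingredients (three-state activation, multiplier-free back-propagation, pulse-mode hardware) without giving the equations or the circuit. The Python sketch below is a hedged software illustration of the general idea, not the paper's design: the threshold value, the rectangular surrogate derivative, and the power-of-two (bit-shift) update step are assumptions introduced here for illustration only.

```python
import numpy as np

def three_state(x, theta=0.5):
    """Illustrative three-state activation: output is -1, 0, or +1.
    The threshold theta is an assumption, not a value from the paper."""
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

def three_state_derivative(x, theta=0.5, width=0.5):
    """Surrogate derivative for learning. The paper changes this function to
    raise the rate of successful learning; here a rectangular window around
    each threshold is used purely as a stand-in."""
    return (np.abs(np.abs(x) - theta) < width).astype(float)

def forward(weights, activations):
    """With activations restricted to {-1, 0, +1}, each weight*activation
    product collapses to an add, a subtract, or a skip, so no hardware
    multiplier is needed."""
    net = np.zeros(weights.shape[0])
    for j in range(weights.shape[0]):
        for i, a in enumerate(activations):
            if a > 0:
                net[j] += weights[j, i]
            elif a < 0:
                net[j] -= weights[j, i]
    return three_state(net)

def multiplierless_update(w, grad_sign, shift=4):
    """Weight update without a multiplier: step in the direction of the
    gradient sign by a power-of-two amount (a bit shift in hardware).
    The shift value is an assumption for illustration."""
    return w - np.sign(grad_sign) * (2.0 ** -shift)

# Usage example: one layer with 4 inputs and 3 neurons.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(3, 4))
x = np.array([1.0, -1.0, 0.0, 1.0])   # inputs already in {-1, 0, +1}
y = forward(W, x)
```

Because both inputs and outputs stay in {-1, 0, +1}, every multiply in the forward pass and in the weight update reduces to add/subtract/skip or a shift; in the paper the same effect is obtained with pulse-mode digital circuitry on the FPGA.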