Implementing the on-chip backpropagation learning algorithm on FPGA architecture

H. Vo
DOI: 10.1109/ICSSE.2017.8030932
Published in: 2017 International Conference on System Science and Engineering (ICSSE), July 2017
Citations: 6

Abstract

Scaling of CMOS integrated-circuit technology lowers chip cost and raises processing performance in complex, reconfigurable applications, making VLSI architectures promising candidates for implementing neural-network models. The backpropagation algorithm trains a multilayer perceptron with a high degree of parallelism, and such parallel computation is well suited to FPGA or ASIC implementation. This paper proposes an on-chip backpropagation learning design that implements a 2×2×1 neural-network architecture on an FPGA. Simulation results show that the backpropagation algorithm converges within 3 epochs to an error target as small as 0.05, and the weights learned on the FPGA differ from those learned in Matlab by less than 2%. These results open the way for larger neural networks that communicate with other hardware architectures.
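To make the training scheme concrete, the following is a minimal floating-point sketch of backpropagation for a 2×2×1 multilayer perceptron (2 inputs, 2 hidden neurons, 1 output), matching the topology described in the abstract. The XOR training set, sigmoid activation, learning rate, and epoch count are illustrative assumptions; the paper's fixed-point FPGA arithmetic and its specific training data are not modeled here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(0.0, 1.0, (2,))     # hidden -> output weights
b2 = 0.0

# Illustrative training set (XOR); the paper's data is not specified here.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])

# Loss before training, for comparison.
y0 = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse_before = float(np.mean((y0 - T) ** 2))

lr = 0.5
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)          # hidden activations, shape (4, 2)
    y = sigmoid(h @ W2 + b2)          # outputs, shape (4,)
    # Backward pass: output delta, then hidden delta via the chain rule.
    d_out = (y - T) * y * (1.0 - y)               # shape (4,)
    d_hid = np.outer(d_out, W2) * h * (1.0 - h)   # shape (4, 2)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum()
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

y_final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse_after = float(np.mean((y_final - T) ** 2))
print(f"MSE before: {mse_before:.4f}, after: {mse_after:.4f}")
```

On an FPGA this computation would typically be mapped to fixed-point multiply-accumulate units, with the forward and backward passes pipelined in parallel, which is what makes the hardware implementation attractive.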