A Fast Learning Strategy for Multilayer Feedforward Neural Networks

Huawei Chen, Hualan Zhong, H. Yuan, F. Jin
{"title":"A Fast Learning Strategy for Multilayer Feedforward Neural Networks","authors":"Huawei Chen, Hualan Zhong, H. Yuan, F. Jin","doi":"10.1109/WCICA.2006.1712920","DOIUrl":null,"url":null,"abstract":"This paper proposes a new training algorithm called bi-phases weights' adjusting (BPWA) for feedforward neural networks. Unlike BP learning algorithm, BPWA can adjust the weights during both forward phase and backward phase. The algorithm computes the minimum norm square solution as the weights between the hidden layer and output layer in the forward pass, while the backward pass, on the other hand, adjusts other weights in the network according to error gradient descent method. The experimental results based on function approximation and classification tasks show that new algorithm is able to achieve faster converging speed with good generalization performance when compared with the BP and Levenberg-Marquardt BP algorithm","PeriodicalId":375135,"journal":{"name":"2006 6th World Congress on Intelligent Control and Automation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 6th World Congress on Intelligent Control and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCICA.2006.1712920","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This paper proposes a new training algorithm, called bi-phases weights' adjusting (BPWA), for feedforward neural networks. Unlike the BP learning algorithm, BPWA adjusts the weights during both the forward phase and the backward phase. In the forward pass, the algorithm computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer; in the backward pass, it adjusts the remaining weights in the network by error gradient descent. Experimental results on function approximation and classification tasks show that the new algorithm converges faster than the BP and Levenberg-Marquardt BP algorithms while maintaining good generalization performance.
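The two-phase scheme described above can be sketched for a single-hidden-layer network: the forward phase solves the hidden-to-output weights in closed form via the pseudoinverse (the minimum-norm least-squares solution), and the backward phase updates the input-to-hidden weights by gradient descent. This is a minimal illustration of the idea, not the paper's implementation; all names, the learning rate, and the toy task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpwa_epoch(X, T, W1, lr=0.1):
    """One illustrative BPWA epoch (names are hypothetical):
    forward phase solves the output weights in closed form,
    backward phase applies gradient descent to W1."""
    # Forward phase: hidden activations, then the minimum-norm
    # least-squares solution of H @ W2 = T for the output weights.
    H = sigmoid(X @ W1)                  # (n_samples, n_hidden)
    W2 = np.linalg.pinv(H) @ T           # hidden-to-output weights
    Y = H @ W2                           # network output

    # Backward phase: gradient of the squared error w.r.t. W1,
    # backpropagated through the sigmoid hidden layer.
    E = Y - T                            # (n_samples, n_out)
    dH = (E @ W2.T) * H * (1.0 - H)      # error at hidden layer
    W1 = W1 - lr * X.T @ dH / len(X)
    return W1, W2, 0.5 * np.mean(E ** 2)

# Toy function-approximation task: fit y = sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
X = np.hstack([x, np.ones_like(x)])      # input plus bias column
T = np.sin(x)

W1 = rng.normal(scale=0.5, size=(2, 10))  # input(+bias) -> 10 hidden units
for _ in range(50):
    W1, W2, mse = bpwa_epoch(X, T, W1)
print(f"final MSE: {mse:.4f}")
```

Because the output weights are refit exactly in every forward pass, the error drops quickly even before the hidden weights have moved far, which is the intuition behind the faster convergence reported in the abstract.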