{"title":"A Fast Learning Strategy for Multilayer Feedforward Neural Networks","authors":"Huawei Chen, Hualan Zhong, H. Yuan, F. Jin","doi":"10.1109/WCICA.2006.1712920","DOIUrl":null,"url":null,"abstract":"This paper proposes a new training algorithm called bi-phases weights' adjusting (BPWA) for feedforward neural networks. Unlike BP learning algorithm, BPWA can adjust the weights during both forward phase and backward phase. The algorithm computes the minimum norm square solution as the weights between the hidden layer and output layer in the forward pass, while the backward pass, on the other hand, adjusts other weights in the network according to error gradient descent method. The experimental results based on function approximation and classification tasks show that new algorithm is able to achieve faster converging speed with good generalization performance when compared with the BP and Levenberg-Marquardt BP algorithm","PeriodicalId":375135,"journal":{"name":"2006 6th World Congress on Intelligent Control and Automation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 6th World Congress on Intelligent Control and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCICA.2006.1712920","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
This paper proposes a new training algorithm, called bi-phases weights' adjusting (BPWA), for feedforward neural networks. Unlike the BP learning algorithm, BPWA adjusts the weights during both the forward phase and the backward phase. In the forward pass, the algorithm computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer; in the backward pass, it adjusts the other weights in the network by error gradient descent. Experimental results on function approximation and classification tasks show that the new algorithm achieves faster convergence with good generalization performance compared with the BP and Levenberg-Marquardt BP algorithms.
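The two-phase scheme the abstract describes maps naturally onto a pseudoinverse step (forward phase) plus a gradient-descent step (backward phase). Below is a minimal sketch of that idea, assuming a single hidden layer with sigmoid activation, a linear output layer, and no bias terms; the function name `bpwa_epoch`, the network shape, and the hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of the BPWA idea from the abstract: per epoch, the
# forward phase sets the hidden-to-output weights to the minimum-norm
# least-squares solution (via the pseudoinverse), and the backward phase
# updates the input-to-hidden weights by error gradient descent.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpwa_epoch(X, T, W_in, lr=0.1):
    """One BPWA-style epoch.
    X: (n, d) inputs, T: (n, m) targets, W_in: (d, h) input-to-hidden weights.
    Returns updated (W_in, W_out)."""
    # Forward phase: hidden activations, then solve for the output weights.
    H = sigmoid(X @ W_in)              # (n, h) hidden-layer outputs
    W_out = np.linalg.pinv(H) @ T      # minimum-norm least-squares solution

    # Backward phase: gradient descent on squared error w.r.t. W_in only.
    Y = H @ W_out                      # network output (linear output layer)
    E = Y - T                          # (n, m) output error
    dH = (E @ W_out.T) * H * (1.0 - H) # backpropagate through the sigmoid
    W_in = W_in - lr * X.T @ dH / len(X)  # averaged gradient step
    return W_in, W_out

# Toy usage: approximate sin(x) on [-pi, pi] with 20 hidden units.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W_in = rng.normal(scale=0.5, size=(1, 20))
for _ in range(100):
    W_in, W_out = bpwa_epoch(X, T, W_in)
print("MSE:", np.mean((sigmoid(X @ W_in) @ W_out - T) ** 2))
```

The pseudoinverse solve gives the hidden-to-output weights that minimize the squared error of the linear system H W_out ≈ T in a single step, which is consistent with the abstract's claim of faster convergence than adjusting those weights by gradient descent alone.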