{"title":"On Stage-Wise Backpropagation for Improving Cheng’s Method for Fully Connected Cascade Networks","authors":"Eiji Mizutani, Naoyuki Kubota, Tam Chi Truong","doi":"10.1007/s11063-024-11655-4","DOIUrl":null,"url":null,"abstract":"<p>In this journal, Cheng has proposed a <i>backpropagation</i> (<i>BP</i>) procedure called BPFCC for deep <i>fully connected cascaded</i> (<i>FCC</i>) neural network learning in comparison with a <i>neuron-by-neuron</i> (NBN) algorithm of Wilamowski and Yu. Both BPFCC and NBN are designed to implement the Levenberg-Marquardt method, which requires an efficient evaluation of the Gauss-Newton (approximate Hessian) matrix <span>\\(\\nabla \\textbf{r}^\\textsf{T} \\nabla \\textbf{r}\\)</span>, the cross product of the Jacobian matrix <span>\\(\\nabla \\textbf{r}\\)</span> of the residual vector <span>\\(\\textbf{r}\\)</span> in <i>nonlinear least squares sense</i>. Here, the dominant cost is to form <span>\\(\\nabla \\textbf{r}^\\textsf{T} \\nabla \\textbf{r}\\)</span> by <i>rank updates on each data pattern</i>. Notably, NBN is better than BPFCC for the multiple <span>\\(q~\\!(>\\!1)\\)</span>-output FCC-learning when <i>q</i> rows (per pattern) of the Jacobian matrix <span>\\(\\nabla \\textbf{r}\\)</span> are evaluated; however, the dominant cost (for rank updates) is common to both BPFCC and NBN. The purpose of this paper is to present a new more efficient <i>stage-wise BP</i> procedure (for <i>q</i>-output FCC-learning) that <i>reduces the dominant cost</i> with no rows of <span>\\(\\nabla \\textbf{r}\\)</span> explicitly evaluated, just as standard BP evaluates the gradient vector <span>\\(\\nabla \\textbf{r}^\\textsf{T} \\textbf{r}\\)</span> with no explicit evaluation of any rows of the Jacobian matrix <span>\\(\\nabla \\textbf{r}\\)</span>.</p>","PeriodicalId":51144,"journal":{"name":"Neural Processing Letters","volume":"64 1","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Processing Letters","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11063-024-11655-4","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In this journal, Cheng has proposed a backpropagation (BP) procedure called BPFCC for deep fully connected cascaded (FCC) neural network learning, in comparison with the neuron-by-neuron (NBN) algorithm of Wilamowski and Yu. Both BPFCC and NBN are designed to implement the Levenberg-Marquardt method, which requires an efficient evaluation of the Gauss-Newton (approximate Hessian) matrix \(\nabla \textbf{r}^\textsf{T} \nabla \textbf{r}\), the cross product of the Jacobian matrix \(\nabla \textbf{r}\) of the residual vector \(\textbf{r}\) in the nonlinear least squares sense. Here, the dominant cost is forming \(\nabla \textbf{r}^\textsf{T} \nabla \textbf{r}\) by rank updates on each data pattern. Notably, NBN is better than BPFCC for multiple-output (\(q>1\)) FCC learning when the q rows (per pattern) of the Jacobian matrix \(\nabla \textbf{r}\) are evaluated; however, the dominant cost (of the rank updates) is common to both BPFCC and NBN. The purpose of this paper is to present a new, more efficient stage-wise BP procedure (for q-output FCC learning) that reduces the dominant cost with no rows of \(\nabla \textbf{r}\) explicitly evaluated, just as standard BP evaluates the gradient vector \(\nabla \textbf{r}^\textsf{T} \textbf{r}\) with no explicit evaluation of any rows of the Jacobian matrix \(\nabla \textbf{r}\).
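To make the cost contrast in the abstract concrete, below is a minimal JAX sketch, not the paper's BPFCC or the authors' stage-wise BP procedure. It builds a toy FCC network (each hidden neuron sees the inputs plus all earlier neurons) and then shows (1) the gradient \(\nabla \textbf{r}^\textsf{T} \textbf{r}\) computed by a single reverse pass (a vector-Jacobian product, with no Jacobian rows ever formed), versus (2) the Gauss-Newton matrix \(\nabla \textbf{r}^\textsf{T} \nabla \textbf{r}\) accumulated by per-pattern rank-q updates from explicitly evaluated Jacobian rows, which is the dominant cost the paper targets. The function `fcc_forward`, the network sizes, and the random data are all hypothetical choices made for illustration.

```python
# Minimal sketch (assumed toy setup, not the paper's algorithm) contrasting:
#  (1) gradient J^T r via ONE reverse pass (no Jacobian rows formed), and
#  (2) Gauss-Newton G = J^T J via per-pattern rank-q updates on explicit rows.
import jax
import jax.numpy as jnp

n_in, m, q, P = 3, 3, 2, 5          # inputs, cascade neurons, outputs, patterns
n_w = sum(n_in + i + 1 for i in range(m)) + q * (n_in + m + 1)  # total weights

def fcc_forward(w, x):
    """Toy FCC net: each hidden neuron sees the inputs and all earlier neurons."""
    idx, acts = 0, []
    for i in range(m):
        k = n_in + i + 1                       # fan-in: inputs + earlier units + bias
        feats = jnp.concatenate([x] + [a[None] for a in acts] + [jnp.ones(1)])
        acts.append(jnp.tanh(jnp.dot(w[idx:idx + k], feats)))
        idx += k
    feats = jnp.concatenate([x] + [a[None] for a in acts] + [jnp.ones(1)])
    kq = n_in + m + 1                          # fan-in of each linear output unit
    return jnp.stack([jnp.dot(w[idx + j*kq : idx + (j+1)*kq], feats)
                      for j in range(q)])     # q outputs per pattern

w = jax.random.normal(jax.random.PRNGKey(0), (n_w,)) * 0.5
X = jax.random.normal(jax.random.PRNGKey(1), (P, n_in))     # made-up patterns
T = jax.random.normal(jax.random.PRNGKey(2), (P, q))        # made-up targets

def residuals(w):
    """Stacked residual vector r of length P*q (nonlinear least squares)."""
    return (jax.vmap(lambda x: fcc_forward(w, x))(X) - T).ravel()

# (1) Gradient J^T r by one vector-Jacobian product -- standard-BP style:
#     no row of the Jacobian is ever materialized.
r, vjp_fn = jax.vjp(residuals, w)
g = vjp_fn(r)[0]                               # shape (n_w,)

# (2) Gauss-Newton matrix by rank-q updates: q Jacobian rows per pattern,
#     then G += J_p^T J_p. Forming G this way is the dominant cost.
G = jnp.zeros((n_w, n_w))
for p in range(P):
    Jp = jax.jacrev(lambda w_: fcc_forward(w_, X[p]) - T[p])(w)   # (q, n_w)
    G = G + Jp.T @ Jp

# Sanity check against the full explicit Jacobian.
J = jax.jacrev(residuals)(w)                   # (P*q, n_w)
assert jnp.allclose(g, J.T @ r, atol=1e-4)
assert jnp.allclose(G, J.T @ J, atol=1e-4)
```

In this toy setting, part (2) performs P separate q-row Jacobian evaluations plus P rank-q matrix updates, while part (1) needs only a single backward sweep; the paper's contribution is a stage-wise BP procedure that lowers the cost of forming the Gauss-Newton matrix without ever evaluating those rows explicitly.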
Journal Introduction
Neural Processing Letters is an international journal publishing research results and innovative ideas on all aspects of artificial neural networks. Coverage includes theoretical developments, biological models, new formal modes of learning, applications, software and hardware developments, and prospective research.
The journal promotes the fast exchange of information in the community of neural network researchers and users. The resurgence of interest in the field of artificial neural networks since the beginning of the 1980s has been coupled with tremendous research activity in specialized and multidisciplinary groups. Research, however, is not possible without good communication between people and the exchange of information, especially in a field covering such different areas. Fast communication is also a key aspect, and this is the reason for Neural Processing Letters.