{"title":"Fast and efficient sequential learning algorithms using direct-link RBF networks","authors":"V. Asirvadam, S. McLoone, G. Irwin","doi":"10.1109/NNSP.2003.1318020","DOIUrl":null,"url":null,"abstract":"Novel fast and efficient sequential learning algorithms are proposed for direct-link radial basis function (DRBF) networks. The dynamic DRBF network is trained using the recently proposed decomposed/parallel recursive Levenberg Marquardt (PRLM) algorithm by neglecting the interneuron weight interactions. The resulting sequential learning approach enables weights to be updated in an efficient parallel manner and facilitates a minimal update extension for real-time applications. Simulation results for two benchmark problems show the feasibility of the new training algorithms.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.2003.1318020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
Novel fast and efficient sequential learning algorithms are proposed for direct-link radial basis function (DRBF) networks. The dynamic DRBF network is trained using the recently proposed decomposed, or parallel, recursive Levenberg-Marquardt (PRLM) algorithm, which neglects the interneuron weight interactions. The resulting sequential learning approach enables the weights to be updated in an efficient parallel manner and facilitates a minimal-update extension for real-time applications. Simulation results for two benchmark problems show the feasibility of the new training algorithms.
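To make the decomposition idea concrete, the sketch below shows a direct-link RBF network (Gaussian hidden units plus a linear input-to-output link) whose parameters are split into per-neuron blocks, each updated by an independent recursive Gauss-Newton/RLS-style step. This is only an illustration of the structure described in the abstract, not the authors' PRLM recursions: the Gaussian basis choice, the forgetting factor `lam`, the initial covariance `p0`, and the simplified block update are all assumptions made for the example.

```python
# Illustrative sketch of a decomposed (per-block) recursive update for a
# direct-link RBF network. The exact PRLM equations in the paper may differ;
# this only mirrors the idea of neglecting interneuron weight interactions
# so that each block can be updated independently (and hence in parallel).
import numpy as np


class DRBFSketch:
    def __init__(self, n_in, n_hidden, lam=0.99, p0=100.0, seed=0):
        rng = np.random.default_rng(seed)
        self.lam = lam                                   # forgetting factor (assumed value)
        self.centres = rng.normal(size=(n_hidden, n_in))
        self.widths = np.ones(n_hidden)
        self.w = rng.normal(scale=0.1, size=n_hidden)    # hidden-to-output weights
        self.v = np.zeros(n_in)                          # direct input-to-output links
        self.b = 0.0
        # One covariance matrix per decomposed block: each hidden neuron owns
        # (centre, width, output weight); the direct-link block owns (v, b).
        self.P_hidden = [p0 * np.eye(n_in + 2) for _ in range(n_hidden)]
        self.P_direct = p0 * np.eye(n_in + 1)

    def _phi(self, x):
        d2 = np.sum((x - self.centres) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2)), d2

    def predict(self, x):
        phi, _ = self._phi(x)
        return self.w @ phi + self.v @ x + self.b

    def _block_update(self, P, theta, grad, err):
        # Simplified recursive Gauss-Newton / RLS step on a single parameter
        # block, standing in for the per-block recursive LM recursion.
        Pg = P @ grad
        gain = Pg / (self.lam + grad @ Pg)
        theta_new = theta + gain * err
        P_new = (P - np.outer(gain, Pg)) / self.lam
        return P_new, theta_new

    def update(self, x, y):
        phi, d2 = self._phi(x)
        err = y - (self.w @ phi + self.v @ x + self.b)
        # Each hidden neuron's block is updated on its own local gradient,
        # ignoring cross-neuron interactions, so this loop could run in parallel.
        for j in range(len(self.w)):
            dc = self.w[j] * phi[j] * (x - self.centres[j]) / self.widths[j] ** 2
            ds = self.w[j] * phi[j] * d2[j] / self.widths[j] ** 3
            grad = np.concatenate([dc, [ds, phi[j]]])
            theta = np.concatenate([self.centres[j], [self.widths[j], self.w[j]]])
            self.P_hidden[j], theta = self._block_update(self.P_hidden[j], theta, grad, err)
            self.centres[j] = theta[:-2]
            self.widths[j] = max(theta[-2], 1e-3)        # keep widths positive (safety clamp)
            self.w[j] = theta[-1]
        # Direct-link block: linear input weights plus bias.
        grad_d = np.concatenate([x, [1.0]])
        theta_d = np.concatenate([self.v, [self.b]])
        self.P_direct, theta_d = self._block_update(self.P_direct, theta_d, grad_d, err)
        self.v, self.b = theta_d[:-1], theta_d[-1]
        return err
```

In use, each incoming sample (x, y) is passed once through `update`, so the network learns sequentially as data arrive; restricting the update to only the block with the largest local error would give a rough analogue of the minimal-update extension mentioned in the abstract.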