A novel fast learning algorithms for time-delay neural networks
J. Minghu, Z. Xiaoyan
DOI: 10.1109/IJCNN.1999.831164
Published in: IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
Publication date: 1999-07-10
Citations: 1
Abstract
To address the long training time required by Waibel's time-delay neural networks (TDNNs) in phoneme recognition, this paper puts forward several improved fast learning methods for TDNNs. Combining the unsupervised Oja rule with a similar error backpropagation algorithm for the initial training of TDNN weights effectively increases the convergence speed. Improving the error energy function and scaling the weight updates according to the size of the output error also increases the training speed. By moving from layer-by-layer backpropagation to averaging the overlapping parts of the backpropagated error of the first hidden layer along a frame, the convergence speed increases as the number of training samples gradually grows. For multi-class phonemic modular TDNNs, we improve the architecture of Waibel's modular networks and obtain an optimal tree-structured modular TDNN that accelerates learning; its training time is less than that of Waibel's modular TDNNs.
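The abstract's first method rests on Oja's rule, an unsupervised Hebbian update that drives a weight vector toward the principal component of its input — a plausible way to give TDNN weights an informative starting point before supervised backpropagation. The paper's exact initialization scheme is not reproduced here; the following is only a minimal sketch of the plain Oja update, with the data, learning rate, and dimensions chosen for illustration:

```python
import numpy as np

def oja_update(w, x, lr=0.005):
    """One step of Oja's rule: dw = lr * y * (x - y * w), with y = w . x.

    The subtractive y*w term keeps ||w|| bounded near 1, so the rule
    converges to the unit principal eigenvector of the input covariance.
    """
    y = w @ x
    return w + lr * y * (x - y * w)

# Demo: 2-D data with much more variance along the first axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) * np.array([3.0, 0.5])  # std 3 vs 0.5

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for x in X:
    w = oja_update(w, x)

# After training, w is approximately a unit vector along the dominant
# direction (here the first axis, up to sign).
print(np.linalg.norm(w), w)
```

In the initialization setting the abstract describes, updates of this form could be run over input frames to shape the first-layer TDNN weights before the backpropagation phase begins.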