{"title":"Parallel batch pattern BP training algorithm of recurrent neural network","authors":"V. Turchenko, L. Grandinetti","doi":"10.1109/INES.2010.5483830","DOIUrl":null,"url":null,"abstract":"The development of parallel algorithm for batch pattern training of a recurrent neural network with the back propagation training algorithm and the research of its efficiency on general-purpose parallel computer are presented in this paper. The recurrent neural network model and the usual sequential batch pattern training algorithm are theoretically described. An algorithmic description of the parallel version of the batch pattern training method is introduced. The efficiency of parallelization of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem. The results of the experimental researches show that the parallelization efficiency of the algorithm is high enough for its efficient usage on general-purpose parallel computers available within modern computational grid systems.","PeriodicalId":118326,"journal":{"name":"2010 IEEE 14th International Conference on Intelligent Engineering Systems","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE 14th International Conference on Intelligent Engineering Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INES.2010.5483830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
This paper presents the development of a parallel algorithm for batch pattern training of a recurrent neural network with the back-propagation training algorithm, together with an investigation of its efficiency on a general-purpose parallel computer. The recurrent neural network model and the standard sequential batch pattern training algorithm are described theoretically. An algorithmic description of the parallel version of the batch pattern training method is then introduced. The parallelization efficiency of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem. The experimental results show that the parallelization efficiency is high enough for the algorithm to be used effectively on the general-purpose parallel computers available within modern computational grid systems.
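The batch pattern scheme described above parallelizes naturally: each processor receives a subset of the training patterns, computes the per-pattern BPTT gradients over its subset, and the partial gradient sums are then reduced before a single weight update. The sketch below illustrates this idea on a toy single-neuron recurrent model; the model (parameters `w`, `u`, `v`), the round-robin pattern assignment, and the sequential emulation of workers are illustrative assumptions, not the network or MPI implementation used in the paper.

```python
import math

# Toy Elman-style recurrent neuron (illustrative, not the paper's network):
#   h_t = tanh(w*x_t + u*h_{t-1}),  output y = v*h_T,  loss = 0.5*(y - d)^2.

def forward(params, xs):
    w, u, v = params
    hs = [0.0]                                # h_0 = 0
    for x in xs:
        hs.append(math.tanh(w * x + u * hs[-1]))
    return hs, v * hs[-1]

def pattern_gradient(params, xs, d):
    """Gradient of the loss for ONE training pattern via BPTT."""
    w, u, v = params
    hs, y = forward(params, xs)
    err = y - d
    gv = err * hs[-1]
    gw = gu = 0.0
    delta = err * v * (1.0 - hs[-1] ** 2)     # dL/d(pre-activation a_T)
    for t in range(len(xs), 0, -1):
        gw += delta * xs[t - 1]               # da_t/dw = x_t
        gu += delta * hs[t - 1]               # da_t/du = h_{t-1}
        if t > 1:
            delta = delta * u * (1.0 - hs[t - 1] ** 2)
    return (gw, gu, gv)

def batch_gradient(params, patterns):
    """Sequential batch pattern gradient: sum over all patterns."""
    g = (0.0, 0.0, 0.0)
    for xs, d in patterns:
        pg = pattern_gradient(params, xs, d)
        g = tuple(a + b for a, b in zip(g, pg))
    return g

def parallel_batch_gradient(params, patterns, workers=4):
    """Each 'worker' handles a slice of the patterns; the partial gradients
    are then summed, as an all-reduce would do on a real parallel machine.
    Workers are emulated sequentially here to keep the sketch self-contained."""
    partials = [batch_gradient(params, patterns[r::workers])
                for r in range(workers)]
    return tuple(sum(p[i] for p in partials) for i in range(3))
```

Because gradient summation is associative, the reduced parallel gradient equals the sequential batch gradient (up to floating-point ordering), so the parallel algorithm follows exactly the same weight trajectory as the sequential one; only the per-pattern work is distributed.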