Analysis of Weight Initialization Routines for Scaled Conjugate Gradient Training Algorithm
S. Masood, M. N. Doja, Pravin Chandra
2016 Second International Conference on Computational Intelligence & Communication Technology (CICT), published 2016-08-18
DOI: 10.1109/CICT.2016.111
The choice of weight initialization routine is one of the important decisions to be made when seeking to improve the training efficiency of an artificial neural network. In this paper, we analyze the effect of several well-known weight initialization routines on the training of an artificial neural network trained with a second-order scaled conjugate gradient algorithm. A number of experiments were conducted to perform this analysis over eight selected function approximation problems. The results suggest that the partially deterministic weight initialization method and the Nguyen-Widrow initialization technique performed equally well, helping the network train and generalize better by achieving lower training and simulation error values.
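For context, the Nguyen-Widrow technique named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the function name is ours, and it assumes the standard formulation (inputs scaled to [-1, 1], scale factor beta = 0.7 * H^(1/N) for H hidden neurons and N inputs, each neuron's weight vector renormalized to length beta, biases drawn uniformly from [-beta, beta]).

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, seed=None):
    """Sketch of Nguyen-Widrow initialization for one hidden layer.

    Assumes network inputs are scaled to [-1, 1]. Returns a weight
    matrix of shape (n_hidden, n_inputs) and a bias vector of shape
    (n_hidden,).
    """
    rng = np.random.default_rng(seed)
    # Scale factor: beta = 0.7 * H^(1/N)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Start from small uniform random weights in [-0.5, 0.5]
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    # Renormalize each neuron's weight vector so its norm equals beta
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    w = beta * w / norms
    # Biases uniform in [-beta, beta]
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b
```

The renormalization spreads the neurons' active (non-saturated) regions roughly evenly across the input range, which is why this scheme tends to speed up early training compared with purely random initialization.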