Fine tuning the parameters of back propagation algorithm for optimum learning performance
Viral Nagori
2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), December 2016. DOI: 10.1109/IC3I.2016.7917926
The back propagation algorithm is widely used for training feed forward neural networks. Over the years, many researchers have used the back propagation algorithm to train their neural network based systems without paying much attention to how the algorithm's parameters are tuned. This paper sheds light on how researchers can manipulate and experiment with the parameters of the back propagation algorithm to achieve optimum learning performance, and presents the results of laboratory experiments on fine tuning those parameters. The fine tuning process was applied to a neural network based expert system prototype that aims to analyze and design customized motivational strategies from the employees' perspective. The laboratory experiments covered the following parameters of the back propagation algorithm: learning rate, momentum rate, and activation function. Learning performance was measured and recorded, and the impact of the activation function on the final output was also measured. Based on the results, the values of these parameters that provide the optimum learning performance were chosen for the full scale system implementation.
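The abstract does not include the paper's experimental code or its expert-system data. As a rough illustration of the kind of parameter sweep it describes, the sketch below trains a minimal feed-forward network with back propagation and grid-searches the three tuned parameters — learning rate, momentum rate, and activation function — on toy XOR data. All names, grid values, and the network size are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch (not the paper's code): back propagation with a
# momentum term, exposing the three parameters tuned in the paper.
import numpy as np

# Each activation is paired with its derivative expressed in terms of
# the activation's *output*, which is what backprop needs here.
ACTIVATIONS = {
    "sigmoid": (lambda x: 1.0 / (1.0 + np.exp(-x)), lambda y: y * (1.0 - y)),
    "tanh":    (np.tanh,                            lambda y: 1.0 - y ** 2),
}

def train(X, T, hidden=4, lr=0.5, momentum=0.9,
          activation="sigmoid", epochs=2000, seed=0):
    """Train a one-hidden-layer network; return final mean squared error."""
    f, df = ACTIVATIONS[activation]
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, T.shape[1]))
    dW1 = np.zeros_like(W1)   # momentum buffers: previous weight steps
    dW2 = np.zeros_like(W2)
    for _ in range(epochs):
        H = f(X @ W1)                   # forward pass, hidden layer
        Y = f(H @ W2)                   # forward pass, output layer
        e2 = (Y - T) * df(Y)            # output-layer delta
        e1 = (e2 @ W2.T) * df(H)        # delta back-propagated to hidden layer
        # Update rule with momentum: step = -lr * gradient + momentum * old step
        dW2 = -lr * (H.T @ e2) + momentum * dW2
        dW1 = -lr * (X.T @ e1) + momentum * dW1
        W2 += dW2
        W1 += dW1
    return float(np.mean((f(f(X @ W1) @ W2) - T) ** 2))

# Grid search over the three parameters studied in the paper.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
results = {(lr, m, act): train(X, T, lr=lr, momentum=m, activation=act)
           for lr in (0.1, 0.5)
           for m in (0.0, 0.9)
           for act in ("sigmoid", "tanh")}
best = min(results, key=results.get)
print("best (lr, momentum, activation):", best, "MSE:", results[best])
```

As in the paper's procedure, each configuration's learning performance (here, final MSE) is recorded and the best-performing parameter values would then be carried into the full-scale implementation.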