{"title":"利用动态粒子群优化-反向传播学习人工神经网络:经验评价与比较","authors":"Swagatika Devi, A. Jagadev, S. Patnaik","doi":"10.6109/jicce.2015.13.2.123","DOIUrl":null,"url":null,"abstract":"Training neural networks is a complex task with great importance in the field of supervised learning. In the training process, a set of input?output patterns is repeated to an artificial neural network (ANN). From those patterns weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method can achieve faster convergence speed around the global optimum, and at the same time, the convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.","PeriodicalId":272551,"journal":{"name":"J. Inform. and Commun. Convergence Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison\",\"authors\":\"Swagatika Devi, A. Jagadev, S. Patnaik\",\"doi\":\"10.6109/jicce.2015.13.2.123\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training neural networks is a complex task with great importance in the field of supervised learning. In the training process, a set of input?output patterns is repeated to an artificial neural network (ANN). From those patterns weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method can achieve faster convergence speed around the global optimum, and at the same time, the convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.\",\"PeriodicalId\":272551,\"journal\":{\"name\":\"J. Inform. and Commun. 
Convergence Engineering\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Inform. and Commun. Convergence Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.6109/jicce.2015.13.2.123\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Inform. and Commun. Convergence Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.6109/jicce.2015.13.2.123","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison
Training neural networks is a complex task of great importance in the field of supervised learning. During training, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified inputs yield the desired outputs. In this paper, a new hybrid algorithm is proposed for the global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but the search slows considerably near the global optimum. In contrast, the gradient descent method converges quickly in the neighborhood of the global optimum, and its convergence accuracy there is relatively high. The proposed hybrid algorithm therefore combines dynamic particle swarm optimization (DPSO) with backpropagation (BP), and is referred to as the DPSO-BP algorithm, to train the weights of an ANN. We demonstrate the superiority, in both time performance and solution quality, of the proposed DPSO-BP algorithm over other standard neural network training algorithms. The algorithms are compared on two different datasets, and the simulation results are presented.
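The abstract does not include an implementation, so the following is a minimal sketch of the two-stage idea it describes: a particle swarm performs the coarse global search over the flattened network weights, and plain gradient descent (backpropagation) then refines the best particle found. The network topology (2-4-1), the XOR toy data, the swarm hyperparameters, and the helper names (unpack, mse) are illustrative assumptions, not taken from the paper; the swarm here uses a standard PSO update with a linearly decreasing inertia weight as a stand-in for the paper's dynamic swarm.

```python
# Hypothetical sketch of the DPSO-BP two-stage idea: PSO global search,
# then backpropagation fine-tuning. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR (assumed for illustration, not from the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total weight count

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    """Split a flat weight vector into layer matrices and bias vectors."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def mse(w):
    """Mean squared error of the network encoded by flat weights w."""
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

# --- Stage 1: PSO global search over the flattened weights ---------------
N_PART, PSO_ITERS = 30, 200
w_max, w_min = 0.9, 0.4       # linearly decreasing inertia (assumed schedule)
c1 = c2 = 1.49                # cognitive / social coefficients (assumed)

pos = rng.uniform(-1, 1, (N_PART, DIM))
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for t in range(PSO_ITERS):
    inertia = w_max - (w_max - w_min) * t / PSO_ITERS
    r1, r2 = rng.random((N_PART, DIM)), rng.random((N_PART, DIM))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

# --- Stage 2: backpropagation fine-tuning from the best particle ---------
w = gbest.copy()
lr = 0.5
for _ in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Gradients of the MSE through the sigmoid output and hidden layers.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_hid = d_out @ W2.T * h * (1 - h)
    grad = np.concatenate([(X.T @ d_hid).ravel(), d_hid.sum(0),
                           (h.T @ d_out).ravel(), d_out.sum(0)])
    w -= lr * grad

print(f"PSO error: {mse(gbest):.6f}  after BP fine-tuning: {mse(w):.6f}")
```

The handoff between the stages is the point of the design the abstract argues for: the swarm only needs to place the weights in a good basin of attraction, after which gradient descent, which is cheap per step but sensitive to initialization, performs the precise local convergence that a swarm handles slowly.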