{"title":"一种新的基于放大梯度函数的自适应学习算法","authors":"S. Ng, C. Cheung, S. Leung, A. Luk","doi":"10.1109/IJCNN.2001.939009","DOIUrl":null,"url":null,"abstract":"An algorithm is proposed to solve the \"flat spot\" problem in backpropagation networks by magnifying the gradient function. The idea of the learning algorithm is to vary the gradient of the activation function so as to magnify the backward propagated error signal gradient function especially when the output approaches a wrong value, thus the convergence rate can be accelerated and the flat spot problem can be eliminated. Simulation results show that, in terms of the convergence rate and global search capability, the new algorithm always outperforms the other traditional methods.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A new adaptive learning algorithm using magnified gradient function\",\"authors\":\"S. Ng, C. Cheung, S. Leung, A. Luk\",\"doi\":\"10.1109/IJCNN.2001.939009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An algorithm is proposed to solve the \\\"flat spot\\\" problem in backpropagation networks by magnifying the gradient function. The idea of the learning algorithm is to vary the gradient of the activation function so as to magnify the backward propagated error signal gradient function especially when the output approaches a wrong value, thus the convergence rate can be accelerated and the flat spot problem can be eliminated. Simulation results show that, in terms of the convergence rate and global search capability, the new algorithm always outperforms the other traditional methods.\",\"PeriodicalId\":346955,\"journal\":{\"name\":\"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)\",\"volume\":\"150 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2001.939009\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2001.939009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A new adaptive learning algorithm using magnified gradient function
An algorithm is proposed to solve the "flat spot" problem in backpropagation networks by magnifying the gradient function. The idea of the learning algorithm is to vary the gradient of the activation function so that the backward-propagated error signal is magnified, especially when the output approaches an incorrect value; this accelerates the convergence rate and eliminates the flat-spot problem. Simulation results show that, in terms of convergence rate and global search capability, the new algorithm consistently outperforms traditional methods.
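The abstract describes the mechanism only qualitatively. As a minimal sketch of the general idea (not necessarily the paper's exact formulation), the snippet below assumes a single sigmoid unit and magnifies its derivative by raising it to a power 1/S with S >= 1; the function name `magnified_prime` and the choice S = 2 are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(y):
    # Derivative of the sigmoid expressed via its output y = sigmoid(x):
    # f'(x) = y * (1 - y), which vanishes as y -> 0 or y -> 1 (the flat spot).
    return y * (1.0 - y)

def magnified_prime(y, S=2.0):
    # Illustrative magnification: raise the derivative to the power 1/S
    # (S >= 1). Near the flat spot, where y*(1-y) << 1, this boosts the
    # gradient; where the derivative is already large, the effect is small.
    return sigmoid_prime(y) ** (1.0 / S)

# Single-neuron delta-rule step: the target is 1 but the neuron is saturated
# low, so the standard error gradient is almost zero.
t, net = 1.0, -4.0
y = sigmoid(net)                      # y ~= 0.018: deep in the flat spot
err = t - y                           # output approaches a wrong value

std_grad = err * sigmoid_prime(y)     # standard backpropagated gradient
mgf_grad = err * magnified_prime(y)   # magnified gradient

print(f"output y           = {y:.4f}")
print(f"standard gradient  = {std_grad:.6f}")
print(f"magnified gradient = {mgf_grad:.6f}")  # several times larger
```

Under these assumptions the magnified gradient is roughly an order of magnitude larger than the standard one in the saturated region, which is the effect the abstract credits with accelerating convergence past flat spots.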