Mobarakol Islam, Arifur Rahaman, M. K. Hasan, M. Shahjahan
DOI: 10.1109/ISCID.2011.18
Published in: 2011 Fourth International Symposium on Computational Intelligence and Design, 2011-10-28
Citations: 2
An Efficient Neural Network Training Algorithm with Maximized Gradient Function and Modulated Chaos
The biological brain exhibits chaotic dynamics, and the structure of artificial neural networks (ANNs) resembles that of the human brain. To better imitate the brain's structure and function, it is natural to combine chaos with neural networks. In this paper we propose a chaotic learning algorithm called Maximized Gradient function and Modulated Chaos (MGMC). MGMC maximizes the gradient function and injects a modulated form of chaos into both the learning rate (LR) and the activation function; the activation function is made adaptive by using the chaotic signal as its gain factor. MGMC generates a chaotic time series as a modulated combination of the Mackey-Glass system, the Logistic Map, and the Lorenz attractor. A rescaled version of this series, called the Modulated Learning Rate (MLR), serves as the learning rate during training. As a result, the network becomes more biologically plausible, may escape local-minima regions, and converges faster, since the derivative of the activation function is maximized while the error function is minimized. MGMC is tested extensively on three real-world benchmark classification problems: Australian credit card, wine, and soybean identification. The proposed MGMC outperforms the existing BP and BPfast algorithms in both generalization ability and convergence rate.
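The two ingredients described above — a chaotic series rescaled into a per-epoch learning rate, and chaos used as the gain factor of the activation function — can be sketched as follows. This is a minimal illustration, not the paper's implementation: for brevity it uses only the Logistic Map (the paper modulates it together with Mackey-Glass and the Lorenz attractor), and the rescaling range `[lr_min, lr_max]` and the gain offset are hypothetical choices.

```python
import numpy as np

def logistic_map(n, r=4.0, x0=0.3):
    """Chaotic sequence from the logistic map x_{t+1} = r * x_t * (1 - x_t).
    With r = 4.0 the iterates stay in (0, 1) and behave chaotically."""
    xs = np.empty(n)
    x = x0
    for t in range(n):
        x = r * x * (1.0 - x)
        xs[t] = x
    return xs

def modulated_learning_rate(n_epochs, lr_min=0.01, lr_max=0.5):
    """Rescale the chaotic series into [lr_min, lr_max] to act as a
    per-epoch learning rate (the MLR idea); the bounds are illustrative."""
    xs = logistic_map(n_epochs)
    return lr_min + (lr_max - lr_min) * (xs - xs.min()) / (xs.max() - xs.min())

def chaotic_sigmoid(z, gain):
    """Sigmoid activation whose slope is modulated by a chaotic gain factor,
    making the activation adaptive across epochs."""
    return 1.0 / (1.0 + np.exp(-gain * z))

# Usage: one learning rate and one activation gain per training epoch.
n_epochs = 100
mlr = modulated_learning_rate(n_epochs)
gains = 0.5 + logistic_map(n_epochs)  # offset keeps the gain positive (hypothetical)
```

Each epoch would then use `mlr[t]` as its step size and `chaotic_sigmoid(z, gains[t])` as its activation, so neither the step size nor the activation slope settles into a fixed value — the property the authors credit for escaping local minima.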