{"title":"A new multifunctional neural network with high performance and low energy consumption","authors":"L. M. Zhang","doi":"10.1109/ICCI-CC.2016.7862082","DOIUrl":null,"url":null,"abstract":"A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons. Therefore, it has an optimization limitation for complex big data analysis due to its single mathematical functionality. In addition, an ANN with a complicated activation function uses a very long training time and consumes a lot of energy. To address these issues, this paper presents a new energy-efficient “Multifunctional Neural Network” (MNN) that uses a variety of different activation functions to effectively improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections for improving performance while still achieving relatively fast computational time and reducing energy usage. A novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new “Genetic Deep Multifunctional Neural Network” (GDMNN) uses genetic algorithms to optimize the weights and biases, and selects the set of best-performing energy-efficient activation functions for all neurons. The results from sufficient simulations indicate that this optimized GDMNN can perform better than other GDMNNs in terms of achieving high performance (prediction accuracy), low energy consumption, and fast training time. Future works include (1) developing more effective energy-efficient learning algorithms for the MNN for data mining application problems, and (2) using parallel cloud computing methods to significantly speed up training the MNN.","PeriodicalId":135701,"journal":{"name":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI-CC.2016.7862082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons, so its single mathematical functionality limits how well it can be optimized for complex big-data analysis. In addition, an ANN with a complicated activation function requires a very long training time and consumes a great deal of energy. To address these issues, this paper presents a new energy-efficient "Multifunctional Neural Network" (MNN) that uses a variety of different activation functions to effectively improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections, improving performance while keeping computation relatively fast and reducing energy usage. Building on it, a novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new "Genetic Deep Multifunctional Neural Network" (GDMNN) uses genetic algorithms to optimize the weights and biases and to select the set of best-performing, energy-efficient activation functions for all neurons. Results from extensive simulations indicate that the optimized GDMNN outperforms other GDMNNs in prediction accuracy, energy consumption, and training time. Future work includes (1) developing more effective, energy-efficient learning algorithms for the MNN on data mining problems, and (2) using parallel cloud computing to significantly speed up MNN training.
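To make the core idea concrete, the sketch below shows one way a genetic algorithm can jointly evolve a network's weights and a per-neuron choice of activation function, as the abstract describes for the GDMNN. This is a minimal illustration, not the paper's algorithm: the candidate function set, genome encoding, fitness definition (training accuracy), and all hyperparameters are assumptions chosen only to keep the example small and runnable.

```python
# Minimal sketch: evolve per-neuron activation choices AND weights with a
# simple genetic algorithm. Illustrative only; all names and settings here
# are assumptions, not the GDMNN algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Candidate activation functions the GA can assign to each hidden neuron.
ACTIVATIONS = [
    lambda x: np.tanh(x),
    lambda x: np.maximum(0.0, x),          # ReLU
    lambda x: 1.0 / (1.0 + np.exp(-x)),    # logistic sigmoid
]

N_IN, N_HID = 2, 8

def init_genome():
    """Genome = flattened weights/biases + one activation index per neuron."""
    w = rng.normal(0.0, 1.0, size=N_IN * N_HID + N_HID + N_HID + 1)
    acts = rng.integers(0, len(ACTIVATIONS), size=N_HID)
    return w, acts

def forward(genome, X):
    w, acts = genome
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
    W2 = w[-(N_HID + 1):-1]
    b2 = w[-1]
    H = X @ W1 + b1
    # Apply each hidden neuron's own activation function.
    for j in range(N_HID):
        H[:, j] = ACTIVATIONS[acts[j]](H[:, j])
    return H @ W2 + b2

def fitness(genome, X, y):
    """Higher is better: classification accuracy on the training set."""
    pred = (forward(genome, X) > 0.0).astype(int)
    return float(np.mean(pred == y))

def mutate(genome, sigma=0.1, p_act=0.05):
    w, acts = genome
    w2 = w + rng.normal(0.0, sigma, size=w.shape)
    acts2 = acts.copy()
    flip = rng.random(N_HID) < p_act   # occasionally swap a neuron's function
    acts2[flip] = rng.integers(0, len(ACTIVATIONS), size=int(flip.sum()))
    return w2, acts2

# Toy XOR-style data, just to make the sketch runnable end to end.
X = rng.uniform(-1, 1, size=(200, N_IN))
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)

# Simple elitist GA loop: keep the 10 best genomes, refill with mutants.
pop = [init_genome() for _ in range(40)]
for gen in range(100):
    pop.sort(key=lambda g: fitness(g, X, y), reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(elite[i % 10]) for i in range(30)]

best = max(pop, key=lambda g: fitness(g, X, y))
print("accuracy:", fitness(best, X, y))
print("per-neuron activation choices:", best[1])
```

Because the activation indices live in the genome alongside the weights, selection pressure can favor cheap functions (e.g., ReLU over sigmoid) whenever they match accuracy, which is one plausible route to the energy savings the abstract claims; extending the fitness with an explicit energy penalty would be a natural variation.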