A new multifunctional neural network with high performance and low energy consumption

L. M. Zhang
Published in: 2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), August 2016
DOI: 10.1109/ICCI-CC.2016.7862082
Citations: 1

Abstract

A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons, which limits its ability to optimize complex big-data analyses because of its single mathematical functionality. In addition, an ANN with a complicated activation function requires a long training time and consumes a great deal of energy. To address these issues, this paper presents a new energy-efficient “Multifunctional Neural Network” (MNN) that uses a variety of different activation functions to improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections, improving performance while keeping computation time relatively fast and reducing energy usage. A novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new “Genetic Deep Multifunctional Neural Network” (GDMNN) uses genetic algorithms to optimize the weights and biases and to select the best-performing set of energy-efficient activation functions for all neurons. Results from extensive simulations indicate that this optimized GDMNN outperforms other GDMNN variants in prediction accuracy, energy consumption, and training time. Future work includes (1) developing more effective, energy-efficient learning algorithms for the MNN for data-mining application problems, and (2) using parallel cloud-computing methods to significantly speed up MNN training.
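The abstract does not specify the GDMNN algorithm in detail. As a hedged illustration only, the toy sketch below shows how a genetic algorithm might jointly evolve per-neuron weights and a categorical choice of activation function, which is the core idea the abstract describes. Every concrete detail here — the `evolve` and `fitness` names, the activation pool, the 2-2-1 network topology, the XOR task, and the mutation rates — is an assumption for illustration, not the paper's actual implementation.

```python
import math
import random

# Illustrative activation pool; the paper's actual set of "energy-efficient"
# functions is not given in the abstract.
ACTIVATIONS = {
    "identity": lambda x: x,
    "relu": lambda x: max(0.0, x),
    "tanh": math.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x)))),
}
ACT_NAMES = list(ACTIVATIONS)


def forward(genome, x):
    # Genome: ([(weights [w1, w2, bias], activation name)] for each hidden
    # neuron, output weights [v1, v2, bias]). Topology: 2 -> 2 -> 1.
    hidden, out_w = genome
    h = [ACTIVATIONS[act](w[0] * x[0] + w[1] * x[1] + w[2]) for w, act in hidden]
    return out_w[0] * h[0] + out_w[1] * h[1] + out_w[2]


def random_genome(rng):
    w = lambda: [rng.uniform(-2.0, 2.0) for _ in range(3)]
    return ([(w(), rng.choice(ACT_NAMES)) for _ in range(2)], w())


def fitness(genome, data):
    # Classification accuracy: sign of the output vs. the boolean label.
    correct = sum((forward(genome, x) > 0.0) == y for x, y in data)
    return correct / len(data)


def mutate(genome, rng):
    hidden, out_w = genome
    new_hidden = []
    for w, act in hidden:
        w = [wi + rng.gauss(0.0, 0.3) for wi in w]
        if rng.random() < 0.2:  # occasionally swap this neuron's activation
            act = rng.choice(ACT_NAMES)
        new_hidden.append((w, act))
    return (new_hidden, [wi + rng.gauss(0.0, 0.3) for wi in out_w])


def evolve(data, pop_size=40, generations=60, seed=0):
    # Simple elitist GA: keep the top quarter, refill with mutated elites.
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, data), reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [mutate(rng.choice(elite), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda g: fitness(g, data))


if __name__ == "__main__":
    xor = [((0, 0), False), ((0, 1), True), ((1, 0), True), ((1, 1), False)]
    best = evolve(xor)
    print("accuracy:", fitness(best, xor))
    print("chosen activations:", [act for _, act in best[0]])
```

Encoding the activation choice as a categorical gene alongside the continuous weights lets selection pressure favor cheap functions (e.g. identity or ReLU) on neurons where they suffice, which matches the energy-saving intuition of the abstract.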