Determine the Architecture of ANNs by Using the Peak Search Algorithm and Delta Values
Mihirini Wagarachchi, A. Karunananda, Dinithi Navodya
2021 5th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI), 2021-12-06. DOI: 10.1109/SLAAI-ICAI54477.2021.9664680
Citations: 0
Abstract
The solution obtained by an artificial neural network is not guaranteed to come from the simplest architecture capable of solving the problem at hand, and unnecessarily large architectures increase the computational cost of training, deploying, and using the trained network. It has been observed that the hidden layer architecture of an artificial neural network significantly influences its solution; nevertheless, modeling that hidden layer architecture remains a research challenge. This paper presents a theoretically grounded approach to pruning the hidden layers of trained artificial neural networks, ensuring that the simpler network performs as well as or better than the original, and then discusses how to extend the proposed method to deep learning networks. The method is inspired by neuroplasticity and reaches its solution in two phases. First, the number of hidden layers is determined with a peak search algorithm; the newly found simpler network, with fewer hidden layers and the highest generalization power, is then taken as the candidate whose hidden neurons are pruned. Experiments have shown that the architecture produced by this approach exhibits the same or better performance than the original network architecture.
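The abstract only names the two phases, so the sketch below shows one plausible reading of them, not the paper's implementation. All names here (`peak_search`, `prune_by_delta`), the toy scores, and the stopping rules are hypothetical illustrations: depth is chosen by scanning for a peak in a validation score, and hidden neurons are then greedily removed in order of smallest absolute delta value for as long as performance does not fall below the baseline.

```python
"""Minimal, hypothetical sketch of the two-phase procedure the abstract
describes. The scoring callables and delta values are illustrative
stand-ins; the paper does not specify these implementations."""

from typing import Callable, Dict, List


def peak_search(score_for_depth: Callable[[int], float], max_depth: int) -> int:
    """Phase 1: evaluate candidate networks of increasing depth and return
    the number of hidden layers at which generalization performance peaks."""
    best_depth, best_score = 1, score_for_depth(1)
    for depth in range(2, max_depth + 1):
        score = score_for_depth(depth)
        if score <= best_score:          # past the peak: stop searching
            break
        best_depth, best_score = depth, score
    return best_depth


def prune_by_delta(deltas: Dict[str, float],
                   score_without: Callable[[List[str]], float],
                   baseline: float) -> List[str]:
    """Phase 2: greedily remove the hidden neuron with the smallest |delta|
    while validation performance stays at or above the baseline."""
    removed: List[str] = []
    remaining = dict(deltas)
    while remaining:
        neuron = min(remaining, key=lambda n: abs(remaining[n]))
        if score_without(removed + [neuron]) < baseline:
            break                        # further pruning hurts performance
        removed.append(neuron)
        del remaining[neuron]
    return removed


if __name__ == "__main__":
    # Toy stand-ins: validation score peaks at depth 3; neuron "h2" has a
    # near-zero delta and can be removed without loss.
    depth_scores = {1: 0.80, 2: 0.86, 3: 0.90, 4: 0.88, 5: 0.85}
    deltas = {"h1": 0.40, "h2": 0.01, "h3": 0.25}

    depth = peak_search(lambda d: depth_scores[d], max_depth=5)
    pruned = prune_by_delta(
        deltas,
        score_without=lambda rm: 0.90 - 0.30 * ("h1" in rm or "h3" in rm),
        baseline=0.90,
    )
    print(f"selected depth: {depth}, pruned neurons: {pruned}")
```

Under these assumptions the two phases stay decoupled: the peak search fixes the depth first, and neuron pruning only ever operates on the single simplest network that search produced.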