{"title":"小批模式下神经网络学习的自适应自然梯度方法","authors":"Hyeyoung Park, Kwanyong Lee","doi":"10.1109/ICAIIC.2019.8669082","DOIUrl":null,"url":null,"abstract":"Natural gradient learning, which is one of gradient descent learning methods, is known to have ideal convergence properties in the learning of hierarchical machines such as layered neural networks. However, there are a few limitations that degrades its practical usability: necessity of true probability density function of input variables and heavy computational cost due to matrix inversion. Though its adaptive approximation have been developed, it is basically derived for online learning mode, in which a single update is done for a single data sample. Noting that the on-line learning mode is not appropriate for the tasks with huge number of training data, this paper proposes a practical implementation of natural gradient for mini-batch learning mode, which is the most common setting in the real application with large data set. Computational experiments on benchmark datasets shows the efficiency of the proposed methods.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Adaptive Natural Gradient Method for Learning Neural Networks with Large Data set in Mini-Batch Mode\",\"authors\":\"Hyeyoung Park, Kwanyong Lee\",\"doi\":\"10.1109/ICAIIC.2019.8669082\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Natural gradient learning, which is one of gradient descent learning methods, is known to have ideal convergence properties in the learning of hierarchical machines such as layered neural networks. However, there are a few limitations that degrades its practical usability: necessity of true probability density function of input variables and heavy computational cost due to matrix inversion. Though its adaptive approximation have been developed, it is basically derived for online learning mode, in which a single update is done for a single data sample. Noting that the on-line learning mode is not appropriate for the tasks with huge number of training data, this paper proposes a practical implementation of natural gradient for mini-batch learning mode, which is the most common setting in the real application with large data set. 
Computational experiments on benchmark datasets shows the efficiency of the proposed methods.\",\"PeriodicalId\":273383,\"journal\":{\"name\":\"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)\",\"volume\":\"55 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAIIC.2019.8669082\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIIC.2019.8669082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive Natural Gradient Method for Learning Neural Networks with Large Data set in Mini-Batch Mode
Hyeyoung Park, Kwanyong Lee
2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), February 2019
DOI: 10.1109/ICAIIC.2019.8669082
Natural gradient learning, one of the gradient descent learning methods, is known to have ideal convergence properties when training hierarchical machines such as layered neural networks. However, a few limitations degrade its practical usability: it requires the true probability density function of the input variables, and the matrix inversion it involves incurs a heavy computational cost. Although an adaptive approximation of the natural gradient has been developed, it is derived for the online learning mode, in which a single update is performed for each data sample. Noting that the online learning mode is not appropriate for tasks with very large amounts of training data, this paper proposes a practical implementation of the natural gradient for the mini-batch learning mode, which is the most common setting in real applications with large data sets. Computational experiments on benchmark datasets show the efficiency of the proposed methods.
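To make the idea concrete, the sketch below shows how an adaptive natural gradient update might be carried out per mini-batch rather than per sample. It is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the rank-1 inverse-Fisher recursion is borrowed from the classic adaptive natural gradient method of Amari, Park and Fukumizu (2000), and the names grad_fn, lr, and eps are hypothetical placeholders.

```python
import numpy as np

def adaptive_natural_gradient_step(w, G_inv, grad_fn, batch, lr=0.01, eps=0.01):
    """One mini-batch step of adaptive natural gradient descent.

    A minimal sketch, not the paper's exact algorithm: it combines the
    adaptive inverse-Fisher recursion of Amari, Park & Fukumizu (2000)
    with a mini-batch-averaged gradient. grad_fn(w, x) is assumed to
    return the per-sample loss gradient as a 1-D array of len(w).
    """
    # Mini-batch estimate of the loss gradient (averaged over the batch),
    # replacing the single-sample gradient of the online-mode derivation.
    grads = np.stack([grad_fn(w, x) for x in batch])
    g = grads.mean(axis=0)

    # Rank-1 adaptive update of the inverse Fisher estimate G_inv:
    # the small-eps Sherman-Morrison approximation corresponding to
    # G_{t+1} = (1 - eps) * G_t + eps * g g^T, which avoids the
    # explicit matrix inversion the abstract identifies as costly.
    G_inv_g = G_inv @ g
    G_inv = (1.0 + eps) * G_inv - eps * np.outer(G_inv_g, G_inv_g)

    # Natural gradient step: precondition the gradient with G_inv.
    w = w - lr * (G_inv @ g)
    return w, G_inv
```

In this sketch, each inverse-Fisher update is amortized over a whole mini-batch instead of being performed once per sample, which is the kind of saving that makes a natural-gradient-style method practical for large training sets; how the paper's proposed methods actually schedule and approximate this update is detailed in the full text.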