{"title":"增量学习的预算和快速计算方法,使用树搜索算法","authors":"Akihisa Kato, Hirohito Kawahara, K. Yamauchi","doi":"10.1109/IJCNN.2015.7280805","DOIUrl":null,"url":null,"abstract":"In this study, a lightweight kernel regression algorithm for embedded systems is proposed. In our previous study, we proposed an online learning method with a limited number of kernels based on a kernel regression model known as a limited general regression neural network (LGRNN). The LGRNN behavior is similar to that of k-nearest neighbors except for its continual interpolation between learned samples. The output of kernel regression to an input is dominant for the closest kernel output. This is in contrast to the output of kernel perceptrons, which is determined by the combination of several nested kernels. This means that the output of a kernel regression model can be lightly weighted by omitting calculations for the other kernels. Therefore, we have to find the closest kernel and its neighbors to the current input vector quickly. To realize this, we introduce a tree-search-based calculation method for LGRNN. In the LGRNN learning method, the kernels are clustered into k groups and organized as tree-structured data for the tree-search algorithm.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"6 1","pages":"1-7"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Incremental learning on a budget and a quick calculation method using a tree-search algorithm\",\"authors\":\"Akihisa Kato, Hirohito Kawahara, K. Yamauchi\",\"doi\":\"10.1109/IJCNN.2015.7280805\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this study, a lightweight kernel regression algorithm for embedded systems is proposed. In our previous study, we proposed an online learning method with a limited number of kernels based on a kernel regression model known as a limited general regression neural network (LGRNN). The LGRNN behavior is similar to that of k-nearest neighbors except for its continual interpolation between learned samples. The output of kernel regression to an input is dominant for the closest kernel output. This is in contrast to the output of kernel perceptrons, which is determined by the combination of several nested kernels. This means that the output of a kernel regression model can be lightly weighted by omitting calculations for the other kernels. Therefore, we have to find the closest kernel and its neighbors to the current input vector quickly. To realize this, we introduce a tree-search-based calculation method for LGRNN. 
In the LGRNN learning method, the kernels are clustered into k groups and organized as tree-structured data for the tree-search algorithm.\",\"PeriodicalId\":6539,\"journal\":{\"name\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"6 1\",\"pages\":\"1-7\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2015.7280805\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2015.7280805","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Incremental learning on a budget and a quick calculation method using a tree-search algorithm
In this study, we propose a lightweight kernel regression algorithm for embedded systems. In our previous work, we proposed an online learning method that uses a limited number of kernels, based on a kernel regression model known as the limited general regression neural network (LGRNN). The LGRNN behaves similarly to k-nearest neighbors, except that it interpolates continuously between learned samples. The output of a kernel regression model for a given input is dominated by the kernel closest to that input. This contrasts with kernel perceptrons, whose output is determined by the combination of several nested kernels. Consequently, the output of a kernel regression model can be approximated cheaply by omitting the calculations for the remaining kernels. To do so, however, the kernel closest to the current input vector, together with its neighbors, must be found quickly. We therefore introduce a tree-search-based calculation method for the LGRNN: in the LGRNN learning method, the kernels are clustered into k groups and organized as tree-structured data for the tree-search algorithm.
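The following is a minimal sketch, not the authors' LGRNN implementation (which also manages a fixed kernel budget online), of the two ideas the abstract combines: a kernel regression (GRNN-style) output that is dominated by the kernels nearest the query, and a shallow cluster tree, built here with plain k-means, that lets the prediction evaluate only the kernels in the nearest cluster instead of all of them. All names (`predict_full`, `predict_tree`, `cluster_kernels`), the Gaussian bandwidth, and the choice of k are illustrative assumptions.

```python
# Sketch of tree-accelerated kernel regression (illustrative, not the paper's code).
import numpy as np

def gaussian(x, c, sigma=0.5):
    """Gaussian kernel centred at c, evaluated at query x (bandwidth assumed)."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def predict_full(x, centres, targets):
    """Exact GRNN output: kernel-weighted average over ALL stored kernels, O(N)."""
    w = np.array([gaussian(x, c) for c in centres])
    return float(w @ targets / (w.sum() + 1e-12))

def cluster_kernels(centres, k, iters=20, seed=0):
    """Plain k-means over the kernel centres: the k groups form the tree's
    internal nodes, each leaf holding the kernels assigned to that group."""
    rng = np.random.default_rng(seed)
    centroids = centres[rng.choice(len(centres), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((centres[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = centres[labels == j].mean(axis=0)
    return centroids, labels

def predict_tree(x, centres, targets, centroids, labels):
    """Approximate output: descend to the nearest cluster centroid, then
    evaluate only the kernels stored under that node (~O(k + N/k))."""
    j = int(np.argmin(((x - centroids) ** 2).sum(-1)))
    idx = np.where(labels == j)[0]
    w = np.array([gaussian(x, centres[i]) for i in idx])
    return float(w @ targets[idx] / (w.sum() + 1e-12))

# Toy usage: 200 stored kernels, clustered into k = 8 groups.
rng = np.random.default_rng(1)
centres = rng.uniform(-1, 1, size=(200, 2))
targets = np.sin(3 * centres[:, 0]) + 0.1 * rng.standard_normal(200)
centroids, labels = cluster_kernels(centres, k=8)
x = np.array([0.3, -0.2])
print(predict_full(x, centres, targets))                      # exact output
print(predict_tree(x, centres, targets, centroids, labels))   # nearest-cluster approximation
```

Because the closest kernel dominates the regression output, restricting the sum to the nearest cluster usually changes the prediction very little while cutting the per-query cost from N kernel evaluations to roughly k centroid distances plus N/k kernel evaluations; this trade-off, with a deeper tree and neighbor clusters also visited, is what motivates the paper's tree-search method.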