{"title":"Efficient subspace learning using a large scale neural network CombNet-II","authors":"A.A. Ghaibeh, S. Kuroyanagi, A. Iwata","doi":"10.1109/ICONIP.2002.1202210","DOIUrl":null,"url":null,"abstract":"In the field of artificial neural networks, large-scale classification problems are still challenging due to many obstacles such as local minima state, long time computation, and the requirement of large amount of memory. The large-scale network CombNET-II overcomes the local minima state and proves to give good recognition rate in many applications. However CombNET-II still requires a large amount of memory used for the training database and feature space. We propose a revised version of CombNET-II with a considerably lower memory requirement, which makes the problem of large-scale classification more tractable. The memory reduction is achieved by adding a preprocessing stage at the input of each branch network. The purpose of this stage is to select the different features that have the most classification power for each subspace generated by the stem network. Testing our proposed model using Japanese kanji characters shows that the required memory might be reduced by almost 50% without significant decrease in the recognition rate.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICONIP.2002.1202210","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In the field of artificial neural networks, large-scale classification problems remain challenging due to obstacles such as local minima, long training times, and large memory requirements. The large-scale network CombNET-II avoids the local-minima problem and has been shown to achieve good recognition rates in many applications. However, CombNET-II still requires a large amount of memory for the training database and the feature space. We propose a revised version of CombNET-II with a considerably lower memory requirement, which makes large-scale classification problems more tractable. The memory reduction is achieved by adding a preprocessing stage at the input of each branch network. This stage selects, for each subspace generated by the stem network, the features with the most discriminative power for that subspace. Tests of the proposed model on Japanese kanji character recognition show that the required memory can be reduced by almost 50% without a significant decrease in recognition rate.
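To make the idea of the per-branch preprocessing stage concrete, the sketch below shows one plausible way to select the most discriminative features separately for each subspace produced by the stem network. The Fisher-score criterion, the `top_k` parameter, and the function names are illustrative assumptions, not the exact method described in the paper.

```python
# Hedged sketch: per-subspace feature selection for branch networks.
# Assumes the stem network has already assigned each training sample to a subspace.
import numpy as np


def fisher_scores(X, y):
    """Score features by between-class vs. within-class variance within one subspace."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)
        between += len(Xc) * (class_mean - overall_mean) ** 2
        within += ((Xc - class_mean) ** 2).sum(axis=0)
    return between / (within + 1e-12)


def select_branch_features(X, y, subspace_assignment, top_k):
    """For each subspace, keep only the top_k highest-scoring features.

    Branch networks then train on the reduced input, which is how a
    roughly 50% cut in feature-space memory could be realized.
    """
    selected = {}
    for s in np.unique(subspace_assignment):
        mask = subspace_assignment == s
        scores = fisher_scores(X[mask], y[mask])
        selected[s] = np.argsort(scores)[::-1][:top_k]
    return selected


# Example usage with random data standing in for character features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 256))          # 600 samples, 256 raw features
    y = rng.integers(0, 30, size=600)        # 30 classes
    subspaces = y // 10                      # pretend the stem network made 3 subspaces
    features = select_branch_features(X, y, subspaces, top_k=128)
    print({s: idx.shape for s, idx in features.items()})
```

In this sketch each branch network would only store and process its own `top_k` feature indices, which is the mechanism by which the training database and feature space can shrink without retraining the stem network.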