Chunk incremental IDR/QR LDA learning
Yiming Peng, Shaoning Pang, Gang Chen, A. Sarrafzadeh, Tao Ban, D. Inoue
The 2013 International Joint Conference on Neural Networks (IJCNN), August 2013. DOI: 10.1109/IJCNN.2013.6707018
Abstract
Training data in the real world are often presented in chunks of random size. Yet the existing sequential incremental IDR/QR LDA (s-QR/IncLDA) can only process data one sample at a time. This paper proposes a constructive chunk incremental IDR/QR LDA (c-QR/IncLDA) for incremental learning over multiple data samples. Given a chunk of s samples, the proposed c-QR/IncLDA updates the current discriminant model Ω by computing on the compressed residue matrix Δ ∈ R^(d×η), instead of on the entire incoming data chunk X ∈ R^(d×s), where η ≤ s holds. Meanwhile, we derive a more accurate reduced within-class scatter matrix W to minimize the loss of discriminative information at every incremental learning cycle. For single-sample processing, c-QR/IncLDA can be computationally more expensive than s-QR/IncLDA. For multiple-sample processing, however, the computational efficiency of c-QR/IncLDA deterministically surpasses that of s-QR/IncLDA when the chunk size is large, i.e., when s ≫ η. Moreover, experimental evaluation shows that the proposed c-QR/IncLDA achieves an accuracy level competitive with batch QR/LDA and consistently higher than s-QR/IncLDA.
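The abstract does not give the update equations, but the key idea it names, compressing the incoming chunk to its residue against the current subspace via QR before updating the model, can be sketched. The following is a minimal NumPy illustration only: the function name `chunk_update` and the rank-truncation threshold are our assumptions, and the paper's reduced scatter-matrix updates (for W and the between-class scatter) are omitted.

```python
import numpy as np

def chunk_update(Q, X):
    """Hypothetical c-QR/IncLDA-style basis update for one chunk X (d x s).

    Q : (d, k) orthonormal basis of the current discriminant subspace.
    The residue Delta holds the components of X outside span(Q); its
    economy QR keeps only eta <= s new directions, which is the
    compression the abstract describes (work on Delta, not all of X).
    """
    # Project the chunk onto the current subspace and form the residue.
    Delta = X - Q @ (Q.T @ X)              # d x s residue matrix
    # Economy QR of the residue: at most s new orthonormal directions.
    Q_new, R = np.linalg.qr(Delta)
    # Drop numerically null directions, so only eta <= s columns survive.
    keep = np.abs(np.diag(R)) > 1e-10
    return np.hstack([Q, Q_new[:, keep]])

# Toy usage: start from an empty basis and feed two chunks.
rng = np.random.default_rng(0)
Q = np.zeros((50, 0))                      # d = 50, empty initial basis
for _ in range(2):
    X = rng.standard_normal((50, 10))      # chunk of s = 10 samples
    Q = chunk_update(Q, X)
print(Q.shape)                             # basis grows by at most s per chunk
```

When the chunk is highly redundant (many samples already lying near the current subspace), η is much smaller than s, which is the regime where the abstract claims c-QR/IncLDA deterministically outperforms per-sample updating.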