Chunk Incremental IDR/QR LDA Learning
Yiming Peng, Shaoning Pang, Gang Chen, A. Sarrafzadeh, Tao Ban, D. Inoue
{"title":"块增量IDR/QR LDA学习","authors":"Yiming Peng, Shaoning Pang, Gang Chen, A. Sarrafzadeh, Tao Ban, D. Inoue","doi":"10.1109/IJCNN.2013.6707018","DOIUrl":null,"url":null,"abstract":"Training data in real world is often presented in random chunks. Yet existing sequential Incremental IDR/QR LDA (s-QR/IncLDA) can only process data one sample after another. This paper proposes a constructive chunk Incremental IDR/QR LDA (c-QR/IncLDA) for multiple data samples incremental learning. Given a chunk of s samples for incremental learning, the proposed c-QR/IncLDA increments current discriminant model Ω, by implementing computation on the compressed the residue matrix Δ ϵ Rd×n, instead of the entire incoming data chunk X ϵ Rd×s, where η ≤ s holds. Meanwhile, we derive a more accurate reduced within-class scatter matrix W to minimize the discriminative information loss at every incremental learning cycle. It is noted that the computational complexity of c-QR/IncLDA can be more expensive than s-QR/IncLDA for single sample processing. However, for multiple samples processing, the computational efficiency of c-QR/IncLDA deterministically surpasses s-QR/IncLDA when the chunk size is large, i.e., s ≫ η holds. Moreover, experiments evaluation shows that the proposed c-QR/IncLDA can achieve an accuracy level that is competitive to batch QR/LDA and is consistently higher than s-QR/IncLDA.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Chunk incremental IDR/QR LDA learning\",\"authors\":\"Yiming Peng, Shaoning Pang, Gang Chen, A. Sarrafzadeh, Tao Ban, D. Inoue\",\"doi\":\"10.1109/IJCNN.2013.6707018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training data in real world is often presented in random chunks. Yet existing sequential Incremental IDR/QR LDA (s-QR/IncLDA) can only process data one sample after another. This paper proposes a constructive chunk Incremental IDR/QR LDA (c-QR/IncLDA) for multiple data samples incremental learning. Given a chunk of s samples for incremental learning, the proposed c-QR/IncLDA increments current discriminant model Ω, by implementing computation on the compressed the residue matrix Δ ϵ Rd×n, instead of the entire incoming data chunk X ϵ Rd×s, where η ≤ s holds. Meanwhile, we derive a more accurate reduced within-class scatter matrix W to minimize the discriminative information loss at every incremental learning cycle. It is noted that the computational complexity of c-QR/IncLDA can be more expensive than s-QR/IncLDA for single sample processing. However, for multiple samples processing, the computational efficiency of c-QR/IncLDA deterministically surpasses s-QR/IncLDA when the chunk size is large, i.e., s ≫ η holds. 
Moreover, experiments evaluation shows that the proposed c-QR/IncLDA can achieve an accuracy level that is competitive to batch QR/LDA and is consistently higher than s-QR/IncLDA.\",\"PeriodicalId\":376975,\"journal\":{\"name\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2013.6707018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2013 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2013.6707018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Training data in the real world are often presented in random chunks, yet the existing sequential Incremental IDR/QR LDA (s-QR/IncLDA) can only process data one sample at a time. This paper proposes a constructive chunk Incremental IDR/QR LDA (c-QR/IncLDA) for incremental learning over multiple data samples. Given a chunk of s samples for incremental learning, the proposed c-QR/IncLDA updates the current discriminant model Ω by computing on the compressed residue matrix Δ ∈ R^{d×η} instead of on the entire incoming data chunk X ∈ R^{d×s}, where η ≤ s holds. Meanwhile, we derive a more accurate reduced within-class scatter matrix W to minimize the loss of discriminative information at every incremental learning cycle. Note that for single-sample processing the computational cost of c-QR/IncLDA can be higher than that of s-QR/IncLDA. For multi-sample processing, however, the computational efficiency of c-QR/IncLDA deterministically surpasses that of s-QR/IncLDA when the chunk size is large, i.e., when s ≫ η holds. Moreover, the experimental evaluation shows that the proposed c-QR/IncLDA achieves an accuracy level competitive with batch QR/LDA and consistently higher than that of s-QR/IncLDA.
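The chunk compression described in the abstract can be pictured with a short NumPy sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the discriminant model maintains an orthonormal basis Q of its reduced space, and the function name chunk_update_basis and the tolerance tol are hypothetical. The idea is that the residue Δ = (I − QQᵀ)X of an incoming chunk is QR-factorized, so only the η ≤ s directions of X that are new to the model enter the update rather than the full d×s chunk.

    import numpy as np

    def chunk_update_basis(Q, X, tol=1e-10):
        # Q: (d, k) orthonormal basis of the current discriminant model (assumed).
        # X: (d, s) incoming data chunk.
        # Residue of the chunk outside span(Q): Delta = (I - Q Q^T) X.
        Delta = X - Q @ (Q.T @ X)
        # QR-factorize the residue; only eta <= s independent directions
        # survive, which is where the chunk compression comes from.
        Q_delta, R = np.linalg.qr(Delta)
        # Simple tolerance test on diag(R) as a rank heuristic; the paper's
        # exact compression step may differ.
        keep = np.abs(np.diag(R)) > tol
        Q_delta = Q_delta[:, keep]
        # Enlarged basis: columns of Q_delta are orthogonal to Q by construction.
        return np.hstack([Q, Q_delta])

    # Toy usage: a 5-dimensional model updated with a chunk of s = 20 samples.
    rng = np.random.default_rng(0)
    Q0, _ = np.linalg.qr(rng.standard_normal((100, 5)))
    X = rng.standard_normal((100, 20))
    Q1 = chunk_update_basis(Q0, X)  # Q1 has 5 + eta columns, eta <= 20

When the chunk is highly redundant (η ≪ s), the QR step runs on a d×η residue rather than the full d×s chunk, which is the source of the efficiency gain claimed over per-sample s-QR/IncLDA for large chunks.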