Chunk incremental IDR/QR LDA learning

Yiming Peng, Shaoning Pang, Gang Chen, A. Sarrafzadeh, Tao Ban, D. Inoue
DOI: 10.1109/IJCNN.2013.6707018
Published in: The 2013 International Joint Conference on Neural Networks (IJCNN), August 2013
Citations: 6

Abstract

Training data in the real world are often presented in random chunks, yet the existing sequential incremental IDR/QR LDA (s-QR/IncLDA) can only process data one sample at a time. This paper proposes a constructive chunk incremental IDR/QR LDA (c-QR/IncLDA) for incremental learning over multiple data samples. Given a chunk of s samples, the proposed c-QR/IncLDA updates the current discriminant model Ω by computing on the compressed residue matrix Δ ∈ R^{d×η} instead of the entire incoming data chunk X ∈ R^{d×s}, where η ≤ s holds. Meanwhile, we derive a more accurate reduced within-class scatter matrix W to minimize the loss of discriminative information at every incremental learning cycle. The computational cost of c-QR/IncLDA can exceed that of s-QR/IncLDA for single-sample processing; for multiple-sample processing, however, the computational efficiency of c-QR/IncLDA deterministically surpasses s-QR/IncLDA when the chunk size is large, i.e., when s ≫ η holds. Moreover, experimental evaluation shows that the proposed c-QR/IncLDA achieves an accuracy level competitive with batch QR/LDA and consistently higher than s-QR/IncLDA.
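The compression step described above can be illustrated with standard QR updating: the incoming chunk X is projected onto the orthogonal complement of the current basis, and only the η ≤ s directions carrying new information are kept. The sketch below is illustrative only, assuming a generic orthonormal-basis update; function names, the tolerance, and the toy data are our own and do not reproduce the paper's actual c-QR/IncLDA derivation.

```python
import numpy as np

def compress_chunk(Q, X, tol=1e-10):
    """Project the incoming chunk X (d x s) onto the complement of the
    current orthonormal basis Q (d x k) and keep only the eta <= s
    directions that carry new information (the 'residue' Delta)."""
    residue = X - Q @ (Q.T @ X)          # component of X outside span(Q)
    Q_new, R = np.linalg.qr(residue)     # orthogonalise the residue
    keep = np.abs(np.diag(R)) > tol      # drop numerically null directions
    return Q_new[:, keep]                # eta columns, eta <= s

def update_basis(Q, X):
    """One incremental cycle: grow the basis by the compressed residue
    instead of re-factorising [previous data | X] from scratch."""
    delta = compress_chunk(Q, X)
    return np.hstack([Q, delta])

# Toy usage: a rank-deficient chunk compresses to fewer than s new columns.
rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((50, 3)))   # current basis, k = 3
base = rng.standard_normal((50, 2))
X = base @ rng.standard_normal((2, 10))              # chunk: s = 10, rank 2
Q1 = update_basis(Q0, X)
print(Q1.shape[1] - Q0.shape[1])                     # eta = 2, far below s
```

When the chunk is highly redundant (s ≫ η), the update factorises a d×η residue rather than the full d×s chunk, which is the source of the efficiency gain the abstract claims for large chunk sizes.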