Incremental Learning of Temporally-Coherent Gaussian Mixture Models

Ognjen Arandjelovic, R. Cipolla
DOI: 10.5244/C.19.59
Proceedings of the British Machine Vision Conference 2005
Citations: 77

Abstract

In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches, which universally assume that new data arrives in blocks representable by GMMs that are then merged with the current model estimate, our method works for the case when novel data points arrive one by one, while requiring little additional memory. We keep only two GMMs in memory and no historical data. The current fit is updated under the assumption that the number of components is fixed; this number is increased (or reduced) when enough evidence for a new component is seen. This evidence is deduced from the change relative to the oldest fit of the same complexity, termed the Historical GMM, a concept central to our method. The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and on video sequences of faces acquired in realistic imaging conditions.
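To make the per-point update concrete, the following is a minimal sketch of a generic online (stochastic-EM) GMM update with a fixed number of components, in the spirit of the "current fit" update the abstract describes. It is not the paper's exact algorithm: the Historical-GMM comparison used to grow or shrink the model is omitted, the one-dimensional Gaussian is an illustrative simplification, and all names here are hypothetical.

```python
import numpy as np


class OnlineGMM:
    """Generic online 1-D GMM: update with one point at a time, fixed
    component count. A sketch only; the paper's Historical-GMM mechanism
    for adding/removing components is NOT implemented here."""

    def __init__(self, means, variances, weights):
        self.means = np.asarray(means, dtype=float)
        self.vars = np.asarray(variances, dtype=float)
        self.weights = np.asarray(weights, dtype=float)
        # Effective number of points assigned to each component
        # (initialised to 1 as a pseudo-count for the prior fit).
        self.counts = np.ones_like(self.weights)

    def _responsibilities(self, x):
        # Posterior probability of each component having generated x.
        lik = (self.weights
               * np.exp(-0.5 * (x - self.means) ** 2 / self.vars)
               / np.sqrt(2.0 * np.pi * self.vars))
        return lik / lik.sum()

    def update(self, x):
        # E-step for the single new point, then an incremental M-step
        # with per-component step size 1/count (a running average).
        r = self._responsibilities(x)
        self.counts += r
        eta = r / self.counts
        delta = x - self.means
        self.means += eta * delta
        self.vars += eta * (delta ** 2 - self.vars)
        self.weights = self.counts / self.counts.sum()
```

Feeding points alternately from two well-separated clusters pulls the component means toward the cluster centres while the weights track the relative cluster sizes; the paper's method would additionally compare this evolving fit against the Historical GMM to decide when the fixed component count should change.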