Learning semantic scene models by object classification and trajectory clustering

Tianzhu Zhang, Hanqing Lu, S. Li
{"title":"Learning semantic scene models by object classification and trajectory clustering","authors":"Tianzhu Zhang, Hanqing Lu, S. Li","doi":"10.1109/CVPR.2009.5206809","DOIUrl":null,"url":null,"abstract":"Activity analysis is a basic task in video surveillance and has become an active research area. However, due to the diversity of moving objects category and their motion patterns, developing robust semantic scene models for activity analysis remains a challenging problem in traffic scenarios. This paper proposes a novel framework to learn semantic scene models. In this framework, the detected moving objects are first classified as pedestrians or vehicles via a co-trained classifier which takes advantage of the multiview information of objects. As a result, the framework can automatically learn motion patterns respectively for pedestrians and vehicles. Then, a graph is proposed to learn and cluster the motion patterns. To this end, trajectory is parameterized and the image is cut into multiple blocks which are taken as the nodes in the graph. Based on the parameters of trajectories, the primary motion patterns in each node (block) are extracted via Gaussian mixture model (GMM), and supplied to this graph. The graph cut algorithm is finally employed to group the motion patterns together, and trajectories are clustered to learn semantic scene models. Experimental results and applications to real world scenes show the validity of our proposed method.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"117","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Conference on Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2009.5206809","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 117

Abstract

Activity analysis is a basic task in video surveillance and has become an active research area. However, due to the diversity of moving object categories and their motion patterns, developing robust semantic scene models for activity analysis in traffic scenarios remains a challenging problem. This paper proposes a novel framework for learning semantic scene models. In this framework, detected moving objects are first classified as pedestrians or vehicles by a co-trained classifier that exploits multi-view information about the objects. As a result, the framework can automatically learn motion patterns separately for pedestrians and vehicles. A graph is then proposed to learn and cluster the motion patterns. To this end, each trajectory is parameterized and the image is partitioned into multiple blocks, which serve as the nodes of the graph. Based on the trajectory parameters, the primary motion patterns in each node (block) are extracted with a Gaussian mixture model (GMM) and supplied to the graph. A graph cut algorithm is finally employed to group the motion patterns, and trajectories are clustered to learn the semantic scene models. Experimental results and applications to real-world scenes demonstrate the validity of the proposed method.
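
The abstract describes a per-block GMM step for extracting primary motion patterns from parameterized trajectories. The following is a minimal, illustrative Python sketch of that step only, assuming trajectories are given as arrays of (x, y) points and that simple frame-to-frame displacement vectors serve as the motion features; the block grid size, the weight threshold, and all function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_index(x, y, img_w, img_h, n_cols, n_rows):
    """Map an image coordinate to the index of its block (graph node)."""
    col = min(int(x / img_w * n_cols), n_cols - 1)
    row = min(int(y / img_h * n_rows), n_rows - 1)
    return row * n_cols + col

def per_block_motion_patterns(trajectories, img_w, img_h,
                              n_cols=8, n_rows=8, max_components=3):
    """Fit a GMM to the motion vectors observed in each block and return
    the dominant (primary) components per block.

    trajectories: list of (N_i, 2) arrays of (x, y) points per track.
    Returns {block_id: [(mean_velocity, weight), ...]} for blocks with
    enough observations.
    """
    # Collect per-block velocity samples from consecutive trajectory points.
    samples = {}
    for track in trajectories:
        track = np.asarray(track, dtype=float)
        for p, q in zip(track[:-1], track[1:]):
            v = q - p                      # displacement as a simple motion feature
            b = block_index(p[0], p[1], img_w, img_h, n_cols, n_rows)
            samples.setdefault(b, []).append(v)

    patterns = {}
    for b, vecs in samples.items():
        X = np.vstack(vecs)
        if len(X) < max_components:        # too few samples to fit a mixture
            continue
        gmm = GaussianMixture(n_components=max_components,
                              covariance_type='full',
                              random_state=0).fit(X)
        # Keep components with non-negligible weight as the block's primary patterns.
        keep = gmm.weights_ > 0.1
        patterns[b] = list(zip(gmm.means_[keep], gmm.weights_[keep]))
    return patterns
```

In the full pipeline described in the abstract, these per-block components would then be attached to the graph nodes and grouped by a graph cut, with trajectories clustered against the resulting patterns; that grouping stage is not shown in this sketch.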