Action-scene Model for Human Action Recognition from Videos

Yifei Zhang, Wen Qu, Daling Wang
{"title":"视频中人类动作识别的动作场景模型","authors":"Yifei Zhang,&nbsp;Wen Qu,&nbsp;Daling Wang","doi":"10.1016/j.aasri.2014.05.016","DOIUrl":null,"url":null,"abstract":"<div><p>Human action recognition from realistic videos attracts more attention in many practical applications such as on-line video surveillance and content-based video management. Single action recognition always fails to distinguish similar action categories due to the complex background settings in realistic videos. In this paper, a novel action-scene model is explored to learn contextual relationship between actions and scenes in realistic videos. With little prior knowledge on scene categories, a generative probabilistic framework is used for action inference from background directly based on visual words. Experimental results on a realistic video dataset validate the effectiveness of the action-scene model for action recognition from background settings. Extensive experiments were conducted on different feature extracted methods, and the results show the learned model has good robustness when the features are noisy.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"6 ","pages":"Pages 111-117"},"PeriodicalIF":0.0000,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2014.05.016","citationCount":"10","resultStr":"{\"title\":\"Action-scene Model for Human Action Recognition from Videos\",\"authors\":\"Yifei Zhang,&nbsp;Wen Qu,&nbsp;Daling Wang\",\"doi\":\"10.1016/j.aasri.2014.05.016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Human action recognition from realistic videos attracts more attention in many practical applications such as on-line video surveillance and content-based video management. Single action recognition always fails to distinguish similar action categories due to the complex background settings in realistic videos. In this paper, a novel action-scene model is explored to learn contextual relationship between actions and scenes in realistic videos. With little prior knowledge on scene categories, a generative probabilistic framework is used for action inference from background directly based on visual words. Experimental results on a realistic video dataset validate the effectiveness of the action-scene model for action recognition from background settings. 
Extensive experiments were conducted on different feature extracted methods, and the results show the learned model has good robustness when the features are noisy.</p></div>\",\"PeriodicalId\":100008,\"journal\":{\"name\":\"AASRI Procedia\",\"volume\":\"6 \",\"pages\":\"Pages 111-117\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1016/j.aasri.2014.05.016\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AASRI Procedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2212671614000171\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AASRI Procedia","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2212671614000171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Human action recognition from realistic videos is attracting increasing attention in practical applications such as on-line video surveillance and content-based video management. Recognition based on the action alone often fails to distinguish similar action categories because of the complex background settings in realistic videos. In this paper, a novel action-scene model is explored to learn the contextual relationship between actions and scenes in realistic videos. With little prior knowledge of scene categories, a generative probabilistic framework is used to infer actions directly from the background based on visual words. Experimental results on a realistic video dataset validate the effectiveness of the action-scene model for action recognition from background settings. Extensive experiments were conducted with different feature extraction methods, and the results show that the learned model is robust when the features are noisy.
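As a rough illustration of the kind of inference the abstract describes, the sketch below scores actions from a bag of background visual words with a naive-Bayes-style generative model, P(action | words) ∝ P(action) · Π P(word | action). This is only a minimal sketch under that assumed factorization; the action labels, vocabulary size, and randomly initialized parameters are hypothetical and do not reproduce the authors' actual action-scene model.

```python
# Hypothetical sketch: generative action inference from background visual words.
# Parameters are random stand-ins for what would be learned from training videos.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4        # e.g. {dive, swing, ride, walk} -- illustrative labels
vocab_size = 200     # size of the visual-word codebook (assumed)

# Class priors and per-action multinomials over background visual words.
prior = np.full(n_actions, 1.0 / n_actions)
word_given_action = rng.dirichlet(np.ones(vocab_size), size=n_actions)

def infer_action(word_counts: np.ndarray) -> int:
    """Return the most probable action index given a bag of background visual words."""
    # Work in log space to avoid underflow on long word histograms.
    log_post = np.log(prior) + word_counts @ np.log(word_given_action).T
    return int(np.argmax(log_post))

# Usage: a histogram of quantized background descriptors from one test video.
test_histogram = rng.integers(0, 5, size=vocab_size)
print("predicted action index:", infer_action(test_histogram))
```

In practice the per-action word distributions would be estimated from training videos, and noisy features would mainly perturb the word histogram, which is why a probabilistic formulation of this kind can remain robust.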
