Superframe segmentation based on content-motion correspondence for social video summarization

Tao Zhuo, Peng Zhang, Kangli Chen, Yanning Zhang
DOI: 10.1109/ACII.2015.7344674
Published in: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 857-862
Publication date: 2015-09-21
Citations: 1

Abstract

The goal of video summarization is to turn a large volume of video data into a compact visual summary that users can easily interpret in a short time. Existing summarization strategies employed point-based feature correspondence for superframe segmentation. Unfortunately, the information carried by those sparse points is far from sufficient and stable enough to describe the change of interesting regions in each frame. Therefore, to overcome the limitations of point features, we propose a region-correspondence-based superframe segmentation to achieve more effective video summarization. Instead of utilizing the motion of feature points, we calculate content-motion similarity to obtain the strength of change between consecutive frames. With the help of a circulant structure kernel, the proposed method performs more accurate motion estimation efficiently. Experimental testing on videos from a benchmark database has demonstrated the effectiveness of the proposed method.
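The abstract does not give the paper's formulas, but the "circulant structure kernel" it mentions is the FFT trick used by kernelized correlation filters: kernel responses over all cyclic shifts of an image are computed in the Fourier domain at once. As a rough, illustrative sketch of how one might turn that into a frame-to-frame change-strength score (the function names, the Gaussian kernel choice, and the `1 - peak response` scoring are assumptions, not the authors' method):

```python
import numpy as np

def gaussian_kernel_correlation(x, y, sigma=0.5):
    """Gaussian kernel response of y against every cyclic shift of x,
    computed via the circulant/FFT trick instead of sliding windows."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    # Circular cross-correlation of x and y for all shifts at once.
    cross = np.real(np.fft.ifft2(X * np.conj(Y)))
    # Per-shift squared distance ||x_shifted - y||^2, normalized by size.
    d2 = (np.sum(x**2) + np.sum(y**2) - 2.0 * cross) / x.size
    return np.exp(-np.clip(d2, 0.0, None) / (sigma**2))

def change_strength(frame_a, frame_b, sigma=0.5):
    """Dissimilarity between consecutive frames: identical frames give ~0;
    the larger the value, the stronger the content-motion change."""
    k = gaussian_kernel_correlation(frame_a, frame_b, sigma)
    # The peak response is the best cyclic alignment of the two frames.
    return 1.0 - float(k.max())
```

A segmentation could then place superframe boundaries where `change_strength` between consecutive frames peaks; the FFT formulation keeps the per-pair cost at O(n log n) rather than O(n^2) for explicit shift matching.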