Who is my parent? Reconstructing video sequences from partially matching shots

S. Lameri, Paolo Bestagini, A. Melloni, S. Milani, A. Rocha, M. Tagliasacchi, S. Tubaro
2014 IEEE International Conference on Image Processing (ICIP), October 2014
DOI: 10.1109/ICIP.2014.7026081
Citations: 22

Abstract

Nowadays, a significant fraction of the available video content is created by reusing already existing online videos. In these cases, the source video is seldom reused as is. Conversely, it is typically time-clipped to extract only a subset of the original frames, and other transformations are commonly applied (e.g., cropping, logo insertion, etc.). In this paper, we analyze a pool of videos related to the same event or topic. We propose a method that aims at automatically reconstructing the content of the original source videos, i.e., the parent sequences, by splicing together sets of near-duplicate shots seemingly extracted from the same parent sequence. The result of the analysis shows how content is reused, thus revealing the intent of content creators, and enables us to reconstruct a parent sequence even when it is no longer available online. In doing so, we make use of a robust-hash algorithm that allows us to detect whether groups of frames are near-duplicates. Based on that, we developed an algorithm to automatically find near-duplicate matches between multiple parts of multiple sequences. All the near-duplicate parts are finally temporally aligned to reconstruct the parent sequence. The proposed method is validated with both synthetic and real-world datasets downloaded from YouTube.
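The pipeline the abstract describes, hashing frames robustly, matching near-duplicate runs across sequences, then temporally aligning them, can be illustrated with a toy sketch. The code below is not the paper's actual robust-hash algorithm; it substitutes a simple average hash and a voting scheme over frame offsets, with all function names and thresholds chosen here for illustration.

```python
import numpy as np

def frame_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Toy average-hash: block-average a grayscale frame down to size x size,
    then threshold each block at the global mean to get a 64-bit signature."""
    h, w = frame.shape
    small = frame[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def near_duplicate(h1: np.ndarray, h2: np.ndarray, max_dist: int = 10) -> bool:
    """Two frames are near-duplicates if their hashes are close in Hamming distance."""
    return int(np.count_nonzero(h1 != h2)) <= max_dist

def match_offset(hashes_parent, hashes_child, max_dist: int = 10):
    """Vote over temporal offsets: each near-duplicate frame pair (i, j) votes
    for offset i - j, i.e., child frame j aligns with parent frame j + offset.
    The winning offset places the child shot inside the parent sequence."""
    votes = {}
    for i, hp in enumerate(hashes_parent):
        for j, hc in enumerate(hashes_child):
            if near_duplicate(hp, hc, max_dist):
                votes[i - j] = votes.get(i - j, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Synthetic check: a "child" clip extracted from the middle of a parent
# sequence, with light noise standing in for re-encoding artifacts.
rng = np.random.default_rng(0)
parent = [rng.random((32, 32)) for _ in range(30)]
child = [f + rng.normal(0.0, 0.01, f.shape) for f in parent[10:20]]

parent_hashes = [frame_hash(f) for f in parent]
child_hashes = [frame_hash(f) for f in child]
print(match_offset(parent_hashes, child_hashes))  # child starts at parent frame 10
```

Hash-based matching keeps the pairwise comparison cheap, and the offset vote makes the alignment robust to a few frames whose hashes flip bits under noise; a real system would of course use a perceptually stronger hash and handle multiple partially overlapping children per parent.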