S. Lameri, Paolo Bestagini, A. Melloni, S. Milani, A. Rocha, M. Tagliasacchi, S. Tubaro
2014 IEEE International Conference on Image Processing (ICIP), pp. 5342-5346, October 2014. DOI: 10.1109/ICIP.2014.7026081
Who is my parent? Reconstructing video sequences from partially matching shots
Nowadays, a significant fraction of the available video content is created by reusing existing online videos. In these cases, the source video is seldom reused as is. Instead, it is typically time-clipped to extract only a subset of the original frames, and other transformations are commonly applied (e.g., cropping, logo insertion). In this paper, we analyze a pool of videos related to the same event or topic. We propose a method that automatically reconstructs the content of the original source videos, i.e., the parent sequences, by splicing together sets of near-duplicate shots seemingly extracted from the same parent sequence. The analysis shows how content is reused, thus revealing the intent of content creators, and enables us to reconstruct a parent sequence even when it is no longer available online. To this end, we use a robust-hash algorithm that allows us to detect whether groups of frames are near-duplicates. Building on this, we developed an algorithm that automatically finds near-duplicate matches between multiple parts of multiple sequences. All near-duplicate parts are finally temporally aligned to reconstruct the parent sequence. The proposed method is validated on both synthetic and real-world datasets downloaded from YouTube.
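To make the near-duplicate detection step concrete, the following is a minimal illustrative sketch using a generic perceptual (average) hash over grayscale frames with a Hamming-distance threshold. This is an assumption for illustration only: the paper's actual robust-hash algorithm is not reproduced here, and the function names, hash size, and threshold are hypothetical choices.

```python
import numpy as np

def average_hash(frame, hash_size=8):
    """Compute a simple perceptual hash of a grayscale frame.

    Downscales the frame to hash_size x hash_size by block averaging,
    then thresholds each block against the global mean, yielding a
    binary fingerprint robust to mild transformations. (Illustrative
    only; not the robust hash used in the paper.)
    """
    h, w = frame.shape
    # Crop to multiples of hash_size, then block-average downscale.
    small = frame[:h - h % hash_size, :w - w % hash_size]
    small = small.reshape(hash_size, small.shape[0] // hash_size,
                          hash_size, small.shape[1] // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two binary fingerprints."""
    return int(np.count_nonzero(a != b))

def near_duplicate(f1, f2, threshold=10):
    """Declare two frames near-duplicates if their hashes are close.

    The threshold is a hypothetical tuning parameter; in practice it
    would be chosen to balance false matches against missed matches.
    """
    return hamming(average_hash(f1), average_hash(f2)) <= threshold
```

In a pipeline like the one described above, such a per-frame test would be applied to groups of frames, and runs of consecutive matches between two sequences would indicate shots likely extracted from the same parent sequence, ready for temporal alignment.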