Yi Zhou, Mingjun Cao, Jingdi You, Ming Meng, Yuehua Wang, Zhong Zhou
MR video fusion: interactive 3D modeling and stitching on wide-baseline videos
DOI: 10.1145/3281505.3281513
Published in: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, 2018-11-28
Citations: 6
Abstract
A major challenge facing camera networks today is how to effectively organize and visualize videos in the presence of complicated network connections and an overwhelming, ever-increasing amount of data. Previous works focus on 2D stitching or dynamic projection onto 3D models, such as panoramas and the Augmented Virtual Environment (AVE), and have not yielded an ideal solution. We present a novel method for fusing multiple videos in a 3D environment, which produces highly comprehensive imagery and yields a spatio-temporally consistent scene. Users first interact with a newly designed background model, named the video model, to register and stitch the videos' background frames offline. The method then fuses these offline results to render the videos in real time. We demonstrate our system on three real scenes, each containing dozens of wide-baseline videos. The experimental results show that our 3D modeling interface, built on the presented model and method, helps users seamlessly integrate videos with less operating complexity and a more accurate 3D environment than commercial off-the-shelf software. Our stitching method is also far more robust to differences in position, orientation, and attributes among videos than state-of-the-art methods. More importantly, this study sheds light on how 3D techniques can solve 2D problems in realistic settings, and we validate its feasibility.
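The two-phase pipeline the abstract describes — an offline step that registers each video's background frame against a shared model, followed by an online step that fuses live frames using those precomputed registrations — can be sketched in miniature. The sketch below is illustrative only: it reduces registration to finding a 2D translation by exact patch matching, whereas the paper uses interactive 3D modeling; all names (`Registration`, `register_offline`, `fuse_online`) are assumptions, not the authors' API.

```python
# Hedged toy sketch of the offline-register / online-fuse pipeline.
# Registration is reduced to a 2D translation found by exact patch
# matching; the paper's actual method uses an interactive 3D video model.
from dataclasses import dataclass


@dataclass
class Registration:
    """Offline result: where a camera's frame sits in the shared model."""
    dx: int
    dy: int


def register_offline(background, model):
    """Toy stand-in for offline stitching: scan the model for the offset
    where the background patch matches exactly; return it, or None."""
    bh, bw = len(background), len(background[0])
    mh, mw = len(model), len(model[0])
    for dy in range(mh - bh + 1):
        for dx in range(mw - bw + 1):
            if all(model[dy + r][dx + c] == background[r][c]
                   for r in range(bh) for c in range(bw)):
                return Registration(dx, dy)
    return None


def fuse_online(frames_with_regs, model_shape):
    """Online fusion: paint each live frame onto a shared canvas at its
    precomputed offset (later frames overwrite earlier ones)."""
    h, w = model_shape
    canvas = [[0] * w for _ in range(h)]
    for frame, reg in frames_with_regs:
        for r, row in enumerate(frame):
            for c, value in enumerate(row):
                canvas[reg.dy + r][reg.dx + c] = value
    return canvas
```

The key design point the abstract emphasizes survives even in this reduced form: the expensive registration runs once offline, so the per-frame online path is a cheap lookup-and-blend, which is what makes real-time rendering of dozens of wide-baseline videos plausible.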