{"title":"Dense 3D Scene Reconstruction from Multiple Spherical Images for 3-DoF+ VR Applications","authors":"T. L. T. D. Silveira, C. Jung","doi":"10.1109/VR.2019.8798281","DOIUrl":null,"url":null,"abstract":"We propose a novel method for estimating the 3D geometry of indoor scenes based on multiple spherical images. Our technique produces a dense depth map registered to a reference view so that depth-image-based-rendering (DIBR) techniques can be explored for providing three-degrees-of-freedom plus immersive experiences to virtual reality users. The core of our method is to explore large displacement optical flow algorithms to obtain point correspondences, and use cross-checking and geometric constraints to detect and remove bad matches. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain for enforcing local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competitive approaches on computer-generated images and synthetic data under noisy correspondences and camera poses. Also, we show that the estimated depth maps obtained from only a few real spherical captures of the scene are capable of producing coherent synthesized binocular stereoscopic views by using traditional DIBR methods.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR.2019.8798281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
We propose a novel method for estimating the 3D geometry of indoor scenes from multiple spherical images. Our technique produces a dense depth map registered to a reference view, so that depth-image-based rendering (DIBR) techniques can be used to provide three-degrees-of-freedom-plus (3-DoF+) immersive experiences to virtual reality users. The core of our method is to leverage large-displacement optical flow algorithms to obtain point correspondences, and to use cross-checking and geometric constraints to detect and remove bad matches. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and we explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain to enforce local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competing approaches on computer-generated images and on synthetic data with noisy correspondences and camera poses. We also show that the depth maps estimated from only a few real spherical captures of a scene can produce coherent synthesized binocular stereoscopic views using traditional DIBR methods.
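The pipeline outlined above hinges on two recurring operations: validating dense optical-flow correspondences with a forward-backward cross-check, and lifting equirectangular pixel coordinates to unit rays on the sphere for the subsequent pose estimation and triangulation. The snippet below is a minimal illustrative sketch of those two steps, not the authors' implementation; the array shapes, the max_err threshold, and the toy zero-flow example are assumptions made purely for illustration.

```python
# Sketch of a forward-backward cross-check for dense flow on equirectangular
# images, plus the pixel-to-ray mapping needed for spherical triangulation.
import numpy as np


def equirect_to_ray(u, v, width, height):
    """Map equirectangular pixel coordinates to unit direction vectors."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)


def cross_check(flow_fwd, flow_bwd, max_err=1.0):
    """Keep matches whose forward-backward error is below max_err pixels.

    flow_fwd, flow_bwd: (H, W, 2) arrays of (du, dv) pixel displacements.
    Returns a boolean mask of shape (H, W) marking consistent matches.
    """
    h, w = flow_fwd.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Position in the second image according to the forward flow
    # (the horizontal axis wraps around, covering the full 360 degrees).
    u2 = (u + flow_fwd[..., 0]) % w
    v2 = np.clip(v + flow_fwd[..., 1], 0, h - 1)
    # Sample the backward flow at the (rounded) target position.
    ui = u2.round().astype(int) % w
    vi = v2.round().astype(int)
    bw = flow_bwd[vi, ui]
    # Mapping forward and then backward should return to the start pixel.
    du = u2 + bw[..., 0] - u
    dv = v2 + bw[..., 1] - v
    du = (du + w / 2.0) % w - w / 2.0              # shortest wrapped distance
    return np.hypot(du, dv) < max_err


if __name__ == "__main__":
    # Toy check with zero flow fields: every pixel is trivially consistent.
    H, W = 64, 128
    zero_flow = np.zeros((H, W, 2))
    mask = cross_check(zero_flow, zero_flow)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = equirect_to_ray(u, v, W, H)
    print(mask.all(), rays.shape)                  # True (64, 128, 3)
```

In practice, the forward and backward flow fields would come from a large-displacement optical flow method, and the matches surviving the cross-check (and the geometric constraints described in the abstract) would feed the pose-estimation, depth-weighting, and spherical guided-filtering stages.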