3D Liver Vessel Reconstruction from CT Images
Xing-Chen Pan, Hong-Ren Su, S. Lai, Kai-Che Liu, Hurng-Sheng Wu
DOI: 10.1109/3DV.2014.96
We propose a novel framework for reconstructing a 3D liver vessel model from CT images. The proposed algorithm consists of vessel detection, vessel tree reconstruction and vessel radius estimation. First, we employ a tubular-filter-based approach to detect vessel structures and construct a minimum spanning tree to bridge the gaps between detected vessel fragments. Then, we propose an approach to estimate the vessel radius at every vessel centerline voxel based on local patch descriptors. The proposed system produces a detailed 3D liver vessel model very efficiently. Our experimental results demonstrate the accuracy of the proposed system for 3D liver vessel reconstruction from 3D CT images.
Automatic Extraction of Moving Objects from Image and LIDAR Sequences
Jizhou Yan, Dongdong Chen, Heesoo Myeong, Takaaki Shiratori, Yi Ma
DOI: 10.1109/3DV.2014.94
Detecting and segmenting moving objects in an image sequence has always been a crucial task for many computer vision applications. This task becomes especially challenging for real-world image sequences of busy street scenes, where moving objects are ubiquitous. Although an effective and scalable purely image-based moving object detector remains technologically elusive, modern street-side imagery is often augmented with sparse point clouds captured with depth sensors. This paper develops a simple but effective system for moving object detection that fully harnesses the complementary nature of 2D images and 3D LIDAR point clouds. We demonstrate how moving objects can be detected much more easily and reliably with sparse 3D measurements, and how such information can significantly improve the segmentation of moving objects in the image sequences. The result of our system is a highly accurate "joint segmentation" of 2D images and 3D points for all moving objects in street scenes, which can serve many subsequent tasks such as object removal in images, 3D reconstruction and rendering.
Reconstruction of Inextensible Surfaces on a Budget via Bootstrapping
Alex Locher, Lennart Elsen, X. Boix, L. Gool
DOI: 10.1109/3DV.2014.98
Many methods for 3D reconstruction of deformable surfaces from a monocular view rely on inextensibility constraints. An interesting application with commercial potential lies in augmented reality on portable and wearable devices. Such applications add a further challenge to 3D reconstruction, since on portable platforms the availability of computational resources is limited and not always guaranteed. Towards this goal, we introduce a method that delivers the best possible 3D reconstruction of the deformable surface at any time. Since computational resources may vary, the method decides on the fly when to stop the reconstruction algorithm, and uses an efficient optimization scheme to deliver the reconstructed surface quickly. We introduce bootstrapping to improve the robustness of this efficient 3D reconstruction algorithm by merging multiple versions of the reconstructed surface; these multiple 3D surfaces can also be used to estimate the confidence of the reconstruction. In a series of experiments on both synthetic and real data, we show that our method is effective for the timely reconstruction of 3D surfaces.
Multi-view Photometric Stereo by Example
J. Ackermann, Fabian Langguth, Simon Fuhrmann, Arjan Kuijper, M. Goesele
DOI: 10.1109/3DV.2014.63
We present a novel multi-view photometric stereo technique that recovers the surface of textureless objects with unknown BRDF and lighting. The camera and light positions are allowed to vary freely and change in each image. We exploit orientation consistency between the target and an example object to develop a consistency measure. Motivated by the fact that normals can be recovered more reliably than depth, we represent our surface as both a depth map and a normal map. These maps are jointly optimized and allow us to formulate constraints on depth that take surface orientation into account. Our technique requires neither the visual hull nor stereo reconstructions for bootstrapping, and exploits image intensities alone without the need for radiometric camera calibration. We present results on real objects with varying degrees of specularity and show that these can be used to create globally consistent models from multiple views.
A Structure from Motion Approach for the Analysis of Adhesions in Rotating Vessels
P. Waibel, J. Matthes, L. Gröll, H. Keller
DOI: 10.1109/3DV.2014.38
While processing material in rotating vessels such as rotary kilns, adhesions can form on the inner vessel wall. Large adhesions usually affect the process negatively and need to be prevented. Online detection and analysis of adhesions inside the vessel during operation would allow the process control to deploy counter-measures that prevent additional adhesions or reduce the adhesions' sizes. In this paper, we present a new method that enables image-based online detection, tracking and characterization of adhesions inside a rotating vessel. Our algorithm exploits the rotational movement of adhesions in a structure from motion approach, which allows the positions and heights of adhesions to be measured with a single camera. The applicability of our method is shown on image sequences from a rotating vessel model as well as from an industrially used cement rotary kiln.
Direct Optimization of T-Splines Based on Multiview Stereo
Thomas Morwald, Jonathan Balzer, M. Vincze
DOI: 10.1109/3DV.2014.42
We propose a multi-view stereo reconstruction method in which the surface is represented by CAD-compatible T-splines. Our method hinges on the principle of isogeometric analysis, formulating an energy functional that can be computed directly in terms of the T-spline basis. Paying attention to the idiosyncrasies of this basis, we derive an analytic formula for the gradient of the functional, which is then used in photo-consistency optimization. The number of degrees of freedom our model requires is drastically reduced compared to the state of the art. The gains in efficiency can firstly be attributed to the fact that T-splines are particularly suited for adaptive refinement. Secondly, evaluation of the proposed energy functional is highly parallelizable, as demonstrated by means of a T-spline-specific GPU implementation. Our experiments indicate the superiority of T-spline surfaces over the widely used triangular meshes in terms of memory efficiency and numerical stability, without relying on dedicated regularizers.
Colour Helmholtz Stereopsis for Reconstruction of Complex Dynamic Scenes
Nadejda Roubtsova, Jean-Yves Guillemaut
DOI: 10.1109/3DV.2014.59
Helmholtz Stereopsis (HS) is a powerful technique for the reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e., two images with the camera and light source positions mutually interchanged). In this paper, we propose colour HS, a novel variant of the technique based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed pipeline for colour HS uniquely combines a tailored photometric calibration for multiple camera/light source pairs, a novel procedure for surface chromaticity calibration, and a state-of-the-art Bayesian HS formulation suitable for reconstruction from a minimal number of reciprocal pairs. Experimental results, including quantitative and qualitative evaluation, demonstrate that the method is suitable for flexible (single-shot) reconstruction of static scenes and for the reconstruction of dynamic scenes with complex surface reflectance properties.
Hashing Cross-Modal Manifold for Scalable Sketch-Based 3D Model Retrieval
T. Furuya, Ryutarou Ohbuchi
DOI: 10.1109/3DV.2014.72
This paper proposes a novel sketch-based 3D model retrieval algorithm that is scalable as well as accurate. Accuracy is achieved by a combination of (1) a set of state-of-the-art visual features for comparing sketches and 3D models, and (2) an efficient algorithm for learning data-driven similarity across the heterogeneous domains of sketches and 3D models. For the latter, we adopted the algorithm by Furuya et al. [18], which fuses three kinds of similarities for more accurate similarity computation: those among sketches, those among 3D models, and those between sketches and 3D models. While the algorithm of [18] does improve accuracy, it does not scale. We accelerate the retrieval-result ranking stage of [18], without loss of accuracy, by embedding its cross-modal similarity graph into Hamming space. The embedding is performed by a combination of spectral embedding and hashing into compact binary codes. Experiments show that our proposed algorithm is more accurate and much faster than previous sketch-based 3D model retrieval algorithms.
MCOV: A Covariance Descriptor for Fusion of Texture and Shape Features in 3D Point Clouds
Pol Cirujeda, Xavier Mateo, Yashin Dicente Cid, Xavier Binefa
DOI: 10.1109/3DV.2014.11
In this paper we propose MCOV, a covariance-based descriptor that fuses the shape and colour information of textured 3D surfaces, aiming at robust characterization and matching of areas in 3D point clouds. The proposed descriptor builds on the notion of covariance to create compact representations of the variations of texture and surface features in a radial neighbourhood, instead of using the absolute features themselves. Although this representation is compact and low-dimensional, it still offers discriminative power for complex scenes. Encoding feature variations in the close environment of a point provides invariance to rigid spatial transformations and robustness to changes in noise and scene resolution, while retaining a simple formulation. The approach is validated on a 3D point discrimination task over a selected database, corroborating its adequacy under the posed challenging conditions and outperforming other state-of-the-art 3D point descriptors. A qualitative application to matching objects in scenes acquired with a common depth-sensor device is also provided.
4D Capture Using Visibility Information of Multiple Projector Camera System
R. Sagawa, N. Kasuya, Yoshinori Oki, Hiroshi Kawasaki, Yoshio Matsumoto, Furukawa Ryo
DOI: 10.1109/3DV.2014.70
In this paper, we propose a method using multiple cameras and projectors for 4D capture of moving objects. The issues with previous 4D capture systems are that the number of cameras is limited and that capturing sequences at a high frame rate produces a very large number of images. We propose a multiple projector-camera system to tackle this problem. One of the issues of multi-view stereo is determining the visibility of cameras for each point of the surface. While estimating the scene geometry and its visibility is a chicken-and-egg problem for passive multi-view stereo, it has been addressed, for example, by an iterative approach that alternates between estimating visibility and reconstructing the scene geometry. With our method, the visibility problem is solved independently by using the projected pattern, so shapes are recovered efficiently without explicit visibility reasoning. Further, the visibility information is used not only for multi-view stereo reconstruction, but also for merging 3D shapes to eliminate inconsistency between devices. The efficiency of the proposed method is tested in experiments, showing that the merged mesh is suitable for 4D reconstruction.