The segmentation of lung lesions is challenging because of the complexity of their surroundings. Lung lesions fall into two types: solid and non-solid. Many previous methods segment one of the two types, but only a few handle both at once, and those tend to over- or under-segment. In this study, we therefore design an effective framework to segment both types of lung lesions in three dimensions (3D). The framework first applies the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) method to produce a rough 3D segmentation, which serves as the initial contour for the Geodesic Active Contour (GAC) method. SBGFRLS copes well with non-solid lesions because it exploits global information to segment inhomogeneous regions, while GAC locates edges accurately using local information. Finally, we reconstruct and visualize the 3D segmentation results using the Visualization Toolkit (VTK); all processing is implemented on the Insight Segmentation and Registration Toolkit (ITK) platform. We evaluate the method on lung-lesion CT data sets from 300 patients (280 solid, 20 non-solid). Experimental results show that our method achieves better segmentation and more accurate 3D volume measurement than two competing methods, especially for non-solid lesions.
"A 3D Segmentation and Visualization Scheme for Solid and Non-solid Lung Lesions Based on Gaussian Filtering Regularized Level Set", Liansheng Wang, Huangjing Lin, Xiaoyang Huang, Boliang Wang, Yiping Chen. 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.110
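The SBGFRLS evolution used in the first stage is well documented in the level-set literature: a signed pressure force built from global region means drives a balloon term, after which the level-set function is binarized and regularized by Gaussian filtering. A minimal 2D sketch in Python, with numpy/scipy standing in for the authors' ITK implementation and `alpha`/`sigma` as assumed parameter values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sbgfrls_step(phi, image, alpha=20.0, sigma=1.0):
    """One iteration of an SBGFRLS-style evolution on a 2D slice.

    phi   : level-set function, positive inside the contour
    image : intensity image (float)
    alpha : balloon-force weight (assumed value, not from the paper)
    sigma : Gaussian regularization width
    """
    inside, outside = phi > 0, phi <= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    # Signed pressure force built from the global region means
    spf = image - (c1 + c2) / 2.0
    spf /= np.abs(spf).max() + 1e-12
    # Evolve phi with the balloon term spf * |grad phi|
    gy, gx = np.gradient(phi)
    phi = phi + alpha * spf * np.sqrt(gx ** 2 + gy ** 2)
    # Selective binary step, then Gaussian regularization
    phi = np.where(phi > 0, 1.0, -1.0)
    return gaussian_filter(phi, sigma)
```

Iterated from a small seed inside a bright lesion, the contour expands until the signed pressure force changes sign at the lesion boundary, which is what makes the result a usable initialization for GAC.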
Nadia Robertini, Edilson de Aguiar, Thomas Helten, C. Theobalt
We present a new and effective way to capture the performance of deforming meshes with fine-scale, time-varying surface detail from multi-view video. Our method builds on coarse 4D surface reconstructions obtained with commonly used template-based methods. Since these capture only coarse-to-medium-scale detail, fine-scale deformation detail is often recovered in a second pass using stereo constraints, features, or shading-based refinement. In this paper, we propose a new, effective, and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface and a set of 2D Gaussians for the images. The fine-scale deformation of all mesh vertices that maximizes photo-consistency can be found efficiently by densely optimizing a new model-to-image consistency energy over all vertex positions. A principal advantage is that our formulation yields a smooth, closed-form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding and discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing and demonstrate, qualitatively and quantitatively, that we robustly capture more detail than related methods.
"Efficient Multi-view Performance Capture of Fine-Scale Surface Detail". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.46
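A closed-form consistency energy of the kind described above pairs projected 3D surface Gaussians with 2D image Gaussians. A toy version under simplifying assumptions (single camera, no occlusion handling, an illustrative colour weighting; all names are ours, not the paper's):

```python
import numpy as np

def gaussian_consistency_energy(verts, colors3d, pix, colors2d, P, sigma=2.0):
    """Smooth model-to-image consistency energy, in the spirit of the paper.

    verts    : (N, 3) surface Gaussian centres (mesh vertices)
    colors3d : (N, 3) per-vertex colours
    pix      : (M, 2) 2D image Gaussian centres
    colors2d : (M, 3) their colours
    P        : (3, 4) camera projection matrix
    """
    # Project vertices into the image
    homog = np.hstack([verts, np.ones((len(verts), 1))])
    proj = homog @ P.T
    uv = proj[:, :2] / proj[:, 2:3]
    # Pairwise spatial Gaussian overlap between projections and image Gaussians
    d2 = ((uv[:, None, :] - pix[None, :, :]) ** 2).sum(-1)
    spatial = np.exp(-d2 / (2 * sigma ** 2))
    # Weighted by colour similarity; negate so minimization maximizes agreement
    cdist = ((colors3d[:, None, :] - colors2d[None, :, :]) ** 2).sum(-1)
    weight = np.exp(-cdist)
    return -(spatial * weight).sum()
```

Because the energy is a sum of smooth exponentials of the vertex positions, its gradient with respect to every vertex is available in closed form, which is the property the paper exploits for dense optimization.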
Charles Malleson, M. Klaudiny, Jean-Yves Guillemaut, A. Hilton
This work considers the problem of building a structured representation of dynamic surfaces from incomplete 3D point tracks acquired from a single viewpoint. The surface is segmented into a set of connected regions, each of which can be represented by a fixed intrinsic shape and a parametrised rigid/non-rigid motion trajectory. Neither the model parameters nor the point-to-model assignments are known upfront. Motion and geometric shape parameters are estimated in alternation with a graph-cuts-based point-to-model assignment. This modelling process facilitates in-filling of missing data and de-noising of measurements by temporal integration, while adding meaningful structure to the geometry and reducing storage cost by an order of magnitude. Experiments on real and synthetic sequences validate the approach and show how a single tuning parameter can trade off modelling error against extrapolation level and storage cost.
"Structured Representation of Non-Rigid Surfaces from Single View 3D Point Tracks". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.13
Pol Cirujeda, Xavier Mateo, Yashin Dicente Cid, Xavier Binefa
In this paper we propose MCOV, a covariance-based descriptor that fuses the shape and color information of textured 3D surfaces for robust characterization and matching of areas in 3D point clouds. The descriptor uses the notion of covariance to build compact representations of the variations of texture and surface features in a radial neighbourhood, rather than the absolute features themselves. Although this representation is compact and low-dimensional, it remains discriminative in complex scenes. Encoding feature variations in the close environment of a point provides invariance to rigid spatial transformations and robustness to changes in noise and scene resolution, all from a simple formulation. Results on 3D point discrimination are validated on a selected database, corroborating the adequacy of our approach under the posed challenging conditions and outperforming other state-of-the-art 3D point descriptors. A qualitative application to matching objects in scenes acquired with a common depth-sensor device is also provided.
"MCOV: A Covariance Descriptor for Fusion of Texture and Shape Features in 3D Point Clouds". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.11
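The core idea of a covariance descriptor, encoding the variation of features in a radial neighbourhood rather than their absolute values, can be sketched in a few lines; the specific feature set below is an illustrative assumption rather than the paper's exact choice:

```python
import numpy as np

def mcov_descriptor(points, colors, normals, center, radius):
    """Covariance descriptor over a radial neighbourhood, in the spirit of MCOV.

    Fuses spatial offsets, colour and normals into one feature vector per
    point and returns the covariance of those vectors over the neighbourhood.
    """
    mask = np.linalg.norm(points - center, axis=1) <= radius
    feats = np.hstack([
        points[mask] - center,   # spatial offsets relative to the centre point
        colors[mask],            # texture channel
        normals[mask],           # local shape channel
    ])
    # The covariance captures feature *variation*, not absolute feature values,
    # which is what yields invariance to rigid transformations of the cloud.
    return np.cov(feats, rowvar=False)
```

Distances between such descriptors are usually measured with a metric on symmetric positive-definite matrices rather than a plain Euclidean norm; that choice is outside this sketch.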
R. Sagawa, N. Kasuya, Yoshinori Oki, Hiroshi Kawasaki, Yoshio Matsumoto, Furukawa Ryo
In this paper, we propose a method that uses multiple cameras and projectors for 4D capture of moving objects. Previous 4D capture systems suffer from a limited number of cameras and from the very large number of images needed to capture a sequence at high frame rate. We propose a multiple projector-camera system to tackle these problems. One issue in multi-view stereo is determining, for each surface point, which cameras can see it. While estimating scene geometry and visibility is a chicken-and-egg problem for passive multi-view stereo, it has been addressed by, for example, iterative approaches that alternate visibility estimation and geometry reconstruction. In our method, the visibility problem is solved independently using the projected pattern, so shapes are recovered efficiently without it. Furthermore, the visibility information is used not only for multi-view stereo reconstruction but also for merging 3D shapes to eliminate inconsistency between devices. Experiments confirm the efficiency of the proposed method and show that the merged mesh is suitable for 4D reconstruction.
"4D Capture Using Visibility Information of Multiple Projector Camera System". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.70
We present a practical system to map and reconstruct multi-room indoor structures using the sensors commonly available in commodity smartphones. Our approach combines and extends state-of-the-art results to automatically generate floor plans scaled to real-world metric dimensions and to reconstruct scenes not necessarily limited to the Manhattan-world assumption. In contrast to previous works, our method introduces an interactive procedure, based on statistical indicators, for refining wall orientations, and a specialized merging algorithm for building the final room shapes. The low CPU cost of the method allows full execution on commodity smartphones, without connecting them to a compute server. We demonstrate the effectiveness of our technique on a variety of multi-room indoor scenes, achieving remarkably better results than previous approaches.
"Interactive Mapping of Indoor Building Structures through Mobile Devices", G. Pintore, Marco Agus, E. Gobbetti. 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.40
We describe a method for non-invasive, accurate, and efficient 3D reconstruction of occluded scenes from a minimal number of X-ray and range-scan image acquisitions. The residuals of generalised epipolar constraints (GEC) are incorporated into a highly efficient bundle-adjustment minimization to obtain maximum-likelihood estimates of the X-ray image calibration parameters from correspondences between scene points, image points, and apparent contours of scene objects. Furthermore, we propose a multimodal template suited to accurate joint calibration of X-ray and range-scan images; it offers crucial advantages for security applications, such as minimal scene occlusion and agile data acquisition. Finally, we describe a shape-from-silhouette method, built on the state of the art, able to reconstruct scene objects with general 3D shapes. We combine these proposals into a full system for 3D reconstruction of occluded scenes and use it to demonstrate the practical and computational advantages of the described methods over previous proposals in both synthetic and real-data experiments.
"Multimodal Calibration of Portable X-Ray Capture Systems for 3D Reconstruction", Antonio L. Rodríguez, P. Taddei, V. Sequeira. 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.64
In this paper, we present a new multistage approach for SfM reconstruction of a single component. Our method begins by building a coarse 3D reconstruction using only the high-scale features of the given images; this step uses only a fraction of the features and is fast. We then enrich the model in stages by localizing the remaining images to it and by matching and triangulating the remaining features. Unlike traditional incremental SfM, the localization and triangulation steps in our approach are made efficient and embarrassingly parallel using the geometry of the coarse model. The coarse model lets us register the remaining images with direct localization techniques based on 3D-2D correspondences. We further exploit the coarse geometry to reduce the pairwise image-matching effort and to perform fast guided feature matching for the majority of features. Our method produces models of similar quality to incremental SfM while being notably fast and parallel: it can reconstruct a 1000-image dataset in 15 hours on a single core, in about 2 hours on 8 cores, and in a few minutes using the full parallelism of about 200 cores.
"Multistage SFM: Revisiting Incremental Structure from Motion", R. Shah, A. Deshpande, P J Narayanan. 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.95
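The geometry-guided matching step in the multistage SfM abstract above can be illustrated as follows: project the coarse model's 3D points into a new image and consider only nearby keypoints as match candidates. Function names, the pixel radius, and the nearest-descriptor rule are our assumptions, not the authors' implementation:

```python
import numpy as np

def guided_match(points3d, desc3d, kps2d, desc2d, P, radius=10.0):
    """Geometry-guided feature matching against a coarse model (illustrative).

    Each 3D point of the coarse model is projected into the new image with
    camera P; only keypoints within `radius` pixels of the projection are
    considered as candidates, then the nearest descriptor wins.
    """
    homog = np.hstack([points3d, np.ones((len(points3d), 1))])
    proj = homog @ P.T
    uv = proj[:, :2] / proj[:, 2:3]
    matches = []
    for i, (u, d3) in enumerate(zip(uv, desc3d)):
        near = np.linalg.norm(kps2d - u, axis=1) <= radius
        if not near.any():
            continue  # no candidate near the projection: leave point unmatched
        cand = np.flatnonzero(near)
        j = cand[np.argmin(np.linalg.norm(desc2d[cand] - d3, axis=1))]
        matches.append((i, j))
    return matches
```

Restricting candidates by projected geometry is what makes this step cheap and embarrassingly parallel across images: each image is matched independently against the fixed coarse model.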
Hassan Afzal, Kassem Al Ismaeil, Djamila Aouada, F. Destelle, B. Mirbach, B. Ottersten
In this work we propose KinectDeform, an algorithm targeting enhanced 3D reconstruction of scenes containing non-rigidly deforming objects. It improves on the existing class of algorithms, which either target scenes with rigid objects only, allow for very limited non-rigid deformations, or rely on precomputed templates to track them. KinectDeform combines a fast non-rigid scene-tracking algorithm, based on an octree data representation and hierarchical voxel associations, with a recursive data-filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smooth, feature-preserving 3D reconstructions with reduced noise.
"Kinect Deform: Enhanced 3D Reconstruction of Non-rigidly Deforming Objects". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.114
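A recursive data filter of the kind referenced in the KinectDeform abstract above can be sketched as a running weighted average applied after non-rigid tracking has aligned each frame; the paper's exact filter is not specified here, so this is an assumed minimal form with our own names:

```python
import numpy as np

def recursive_fuse(prev_est, prev_weight, new_frame, frame_weight=1.0):
    """Recursive per-voxel fusion of a newly aligned frame (illustrative).

    Running weighted average in the style of KinectFusion-like pipelines;
    NaNs mark voxels not observed in the new frame, which keep their
    previous estimate and weight unchanged.
    """
    seen = ~np.isnan(new_frame)
    fused = prev_est.copy()
    weight = prev_weight.copy()
    fused[seen] = (prev_est[seen] * prev_weight[seen]
                   + new_frame[seen] * frame_weight) / (prev_weight[seen] + frame_weight)
    weight[seen] += frame_weight
    return fused, weight
```

Averaging aligned observations recursively is what reduces sensor noise over time without storing the full frame history, matching the smoothness and noise-reduction goals stated in the abstract.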
M. Drouin, J. Beraldin, L. Cournoyer, D. MacKinnon, G. Godin, J. Fournier
We propose a methodology for acquiring reference models, with known uncertainty, of complex building-sized objects. These models can be used to quantitatively evaluate the performance of passive 3D reconstruction at large scale. The proposed methodology combines a time-of-flight scanner, a laser tracker, spherical artifacts, and contrast targets. To demonstrate its soundness, we built a reference model comprising the exterior walls and courtyards of a 130 m × 55 m × 20 m building, and calculated the expanded uncertainty and spatial resolution of the 3D reference model.
"A Methodology for Creating Large Scale Reference Models with Known Uncertainty for Evaluating Imaging Solution". 2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.104