3D Liver Vessel Reconstruction from CT Images
Xing-Chen Pan, Hong-Ren Su, S. Lai, Kai-Che Liu, Hurng-Sheng Wu
We propose a novel framework for reconstructing a 3D liver vessel model from CT images. The proposed algorithm consists of vessel detection, vessel tree reconstruction, and vessel radius estimation. First, we employ a tubular-filter-based approach to detect vessel structures and construct a minimum spanning tree to bridge the gaps between detected vessels. Then, we propose an approach to estimate the vessel radius at every vessel centerline voxel based on local patch descriptors. The proposed 3D vessel reconstruction system provides a detailed 3D liver vessel model very efficiently. Our experimental results demonstrate the accuracy of the proposed system for 3D liver vessel reconstruction from 3D CT images.
{"title":"3D Liver Vessel Reconstruction from CT Images","authors":"Xing-Chen Pan, Hong-Ren Su, S. Lai, Kai-Che Liu, Hurng-Sheng Wu","doi":"10.1109/3DV.2014.96","DOIUrl":"https://doi.org/10.1109/3DV.2014.96","url":null,"abstract":"We propose a novel framework for reconstructing 3D liver vessel model from CT images. The proposed algorithm consists of vessel detection, vessel tree reconstruction and vessel radius estimation. First, we employ the tubular-filter based approach to detect vessel structure and construct the minimum spanning tree to bridge all the gaps between vessels. Then, we propose an approach to estimate the radius of the vessel at all vessel centerline voxels based on the local patch descriptors. Using the proposed 3D vessel reconstruction system can provide detailed 3D liver vessel model very efficiently. Our experimental results demonstrate the accuracy of the proposed system for 3D liver vessel reconstruction from 3D CT images.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129506217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstruction of Inextensible Surfaces on a Budget via Bootstrapping
Alex Locher, Lennart Elsen, X. Boix, L. Gool
Many methods for 3D reconstruction of deformable surfaces from a monocular view rely on inextensibility constraints. An interesting application with commercial potential lies in augmented reality on portable and wearable devices. Such applications add a further challenge to 3D reconstruction, since on portable platforms the availability of computational resources is limited and not always guaranteed. Towards this goal, we introduce a method that delivers the best possible 3D reconstruction of the deformable surface at any time. Since computational resources may vary, the method decides on the fly when to stop the reconstruction algorithm. We use an efficient optimization method to quickly deliver the reconstructed surface, and we introduce bootstrapping to improve the robustness of the efficient 3D reconstruction algorithm by merging multiple versions of the reconstructed surface. These multiple 3D surfaces can also be used to estimate the confidence of the reconstruction. In a series of experiments on both synthetic and real data, we show that our method is effective for the timely reconstruction of 3D surfaces.
{"title":"Reconstruction of Inextensible Surfaces on a Budget via Bootstrapping","authors":"Alex Locher, Lennart Elsen, X. Boix, L. Gool","doi":"10.1109/3DV.2014.98","DOIUrl":"https://doi.org/10.1109/3DV.2014.98","url":null,"abstract":"Many methods for 3D reconstruction of deformable surfaces from a monocular view rely on inextensibility constraints. An interesting application with commercial potential lies in augmented reality in portable and wearable devices. Such applications add an additional challenge to the 3D reconstruction, since in portable platforms the availability of resources is limited and not always guaranteed. Towards this goal, we introduce a method to deliver the best possible 3D reconstruction of the deformable surface at any time. Since computational resources may vary, it is decided on-the-fly when to stop the reconstruction algorithm. We use an efficient optimization method to quickly deliver the reconstructed surface. We introduce bootstrapping to improve the robustness of the efficient 3D reconstruction algorithm by merging multiple versions of the reconstructed surface. Also, these multiple 3D surfaces can be used to estimate the confidence of the reconstruction. In a series of experiments, in both synthetic and real data, we show that our method is effective for timely reconstruction of 3D surfaces.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115511868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Direct Optimization of T-Splines Based on Multiview Stereo
Thomas Morwald, Jonathan Balzer, M. Vincze
We propose a multi-view stereo reconstruction method in which the surface is represented by CAD-compatible T-splines. Our method hinges on the principle of isogeometric analysis, formulating an energy functional that can be computed directly in terms of the T-spline basis. Paying attention to the idiosyncrasies of this basis, we derive an analytic formula for the gradient of the functional, which is then used in photo-consistency optimization. The number of degrees of freedom our model requires is drastically reduced compared to the state of the art. Gains in efficiency can firstly be attributed to the fact that T-splines are particularly suited for adaptive refinement. Secondly, evaluation of the proposed energy functional is highly parallelizable, as demonstrated by means of a T-spline-specific GPU implementation. Our experiments indicate the superiority of T-spline surfaces over the widely used triangular meshes in terms of memory efficiency and numerical stability, without relying on dedicated regularizers.
{"title":"Direct Optimization of T-Splines Based on Multiview Stereo","authors":"Thomas Morwald, Jonathan Balzer, M. Vincze","doi":"10.1109/3DV.2014.42","DOIUrl":"https://doi.org/10.1109/3DV.2014.42","url":null,"abstract":"We propose a multi-view stereo reconstruction method in which the surface is represented by CAD-compatible T-splines. Our method hinges on the principle of is geometric analysis, formulating an energy functional that can be directly computed in terms of the T-spline basis. Paying attention to the idiosyncracies of this basis, we derive an analytic formula for the gradient of the functional which is then used in photo-consistency optimization. The numbers of degrees of freedom our model requires is drastically reduced compared to the state of the art. Gains in efficiency can firstly be attributed to the fact that T-splines are particularly suited for adaptive refinement. Secondly, evaluation of the proposed energy functional is highly parallelizable as demonstrated by means of a T-spline-specific GPU implementation. Our experiments indicate the superiority of T-spline surfaces over the widely-used triangular meshes in terms of memory efficiency and numerical stability, without relying on dedicated regularizers.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124032126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Extraction of Moving Objects from Image and LIDAR Sequences
Jizhou Yan, Dongdong Chen, Heesoo Myeong, Takaaki Shiratori, Yi Ma
Detecting and segmenting moving objects in an image sequence has always been a crucial task for many computer vision applications. This task becomes especially challenging for real-world image sequences of busy street scenes, where moving objects are ubiquitous. Although an effective and scalable image-based moving object detector remains technologically elusive, modern street-side imagery is often augmented with sparse point clouds captured by depth sensors. This paper develops a simple but effective system for moving object detection that fully harnesses the complementary nature of 2D images and 3D LIDAR point clouds. We demonstrate how moving objects can be detected much more easily and reliably with sparse 3D measurements, and how such information can significantly improve the segmentation of moving objects in the image sequences. The results of our system are highly accurate "joint segmentations" of 2D images and 3D points for all moving objects in street scenes, which can serve many subsequent tasks such as object removal in images, 3D reconstruction, and rendering.
{"title":"Automatic Extraction of Moving Objects from Image and LIDAR Sequences","authors":"Jizhou Yan, Dongdong Chen, Heesoo Myeong, Takaaki Shiratori, Yi Ma","doi":"10.1109/3DV.2014.94","DOIUrl":"https://doi.org/10.1109/3DV.2014.94","url":null,"abstract":"Detecting and segmenting moving objects in an image sequence has always been a crucial task for many computer vision applications. This task becomes especially challenging for real-world image sequences of busy street scenes, where moving objects are ubiquitous. Although it remains technologically elusive to develop an effective and scalable image-based moving object detection, modern street side imagery are often augmented with sparse point clouds captured with depth sensors. This paper develops a simple but effective system for moving object detection that fully harnesses the complementary nature of 2D image and 3D LIDAR point clouds. We demonstrate how moving objects can be much more easily and reliably detected with sparse 3D measurements and how such information can significantly improve segmentation for moving objects in the image sequences. The results of our system are highly accurate \"joint segmentation\" of 2D images and 3D points for all moving objects in street scenes, which can serve many subsequent tasks such as object removal in images, 3D reconstruction and rendering.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115096532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Structure from Motion Approach for the Analysis of Adhesions in Rotating Vessels
P. Waibel, J. Matthes, L. Gröll, H. Keller
While processing material in rotating vessels such as rotary kilns, adhesions can form on the inner vessel wall. Large adhesions usually affect the process negatively and need to be prevented. Online detection and analysis of adhesions inside the vessel during operation could allow the process control to deploy countermeasures that prevent additional adhesions or reduce the adhesions' sizes. In this paper, we present a new method that enables image-based online detection, tracking, and characterization of adhesions inside a rotating vessel. Our algorithm makes use of the rotational movement of adhesions in a structure-from-motion approach, which allows the positions and heights of adhesions to be measured with a single camera. The applicability of our method is shown by means of image sequences from a rotating vessel model as well as from an industrially used cement rotary kiln.
{"title":"A Structure from Motion Approach for the Analysis of Adhesions in Rotating Vessels","authors":"P. Waibel, J. Matthes, L. Gröll, H. Keller","doi":"10.1109/3DV.2014.38","DOIUrl":"https://doi.org/10.1109/3DV.2014.38","url":null,"abstract":"While processing material in rotating vessels such as rotary kilns, adhesions on the inner vessel wall can occur. Large adhesions usually affect the process negatively and need to be prevented. An online detection and analysis of adhesions inside the vessel during operation could allow the process control to deploy counter-measures that prevent additional adhesions or reduce the adhesion's sizes. In this paper, we present a new method that enables an image-based online detection, tracking and characterization of adhesions inside a rotating vessel. Our algorithm makes use of the rotational movements of adhesions in a structure from motion approach which allows for the measurement of the positions and heights of adhesions with a single camera. The applicability of our method is shown by means of image sequences from a rotating vessel model as well as from an industrially used cement rotary kiln.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116712524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hashing Cross-Modal Manifold for Scalable Sketch-Based 3D Model Retrieval
T. Furuya, Ryutarou Ohbuchi
This paper proposes a novel sketch-based 3D model retrieval algorithm that is scalable as well as accurate. Accuracy is achieved by a combination of (1) a set of state-of-the-art visual features for comparing sketches and 3D models, and (2) an efficient algorithm to learn data-driven similarity across the heterogeneous domains of sketches and 3D models. For the latter, we adopted the algorithm [18] by Furuya et al., which fuses three kinds of similarities for more accurate similarity computation: those among sketches, those among 3D models, and those between sketches and 3D models. While the algorithm by Furuya et al. [18] does improve accuracy, it does not scale. We accelerate the retrieval result ranking stage of [18], without loss of accuracy, by embedding its cross-modal similarity graph into Hamming space. The embedding is performed by a combination of spectral embedding and hashing into compact binary codes. Experiments show that our proposed algorithm is more accurate and much faster than previous sketch-based 3D model retrieval algorithms.
{"title":"Hashing Cross-Modal Manifold for Scalable Sketch-Based 3D Model Retrieval","authors":"T. Furuya, Ryutarou Ohbuchi","doi":"10.1109/3DV.2014.72","DOIUrl":"https://doi.org/10.1109/3DV.2014.72","url":null,"abstract":"This paper proposes a novel sketch-based 3D model retrieval algorithm that is scalable as well as accurate. Accuracy is achieved by a combination of (1) a set of state-of-the-art visual features for comparing sketches and 3D models, and (2) an efficient algorithm to learn data-driven similarity across heterogeneous domains of sketches and 3D models. For the latter, we adopted the algorithm [18] by Furuya et al., which fuses, for more accurate similarity computation, three kinds of similarities, i.e., Those among sketches, those among 3D models, and those between sketches and 3D models. While the algorithm by Furuya et al. [18] does improve accuracy, it does not scale. We accelerate, without loss of accuracy, retrieval result ranking stage of [18] by embedding its cross-modal similarity graph into Hamming space. The embedding is performed by a combination of spectral embedding and hashing into compact binary codes. Experiments show that our proposed algorithm is more accurate and much faster than previous sketch-based 3D model retrieval algorithms.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134285821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-view Photometric Stereo by Example
J. Ackermann, Fabian Langguth, Simon Fuhrmann, Arjan Kuijper, M. Goesele
We present a novel multi-view photometric stereo technique that recovers the surface of textureless objects with unknown BRDF and lighting. The camera and light positions are allowed to vary freely and to change in each image. We exploit orientation consistency between the target and an example object to develop a consistency measure. Motivated by the fact that normals can be recovered more reliably than depth, we represent our surface as both a depth map and a normal map. These maps are jointly optimized, allowing us to formulate constraints on depth that take surface orientation into account. Our technique does not require the visual hull or stereo reconstructions for bootstrapping, and it exploits image intensities alone without the need for radiometric camera calibration. We present results on real objects with varying degrees of specularity and show that these can be used to create globally consistent models from multiple views.
{"title":"Multi-view Photometric Stereo by Example","authors":"J. Ackermann, Fabian Langguth, Simon Fuhrmann, Arjan Kuijper, M. Goesele","doi":"10.1109/3DV.2014.63","DOIUrl":"https://doi.org/10.1109/3DV.2014.63","url":null,"abstract":"We present a novel multi-view photometric stereo technique that recovers the surface of texture less objects with unknown BRDF and lighting. The camera and light positions are allowed to vary freely and change in each image. We exploit orientation consistency between the target and an example object to develop a consistency measure. Motivated by the fact that normals can be recovered more reliably than depth, we represent our surface as both a depth map and a normal map. These maps are jointly optimized and allow us to formulate constraints on depth that take surface orientation into account. Our technique does not require the visual hull or stereo reconstructions for bootstrapping and solely exploits image intensities without the need for radiometric camera calibration. We present results on real objects with varying degree of specularity and show that these can be used to create globally consistent models from multiple views.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128325618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Colour Helmholtz Stereopsis for Reconstruction of Complex Dynamic Scenes
Nadejda Roubtsova, Jean-Yves Guillemaut
Helmholtz stereopsis (HS) is a powerful technique for the reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e., two images with the camera and light source positions mutually interchanged). In this paper, we propose colour HS, a novel variant of the technique based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed pipeline for colour HS uniquely combines a tailored photometric calibration for multiple camera/light-source pairs, a novel procedure for surface chromaticity calibration, and a state-of-the-art Bayesian HS formulation suitable for reconstruction from a minimal number of reciprocal pairs. Experimental results, including quantitative and qualitative evaluation, demonstrate that the method is suitable for flexible (single-shot) reconstruction of static scenes and for reconstruction of dynamic scenes with complex surface reflectance properties.
{"title":"Colour Helmholtz Stereopsis for Reconstruction of Complex Dynamic Scenes","authors":"Nadejda Roubtsova, Jean-Yves Guillemaut","doi":"10.1109/3DV.2014.59","DOIUrl":"https://doi.org/10.1109/3DV.2014.59","url":null,"abstract":"Helmholtz Stereopsis (HS) is a powerful technique for reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e. Two images with the camera and light source positions mutually interchanged). In this paper, we propose colour HS - a novel variant of the technique based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed novel pipeline for colour HS uniquely combines a tailored photometric calibration for multiple camera/light source pairs, a novel procedure for surface chromaticity calibration and the state-of-the-art Bayesian HS suitable for reconstruction from a minimal number of reciprocal pairs. Experimental results including quantitative and qualitative evaluation demonstrate that the method is suitable for flexible (single-shot) reconstruction of static scenes and reconstruction of dynamic scenes with complex surface reflectance properties.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132374488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Placeless Place-Recognition
Simon Lynen, M. Bosse, P. Furgale, R. Siegwart
Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap or how many consecutive images should be considered together. Unfortunately, depending on such thresholds limits the generality of these approaches across different types of scenes. In this paper, we present a placeless place-recognition algorithm using a novel vote-density estimation technique that avoids heuristically discretizing the space. Instead, our approach treats place recognition as a problem of continuous matching between image streams, automatically discovering regions of high vote density that represent overlapping trajectory segments. The resulting algorithm has a single free parameter, and all remaining thresholds are set automatically using well-studied statistical tests. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences: a comprehensive evaluation against ground truth from publicly available datasets shows that our approach outperforms several state-of-the-art algorithms for place recognition.
{"title":"Placeless Place-Recognition","authors":"Simon Lynen, M. Bosse, P. Furgale, R. Siegwart","doi":"10.1109/3DV.2014.36","DOIUrl":"https://doi.org/10.1109/3DV.2014.36","url":null,"abstract":"Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap, or how many consecutive images should be considered together. Unfortunately, depending on thresholds such as these, limits their generality to different types of scenes. In this paper, we present a placeless place recognition algorithm using a novel vote-density estimation technique that avoids heuristically discretizing the space. Instead, our approach considers place recognition as a problem of continuous matching between image streams, automatically discovering regions of high vote density that represent overlapping trajectory segments. The resulting algorithm has a single free parameter and all remaining thresholds are set automatically using well-studied statistical tests. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences: A comprehensive evaluation against ground-truth from publicly available datasets shows that our approach outperforms several state-of-the-art algorithms for place recognition.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123032356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Automated Spatial Change Analysis of MEP Components Using 3D Point Clouds and As-Designed BIM Models
V. Kalasapudi, Y. Turkan, P. Tang
The architectural, engineering, construction, and facilities management (AEC-FM) industry is going through a transformative phase by adopting new technologies and tools into its change-management practices. The AEC-FM industry has adopted Building Information Modeling (BIM) and three-dimensional (3D) laser scanning technologies for tracking changes throughout the lifecycle of building and infrastructure projects, from planning to design and construction, and finally to facilities management. One of the challenges of using these technologies in change management is the difficulty of reliably detecting changes to densely located objects, such as Mechanical, Electrical, and Plumbing (MEP) components in building systems. This paper presents a novel relational-graph-based framework for automated spatial change analysis of MEP components. The framework extracts objects and spatial relationships from 3D laser-scanned point clouds and uses the relational structures of objects in the data and in the designed BIM models to fuse the 3D data with the as-designed BIM. The authors validated the proposed change analysis approach using data acquired from real building construction sites.
{"title":"Toward Automated Spatial Change Analysis of MEP Components Using 3D Point Clouds and As-Designed BIM Models","authors":"V. Kalasapudi, Y. Turkan, P. Tang","doi":"10.1109/3DV.2014.105","DOIUrl":"https://doi.org/10.1109/3DV.2014.105","url":null,"abstract":"The architectural, engineering, construction and facilities management (AEC-FM) industry is going through a transformative phase by adapting new technologies and tools into its change management practices. AEC-FM Industry has adopted Building Information Modeling (BIM) and three-dimensional (3D) laser scanning technologies in tracking changes in the whole lifecycle of building and infrastructure projects, from planning to design and construction, and finally to facilities management. One of the challenges of using these technologies in change management is the difficulties of reliably detecting changes of densely located objects, such as Mechanical, Electrical, and Plumbing (MEP) objects in building systems. This paper presents a novel relational-graph-based framework for automated spatial change analysis of MEP components. This framework extract objects and spatial relationships from 3D laser scanned point clouds, and use relational structures of objects in data and designed BIM models for fusing 3D data and as-designed BIM. The authors validated the proposed change analysis approach using data acquired from real building construction sites.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129518872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}