2014 2nd International Conference on 3D Vision: Latest Publications

A 3D Segmentation and Visualization Scheme for Solid and Non-solid Lung Lesions Based on Gaussian Filtering Regularized Level Set
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.110
Liansheng Wang, Huangjing Lin, Xiaoyang Huang, Boliang Wang, Yiping Chen
The segmentation of lung lesions is a challenging task because of the complexity of their surroundings. Lung lesions can be categorized into two types: solid and non-solid. Many previous works segment only one of the two types; the few methods proposed to handle both at once tend to over-segment or under-segment. Therefore, in this study, an effective framework is designed to segment both types of lung lesions in three dimensions (3D). In the proposed framework, we use a Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) method to produce a rough 3D segmentation, which is then used as the initial contour for the Geodesic Active Contour (GAC) method. The SBGFRLS method handles non-solid lung lesions well because it uses global information to segment inhomogeneous entities; the GAC method then accurately locates edges using local information. Finally, we reconstruct and visualize the 3D segmentation results using the Visualization Toolkit (VTK). All of our work is built on the Insight Segmentation and Registration Toolkit (ITK) platform. We evaluate our method on lung-lesion CT data sets from 300 patients (280 solid and 20 non-solid). Experimental results show that our method achieves better segmentation and more accurate 3D volume measurement than two competing methods, especially for non-solid lesions.
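To make the two-stage pipeline concrete, below is a minimal numpy/scipy sketch of one SBGFRLS-style iteration (signed pressure force, selective binarization, Gaussian smoothing in place of costly re-initialization). Parameter values and helper names are illustrative rather than the authors' implementation, and the GAC refinement stage is omitted.

```python
# A minimal sketch of the SBGFRLS evolution, assuming a grey-level volume
# `img` and an initial binary seed mask; this is not the authors' code.
import numpy as np
from scipy.ndimage import gaussian_filter

def sbgfrls_step(phi, img, alpha=20.0, dt=1.0, sigma=1.0):
    """One iteration: evolve by the signed pressure force (SPF), then
    apply the selective-binary and Gaussian regularization steps."""
    inside, outside = phi > 0, phi <= 0
    c1 = img[inside].mean() if inside.any() else 0.0    # mean inside
    c2 = img[outside].mean() if outside.any() else 0.0  # mean outside
    spf = img - (c1 + c2) / 2.0
    spf = spf / (np.abs(spf).max() + 1e-12)             # normalized SPF
    grad_mag = np.sqrt(sum(g ** 2 for g in np.gradient(phi)))
    phi = phi + dt * alpha * spf * grad_mag             # front evolution
    phi = np.where(phi > 0, 1.0, -1.0)                  # selective binary
    return gaussian_filter(phi, sigma)                  # Gaussian regularizer

def rough_segmentation(img, seed_mask, n_iter=50):
    """Coarse 3D mask used to initialize the subsequent GAC refinement."""
    phi = np.where(seed_mask, 1.0, -1.0)
    for _ in range(n_iter):
        phi = sbgfrls_step(phi, img)
    return phi > 0
```

Because c1 and c2 are global statistics, the pressure force keeps a consistent sign across inhomogeneous (non-solid) lesion interiors, which is the property the abstract appeals to.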
Citations: 2
Efficient Multi-view Performance Capture of Fine-Scale Surface Detail
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.46
Nadia Robertini, Edilson de Aguiar, Thomas Helten, C. Theobalt
We present a new and effective way to capture the performance of deforming meshes with fine-scale, time-varying surface detail from multi-view video. Our method builds on coarse 4D surface reconstructions obtained with commonly used template-based methods. Since these capture only coarse-to-medium-scale detail, fine-scale deformation detail is usually recovered in a second pass using stereo constraints, features, or shading-based refinement. In this paper, we propose a new, effective, and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine-scale deformation of all mesh vertices that maximizes photo-consistency can be found efficiently by densely optimizing a new model-to-image consistency energy over all vertex positions. A principal advantage is that our problem formulation yields a smooth closed-form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding and discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we demonstrate qualitatively and quantitatively that we robustly capture more detail than related methods.
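The key property claimed above is that the energy is smooth and closed form. As a rough illustration, the sketch below evaluates the analytic overlap between isotropic 2D Gaussians and accumulates it into a consistency score; the projection step, the color weighting, and all names are simplifying assumptions rather than the authors' exact formulation.

```python
# A toy sketch of a Gaussian-overlap photo-consistency energy, assuming
# isotropic 2D Gaussians; simplified relative to the paper.
import numpy as np

def gaussian_overlap(mu_a, sig_a, mu_b, sig_b):
    """Closed-form integral of the product of two isotropic 2D Gaussians:
    smooth everywhere and analytically differentiable in mu_a."""
    s2 = sig_a ** 2 + sig_b ** 2
    d2 = np.sum((mu_a - mu_b) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * s2)) / (2.0 * np.pi * s2)

def consistency_energy(proj_mu, proj_sig, img_mu, img_sig, color_w):
    """color_w[i, j] in [0, 1] scores color agreement between projected
    surface Gaussian i and image Gaussian j; higher energy means the
    projected model explains the image better."""
    e = 0.0
    for i, (mu, sig) in enumerate(zip(proj_mu, proj_sig)):
        e += np.sum(color_w[i] * gaussian_overlap(mu, sig, img_mu, img_sig))
    return e   # maximized w.r.t. the vertex positions that drive proj_mu
```

Since every term is an exponential of a squared distance, gradients with respect to the projected means (and, through the chain rule, the mesh vertices) are available analytically, avoiding discrete displacement sampling.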
Citations: 11
Structured Representation of Non-Rigid Surfaces from Single View 3D Point Tracks
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.13
Charles Malleson, M. Klaudiny, Jean-Yves Guillemaut, A. Hilton
This work considers the problem of building a structured representation of dynamic surfaces from incomplete 3D point tracks observed from a single viewpoint. The surface is segmented into a set of connected regions, each of which can be represented by a fixed intrinsic shape and a parametrised rigid/non-rigid motion trajectory. Neither the model parameters nor the point-to-model assignments are known upfront. Motion and geometric shape parameters are estimated in alternation with a graph-cuts-based point-to-model assignment. This modelling process facilitates in-filling of missing data and de-noising of measurements through temporal integration, while adding meaningful structure to the geometry and reducing storage cost by an order of magnitude. Experiments on real and synthetic sequences validate the approach and show how a single tuning parameter can be used to trade off modelling error against extrapolation level and storage cost.
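The alternation can be sketched as follows. This toy version fits a rigid (Kabsch) transform per region and reassigns tracks by motion residual; it stands in for the paper's graph-cuts assignment, which additionally enforces spatial smoothness, and the function names are invented for illustration.

```python
# A simplified alternation between per-region motion fitting and
# point-to-model assignment; not the paper's implementation.
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def alternate(p0, p1, labels, n_models, iters=10):
    """p0, p1: (n, 3) point tracks at two frames; labels: initial guess."""
    for _ in range(iters):
        transforms = []
        for k in range(n_models):
            m = labels == k
            if m.sum() < 3:                          # degenerate region
                transforms.append((np.eye(3), np.zeros(3)))
                continue
            transforms.append(kabsch(p0[m], p1[m]))
        # reassign each track to the model that best predicts its motion
        res = np.stack([np.linalg.norm(p0 @ R.T + t - p1, axis=1)
                        for R, t in transforms], axis=1)
        labels = res.argmin(axis=1)
    return labels, transforms
```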
Citations: 8
MCOV: A Covariance Descriptor for Fusion of Texture and Shape Features in 3D Point Clouds
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.11
Pol Cirujeda, Xavier Mateo, Yashin Dicente Cid, Xavier Binefa
In this paper we propose MCOV, a covariance-based descriptor that fuses the shape and color information of textured 3D surfaces, aiming at robust characterization and matching of areas in 3D point clouds. The descriptor builds on the notion of covariance to create compact representations of the variations of texture and surface features in a radial neighbourhood, instead of using the absolute features themselves. Although this representation is compact and low-dimensional, it retains discriminative power in complex scenes. Encoding feature variations in the close environment of a point provides invariance to rigid spatial transformations and robustness to changes in noise and scene resolution, all from a simple formulation. We validate the descriptor's 3D point discrimination performance on a selected database, corroborating its adequacy under the posed challenging conditions and outperforming other state-of-the-art 3D point descriptor methods. A qualitative application to matching objects in scenes acquired with a common depth-sensor device is also provided.
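A minimal sketch of the covariance construction, assuming each neighbourhood point carries some feature vector (an arbitrary stand-in for the paper's texture-and-shape features) and comparing descriptors in the log-Euclidean metric:

```python
# A sketch of a covariance descriptor over a radial neighbourhood;
# the feature choice and regularizer are assumptions, not MCOV itself.
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(feats):
    """feats: (n, d) per-point features (e.g. RGB plus normal deviation)
    collected inside the support radius of the keypoint."""
    diff = feats - feats.mean(axis=0)
    C = diff.T @ diff / max(len(feats) - 1, 1)
    return C + 1e-6 * np.eye(C.shape[1])   # keep C safely positive definite

def log_euclidean_distance(C1, C2):
    """Distance between SPD matrices on their manifold via matrix logs."""
    return np.linalg.norm(logm(C1) - logm(C2), ord="fro")
```

Because the descriptor encodes how features co-vary rather than their absolute values, adding a constant color or intensity offset to the neighbourhood leaves it unchanged, which is one source of the claimed robustness.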
Citations: 15
4D Capture Using Visibility Information of Multiple Projector Camera System
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.70
R. Sagawa, N. Kasuya, Yoshinori Oki, Hiroshi Kawasaki, Yoshio Matsumoto, Furukawa Ryo
In this paper, we propose a method that uses multiple cameras and projectors for 4D capture of moving objects. Previous 4D capture systems suffer from two issues: the number of cameras is limited, and the number of images required to capture a sequence at high frame rate is very large. We propose a multiple projector-camera system to tackle this problem. One of the issues in multi-view stereo is determining the visibility of the cameras for each surface point. While estimating the scene geometry and its visibility is a chicken-and-egg problem for passive multi-view stereo, typically addressed by iterating between visibility estimation and geometry reconstruction, in our method the visibility problem is solved independently from the projected pattern, so shapes are recovered efficiently without such iteration. Furthermore, the visibility information is used not only for multi-view stereo reconstruction but also for merging 3D shapes to eliminate inconsistency between devices. The efficiency of the proposed method is verified experimentally, showing that the merged mesh is suitable for 4D reconstruction.
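Schematically, a camera pixel is visible to a projector exactly when its structured-light code decodes reliably, and the resulting per-pair masks gate the merge. The array names and conventions in this sketch are illustrative assumptions (invalid codes are marked negative):

```python
# A schematic sketch of pattern-based visibility and mask-gated merging;
# names and thresholds are assumptions, not the authors' code.
import numpy as np

def visibility_mask(codes, conf, conf_thresh=0.5):
    """codes: decoded projector coordinate per camera pixel (< 0 invalid);
    conf: per-pixel decoding confidence. Unreliable pixels are shadowed or
    occluded for this camera-projector pair."""
    return (conf > conf_thresh) & (codes >= 0)

def merge_depths(depth_maps, masks):
    """Fuse per-pair depth maps, averaging only where the surface was
    actually seen; pixels unseen by every pair remain NaN."""
    d = np.stack(depth_maps).astype(float)
    d[~np.stack(masks)] = np.nan
    return np.nanmean(d, axis=0)
```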
Citations: 0
Interactive Mapping of Indoor Building Structures through Mobile Devices
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.40
G. Pintore, Marco Agus, E. Gobbetti
We present a practical system to map and reconstruct multi-room indoor structures using the sensors commonly available in commodity smartphones. Our approach combines and extends state-of-the-art results to automatically generate floor plans scaled to real-world metric dimensions and to reconstruct scenes not necessarily limited to the Manhattan World assumption. In contrast to previous works, our method introduces an interactive procedure based on statistical indicators for refining wall orientations, and a specialized merging algorithm for building the final room shapes. The low CPU cost of the method makes full execution on commodity smartphones possible, without connecting them to a compute server. We demonstrate the effectiveness of our technique on a variety of multi-room indoor scenes, achieving remarkably better results than previous approaches.
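As a rough sketch of what a statistical orientation indicator might look like, the snippet below snaps noisy wall azimuths to a dominant direction estimated by a circular mean. The 90-degree folding and tolerance are assumptions of this toy version; the paper's interactive refinement is not limited to such near-Manhattan layouts.

```python
# A toy circular-statistics refinement of wall orientations; illustrative
# only, and stricter than the paper's non-Manhattan-limited method.
import numpy as np

def dominant_direction(angles_rad):
    """Circular mean of azimuths folded modulo 90 degrees, so parallel
    and perpendicular walls vote for the same dominant direction."""
    folded = np.mod(angles_rad, np.pi / 2) * 4.0      # map to full circle
    mean = np.arctan2(np.sin(folded).mean(), np.cos(folded).mean())
    return np.mod(mean / 4.0, np.pi / 2)

def snap_walls(angles_rad, tol=np.radians(10)):
    base = dominant_direction(angles_rad)
    grid = base + np.arange(4) * np.pi / 2            # the four snap targets
    snapped = np.asarray(angles_rad, dtype=float).copy()
    for i, a in enumerate(snapped):
        diff = np.abs(np.angle(np.exp(1j * (grid - a))))  # wrapped distance
        if diff.min() < tol:                          # leave outliers alone
            snapped[i] = grid[diff.argmin()]
    return snapped
```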
Citations: 7
Multimodal Calibration of Portable X-Ray Capture Systems for 3D Reconstruction
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.64
Antonio L. Rodríguez, P. Taddei, V. Sequeira
We describe a method for non-invasive, accurate, and efficient 3D reconstruction of occluded scenes from a minimal number of X-ray and range-scan image acquisitions. The residuals of generalised epipolar constraints (GEC) are incorporated into a highly efficient bundle-adjustment minimization to obtain maximum-likelihood estimates of the X-ray image calibration parameters from correspondences between scene points, image points, and apparent contours of scene objects. Furthermore, we propose a multimodal template suitable for accurate joint calibration of X-ray and range-scan images. It offers crucial advantages for security applications, such as minimal scene occlusion and agile data acquisition. Finally, we describe a state-of-the-art shape-from-silhouettes method able to reconstruct scene objects with general 3D shapes. We combine these proposals in a full system for 3D reconstruction of occluded scenes, and use it to demonstrate the practical and computational advantages of the described methods over previous proposals, using both synthetic and real data experiments.
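For intuition about epipolar residuals inside a bundle adjustment, here is a sketch of the classical calibrated two-view case using the Sampson distance; the paper's generalised epipolar constraints extend this standard form to its multimodal X-ray/range setting, so the code below is background, not the GEC itself.

```python
# A sketch of Sampson epipolar residuals for calibrated two-view geometry,
# refined with scipy; background illustration only.
import numpy as np
from scipy.optimize import least_squares

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def sampson_residuals(params, x1, x2):
    """params: axis-angle rotation + translation; x1, x2: (n, 3) normalized
    homogeneous correspondences. Returns one residual per correspondence."""
    rvec, t = params[:3], params[3:]
    theta = np.linalg.norm(rvec) + 1e-12
    K = skew(rvec / theta)
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    E = skew(t) @ R                                  # essential matrix
    Ex1, Etx2 = x1 @ E.T, x2 @ E                     # rows: E x1, E^T x2
    num = np.sum(x2 * Ex1, axis=1)                   # x2^T E x1
    den = np.sqrt(Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2)
    return num / den

# refine an initial 6-dof relative pose x0 against correspondences:
# sol = least_squares(sampson_residuals, x0, args=(x1, x2))
```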
Citations: 0
Multistage SFM: Revisiting Incremental Structure from Motion
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.95
R. Shah, A. Deshpande, P J Narayanan
In this paper, we present a new multistage approach for SfM reconstruction of a single component. Our method begins by building a coarse 3D reconstruction using only the high-scale features of the given images. This step uses a fraction of the features and is fast. We then enrich the model in stages by localizing the remaining images to it and by matching and triangulating the remaining features. Unlike traditional incremental SfM, the localization and triangulation steps in our approach are made efficient and embarrassingly parallel by using the geometry of the coarse model. The coarse model allows us to register the remaining images with direct localization techniques based on 3D-2D correspondences. We further exploit the coarse model's geometry to reduce pairwise image-matching effort and to perform fast guided matching for the majority of features. Our method produces models of quality similar to incremental SfM while being notably fast and parallel. Our algorithm can reconstruct a 1000-image dataset in 15 hours on a single core, in about 2 hours on 8 cores, and in a few minutes by exploiting full parallelism across about 200 cores.
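Registering a remaining image against the coarse model from 3D-2D correspondences is a standard PnP problem; a sketch using OpenCV could look like this, assuming feature matches between coarse-model points and the new image are already available. Since each image is localized independently against the fixed model, the step is embarrassingly parallel, as the abstract notes.

```python
# A sketch of direct localization via PnP + RANSAC; correspondence search
# against the coarse model is assumed to happen upstream.
import numpy as np
import cv2

def localize_image(pts3d, pts2d, K):
    """pts3d: (n, 3) coarse-model points matched to pts2d: (n, 2) pixels;
    K: 3x3 camera intrinsics. Returns the camera pose for this image."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K, distCoeffs=None, reprojectionError=4.0)
    if not ok:
        raise RuntimeError("localization against the coarse model failed")
    return rvec, tvec, inliers
```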
Citations: 22
Kinect Deform: Enhanced 3D Reconstruction of Non-rigidly Deforming Objects
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.114
Hassan Afzal, Kassem Al Ismaeil, Djamila Aouada, F. Destelle, B. Mirbach, B. Ottersten
In this work we propose KinectDeform, an algorithm targeting enhanced 3D reconstruction of scenes containing non-rigidly deforming objects. It extends the existing class of algorithms, which either handle scenes with rigid objects only, allow for very limited non-rigid deformations, or rely on precomputed templates for tracking. KinectDeform combines a fast non-rigid scene-tracking algorithm, based on an octree data representation and hierarchical voxel associations, with a recursive data-filtering mechanism. We analyze its performance on both real and simulated data and show improved smoothness and feature preservation in the 3D reconstructions, with reduced noise.
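The recursive filtering component can be sketched as a per-voxel exponential filter over tracked points; the flat dictionary below stands in for the paper's octree and hierarchical associations, and the blend weight is an assumed parameter.

```python
# A simplified per-voxel recursive filter; the octree and hierarchical
# voxel associations of KinectDeform are reduced to a flat hash here.
import numpy as np

class RecursiveVoxelFilter:
    def __init__(self, voxel_size=0.01, alpha=0.3):
        self.voxel_size, self.alpha = voxel_size, alpha
        self.state = {}                      # voxel index -> filtered point

    def update(self, points):
        """Blend each new (already motion-compensated) measurement into the
        running estimate: x <- (1 - alpha) * x + alpha * z per voxel."""
        keys = np.floor(points / self.voxel_size).astype(int)
        for key, p in zip(map(tuple, keys), points):
            prev = self.state.get(key)
            self.state[key] = p.copy() if prev is None else \
                (1 - self.alpha) * prev + self.alpha * p
        return np.array(list(self.state.values()))
```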
Citations: 9
A Methodology for Creating Large Scale Reference Models with Known Uncertainty for Evaluating Imaging Solution
Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.104
M. Drouin, J. Beraldin, L. Cournoyer, D. MacKinnon, G. Godin, J. Fournier
We propose a methodology for acquiring reference models, with known uncertainty, of complex building-sized objects. These can be used to quantitatively evaluate the performance of passive 3D reconstruction at large scale. The proposed methodology combines a time-of-flight scanner, a laser tracker, spherical artifacts, and contrast targets. To demonstrate the soundness of the approach, we built a reference model comprising a 3D model of the exterior walls and courtyards of a 130 m × 55 m × 20 m building. The expanded uncertainty of the 3D reference model and its spatial resolution were calculated.
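The expanded-uncertainty bookkeeping follows standard GUM-style practice: combine independent standard uncertainties in quadrature and multiply by a coverage factor. The sketch below uses purely illustrative error sources and values, not the paper's budget.

```python
# A sketch of GUM-style expanded uncertainty; the numbers are invented.
import numpy as np

def expanded_uncertainty(std_uncertainties, k=2.0):
    """u_c = sqrt(sum u_i^2) for independent sources; U = k * u_c,
    with k = 2 corresponding to roughly 95% coverage."""
    u_c = np.sqrt(np.sum(np.square(std_uncertainties)))
    return k * u_c

# e.g. scanner range noise, tracker registration, sphere-fit residual (mm):
# expanded_uncertainty([2.0, 0.5, 1.0])  ->  ~4.6 mm at k = 2
```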
Citations: 1