
Latest Publications: 2014 2nd International Conference on 3D Vision

Interactive Mapping of Indoor Building Structures through Mobile Devices
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.40
G. Pintore, Marco Agus, E. Gobbetti
We present a practical system to map and reconstruct multi-room indoor structures using the sensors commonly available in commodity smartphones. Our approach combines and extends state-of-the-art results to automatically generate floor plans scaled to real-world metric dimensions and to reconstruct scenes not necessarily limited to the Manhattan World assumption. In contrast to previous works, our method introduces an interactive procedure based on statistical indicators for refining wall orientations, together with a specialized merging algorithm for building the final room shapes. The low CPU cost of the method makes full execution on commodity smartphones possible, without the need to connect them to a compute server. We demonstrate the effectiveness of our technique on a variety of multi-room indoor scenes, achieving remarkably better results than previous approaches.
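The abstract does not detail the statistical indicator, but the general idea of refining wall orientations by voting can be illustrated with a small sketch. Everything below (function names, bin width, vote and tolerance thresholds) is an assumption for illustration, not the authors' implementation: wall angles are histogrammed modulo 180°, well-supported bins become dominant modes, and each wall snaps to the nearest mode when it lies close enough.

```python
# Illustrative only: a hypothetical vote-based refinement of wall
# orientations. Bin width, vote threshold, and tolerance are assumptions.
from collections import Counter

def dominant_orientations(wall_angles_deg, bin_width=2.0, min_votes=2):
    """Histogram wall angles modulo 180 deg; keep well-supported bins."""
    votes = Counter(int((a % 180.0) / bin_width) for a in wall_angles_deg)
    return [b * bin_width + bin_width / 2.0
            for b, n in votes.items() if n >= min_votes]

def snap_walls(wall_angles_deg, tolerance_deg=5.0):
    """Snap each wall to the nearest dominant mode when within tolerance."""
    modes = dominant_orientations(wall_angles_deg)
    snapped = []
    for a in wall_angles_deg:
        a0 = a % 180.0
        best = min(modes, default=a0,
                   key=lambda m: min(abs(a0 - m), 180.0 - abs(a0 - m)))
        if min(abs(a0 - best), 180.0 - abs(a0 - best)) <= tolerance_deg:
            snapped.append(best)       # strong statistical support: snap
        else:
            snapped.append(a0)         # weak support: keep measured angle
    return snapped

print(snap_walls([0.5, 1.2, 89.0, 90.4, 91.1, 44.0]))
# -> [1.0, 1.0, 91.0, 91.0, 91.0, 44.0]
```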
Cited by: 7
Toward Automated Spatial Change Analysis of MEP Components Using 3D Point Clouds and As-Designed BIM Models
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.105
V. Kalasapudi, Y. Turkan, P. Tang
The architectural, engineering, construction and facilities management (AEC-FM) industry is going through a transformative phase by integrating new technologies and tools into its change management practices. The AEC-FM industry has adopted Building Information Modeling (BIM) and three-dimensional (3D) laser scanning technologies for tracking changes across the whole lifecycle of building and infrastructure projects, from planning to design and construction, and finally to facilities management. One of the challenges of using these technologies in change management is the difficulty of reliably detecting changes of densely located objects, such as Mechanical, Electrical, and Plumbing (MEP) objects in building systems. This paper presents a novel relational-graph-based framework for automated spatial change analysis of MEP components. The framework extracts objects and spatial relationships from 3D laser-scanned point clouds, and uses the relational structures of objects in the data and in the designed BIM models to fuse the 3D data with the as-designed BIM. The authors validated the proposed change analysis approach using data acquired from real building construction sites.
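As a rough illustration of the relational-graph idea, the sketch below connects components whose centers lie close together and flags scanned components whose position and local connectivity have no counterpart in the as-designed BIM. All names, the distance threshold, and the neighbour-count "signature" are our assumptions, not the paper's algorithm.

```python
# Illustrative only: hypothetical relational-graph change flagging.
# The distance threshold and neighbour-count signature are assumptions.
import math

def build_relational_graph(centers, radius=0.5):
    """centers: {component_id: (x, y, z)}. Edges join nearby components."""
    adj = {cid: set() for cid in centers}
    ids = list(centers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(centers[a], centers[b]) <= radius:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def changed_components(scan_centers, bim_centers, radius=0.5, tol=0.1):
    """Flag scan components with no BIM counterpart matching both
    position (within tol) and relational-graph neighbour count."""
    scan_adj = build_relational_graph(scan_centers, radius)
    bim_adj = build_relational_graph(bim_centers, radius)
    return [sid for sid, pos in scan_centers.items()
            if not any(math.dist(pos, bpos) <= tol
                       and len(scan_adj[sid]) == len(bim_adj[bid])
                       for bid, bpos in bim_centers.items())]
```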
Cited by: 23
A Methodology for Creating Large Scale Reference Models with Known Uncertainty for Evaluating Imaging Solution
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.104
M. Drouin, J. Beraldin, L. Cournoyer, D. MacKinnon, G. Godin, J. Fournier
We propose a methodology for acquiring reference models with known uncertainty for complex building-sized objects. These models can be used to quantitatively evaluate the performance of passive 3D reconstruction at large scale. The proposed methodology combines a time-of-flight scanner, a laser tracker, spherical artifacts, and contrast targets. To demonstrate the soundness of the proposed approach, we built a reference model comprising a 3D model of the exterior walls and courtyards of a 130 m × 55 m × 20 m building. The expanded uncertainty of the 3D reference model and its spatial resolution were calculated.
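The abstract does not spell out how the expanded uncertainty is computed; a minimal sketch following the standard GUM convention (combine independent standard uncertainties in quadrature, then scale by a coverage factor k = 2 for roughly 95% coverage) would look like this. The budget values are invented for illustration.

```python
# A minimal GUM-style sketch; the uncertainty budget values are invented.
import math

def expanded_uncertainty(standard_uncertainties_mm, coverage_factor=2.0):
    """Root-sum-of-squares of independent standard uncertainties,
    scaled by the coverage factor (k = 2 ~ 95% coverage)."""
    combined = math.sqrt(sum(u * u for u in standard_uncertainties_mm))
    return coverage_factor * combined

# Hypothetical budget: scanner range noise, laser-tracker registration,
# sphere-fit residual (millimetres).
print(expanded_uncertainty([2.0, 0.5, 1.0]))  # ~4.58 mm
```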
Cited by: 1
3D Tracking of Multiple Objects with Identical Appearance Using RGB-D Input
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.39
C. Ren, V. Prisacariu, O. Kähler, I. Reid, D. W. Murray
Most current approaches to 3D object tracking rely on distinctive object appearances. While several such trackers can be instantiated to track multiple objects independently, doing so not only neglects that objects should not occupy the same space in 3D, but also fails when objects have highly similar or identical appearances. In this paper we develop a probabilistic graphical model that accounts for similarity and proximity and leads to robust real-time tracking of multiple objects from RGB-D data, without recourse to bolt-on collision detection.
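One plausible reading of the proximity term is a joint energy that penalises two objects explaining the same volume of space. The sketch below is our assumption of the general form, not the paper's exact graphical model; `min_sep` and `repulsion` are invented parameters.

```python
# Illustrative only: a hinge-style proximity prior added to the data term.
# min_sep and repulsion are invented parameters, not the paper's model.
import itertools
import math

def joint_energy(poses, data_costs, min_sep=0.2, repulsion=10.0):
    """poses: list of (x, y, z) object centres; data_costs: fit residuals."""
    e = sum(data_costs)
    for a, b in itertools.combinations(poses, 2):
        d = math.dist(a, b)
        if d < min_sep:                     # two objects cannot share space
            e += repulsion * (min_sep - d)  # penalty grows with overlap
    return e
```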
Cited by: 27
Structured Representation of Non-Rigid Surfaces from Single View 3D Point Tracks
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.13
Charles Malleson, M. Klaudiny, Jean-Yves Guillemaut, A. Hilton
This work considers the problem of structured representation of dynamic surfaces from incomplete 3D point tracks observed from a single viewpoint. The surface is segmented into a set of connected regions, each of which can be represented by a fixed intrinsic shape and a parametrised rigid/non-rigid motion trajectory. Neither the model parameters nor the point-to-model assignments are known upfront. Motion and geometric shape parameters are estimated in alternation with a graph-cut-based point-to-model assignment. This modelling process facilitates in-filling of missing data as well as de-noising of measurements through temporal integration, while adding meaningful structure to the geometry and reducing storage cost by an order of magnitude. Experiments are presented on real and synthetic sequences to validate the approach and to show how a single tuning parameter can be used to trade modelling error against extrapolation level and storage cost.
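A compact sketch of the alternation: rigid motions are re-fitted to the current point-to-model assignment, and points are re-assigned to the model that predicts them best. For brevity, the graph-cut assignment is replaced here by a per-point nearest-model rule (the paper uses graph cuts to keep assignments spatially coherent), so this is an illustration rather than the authors' method.

```python
# Illustrative only: alternation between assignment and rigid re-fitting.
# The nearest-model rule stands in for the paper's graph-cut assignment.
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def alternate(points_t0, points_t1, n_models=2, iters=10, seed=0):
    """Alternate point-to-model assignment and rigid-motion re-estimation."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_models, size=len(points_t0))
    transforms = [(np.eye(3), np.zeros(3))] * n_models
    for _ in range(iters):
        for k in range(n_models):
            mask = labels == k
            if mask.sum() >= 3:   # need enough points for a stable fit
                transforms[k] = fit_rigid(points_t0[mask], points_t1[mask])
        residuals = np.stack(
            [np.linalg.norm(points_t1 - (points_t0 @ R.T + t), axis=1)
             for R, t in transforms], axis=1)
        labels = residuals.argmin(1)      # re-assign points to best model
    return labels, transforms
```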
Cited by: 8
Efficient Multi-view Performance Capture of Fine-Scale Surface Detail
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.46
Nadia Robertini, Edilson de Aguiar, Thomas Helten, C. Theobalt
We present a new and effective approach to performance capture of deforming meshes with fine-scale, time-varying surface detail from multi-view video. Our method builds on coarse 4D surface reconstructions obtained with commonly used template-based methods. Since these capture only models of coarse-to-medium-scale detail, fine-scale deformation detail is typically recovered in a second pass using stereo constraints, features, or shading-based refinement. In this paper, we propose a new, effective, and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine-scale deformation of all mesh vertices that maximizes photo-consistency can be found efficiently by densely optimizing a new model-to-image consistency energy over all vertex positions. A principal advantage is that our problem formulation yields a smooth closed-form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding and discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we show qualitatively and quantitatively that we robustly capture more detail than related methods.
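A toy version of such an energy (our assumption of the general form, not the paper's exact formulation): projected model Gaussians are compared against image Gaussians through a colour-weighted sum of pairwise overlaps, which stays smooth in the Gaussian centres and therefore admits analytic gradients.

```python
# Illustrative only: colour-weighted sum of pairwise Gaussian overlaps.
# The isotropic Gaussians and the colour kernel are our assumptions.
import numpy as np

def overlap(mu_a, mu_b, sigma):
    """Product integral of two isotropic 2D Gaussians (up to scale)."""
    return np.exp(-np.sum((mu_a - mu_b) ** 2) / (4.0 * sigma ** 2))

def consistency_energy(model_px, model_rgb, image_px, image_rgb, sigma=2.0):
    """model_px/image_px: (N, 2)/(M, 2) projected Gaussian centres."""
    e = 0.0
    for mp, mc in zip(model_px, model_rgb):
        for ip, ic in zip(image_px, image_rgb):
            colour_sim = np.exp(-np.sum((mc - ic) ** 2))
            e += colour_sim * overlap(mp, ip, sigma)
    return e  # smooth in the centres, so gradients are analytic
```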
Cited by: 11
Placeless Place-Recognition
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.36
Simon Lynen, M. Bosse, P. Furgale, R. Siegwart
Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap or how many consecutive images should be considered together. Unfortunately, dependence on thresholds such as these limits their generality across different types of scenes. In this paper, we present a placeless place-recognition algorithm using a novel vote-density estimation technique that avoids heuristically discretizing the space. Instead, our approach treats place recognition as a problem of continuous matching between image streams, automatically discovering regions of high vote density that represent overlapping trajectory segments. The resulting algorithm has a single free parameter, and all remaining thresholds are set automatically using well-studied statistical tests. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences: a comprehensive evaluation against ground truth from publicly available datasets shows that our approach outperforms several state-of-the-art algorithms for place recognition.
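The vote-density idea can be sketched as follows: every feature match between the query and database streams casts a vote at (query frame, database frame); smoothing the vote image and thresholding it with a simple statistical test recovers diagonal bands of overlapping trajectory. Function names, the Gaussian smoothing, and the z-score test below are our assumptions, not the paper's exact estimator.

```python
# Illustrative only: vote accumulation plus kernel smoothing; the z-score
# threshold stands in for the paper's statistical tests.
import numpy as np
from scipy.ndimage import gaussian_filter

def match_bands(votes, n_query, n_db, sigma=2.0, z=3.0):
    """votes: iterable of (query_idx, db_idx) feature-match pairs."""
    grid = np.zeros((n_query, n_db))
    for qi, di in votes:
        grid[qi, di] += 1.0                 # each match casts one vote
    density = gaussian_filter(grid, sigma)  # continuous vote density
    thresh = density.mean() + z * density.std()
    return np.argwhere(density > thresh)    # candidate overlapping segments
```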
Cited by: 68
Efficient Colorization of Large-Scale Point Cloud Using Multi-pass Z-Ordering
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.33
Sunyoung Cho, Jizhou Yan, Y. Matsushita, H. Byun
We present an efficient colorization method for large-scale point clouds using multi-view images. To address the practical issues of noisy camera parameters and color inconsistencies across multi-view images, our method takes an optimization approach to achieve visually pleasing point cloud colorization. We introduce a multi-pass Z-ordering technique that efficiently defines a graph structure over a large-scale, unordered set of 3D points, and we use this graph structure to optimize the point colors to be assigned. Our technique is useful for defining minimal but sufficient connectivity among 3D points so that the optimization can exploit sparsity to solve the problem efficiently. We demonstrate the effectiveness of our method on synthetic datasets and large-scale real-world data in comparison with other graph-construction techniques.
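Z-ordering itself is a standard technique: quantise each point, interleave the coordinate bits into a Morton code, sort, and link consecutive points along the sorted order; repeating with a shifted grid (the "multi-pass" part) recovers neighbours a single Z-curve misses. The sketch below illustrates that idea with assumed parameters and is not the paper's implementation.

```python
# Illustrative only: Morton-code Z-ordering with grid-shifted passes.
# Pass offsets, bit depth, and the edge rule are assumptions for the sketch.
import numpy as np

def morton3(ix, iy, iz, bits=10):
    """Interleave the bits of three quantised coordinates."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (3 * b)
        code |= ((iy >> b) & 1) << (3 * b + 1)
        code |= ((iz >> b) & 1) << (3 * b + 2)
    return code

def zorder_edges(points, passes=((0, 0, 0), (0.5, 0.5, 0.5)), bits=10):
    """points: (N, 3) array. Links consecutive points along each Z-curve."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(0), pts.max(0)
    scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-9)
    edges = set()
    for shift in passes:            # shifting the grid varies the curve
        q = np.clip((pts - lo) * scale + np.array(shift),
                    0, 2 ** bits - 1).astype(int)
        order = np.argsort([morton3(*xyz, bits=bits) for xyz in q])
        for a, b in zip(order[:-1], order[1:]):
            edges.add((min(a, b), max(a, b)))
    return edges
```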
Cited by: 3
Multistage SFM: Revisiting Incremental Structure from Motion
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.95
R. Shah, A. Deshpande, P J Narayanan
In this paper, we present a new multistage approach for SfM reconstruction of a single component. Our method begins by building a coarse 3D reconstruction using high-scale features of the given images. This step uses only a fraction of the features and is fast. We then enrich the model in stages by localizing the remaining images to it and by matching and triangulating the remaining features. Unlike traditional incremental SfM, the localization and triangulation steps in our approach are made efficient and embarrassingly parallel using the geometry of the coarse model. The coarse model allows us to use direct localization techniques based on 3D-2D correspondences to register the remaining images. We further utilize the geometry of the coarse model to reduce the pairwise image-matching effort and to perform fast guided feature matching for the majority of features. Our method produces models of similar quality to incremental SfM methods while being notably fast and parallel. Our algorithm can reconstruct a 1000-image dataset in 15 hours using a single core, in about 2 hours using 8 cores, and in a few minutes by exploiting full parallelism on about 200 cores.
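The direct localization step can be illustrated with robust PnP: given 3D points of the coarse model matched to 2D features in a new image, the camera pose follows from RANSAC-based PnP (here via OpenCV). The 4-pixel threshold and function names are assumptions; the paper's full pipeline is more involved.

```python
# Illustrative only: robust PnP localization against the coarse model.
# The 4-pixel reprojection threshold is an assumed value.
import numpy as np
import cv2

def localize(points3d, points2d, K):
    """points3d: (N, 3) model points, points2d: (N, 2) matched features,
    K: 3x3 camera intrinsics. Returns (R, t, inliers) or None."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points3d.astype(np.float32), points2d.astype(np.float32),
        K.astype(np.float32), None, reprojectionError=4.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return R, tvec, inliers
```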
Cited by: 22
KinectDeform: Enhanced 3D Reconstruction of Non-rigidly Deforming Objects
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.114
Hassan Afzal, Kassem Al Ismaeil, Djamila Aouada, F. Destelle, B. Mirbach, B. Ottersten
In this work we propose KinectDeform, an algorithm that targets enhanced 3D reconstruction of scenes containing non-rigidly deforming objects. It improves on the existing class of algorithms, which either target scenes with rigid objects only, allow for very limited non-rigid deformations, or use precomputed templates to track them. KinectDeform combines a fast non-rigid scene tracking algorithm, based on an octree data representation and hierarchical voxel associations, with a recursive data filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smoothness and feature-preserving 3D reconstructions with reduced noise.
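A toy recursive filter in the spirit described (our assumption of the general mechanism, not the authors' octree implementation): each voxel keeps an exponentially weighted running average of the points that fall into it across frames, denoising while still tracking deformation.

```python
# Illustrative only: an exponentially weighted per-voxel running average.
# voxel_size and alpha are invented parameters, not the authors' values.
class RecursiveVoxelFilter:
    def __init__(self, voxel_size=0.01, alpha=0.3):
        self.voxel_size = voxel_size
        self.alpha = alpha          # weight given to the newest observation
        self.state = {}             # voxel index -> fused (x, y, z)

    def _key(self, p):
        return tuple(int(c // self.voxel_size) for c in p)

    def update(self, points):
        """Fuse one frame of (x, y, z) points into the voxel grid."""
        for p in points:
            k = self._key(p)
            prev = self.state.get(k)
            if prev is None:
                self.state[k] = tuple(p)
            else:                   # recursive blend: denoise, track change
                self.state[k] = tuple(self.alpha * n + (1 - self.alpha) * o
                                      for n, o in zip(p, prev))
        return list(self.state.values())
```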
Cited by: 9