
Latest Publications: 2016 Fourth International Conference on 3D Vision (3DV)

Room Layout Estimation with Object and Material Attributes Information Using a Spherical Camera
Pub Date : 2016-12-19 DOI: 10.1109/3DV.2016.83
Hansung Kim, T. D. Campos, A. Hilton
In this paper, we propose a pipeline for estimating 3D room layout, with object and material attribute prediction, from a spherical stereo image pair. We assume that the room and objects can be represented as cuboids aligned to the main axes of the room coordinate system (Manhattan world). A spherical stereo alignment algorithm is proposed to align two spherical images to the global world coordinate system. Depth information of the scene is estimated by stereo matching between the images. Cubic projection images of the spherical RGB and estimated depth are used for object and material attribute detection. A single Convolutional Neural Network is designed to assign object and attribute labels to geometrical elements built from the spherical image. Finally, a simplified room layout is reconstructed by cuboid fitting. The reconstructed cuboid-based model shows the structure of the scene together with object information and material attributes.
Citations: 15
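The cubic projection step (spherical RGB resampled onto cube faces before attribute detection) can be illustrated with a minimal equirectangular-to-cube-face resampler. The angular conventions and nearest-neighbour sampling below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def equirect_to_cube_face(equi, face_size):
    """Sample the front (+z) cube face from an equirectangular image.

    equi: H x W (or H x W x C) array; longitude spans [-pi, pi) over the
    width, latitude spans [pi/2, -pi/2] over the height (assumed layout).
    """
    H, W = equi.shape[:2]
    # Pixel grid on the face plane z = 1, with x and y in [-1, 1].
    u = np.linspace(-1, 1, face_size)
    x, y = np.meshgrid(u, -u)          # image y axis points down
    z = np.ones_like(x)
    # Ray direction -> spherical angles.
    lon = np.arctan2(x, z)             # [-pi/4, pi/4] on this face
    lat = np.arctan2(y, np.hypot(x, z))
    # Spherical angles -> equirectangular pixel coordinates (nearest).
    col = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    row = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    return equi[row, col]
```

The same mapping, applied with the five other face orientations, yields the full cubemap of the spherical RGB and depth images.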
Monocular, Real-Time Surface Reconstruction Using Dynamic Level of Detail
Pub Date : 2016-12-19 DOI: 10.1109/3DV.2016.82
J. Zienkiewicz, Akis Tsiotsios, A. Davison, Stefan Leutenegger
We present a scalable, real-time capable method for robust surface reconstruction that explicitly handles multiple scales. As a monocular camera browses a scene, our algorithm processes images as they arrive and incrementally builds a detailed surface model. While most of the existing reconstruction approaches rely on volumetric or point-cloud representations of the environment, we perform depth-map and colour fusion directly into a multi-resolution triangular mesh that can be adaptively tessellated using the concept of Dynamic Level of Detail. Our method relies on least-squares optimisation, which enables a probabilistically sound and principled formulation of the fusion algorithm. We demonstrate that our method is capable of obtaining high quality, close-up reconstruction, as well as capturing overall scene geometry, while being memory and computationally efficient.
Citations: 30
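The core of least-squares depth fusion can be pictured as an incremental weighted update per vertex: the running weighted mean is the closed-form minimiser of the accumulated squared residuals. This sketch omits the paper's multi-resolution mesh, tessellation, and colour terms:

```python
import numpy as np

class VertexFusion:
    """Incremental least-squares fusion of per-vertex depth observations.

    The running weighted mean minimises sum_i w_i * (d - o_i)^2 and can
    be updated one observation at a time (illustrative sketch only).
    """
    def __init__(self, n_vertices):
        self.depth = np.zeros(n_vertices)   # fused estimate per vertex
        self.weight = np.zeros(n_vertices)  # accumulated confidence

    def fuse(self, idx, obs, w=1.0):
        total = self.weight[idx] + w
        self.depth[idx] = (self.weight[idx] * self.depth[idx] + w * obs) / total
        self.weight[idx] = total
```

Each incoming depth map contributes one weighted observation per visible vertex, so the model refines as the camera browses the scene.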
Real-Time Surface of Revolution Reconstruction on Dense SLAM
Pub Date : 2016-12-15 DOI: 10.1109/3DV.2016.13
Liming Yang, Hideaki Uchiyama, Jean-Marie Normand, G. Moreau, H. Nagahara, R. Taniguchi
We present a fast and accurate method for reconstructing surfaces of revolution (SoR) on 3D data, and its application to structural modeling of a cluttered scene in real time. To estimate a SoR axis, we derive an approximately linear cost function for fast convergence. We also design a framework for reconstructing SoR on dense SLAM. Experimental results show that our method is accurate, robust to noise, and runs in real time.
Citations: 5
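One standard way to get a linear cost for the SoR axis (the paper derives its own approximately linear formulation) uses the fact that every surface normal of a surface of revolution meets the axis; written in Plücker coordinates, that incidence is linear in the axis parameters:

```python
import numpy as np

def estimate_sor_axis(points, normals):
    """Estimate the axis of a surface of revolution from oriented points.

    With the axis in Pluecker coordinates (d, m), m = a x d, the normal
    line through p with direction n meets the axis iff
        d . (p x n) + n . m = 0,
    which is linear in (d, m). Solve the homogeneous system by SVD.
    This is a generic linearisation, not the paper's exact cost.
    """
    A = np.hstack([np.cross(points, normals), normals])  # N x 6 system
    _, _, vt = np.linalg.svd(A)
    d, m = vt[-1, :3], vt[-1, 3:]
    m -= d * (d @ m) / (d @ d)        # re-impose the Pluecker constraint
    a = np.cross(d, m) / (d @ d)      # axis point closest to the origin
    return a, d / np.linalg.norm(d)
```

Given noisy data, the smallest singular vector still provides a least-squares axis estimate that can seed a non-linear refinement.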
3D Data Acquisition and Registration Using Two Opposing Kinects
Pub Date : 2016-10-28 DOI: 10.1109/3DV.2016.21
Vahid Soleimani, M. Mirmehdi, D. Damen, S. Hannuna, M. Camplani
We present an automatic, open source data acquisition and calibration approach using two opposing RGBD sensors (Kinect V2) and demonstrate its efficacy for dynamic object reconstruction in the context of monitoring for remote lung function assessment. First, the relative pose of the two RGBD sensors is estimated through a calibration stage and rigid transformation parameters are computed. These are then used to align and register point clouds obtained from the sensors at frame level. We validated the proposed system by performing experiments on known-size box objects with the results demonstrating accurate measurements. We also report on dynamic object reconstruction by way of human subjects undergoing respiratory functional assessment.
Citations: 13
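The rigid transformation parameters computed in the calibration stage can, given point correspondences between the two sensors, be solved with the classic Kabsch/SVD least-squares solution. A generic sketch, not the released code:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Classic Kabsch/Umeyama solution on corresponding N x 3 point sets:
    centre both clouds, take the SVD of the cross-covariance, and fix
    the sign so R is a proper rotation.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```

Applying `R, t` to one sensor's point cloud aligns it with the other at frame level, as in the registration step described above.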
Cotemporal Multi-View Video Segmentation
Pub Date : 2016-10-25 DOI: 10.1109/3DV.2016.45
Abdelaziz Djelouah, Jean-Sébastien Franco, Edmond Boyer, P. Pérez, G. Drettakis
We address the problem of multi-view video segmentation of dynamic scenes in general and outdoor environments with possibly moving cameras. Multi-view methods for dynamic scenes usually rely on geometric calibration to impose spatial shape constraints between viewpoints. In this paper, we show that the calibration constraint can be relaxed while still getting competitive segmentation results using multi-view constraints. We introduce new multi-view cotemporality constraints through motion correlation cues, in addition to common appearance features used by co-segmentation methods to identify co-instances of objects. We also take advantage of learning based segmentation strategies by casting the problem as the selection of monocular proposals that satisfy multi-view constraints. This yields a fully automated method that can segment subjects of interest without any particular pre-processing stage. Results on several challenging outdoor datasets demonstrate the feasibility and robustness of our approach.
Citations: 7
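A motion-correlation cue between two candidate regions in different views can be as simple as the normalised correlation of their motion-magnitude time series: co-instances of the same moving object move in sync even without calibration. This is a simplified stand-in for the paper's cotemporality constraints:

```python
import numpy as np

def motion_correlation(sig_a, sig_b):
    """Normalised (zero-mean) correlation of two motion-magnitude signals.

    Values near 1 suggest the two regions are co-instances of one moving
    object; near 0 suggests unrelated motion. Illustrative cue only.
    """
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

Such scores can then weight which monocular segmentation proposals are mutually consistent across views.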
Computing Temporal Alignments of Human Motion Sequences in Wide Clothing Using Geodesic Patches
Pub Date : 2016-10-25 DOI: 10.1109/3DV.2016.27
Aurela Shehu, Jinlong Yang, Jean-Sébastien Franco, Franck Hétroy-Wheeler, S. Wuhrer
In this paper, we address the problem of temporal alignment of surfaces for subjects dressed in wide clothing, as acquired by calibrated multi-camera systems. Most existing methods solve the alignment by fitting a single surface template to each instant's 3D observations, relying on a dense point-to-point correspondence scheme, e.g. by matching individual surface points based on local geometric features or proximity. The wide clothing situation yields more geometric and topological difficulties in observed sequences, such as apparent merging of surface components, misreconstructions, and partial surface observation, resulting in overly sparse, erroneous point-to-point correspondences, and thus alignment failures. To resolve these issues, we propose an alignment framework where point-to-point correspondences are obtained by growing isometric patches from a set of reliably obtained body landmarks. This correspondence decreases the reliance on local geometric features subject to instability, instead emphasizing the surface neighborhood coherence of matches, while improving density given sufficient landmark coverage. We validate and verify the resulting improved alignment performance in our experiments.
Citations: 3
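Growing a geodesic patch from a landmark can be sketched as Dijkstra expansion over the mesh edge graph up to a geodesic radius; `adjacency` below is an assumed mesh-graph representation, and the paper's isometric matching between patches is not shown:

```python
import heapq

def grow_patch(adjacency, seed, radius):
    """Collect all vertices within geodesic distance `radius` of `seed`.

    adjacency: dict vertex -> list of (neighbour, edge_length).
    Dijkstra over the edge graph approximates geodesic distance, the
    basic operation behind growing patches from body landmarks.
    """
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for w, length in adjacency[v]:
            nd = d + length
            if nd <= radius and nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist
```

Because edge lengths are intrinsic, the same patch is recovered on each frame's surface, giving correspondences that do not depend on unstable local geometric features.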
Regularized 3D Modeling from Noisy Building Reconstructions
Pub Date : 2016-10-25 DOI: 10.1109/3DV.2016.62
Thomas Holzmann, F. Fraundorfer, H. Bischof
In this paper, we present a method for regularizing noisy 3D reconstructions, which is especially well suited for scenes containing planar structures like buildings. At horizontal structures, the input model is divided into slices and for each slice, an inside/outside labeling is computed. With the outlines of each slice labeling, we create an irregularly shaped volumetric cell decomposition of the whole scene. Then, an optimized inside/outside labeling of these cells is computed by solving an energy minimization problem. For the cell labeling optimization we introduce a novel smoothness term, where lines in the images are used to improve the regularization result. We show that our approach can take arbitrary dense meshed point clouds as input and delivers well regularized building models, which can be textured afterwards.
Citations: 7
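The flavour of the inside/outside labeling energy (unary data costs plus a smoothness penalty between disagreeing neighbours) can be shown on a 1D chain of cells, where dynamic programming finds the exact minimiser; the paper solves the analogous problem over its irregular volumetric cell decomposition:

```python
def label_cells(unary, smooth):
    """Optimal inside/outside labelling of a 1D row of cells.

    unary[i][l]: data cost of giving cell i label l (0=outside, 1=inside);
    smooth: penalty whenever neighbouring cells disagree. A toy 1D
    analogue of the energy minimised over the 3D cell decomposition.
    """
    n = len(unary)
    cost = [list(unary[0])]
    back = []
    for i in range(1, n):
        row, ptr = [], []
        for l in (0, 1):
            cands = [cost[-1][pl] + (smooth if pl != l else 0.0) for pl in (0, 1)]
            best = min((0, 1), key=lambda pl: cands[pl])
            row.append(cands[best] + unary[i][l])
            ptr.append(best)
        cost.append(row)
        back.append(ptr)
    labels = [min((0, 1), key=lambda l: cost[-1][l])]
    for ptr in reversed(back):          # backtrack to recover labels
        labels.append(ptr[labels[-1]])
    return labels[::-1]
```

A larger `smooth` suppresses isolated label flips, which is exactly the regularizing effect the line-based smoothness term controls in the full model.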
Real-Time Halfway Domain Reconstruction of Motion and Geometry
Pub Date : 2016-10-23 DOI: 10.1109/3DV.2016.55
Lucas Thies, M. Zollhöfer, Christian Richardt, C. Theobalt, G. Greiner
We present a novel approach for real-time joint reconstruction of 3D scene motion and geometry from binocular stereo videos. Our approach is based on a novel variational halfway-domain scene flow formulation, which allows us to obtain highly accurate spatiotemporal reconstructions of shape and motion. We solve the underlying optimization problem at real-time frame rates using a novel data-parallel robust non-linear optimization strategy. Fast convergence and large displacement flows are achieved by employing a novel hierarchy that stores delta flows between hierarchy levels. High performance is obtained by the introduction of a coarser warp grid that decouples the number of unknowns from the input resolution of the images. We demonstrate our approach in a live setup that is based on two commodity webcams, as well as on publicly available video data. Our extensive experiments and evaluations show that our approach produces high-quality dense reconstructions of 3D geometry and scene flow at real-time frame rates, and compares favorably to the state of the art.
Citations: 2
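The coarse warp grid's decoupling of unknowns from image resolution amounts to storing flow vectors at grid nodes and bilinearly interpolating them to pixels, so the optimisation size depends only on the grid. A generic sketch with assumed grid conventions:

```python
import numpy as np

def upsample_flow(coarse, out_h, out_w):
    """Bilinearly interpolate a coarse flow grid to full resolution.

    coarse: (gh, gw, 2) flow vectors at grid nodes; the output samples
    the grid uniformly, so the number of unknowns stays (gh * gw * 2)
    regardless of image resolution (illustration of the decoupling idea).
    """
    gh, gw = coarse.shape[:2]
    ys = np.linspace(0, gh - 1, out_h)
    xs = np.linspace(0, gw - 1, out_w)
    y0 = np.clip(ys.astype(int), 0, gh - 2)
    x0 = np.clip(xs.astype(int), 0, gw - 2)
    fy = (ys - y0)[:, None, None]
    fx = (xs - x0)[None, :, None]
    c00 = coarse[y0][:, x0]
    c01 = coarse[y0][:, x0 + 1]
    c10 = coarse[y0 + 1][:, x0]
    c11 = coarse[y0 + 1][:, x0 + 1]
    return (c00 * (1 - fy) * (1 - fx) + c01 * (1 - fy) * fx
            + c10 * fy * (1 - fx) + c11 * fy * fx)
```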
A Single-Shot Multi-Path Interference Resolution for Mirror-Based Full 3D Shape Measurement with a Correlation-Based ToF Camera
Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.43
S. Nobuhara, T. Kashino, T. Matsuyama, Kouta Takeuchi, K. Fujii
This paper presents a new algorithm for resolving multi-path interference in mirror-based full 3D capture using a single correlation-based ToF camera. Our algorithm does not require additional captures or device modifications, and resolves the interference from a single ToF measurement that is also used for the 3D reconstruction. Evaluations on real images validate the proposed algorithm both qualitatively and quantitatively.
Citations: 6
Proceduralization for Editing 3D Architectural Models
Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.28
Ilke Demir, Daniel G. Aliaga, Bedrich Benes
Inverse procedural modeling discovers a procedural representation of an existing geometric model and the discovered procedural model then supports synthesizing new similar models. We introduce an automatic approach that generates a compact, efficient, and re-usable procedural representation of a polygonal 3D architectural model. This representation is then used for structure-aware editing and synthesis of new geometric models that resemble the original. Our framework captures the pattern hierarchy of the input model into a split tree data representation. A context-free split grammar, supporting a hierarchical nesting of procedural rules, is extracted from the tree, which establishes the base of our interactive procedural editing engine. We show the application of our approach to a variety of architectural structures obtained by procedurally editing web-sourced models. The grammar generation takes a few minutes even for the most complex input and synthesis is fully interactive for buildings composed of up to 200k polygons.
Citations: 29
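A context-free split grammar with hierarchically nested rules can be illustrated in a few lines: each nonterminal splits into child symbols until only terminal shapes remain. The symbols and rules below are invented for illustration, not extracted from any real model:

```python
def derive(symbol, rules):
    """Expand a context-free split grammar into its leaf layout.

    rules maps a nonterminal to the list of child symbols its split
    rule produces; symbols without a rule are terminal shapes. A toy
    stand-in for the hierarchical rule nesting a procedural editing
    engine operates on.
    """
    if symbol not in rules:
        return [symbol]
    leaves = []
    for child in rules[symbol]:
        leaves.extend(derive(child, rules))
    return leaves

# Hypothetical two-floor facade; each floor splits into wall/window/wall.
rules = {
    "facade": ["floor", "floor"],
    "floor": ["wall", "window", "wall"],
}
```

Editing then means changing a rule (e.g. the number of `floor` children) and re-deriving, which is what makes the representation compact and re-usable.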