
Latest publications from the 2014 2nd International Conference on 3D Vision

Non-rigid Registration Meets Surface Reconstruction
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.80
Mohammad Rouhani, Edmond Boyer, A. Sappa
Non-rigid registration is an important task in computer vision with many applications in shape and motion modeling. A fundamental step of the registration is the data association between the source and the target sets. Such association proves difficult in practice, due to the discrete nature of the information and its corruption by various types of noise, e.g., outliers and missing data. In this paper we investigate the benefit of implicit representations for the non-rigid registration of 3D point clouds. First, the target points are described with small quadratic patches that are blended through partition-of-unity weighting. Then, the discrete association between the source and the target can be replaced by a continuous distance field induced by the interface. By combining this distance field with a proper deformation term, the registration energy can be expressed in a linear least-squares form that is easy and fast to solve. This significantly eases the registration by avoiding direct association between points. Moreover, a hierarchical approach can be easily implemented by employing coarse-to-fine representations. Experimental results are provided for point clouds from multi-view data sets. The qualitative and quantitative comparisons show the superior performance and robustness of our framework.
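The central idea of the abstract, replacing discrete point association with a continuous distance field blended from local patches via partition-of-unity weights, can be sketched as follows. This is a minimal illustration with invented names and Gaussian weights; planar patches stand in for the paper's quadratic ones.

```python
import math

def blended_distance(x, patches, sigma=0.5):
    """Signed distance to an implicit surface built from local patches,
    blended with partition-of-unity (normalized Gaussian) weights.
    Each patch is a (center, normal) pair approximating the surface
    locally; the weights sum to one by construction of the division."""
    num, den = 0.0, 0.0
    for c, n in patches:
        # signed distance of x to the patch's tangent plane
        d = sum((xi - ci) * ni for xi, ci, ni in zip(x, c, n))
        # Gaussian weight by proximity to the patch center
        w = math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * sigma ** 2))
        num += w * d
        den += w
    return num / den if den > 0 else 0.0

# Two horizontal patches on the plane z = 0: the blended distance of any
# query point reduces to its z-coordinate.
patches = [((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
           ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
```

Because the field is differentiable in the source point positions, a registration energy built on it admits the linear least-squares treatment the abstract describes.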
Citations: 8
A Scalable 3D HOG Model for Fast Object Detection and Viewpoint Estimation
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.82
M. Pedersoli, T. Tuytelaars
In this paper we present a scalable way to learn and detect objects using a 3D representation based on HOG patches placed on a 3D cuboid. The model consists of a single 3D representation that is shared among views. Similarly to the work of Fidler et al. [5], at detection time this representation is projected on the image plane over the desired viewpoints. However, whereas in [5] the projection is done at image-level and therefore the computational cost is linear in the number of views, in our model every view is approximated at feature level as a linear combination of the pre-computed fronto-parallel views. As a result, once the fronto-parallel views have been computed, the cost of computing new views is almost negligible. This allows the model to be evaluated on many more viewpoints. In the experimental results we show that the proposed model has a comparable detection and pose estimation performance to standard multiview HOG detectors, but it is faster, it scales very well with the number of views and can better generalize to unseen views. Finally, we also show that with a procedure similar to label propagation it is possible to train the model even without using pose annotations at training time.
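The feature-level approximation described above, expressing a new view's features as a linear combination of pre-computed basis views, amounts to a small least-squares fit per view. The sketch below is illustrative only (tiny dense vectors instead of HOG maps, two basis views, normal equations solved by Cramer's rule).

```python
def view_coefficients(basis, target):
    """Least-squares coefficients (a1, a2) minimizing
    ||a1*b1 + a2*b2 - target||^2 for two basis feature views.
    Solves the 2x2 normal equations by Cramer's rule."""
    b1, b2 = basis
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    r1, r2 = dot(b1, target), dot(b2, target)
    det = a11 * a22 - a12 * a12
    return ((a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det)
```

Once the coefficients are known for each desired viewpoint, evaluating that viewpoint costs only a weighted sum of already-computed responses, which is why adding views is nearly free.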
Citations: 6
A Decision-Theoretic Formulation for Sparse Stereo Correspondence Problems
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.34
T. Botterill, R. Green, S. Mills
Stereo reconstruction is challenging in scenes with many similar-looking objects, as matches between features are often ambiguous. Features matched incorrectly lead to an incorrect 3D reconstruction, whereas if correct matches are missed, the reconstruction will be incomplete. Previous systems for selecting a correspondence (set of matched features) select either a maximum likelihood correspondence, which may contain many incorrect matches, or use some heuristic for discarding ambiguous matches. In this paper we propose a new method for selecting a correspondence: we select the correspondence which minimises an expected loss function. Match probabilities are computed by Gibbs sampling, then the minimum expected loss correspondence is selected based on these probabilities. A parameter of the loss function controls the trade-off between selecting incorrect matches versus missing correct matches. The proposed correspondence selection method is evaluated in a model-based framework for reconstructing branching plants, and on simulated data. In both cases it outperforms alternative approaches in terms of precision and recall, giving more complete and accurate 3D models.
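The minimum-expected-loss selection step has a simple per-candidate form once marginal match probabilities are available: accept a match exactly when the expected loss of accepting is below that of rejecting. The sketch below treats candidates independently with hypothetical cost names, whereas the paper optimizes over full correspondences; the threshold rule is only the decision-theoretic core.

```python
def select_matches(probs, false_match_cost=1.0, missed_match_cost=1.0):
    """Accept a candidate match when accepting has lower expected loss
    than rejecting:  (1 - p) * false_match_cost  <  p * missed_match_cost,
    i.e. p > false_match_cost / (false_match_cost + missed_match_cost).
    `probs` maps candidate (source, target) pairs to marginal match
    probabilities (estimated by Gibbs sampling in the paper)."""
    threshold = false_match_cost / (false_match_cost + missed_match_cost)
    return {pair for pair, p in probs.items() if p > threshold}
```

Raising `missed_match_cost` lowers the acceptance threshold, trading more spurious matches for fewer missed ones, which is the trade-off the loss-function parameter controls.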
Citations: 8
Probabilistic Phase Unwrapping for Single-Frequency Time-of-Flight Range Cameras
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.89
Ryan Crabb, R. Manduchi
This paper proposes a solution to the 2-D phase unwrapping problem, inherent to time-of-flight range sensing technology due to the cyclic nature of phase. Our method uses a single-frequency capture period to improve frame rate and decrease the presence of motion artifacts encountered in multiple-frequency solutions. We present a probabilistic framework that considers the intensity image in addition to the phase image. The phase unwrapping problem is cast as the global optimization of a carefully chosen objective function. Comparative experimental results confirm the effectiveness of the proposed approach.
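The ambiguity the paper resolves comes from the fact that a single modulation frequency determines range only modulo c/(2f). The sketch below enumerates the range hypotheses consistent with one wrapped phase measurement; picking the right wrap count per pixel is what the paper's probabilistic optimization does.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def candidate_ranges(phase, freq_hz, max_range):
    """Range hypotheses for a wrapped ToF phase measurement phase in [0, 2*pi).
    The ambiguity interval is c / (2 * f); each integer wrap count k adds one
    interval to the base range."""
    ambiguity = C / (2.0 * freq_hz)
    base = phase / (2.0 * math.pi) * ambiguity
    out, k = [], 0
    while base + k * ambiguity <= max_range:
        out.append(base + k * ambiguity)
        k += 1
    return out

# At 30 MHz the ambiguity interval is about 5 m, so a 12 m scene admits
# two or three hypotheses per pixel.
cands = candidate_ranges(math.pi, 30e6, 12.0)
```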
Citations: 11
Tackling Shapes and BRDFs Head-On
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.81
Stamatios Georgoulis, M. Proesmans, L. Gool
In this work, we investigate the use of simple flash-based photography to capture an object's 3D shape and reflectance characteristics at the same time. The presented method is based on the principles of Structure from Motion (SfM) and Photometric Stereo (PS); yet, we make sure to use nothing beyond readily available consumer equipment, like a camera with flash. Starting from an SfM-generated mesh, we apply PS to refine both geometry and reflectance, where the latter is expressed in terms of data-driven Bidirectional Reflectance Distribution Function (BRDF) representations. We also introduce a novel approach to infer complete BRDFs starting from the sparsely sampled data-driven reflectance information captured with this setup. Our approach is experimentally validated by modeling several challenging objects, both synthetic and real.
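For context, the textbook core of the Photometric Stereo step, which the paper extends to non-Lambertian, data-driven BRDFs, is a per-pixel linear solve under the Lambertian model I_j = rho * (l_j . n). This sketch is only that classic baseline, not the paper's refinement pipeline.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[b[i] if k == j else A[i][k] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def photometric_stereo(lights, intensities):
    """Lambertian photometric stereo for one pixel with three lights:
    solving L g = I gives g = rho * n, so albedo = |g|, normal = g / |g|."""
    g = solve3(lights, intensities)
    rho = math.sqrt(sum(v * v for v in g))
    return rho, [v / rho for v in g]
```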
Citations: 10
Automated gbXML-Based Building Model Creation for Thermal Building Simulation
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.109
Chao Wang, Y. Cho
In the area of as-is BIM creation from point clouds, researchers have begun to explore the potential of using point clouds as 3D scenes of the construction job site to monitor construction progress and safety. However, little contribution has been made in the AEC/FM domain to assist the decision-making process for building retrofit and renovation. This paper presents a method for automatic gbXML-based building model generation from a thermal point cloud. Through the proposed geometry extraction and thermal resistance value estimation techniques, the size and thermal information of the building envelope components are automatically obtained in order to quickly generate building models ready for energy performance simulation. The registered point cloud of a residential house was used as a case study to validate the proposed method.
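As background to the thermal-resistance estimation step, one common simplified in-situ approach (not necessarily the paper's estimator) derives an R-value from interior air, exterior air and interior surface temperatures by approximating the envelope heat flux with the interior film flux. The film coefficient value below is an assumed textbook default.

```python
def estimate_r_value(t_in, t_out, t_surface_in, h_in=8.29):
    """Rough steady-state R-value (m^2*K/W) of a wall assembly:
    heat flux q is approximated by the interior film exchange
    q = h_in * (t_in - t_surface_in), so R = (t_in - t_out) / q.
    h_in is an assumed combined interior film coefficient (W/m^2K)."""
    q = h_in * (t_in - t_surface_in)
    return (t_in - t_out) / q
```

A thermal point cloud supplies `t_surface_in` per envelope surface, which is how temperature data can be turned into the per-component thermal attributes a gbXML model carries.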
Citations: 7
Real-Time Direct Dense Matching on Fisheye Images Using Plane-Sweeping Stereo
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.77
Christian Häne, Lionel Heng, Gim Hee Lee, A. Sizov, M. Pollefeys
In this paper, we propose an adaptation of camera projection models for fisheye cameras into the plane-sweeping stereo matching algorithm. This adaptation allows us to do plane-sweeping stereo directly on fisheye images. Our approach also works for other non-pinhole cameras such as omnidirectional and catadioptric cameras when using the unified projection model. Despite the simplicity of our proposed approach, we are able to obtain full, good-quality and high-resolution depth maps from the fisheye images. To verify our approach, we show experimental results based on depth maps generated by our approach, and dense models produced from these depth maps.
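The unified projection model referenced above maps a 3D point onto the unit sphere and then pinhole-projects it from a center offset by the mirror parameter xi. A minimal sketch (intrinsics and distortion omitted; the full model would follow this with the camera matrix):

```python
import math

def unified_project(point, xi):
    """Unified (sphere) projection model: normalize the point onto the unit
    sphere, then project from a center shifted by xi along the optical axis.
    xi = 0 reduces to a plain pinhole camera; larger xi models fisheye and
    catadioptric optics."""
    x, y, z = point
    rho = math.sqrt(x * x + y * y + z * z)
    return (x / (z + xi * rho), y / (z + xi * rho))
```

Plane-sweeping stereo then warps candidate depth planes through this projection for each camera, instead of through the pinhole homography used with perspective images.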
Citations: 85
Matching Features Correctly through Semantic Understanding
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.15
Nikolay Kobyshev, Hayko Riemenschneider, L. Gool
Image-to-image feature matching is the single most restrictive time bottleneck in any matching pipeline. We propose two methods for improving the speed and quality by employing semantic scene segmentation. First, we introduce a way of capturing the semantic scene context of a key point into a compact description. Second, we propose to learn the correct matchability of descriptors from these semantic contexts. Finally, we further reduce the complexity of matching to only a pre-computed set of semantically close key points. All methods can be used independently and in the evaluation we show combinations for maximum speed benefits. Overall, our proposed methods outperform all baselines and provide significant improvements in accuracy and an order-of-magnitude speedup in key point matching.
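The simplest form of the pruning idea above, letting semantics restrict the candidate set before descriptor comparison, can be sketched as nearest-neighbour matching limited to key points that share a semantic label. This toy version uses dense descriptor tuples and exact labels; the paper's compact semantic descriptions and learned matchability are more refined.

```python
def semantic_matches(desc_a, desc_b, labels_a, labels_b):
    """Nearest-neighbour matching from image A to image B, restricted to
    key points with the same semantic label. Returns (index_a, index_b)
    pairs; key points with no same-label candidate are skipped."""
    out = []
    for i, (da, la) in enumerate(zip(desc_a, labels_a)):
        cands = [(sum((u - v) ** 2 for u, v in zip(da, db)), j)
                 for j, (db, lb) in enumerate(zip(desc_b, labels_b)) if lb == la]
        if cands:
            out.append((i, min(cands)[1]))
    return out
```

Restricting candidates this way shrinks both the number of distance computations and the chance of matching, say, a window key point to a visually similar patch of sky.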
Citations: 28
Variational Regularization and Fusion of Surface Normal Maps
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.92
Bernhard Zeisl, C. Zach, M. Pollefeys
In this work we propose an optimization scheme for variational, vectorial denoising and fusion of surface normal maps. These are common outputs of shape-from-shading, photometric stereo or single-image reconstruction methods, but they tend to be noisy and require post-processing for further usage. Processing of normal maps, which do not provide knowledge about the underlying scene depth, is complicated by their unit-length constraint, which renders the optimization non-linear and non-convex. The presented approach builds upon a linearization of the constraint to obtain a convex relaxation, while guaranteeing convergence. Experimental results demonstrate that our algorithm generates more consistent representations from estimated and potentially complementary normal maps.
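As a point of reference for the fusion task, the naive per-pixel baseline is to average the unit vectors from each input map and re-project the result onto the unit sphere. This toy stand-in ignores the paper's contribution, the convex relaxation obtained by linearizing the unit-length constraint inside a variational energy, but shows the data being operated on.

```python
import math

def fuse_normals(maps):
    """Per-pixel fusion of several normal maps (lists of unit 3-vectors):
    sum the vectors and renormalize. Equal-weight averaging with projection
    back onto the unit sphere; no spatial regularization."""
    fused = []
    for pix in zip(*maps):
        s = [sum(c) for c in zip(*pix)]
        norm = math.sqrt(sum(v * v for v in s)) or 1.0
        fused.append(tuple(v / norm for v in s))
    return fused
```

The renormalization step is exactly the non-convexity the paper's linearized constraint avoids inside the optimization.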
Citations: 10
Surface Detection Using Round Cut
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.60
Vedrana Andersen Dahl, A. Dahl, R. Larsen
We propose an iterative method for detecting closed surfaces in volumetric data, where an optimal search is performed in a graph built upon a triangular mesh. Our approach is based on previous techniques for detecting an optimal terrain-like or tubular surface employing a regular grid. Unlike similar adaptations for triangle meshes, our method is capable of capturing complex geometries by iteratively refining the surface, where we obtain a high level of robustness by applying explicit mesh processing to intermediate results. Our method uses on-surface data support, but it also exploits data information about the region inside and outside the surface. This provides additional robustness to the algorithm. We demonstrate the capabilities of the approach by detecting surfaces of CT-scanned objects.
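The "optimal terrain-like surface" search the method builds on chooses one offset per column of the graph so that summed data cost is minimal while neighbouring offsets stay within a smoothness bound. On a 1-D chain this reduces to dynamic programming, sketched below as a toy illustration; the paper's round-cut construction generalizes the idea to graphs over triangle meshes via min-cut.

```python
def optimal_column_surface(costs, max_shift=1):
    """Pick one offset per column of a chain minimizing total data cost,
    with |offset[i] - offset[i+1]| <= max_shift, by dynamic programming.
    costs[i][j] is the data cost of choosing offset j in column i."""
    n, m = len(costs), len(costs[0])
    dp, back = [list(costs[0])], []
    for i in range(1, n):
        row, brow = [], []
        for j in range(m):
            # cheapest compatible offset in the previous column
            window = range(max(0, j - max_shift), min(m, j + max_shift + 1))
            arg = min(window, key=lambda k: dp[-1][k])
            row.append(dp[-1][arg] + costs[i][j])
            brow.append(arg)
        dp.append(row)
        back.append(brow)
    j = min(range(m), key=lambda j: dp[-1][j])
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return path[::-1]
```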
Citations: 5