
Latest publications: 2014 2nd International Conference on 3D Vision

3D Liver Vessel Reconstruction from CT Images
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.96
Xing-Chen Pan, Hong-Ren Su, S. Lai, Kai-Che Liu, Hurng-Sheng Wu
We propose a novel framework for reconstructing a 3D liver vessel model from CT images. The proposed algorithm consists of vessel detection, vessel tree reconstruction and vessel radius estimation. First, we employ a tubular-filter-based approach to detect the vessel structure and construct a minimum spanning tree to bridge all the gaps between vessels. Then, we propose an approach to estimate the vessel radius at all vessel centerline voxels based on local patch descriptors. The proposed 3D vessel reconstruction system provides a detailed 3D liver vessel model very efficiently. Our experimental results demonstrate the accuracy of the proposed system for 3D liver vessel reconstruction from 3D CT images.
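The tree-reconstruction step can be sketched with standard tools: a minimum spanning tree over pairwise distances of detected centerline points links every vessel fragment, bridging the gaps left by the tubular filter. A minimal sketch under assumed inputs (the point array and function name are illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def bridge_vessel_gaps(centerline_points):
    """Connect detected vessel centerline points into a single tree.

    A minimum spanning tree over pairwise Euclidean distances links all
    fragments, bridging gaps left by the tubular filter (a stand-in for
    the paper's vessel-tree reconstruction step).
    """
    d = cdist(centerline_points, centerline_points)  # dense pairwise distances
    mst = minimum_spanning_tree(d).toarray()
    # Edges (i, j) of the tree; nonzero entries hold the kept distances.
    edges = [(i, j) for i, j in zip(*np.nonzero(mst))]
    return edges

# Two vessel fragments with a gap between them:
pts = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0],   # fragment A
                [5., 0, 0], [6, 0, 0]])             # fragment B
edges = bridge_vessel_gaps(pts)
assert len(edges) == len(pts) - 1  # spanning tree over n points: n-1 edges
```

With the two collinear fragments above, the tree keeps the intra-fragment links and adds the single gap-bridging edge between points 2 and 3.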
Citations: 2
Automatic Extraction of Moving Objects from Image and LIDAR Sequences
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.94
Jizhou Yan, Dongdong Chen, Heesoo Myeong, Takaaki Shiratori, Yi Ma
Detecting and segmenting moving objects in an image sequence has always been a crucial task for many computer vision applications. This task becomes especially challenging for real-world image sequences of busy street scenes, where moving objects are ubiquitous. Although effective and scalable image-based moving object detection remains technologically elusive, modern street-side imagery is often augmented with sparse point clouds captured by depth sensors. This paper develops a simple but effective system for moving object detection that fully harnesses the complementary nature of 2D images and 3D LIDAR point clouds. We demonstrate how moving objects can be detected much more easily and reliably with sparse 3D measurements, and how such information can significantly improve the segmentation of moving objects in image sequences. The result of our system is a highly accurate "joint segmentation" of the 2D images and 3D points of all moving objects in street scenes, which can serve many subsequent tasks such as object removal in images, 3D reconstruction and rendering.
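The projection step underlying such 2D/3D fusion can be sketched as follows: sparse LIDAR points, with hypothetical per-point moving/static labels from 3D motion analysis, are projected into the image through a pinhole model to seed the 2D segmentation. Names and the exact interface are illustrative, not the paper's:

```python
import numpy as np

def project_lidar_points(points, labels, K, R, t, width, height):
    """Project sparse LIDAR points into an image to seed 2D segmentation.

    `labels` marks points already classified as moving in 3D (a hypothetical
    stand-in for the paper's 3D motion cues). Returns pixel coordinates and
    labels for the points that land inside the image.
    """
    cam = (R @ points.T + t.reshape(3, 1)).T        # world -> camera frame
    in_front = cam[:, 2] > 0                        # keep points ahead of camera
    cam, lab = cam[in_front], labels[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective divide
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv[inside], lab[inside]

K = np.array([[500., 0, 320], [0, 500, 240], [0, 0, 1]])
pts = np.array([[0., 0, 10], [0, 0, -5]])           # second point lies behind the camera
uv, lab = project_lidar_points(pts, np.array([1, 0]), K, np.eye(3), np.zeros(3), 640, 480)
```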
Citations: 13
Reconstruction of Inextensible Surfaces on a Budget via Bootstrapping
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.98
Alex Locher, Lennart Elsen, X. Boix, L. Gool
Many methods for 3D reconstruction of deformable surfaces from a monocular view rely on inextensibility constraints. An interesting application with commercial potential is augmented reality on portable and wearable devices. Such applications add an additional challenge to the 3D reconstruction, since on portable platforms the availability of resources is limited and not always guaranteed. Towards this goal, we introduce a method to deliver the best possible 3D reconstruction of the deformable surface at any time. Since computational resources may vary, it is decided on the fly when to stop the reconstruction algorithm. We use an efficient optimization method to quickly deliver the reconstructed surface, and introduce bootstrapping to improve the robustness of the efficient 3D reconstruction algorithm by merging multiple versions of the reconstructed surface. These multiple 3D surfaces can also be used to estimate the confidence of the reconstruction. In a series of experiments, on both synthetic and real data, we show that our method is effective for the timely reconstruction of 3D surfaces.
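The merging of multiple bootstrapped surface versions, and the confidence estimate derived from their agreement, can be illustrated with a simple depth-map sketch (the median merge and spread-based confidence are our assumptions; the paper's actual scheme may differ):

```python
import numpy as np

def merge_reconstructions(depth_maps):
    """Merge bootstrapped surface estimates and score their agreement.

    Each entry of `depth_maps` is one version of the reconstructed surface
    (represented as a depth map here, for simplicity). The per-pixel median
    gives a robust merged surface; the per-pixel standard deviation serves
    as a simple confidence proxy (low spread = high confidence). A minimal
    sketch of the merging idea, not the paper's exact scheme.
    """
    stack = np.stack(depth_maps)          # (n_versions, H, W)
    merged = np.median(stack, axis=0)
    confidence = 1.0 / (1.0 + stack.std(axis=0))
    return merged, confidence

versions = [np.full((2, 2), 1.0), np.full((2, 2), 1.0),
            np.array([[1.0, 5.0], [1.0, 1.0]])]   # one outlier estimate
merged, conf = merge_reconstructions(versions)
```

The outlier pixel is voted down by the median, and its confidence drops relative to the pixels where all versions agree.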
Citations: 1
Multi-view Photometric Stereo by Example
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.63
J. Ackermann, Fabian Langguth, Simon Fuhrmann, Arjan Kuijper, M. Goesele
We present a novel multi-view photometric stereo technique that recovers the surface of textureless objects with unknown BRDF and lighting. The camera and light positions are allowed to vary freely and change in each image. We exploit orientation consistency between the target and an example object to develop a consistency measure. Motivated by the fact that normals can be recovered more reliably than depth, we represent our surface as both a depth map and a normal map. These maps are jointly optimized and allow us to formulate constraints on depth that take surface orientation into account. Our technique does not require the visual hull or stereo reconstructions for bootstrapping, and solely exploits image intensities without the need for radiometric camera calibration. We present results on real objects with varying degrees of specularity and show that these can be used to create globally consistent models from multiple views.
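Orientation consistency can be illustrated with a minimal nearest-neighbour sketch: under the same (unknown) lighting and a similar BRDF, surface points with the same orientation produce similar intensity sequences across images, so a target pixel can borrow the normal of the best-matching example pixel. Array shapes and names are illustrative assumptions:

```python
import numpy as np

def normals_by_example(target_obs, example_obs, example_normals):
    """Transfer normals from an example object via orientation consistency.

    target_obs:  (Nt, M) intensities of target pixels over M images
    example_obs: (Ne, M) intensities of example pixels over the same images
    example_normals: (Ne, 3) known normals of the example object

    For each target pixel, pick the example pixel whose intensity vector is
    closest and copy its normal. A minimal sketch of the idea.
    """
    d = ((target_obs[:, None, :] - example_obs[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return example_normals[nearest]

example_obs = np.array([[0.9, 0.1], [0.2, 0.8]])
example_normals = np.array([[0., 0., 1.], [1., 0., 0.]])
target_obs = np.array([[0.85, 0.15]])      # resembles the first example pixel
n = normals_by_example(target_obs, example_obs, example_normals)
```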
Citations: 10
A Structure from Motion Approach for the Analysis of Adhesions in Rotating Vessels
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.38
P. Waibel, J. Matthes, L. Gröll, H. Keller
While processing material in rotating vessels such as rotary kilns, adhesions can form on the inner vessel wall. Large adhesions usually affect the process negatively and need to be prevented. Online detection and analysis of adhesions inside the vessel during operation could allow the process control to deploy counter-measures that prevent additional adhesions or reduce the adhesions' sizes. In this paper, we present a new method that enables image-based online detection, tracking and characterization of adhesions inside a rotating vessel. Our algorithm exploits the rotational movement of adhesions in a structure from motion approach, which allows the positions and heights of adhesions to be measured with a single camera. The applicability of our method is shown by means of image sequences from a rotating vessel model as well as from an industrially used cement rotary kiln.
Citations: 2
Direct Optimization of T-Splines Based on Multiview Stereo
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.42
Thomas Morwald, Jonathan Balzer, M. Vincze
We propose a multi-view stereo reconstruction method in which the surface is represented by CAD-compatible T-splines. Our method hinges on the principle of isogeometric analysis, formulating an energy functional that can be computed directly in terms of the T-spline basis. Paying attention to the idiosyncrasies of this basis, we derive an analytic formula for the gradient of the functional, which is then used in photo-consistency optimization. The number of degrees of freedom our model requires is drastically reduced compared to the state of the art. Gains in efficiency can firstly be attributed to the fact that T-splines are particularly suited for adaptive refinement. Secondly, evaluation of the proposed energy functional is highly parallelizable, as demonstrated by means of a T-spline-specific GPU implementation. Our experiments indicate the superiority of T-spline surfaces over the widely used triangular meshes in terms of memory efficiency and numerical stability, without relying on dedicated regularizers.
Citations: 5
Colour Helmholtz Stereopsis for Reconstruction of Complex Dynamic Scenes
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.59
Nadejda Roubtsova, Jean-Yves Guillemaut
Helmholtz Stereopsis (HS) is a powerful technique for the reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e., two images with the camera and light-source positions mutually interchanged). In this paper, we propose colour HS, a novel variant of the technique based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed pipeline for colour HS uniquely combines a tailored photometric calibration for multiple camera/light-source pairs, a novel procedure for surface chromaticity calibration, and state-of-the-art Bayesian HS suitable for reconstruction from a minimal number of reciprocal pairs. Experimental results, including quantitative and qualitative evaluation, demonstrate that the method is suitable for flexible (single-shot) reconstruction of static scenes and for the reconstruction of dynamic scenes with complex surface reflectance properties.
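For reference, the per-pair constraint that HS exploits comes from the original monochrome formulation of Helmholtz stereopsis; the notation below is ours:

```latex
% For a surface point p with unit normal n, reciprocal camera/light
% centres o_l, o_r, unit directions v_l, v_r from p towards them, and
% measured intensities i_l, i_r, Helmholtz reciprocity of the BRDF gives
w(p) \cdot n = 0,
\qquad
w(p) \;=\; i_l \, \frac{v_l}{\lVert o_l - p \rVert^{2}}
      \;-\; i_r \, \frac{v_r}{\lVert o_r - p \rVert^{2}} .
```

Because the BRDF cancels out, each reciprocal pair yields one linear constraint on the normal $n$ regardless of the surface's reflectance, which is what makes reconstruction of arbitrary materials possible.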
Citations: 8
Hashing Cross-Modal Manifold for Scalable Sketch-Based 3D Model Retrieval
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.72
T. Furuya, Ryutarou Ohbuchi
This paper proposes a novel sketch-based 3D model retrieval algorithm that is scalable as well as accurate. Accuracy is achieved by a combination of (1) a set of state-of-the-art visual features for comparing sketches and 3D models, and (2) an efficient algorithm to learn data-driven similarity across the heterogeneous domains of sketches and 3D models. For the latter, we adopted the algorithm [18] by Furuya et al., which fuses, for more accurate similarity computation, three kinds of similarities, i.e., those among sketches, those among 3D models, and those between sketches and 3D models. While the algorithm by Furuya et al. [18] does improve accuracy, it does not scale. We accelerate, without loss of accuracy, the retrieval-result ranking stage of [18] by embedding its cross-modal similarity graph into Hamming space. The embedding is performed by a combination of spectral embedding and hashing into compact binary codes. Experiments show that our proposed algorithm is more accurate and much faster than previous sketch-based 3D model retrieval algorithms.
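The embed-then-hash idea can be sketched as follows: the smallest nontrivial eigenvectors of the similarity graph's normalized Laplacian give a spectral embedding, and taking their signs yields compact binary codes compared by Hamming distance. This is a minimal illustration of the principle, not the exact pipeline of [18] or this paper:

```python
import numpy as np

def spectral_hash(W, n_bits):
    """Embed a similarity graph and binarize it to compact Hamming codes.

    W is a symmetric affinity matrix over all items (here it would cover
    sketches and 3D models jointly). The smallest nontrivial eigenvectors
    of the normalized graph Laplacian give a spectral embedding; taking
    their signs yields binary codes whose Hamming distance approximates
    proximity on the graph.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # combinatorial Laplacian
    Dinv = np.diag(1.0 / np.sqrt(d))
    Lsym = Dinv @ L @ Dinv                   # normalized Laplacian
    vals, vecs = np.linalg.eigh(Lsym)        # eigenvalues in ascending order
    emb = vecs[:, 1:1 + n_bits]              # skip the trivial eigenvector
    return (emb >= 0).astype(np.uint8)

def hamming(a, b):
    return int((a != b).sum())

# Two tight clusters joined by weak cross-links:
W = np.array([[0, 1, .01, .01], [1, 0, .01, .01],
              [.01, .01, 0, 1], [.01, .01, 1, 0.]])
codes = spectral_hash(W, 1)
```

With a single bit, the Fiedler vector separates the two clusters: items in the same cluster receive identical codes, items in different clusters differ.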
Citations: 12
MCOV: A Covariance Descriptor for Fusion of Texture and Shape Features in 3D Point Clouds
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.11
Pol Cirujeda, Xavier Mateo, Yashin Dicente Cid, Xavier Binefa
In this paper we propose MCOV, a covariance-based descriptor for the fusion of shape and color information of 3D surfaces with associated texture, aiming at robust characterization and matching of areas in 3D point clouds. The proposed descriptor is based on the notion of covariance in order to create compact representations of the variations of texture and surface features in a radial neighbourhood, instead of using the absolute features themselves. Although this representation is compact and low-dimensional, it still offers discriminative power for complex scenes. Encoding feature variations in the close environment of a point provides invariance to rigid spatial transformations and robustness to changes in noise and scene resolution, from a simple formulation perspective. Results on 3D point discrimination are validated by testing the performance of this approach on a selected database, corroborating its adequacy under the posed challenging conditions and outperforming other state-of-the-art 3D point descriptor methods. A qualitative test application on matching objects in scenes acquired with a common depth-sensor device is also provided.
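The covariance idea can be sketched in a few lines: per-point feature vectors (e.g. surface normal and color channels stacked) within a radial neighbourhood are summarized by their covariance matrix, and descriptors are compared with a distance on the manifold of SPD matrices. The feature choice and the log-Euclidean distance below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance of per-point features within a radial neighbourhood.

    `features` is (n_points, d): each row stacks, e.g., the surface-normal
    and color channels of one neighbour. The d x d covariance captures how
    shape and texture co-vary around the query point, independently of the
    absolute feature values. A small ridge keeps it positive definite.
    """
    C = np.cov(features, rowvar=False)
    return C + 1e-6 * np.eye(features.shape[1])

def log_euclidean_distance(C1, C2):
    """Distance between SPD covariance matrices via matrix logarithms."""
    def spd_log(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(np.log(vals)) @ vecs.T
    return float(np.linalg.norm(spd_log(C1) - spd_log(C2)))

rng = np.random.default_rng(0)
# Same color, different surface roughness -> descriptors should differ:
flat_red = np.c_[rng.normal(0, .01, (50, 3)), rng.normal([1, 0, 0], .01, (50, 3))]
bumpy_red = np.c_[rng.normal(0, .5, (50, 3)), rng.normal([1, 0, 0], .01, (50, 3))]
d = log_euclidean_distance(covariance_descriptor(flat_red),
                           covariance_descriptor(bumpy_red))
```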
Citations: 15
4D Capture Using Visibility Information of Multiple Projector Camera System
Pub Date : 2014-12-08 DOI: 10.1109/3DV.2014.70
R. Sagawa, N. Kasuya, Yoshinori Oki, Hiroshi Kawasaki, Yoshio Matsumoto, Furukawa Ryo
In this paper, we propose a method with multiple cameras and projectors for 4D capture of moving objects. The issues with previous 4D capture systems are that the number of cameras is limited, and that the number of images required to capture a sequence at a high frame rate is very large. We propose a multiple projector-camera system to tackle this problem. One of the issues of multi-view stereo is to determine the visibility of cameras for each point of the surface. While estimating the scene geometry and its visibility is a chicken-and-egg problem for passive multi-view stereo, it has been solved by, for example, iterative approaches that repeatedly conduct the estimation of visibility and the reconstruction of the scene geometry. With our method, since the visibility problem is solved independently by using the projected pattern, shapes are recovered efficiently without considering the visibility problem. Further, the visibility information is used not only for multi-view stereo reconstruction, but also for merging 3D shapes to eliminate inconsistency between devices. The efficiency of the proposed method is tested in experiments, proving that the merged mesh is suitable for 4D reconstruction.
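The independent visibility determination can be illustrated simply: if a camera pixel decodes the pattern of projector p, the underlying surface point must be visible to both that camera and projector p, so visibility falls out of pattern decoding with no iterative estimation. A toy sketch with a hypothetical decoded-ID map (the papers' actual pattern codes differ):

```python
import numpy as np

def visibility_masks(decoded_ids, n_projectors):
    """Per-(camera, projector) visibility from decoded pattern IDs.

    `decoded_ids[c]` is camera c's per-pixel map of which projector's
    pattern was decoded there (-1 where no pattern could be recovered).
    A pixel that decodes projector p's pattern is visible to both camera c
    and projector p, so no separate visibility estimation is needed.
    """
    return {(c, p): (ids == p)
            for c, ids in enumerate(decoded_ids)
            for p in range(n_projectors)}

cam0 = np.array([[0, 0, -1],    # -1: point not lit or pattern not decoded
                 [1, 1, -1]])
masks = visibility_masks([cam0], n_projectors=2)
```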
Citations: 0
Journal
2014 2nd International Conference on 3D Vision