
2014 2nd International Conference on 3D Vision: Latest Publications

Height Gradient Histogram (HIGH) for 3D Scene Labeling
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.16
Gangqiang Zhao, Junsong Yuan, K. Dang
RGB-D (color + 3D point cloud) based scene labeling has received much attention due to affordable RGB-D sensors such as the Microsoft Kinect. To fully utilize the RGB-D data, it is critical to develop robust features that can reliably describe the 3D shape information of the point cloud data. Previous work has proposed to extract SIFT-like features directly from the depth dimension while ignoring the important height dimension of the 3D point cloud. In this paper, we propose to describe a 3D scene using height gradient information, and introduce a new compact point cloud feature called the Height Gradient Histogram (HIGH). Using TextonBoost as the pixel classifier, experiments on two benchmark 3D scene labeling datasets show that the HIGH feature handles intra-category variations of object classes well and significantly improves class-average accuracy compared with state-of-the-art results. We will publish the code of HIGH feature for the community.
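As an illustration of the general idea (not the authors' released code), the sketch below computes HOG-style histograms of height-gradient orientations over a height map; the `cell` size and `n_bins` values are hypothetical choices.

```python
# A minimal sketch of a height-gradient histogram over an organized point
# cloud's height map. Illustrative reconstruction of the general idea only;
# cell size and bin count below are hypothetical parameters.
import numpy as np

def height_gradient_histogram(height_map, cell=16, n_bins=8):
    """Per-cell histograms of height-gradient orientations,
    weighted by gradient magnitude (HOG-style, on height values)."""
    gy, gx = np.gradient(height_map)          # height derivatives along rows/cols
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    H, W = height_map.shape
    hists = []
    for r in range(0, H - cell + 1, cell):
        for c in range(0, W - cell + 1, cell):
            a = ang[r:r+cell, c:c+cell].ravel()
            m = mag[r:r+cell, c:c+cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            hists.append(hist / (hist.sum() + 1e-8))   # L1-normalize each cell
    return np.concatenate(hists)

# toy usage: a synthetic 64x64 height map -> 16 cells x 8 bins = 128-dim feature
h = np.random.rand(64, 64)
print(height_gradient_histogram(h).shape)
```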
Citations: 6
Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.37
Srinath Sridhar, Helge Rhodin, H. Seidel, Antti Oulasvirta, C. Theobalt
Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real time. The main contributions include a new generative tracking method that employs an implicit hand shape representation based on a Sum of Anisotropic Gaussians (SAG), and a pose-fitting energy that is smooth and analytically differentiable, making fast gradient-based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from the literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.
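For intuition, here is a minimal sketch of evaluating a shape represented as a weighted sum of anisotropic Gaussians; the paper's actual formulation additionally couples this representation to pose parameters and a perspective projection model, which are omitted here.

```python
# A minimal sketch of an implicit shape as a sum of anisotropic Gaussians
# (SAG): each primitive has a mean and a full 3x3 covariance. Illustrative
# only; the paper's pose-fitting energy and projection model are omitted.
import numpy as np

def sag_density(points, means, covs, weights):
    """Density of a weighted sum of anisotropic Gaussians at query points.
    points: (N,3); means: (K,3); covs: (K,3,3); weights: (K,)"""
    vals = np.zeros(len(points))
    for mu, S, w in zip(means, covs, weights):
        d = points - mu
        Sinv = np.linalg.inv(S)
        norm = w / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(S))
        vals += norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Sinv, d))
    return vals

# toy usage: two elongated primitives, queried at a point between them
means = np.array([[0., 0., 0.], [1., 0., 0.]])
covs = np.stack([np.diag([0.2, 0.1, 0.1])] * 2)
print(sag_density(np.array([[0.5, 0., 0.]]), means, covs, np.ones(2)))
```

Because the density is a sum of smooth exponentials, its gradient with respect to the primitive means exists in closed form, which is what makes fast gradient-based optimization feasible.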
Citations: 76
Generalized 4-Points Congruent Sets for 3D Registration
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.21
Mustafa Mohamad, D. Rappaport, M. Greenspan
The 4-Points Congruent Sets (4PCS) algorithm is a state-of-the-art RANSAC-based algorithm for registering two partially overlapping 3D point sets using raw points. Unlike other RANSAC-based algorithms, which try to achieve registration by searching for matching 3-point bases, it uses a base of two coplanar pairs of points to reduce the search space of matching bases. In this work, we first generalize the algorithm by allowing the two pairs to fall on two different planes separated by an arbitrary distance, i.e., a degree of separation. Furthermore, we show that increasing the degree of separation exponentially decreases the search space of matching bases. Using this property, we show that the new generalized base allows for more efficient registration than the original 4PCS base type. We achieve a maximum run-time improvement of 83.10% for 3D registration.
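To make the base idea concrete, the following sketch computes the two affine-invariant intersection ratios of a coplanar 4-point base, the quantities classic 4PCS uses to index candidate bases in the second point set; the paper's generalization to pairs on separated planes is not reproduced here.

```python
# A minimal sketch of the two affine-invariant ratios of a coplanar 4PCS
# base: segments (a,b) and (c,d) intersect at e, and the ratios
# r1 = |a-e|/|a-b|, r2 = |c-e|/|c-d| are preserved under rigid motion.
# Illustrative of the original coplanar base, not the generalized variant.
import numpy as np

def base_invariants(a, b, c, d):
    """Intersection ratios of segments ab and cd (assumed coplanar)."""
    # Solve a + r1*(b - a) = c + r2*(d - c) in a least-squares sense.
    A = np.stack([b - a, -(d - c)], axis=1)        # 3x2 system
    r, *_ = np.linalg.lstsq(A, c - a, rcond=None)
    return r[0], r[1]

a, b = np.array([0., 0., 0.]), np.array([2., 0., 0.])
c, d = np.array([1., -1., 0.]), np.array([1., 1., 0.])
print(base_invariants(a, b, c, d))   # segments cross at (1,0,0): (0.5, 0.5)
```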
Citations: 45
3D Model Retargeting Using Offset Statistics
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.74
Xiaokun Wu, Chuan Li, Michael Wand, K. Hildebrandt, Silke Jansen, H. Seidel
Texture synthesis is a versatile tool for creating and editing 2D images. However, applying it to 3D content creation is difficult due to the higher demand for model accuracy and a large search space that also contains many implausible shapes. Our paper explores offset statistics for 3D shape retargeting. We observe that the offset histograms between similar 3D features are sparse, in particular for man-made objects such as buildings and furniture. We employ these sparse offset statistics to improve 3D shape retargeting (i.e., rescaling in different directions), using a graph-cut texture synthesis method that iteratively stitches model fragments shifted by the detected sparse offsets. The offsets reveal important structural redundancy, which leads to more plausible results and more efficient optimization. Our method is fully automatic, while intuitive user control can be incorporated for interactive modeling in real time. We empirically evaluate the sparsity of offset statistics across a wide range of subjects, and show that our statistics-based retargeting significantly improves quality and efficiency over conventional MRF models.
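A minimal 2D analogue of offset statistics is sketched below: randomly sampled patches vote for the displacement to their best non-trivial match, and for structured inputs only a few offsets accumulate most of the votes. The patch size and sample counts are arbitrary illustrative values, and the 3D feature matching of the paper is simplified to 2D patch comparison.

```python
# A minimal 2D sketch of collecting offset statistics: each sampled patch
# votes for the displacement to its most similar (non-identity) patch.
# Peaks in the resulting histogram reveal repetitive structure.
import numpy as np

def offset_histogram(img, patch=8, samples=200, trials=50,
                     rng=np.random.default_rng(0)):
    H, W = img.shape
    votes = {}
    for _ in range(samples):
        y, x = rng.integers(0, H - patch), rng.integers(0, W - patch)
        p = img[y:y+patch, x:x+patch]
        best, best_off = np.inf, None
        for _ in range(trials):                 # random search for a match
            y2, x2 = rng.integers(0, H - patch), rng.integers(0, W - patch)
            if abs(y2 - y) + abs(x2 - x) < patch:
                continue                        # skip near-identity offsets
            d = np.sum((img[y2:y2+patch, x2:x2+patch] - p) ** 2)
            if d < best:
                best, best_off = d, (int(y2 - y), int(x2 - x))
        if best_off is not None:
            votes[best_off] = votes.get(best_off, 0) + 1
    return votes  # typically sparse: a few offsets dominate for regular scenes
```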
Citations: 6
Calibration of Non-overlapping Cameras Using an External SLAM System
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.106
E. Cansizoglu, Yuichi Taguchi, S. Ramalingam, Yohei Miki
We present a simple method for calibrating a set of cameras that may not have overlapping fields of view. We reduce the problem of calibrating the non-overlapping cameras to the problem of localizing the cameras with respect to a global 3D model reconstructed with a simultaneous localization and mapping (SLAM) system. Specifically, we first reconstruct such a global 3D model with a SLAM system using an RGB-D sensor. We then perform localization and intrinsic parameter estimation for each camera using 2D-3D correspondences between the camera and the 3D model. Our method locates the cameras within the 3D model, which is useful for visually inspecting camera poses, and provides a model-guided browsing interface for the images. We demonstrate the advantages of our method on several indoor scenes.
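The per-camera localization step can be illustrated with a standard PnP solve over 2D-3D correspondences. The sketch below uses OpenCV's solvePnPRansac and assumes known intrinsics, whereas the paper also estimates the intrinsic parameters.

```python
# A minimal sketch of localizing one camera against a prebuilt 3D model
# from 2D-3D correspondences, via a robust PnP solver. Assumes intrinsics
# K are known (the paper additionally estimates them).
import numpy as np
import cv2

def localize_camera(pts3d, pts2d, K):
    """pts3d: (N,3) model points; pts2d: (N,2) image points; K: 3x3 intrinsics.
    Needs N >= 4 correspondences."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
    if not ok:
        raise RuntimeError("PnP failed: correspondences too few or degenerate")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix mapping model -> camera frame
    return R, tvec               # camera pose within the SLAM model
```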
Citations: 26
Quantized Census for Stereoscopic Image Matching
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.83
R. Basaru, Chris Child, Eduardo Alonso, G. Slabaugh
Current depth capturing devices show serious drawbacks in certain applications, for example ego-centric depth recovery: they are cumbersome, have high power requirements, and do not provide high resolution at near distance. Stereo-matching techniques are a suitable alternative, but whilst the idea behind them is simple, it is well known that recovering an accurate disparity map by stereo matching requires overcoming three main problems: occluded regions that lack corresponding pixels, noise in the image capturing sensor, and inconsistent color and brightness across the captured images. We propose a modified version of the Census-Hamming cost function which allows more robust matching, with an emphasis on improving performance under radiometric variations of the input images.
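For context, the sketch below implements the plain census transform and Hamming cost that the proposed quantized variant builds on. The `tau` tolerance band is only a crude illustrative stand-in for the idea of quantizing the comparison, not the paper's actual formulation.

```python
# A minimal sketch of a census transform and Hamming matching cost (the
# baseline behind the paper's quantized variant). tau > 0 is an assumed,
# illustrative tolerance band, not the paper's quantization scheme.
import numpy as np

def census(img, win=5, tau=0.0):
    """One bit per neighbour: is the neighbour brighter than centre + tau?
    Border pixels wrap around via np.roll (acceptable for a sketch)."""
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted > img + tau).astype(np.uint64)
    return codes   # win=5 gives 24 comparison bits, fits in uint64

def hamming_cost(c1, c2):
    """Per-pixel Hamming distance between two census-coded images."""
    x = np.bitwise_xor(c1, c2)
    bits = np.unpackbits(x.view(np.uint8), axis=-1)
    return bits.reshape(*x.shape, -1).sum(-1)
```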
Citations: 6
Photometric Stereo Using Internet Images
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.9
Boxin Shi, K. Inose, Y. Matsushita, P. Tan, Sai-Kit Yeung, K. Ikeuchi
Photometric stereo using unorganized Internet images is very challenging, because the input images are captured under unknown general illuminations with uncontrolled cameras. We propose to solve this difficult problem with a simple yet effective approach that makes use of a coarse shape prior. The shape prior is obtained from multi-view stereo and is useful in two ways: resolving the shape-light ambiguity in uncalibrated photometric stereo, and guiding the estimated normals to produce a high-quality 3D surface. By assuming the surface albedo is not highly contrasted, we also propose a novel linear approximation of the nonlinear camera responses within our normal estimation algorithm. We evaluate our method on synthetic data and demonstrate the surface improvement on real data over multi-view stereo results.
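As background, the calibrated Lambertian baseline solves a per-pixel least-squares system I = L n. The sketch below shows only that classic baseline; the uncalibrated Internet-image setting the paper addresses additionally requires the multi-view shape prior and response-curve handling.

```python
# A minimal sketch of classic calibrated Lambertian photometric stereo:
# per pixel, intensities I satisfy I = L @ (albedo * n) for known light
# directions L, solved in least squares. Baseline only; not the paper's
# uncalibrated Internet-image method.
import numpy as np

def photometric_stereo(images, lights):
    """images: (M,H,W) intensities; lights: (M,3) unit light directions.
    Returns (H,W,3) unit normals and (H,W) albedo."""
    M, H, W = images.shape
    I = images.reshape(M, -1)                        # M x P observations
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # 3 x P, albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0) + 1e-8
    N = (G / albedo).T.reshape(H, W, 3)
    return N, albedo.reshape(H, W)
```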
Citations: 44
Vision-Based Differential GPS: Improving VSLAM / GPS Fusion in Urban Environment with 3D Building Models
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.73
Dorra Larnaout, V. Gay-Bellile, S. Bourgeois, M. Dhome
In this paper, we improve the localization accuracy of visual SLAM (VSLAM) / GPS fusion in dense urban areas by using 3D building models provided by a Geographic Information System (GIS). GPS inaccuracies are corrected by comparing the reconstruction resulting from the VSLAM / GPS fusion with the 3D building models. The corrected GPS data are thereafter re-injected into the fusion process. Experimental results demonstrate the accuracy improvements achieved by our proposed solution.
Citations: 12
On Reliable Estimation of Curvatures of Implicit Surfaces
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.30
Jacob D. Hauenstein, Timothy S Newman
Estimation of curvature in volumetric datasets is considered. One contribution is the extension of several known methods for such estimation in range images to the new domain of volumetric datasets. A second is a comparative examination of the (1) accuracy and (2) computational performance of these extensions and of five well-known existing methods for curvature estimation in volumetric datasets.
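One standard estimator of this kind computes mean curvature of the implicit surface F(x,y,z) = 0 from finite-difference derivatives of the volume, using H = (|∇F|² tr(Hess F) − ∇Fᵀ (Hess F) ∇F) / (2|∇F|³). The sketch below is an illustrative implementation of that formula, not any specific method compared in the paper.

```python
# A minimal sketch of mean-curvature estimation on a volumetric implicit
# surface via finite-difference gradient and Hessian. Illustrative of one
# standard formula, not a method from the paper's comparison.
import numpy as np

def mean_curvature(F, spacing=1.0):
    gx, gy, gz = np.gradient(F, spacing)
    g = np.stack([gx, gy, gz], axis=-1)              # gradient field, shape (...,3)
    Hf = np.empty(F.shape + (3, 3))                  # per-voxel Hessian of F
    for i, gi in enumerate((gx, gy, gz)):
        d = np.gradient(gi, spacing)
        for j in range(3):
            Hf[..., i, j] = d[j]
    gnorm = np.linalg.norm(g, axis=-1) + 1e-12
    gHg = np.einsum('...i,...ij,...j->...', g, Hf, g)
    trH = np.trace(Hf, axis1=-2, axis2=-1)
    return (gnorm**2 * trH - gHg) / (2 * gnorm**3)

# sanity check on a sphere of radius 10: mean curvature should be about
# 1/R = 0.1 near the surface (sign depends on orientation convention)
x, y, z = np.mgrid[-20:21, -20:21, -20:21].astype(float)
F = np.sqrt(x**2 + y**2 + z**2) - 10.0               # signed distance field
print(mean_curvature(F)[20, 20, 30])                 # voxel on the surface
```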
Citations: 5
4DCov: A Nested Covariance Descriptor of Spatio-Temporal Features for Gesture Recognition in Depth Sequences
Pub Date: 2014-12-08 DOI: 10.1109/3DV.2014.10
Pol Cirujeda, Xavier Binefa
In this paper we propose a novel covariance-based framework for the robust characterization and classification of human gestures in 3D depth sequences. The proposed 4DCov descriptor uses the notion of covariance to create compact representations of the complex interactions between variations of 3D features in the spatial and temporal domains, instead of using the absolute features themselves. Despite its compactness, this representation still offers discriminative power for human-gesture classification. The codification of feature variations along a scene makes our descriptor robust to inter-subject and intra-class variations, periodic motions, and different speeds during gesture execution, compared to other keypoint or histogram-based descriptor approaches. Furthermore, a sparse collaborative classification method is also presented, taking advantage of our descriptor lying on a specific manifold topology and observing that similar motions are geometrically clustered in the descriptor space. Classification accuracy results are presented against state-of-the-art approaches on four public human gesture datasets acquired with 3D depth sensors, including complex gestures of different natures.
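The core covariance idea can be sketched independently of the specific spatio-temporal features: summarize per-point feature vectors by their covariance matrix, then compare descriptors with a metric suited to the SPD manifold. The 5-dimensional random features below are purely hypothetical placeholders for the paper's features.

```python
# A minimal sketch of a covariance descriptor: stack per-sample feature
# vectors and summarize them by their covariance; compare descriptors with
# the log-Euclidean metric, since covariances live on the SPD manifold.
# The feature vectors here are hypothetical stand-ins.
import numpy as np
from scipy.linalg import logm

def cov_descriptor(features):
    """features: (N, d) per-point feature vectors -> (d, d) SPD matrix."""
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])

def log_euclidean_dist(C1, C2):
    """Geodesic-like distance between SPD matrices via matrix logarithm."""
    return np.linalg.norm(logm(C1) - logm(C2), 'fro')

f1 = np.random.randn(500, 5)          # e.g. position + temporal derivatives
f2 = np.random.randn(500, 5) * 1.5
print(log_euclidean_dist(cov_descriptor(f1), cov_descriptor(f2)))
```

Using covariances of feature variations, rather than the features themselves, is what makes the representation compact (d x d regardless of sequence length) and invariant to many per-subject offsets.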
Citations: 28