
Proceedings. International Conference on 3D Vision: Latest Publications

Message from the Program Chairs: 3DV 2022
Pub Date : 2022-09-01 DOI: 10.1109/3dv57658.2022.00006
Angela Dai, J. Kosecka, Gim Hee Lee, K. Schindler
{"title":"Message from the Program Chairs: 3DV 2022","authors":"Angela Dai, J. Kosecka, Gin Hee Lee, K. Schindler","doi":"10.1109/3dv57658.2022.00006","DOIUrl":"https://doi.org/10.1109/3dv57658.2022.00006","url":null,"abstract":"","PeriodicalId":91162,"journal":{"name":"Proceedings. International Conference on 3D Vision","volume":"88 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78708115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Message from the 3DV 2020 Program Chairs
Pub Date : 2020-10-01 DOI: 10.1109/ldav51489.2020.00005
A. Hilton, Z. Kukelova, Stephen Lin, J. Sato
{"title":"Message from the 3DV 2020 Program Chairs","authors":"A. Hilton, Z. Kukelova, Stephen Lin, J. Sato","doi":"10.1109/ldav51489.2020.00005","DOIUrl":"https://doi.org/10.1109/ldav51489.2020.00005","url":null,"abstract":"","PeriodicalId":91162,"journal":{"name":"Proceedings. International Conference on 3D Vision","volume":"1 1","pages":"xx"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79162892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SEGCloud: Semantic Segmentation of 3D Point Clouds
Pub Date : 2017-10-01 DOI: 10.1109/3DV.2017.00067
Lyne P. Tchapmi, C. Choy, Iro Armeni, JunYoung Gwak, S. Savarese
3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI), and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. The FC-CRF then enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state of the art on all datasets.
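To make the interpolation step concrete, below is a minimal NumPy sketch of transferring coarse per-voxel class scores back to raw points via trilinear interpolation, assuming a dense (X, Y, Z, C) score grid with cell-centered voxels; the function and argument names are illustrative and not taken from the SEGCloud code.

```python
import numpy as np

def trilinear_interpolate(voxel_scores, points, origin, voxel_size):
    """Transfer per-voxel class scores (X, Y, Z, C) to raw 3D points (N, 3)
    by trilinear interpolation over the 8 surrounding voxel centers."""
    # Continuous voxel coordinates of each point on a cell-centered grid.
    coords = (points - origin) / voxel_size - 0.5
    base = np.floor(coords).astype(int)  # lower-corner voxel index
    frac = coords - base                 # fractional offset in [0, 1)

    X, Y, Z, C = voxel_scores.shape
    out = np.zeros((points.shape[0], C))
    # Accumulate weighted contributions from the 8 neighboring voxels.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = np.clip(base + np.array([dx, dy, dz]),
                              0, np.array([X, Y, Z]) - 1)
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0]) *
                     np.where(dy, frac[:, 1], 1 - frac[:, 1]) *
                     np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * voxel_scores[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out  # (N, C) interpolated class scores per point

# Example: 4-class scores on a 10x10x10 grid, queried at 5 random points.
rng = np.random.default_rng(0)
scores = rng.random((10, 10, 10, 4))
pts = rng.uniform(0.0, 1.0, (5, 3))
print(trilinear_interpolate(scores, pts, np.zeros(3), 0.1).shape)  # (5, 4)
```

In the paper's pipeline these interpolated point-level scores would then seed the FC-CRF; here they are simply returned.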
Citations: 628
Performance Evaluation of 3D Correspondence Grouping Algorithms
Pub Date : 2017-10-01 DOI: 10.1109/3DV.2017.00060
Jiaqi Yang, Ke Xian, Yang Xiao, Zhiguo Cao
This paper presents a thorough evaluation of several widely used 3D correspondence grouping algorithms, motivated by their significance in vision tasks that rely on correct feature correspondences. A good correspondence grouping algorithm should retrieve as many inliers as possible from the initial feature matches, yielding both high precision and high recall. To this end, we deploy experiments on three benchmarks that respectively address shape retrieval, 3D object recognition, and point cloud registration scenarios. The variety in application context brings a rich set of nuisances, including noise, varying point densities, clutter, occlusion, and partial overlap, and it also yields different inlier ratios and correspondence distributions for comprehensive evaluation. Based on the quantitative outcomes, we summarize the merits and demerits of the evaluated algorithms from both performance and efficiency perspectives.
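As a rough illustration of the precision/recall criterion above, the following sketch scores the matches retained by a grouping algorithm against ground-truth inliers; the function name and the synthetic example are hypothetical, not part of the benchmark protocol.

```python
import numpy as np

def precision_recall(retained, gt_inliers):
    """retained: indices of matches kept by the grouping algorithm;
    gt_inliers: indices of ground-truth inlier matches."""
    retained, gt = set(retained), set(gt_inliers)
    true_pos = len(retained & gt)  # kept matches that are real inliers
    precision = true_pos / len(retained) if retained else 0.0
    recall = true_pos / len(gt) if gt else 0.0
    return precision, recall

# Example: 100 initial matches, 30 true inliers, 40 matches retained.
rng = np.random.default_rng(0)
gt = rng.choice(100, size=30, replace=False)
kept = rng.choice(100, size=40, replace=False)
p, r = precision_recall(kept, gt)
print(f"precision={p:.2f} recall={r:.2f}")
```

A good grouping algorithm pushes both numbers toward 1.0 simultaneously, which is exactly the tension the evaluation probes.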
Citations: 22
GSLAM: Initialization-Robust Monocular Visual SLAM via Global Structure-from-Motion
Pub Date : 2017-08-16 DOI: 10.1109/3DV.2017.00027
Chengzhou Tang, Oliver Wang, P. Tan
Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method that integrates recent advances in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique that is more robust to errors in map initialization. Second, we adopt a recent global SfM method for pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches generates more robust reconstructions and is significantly faster (4X) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground-truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and on established benchmark datasets.
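The rank-1 factorization at the core of the odometry step can be illustrated generically: the best rank-1 approximation of a near-rank-1 measurement matrix is given by its leading singular triplet (Eckart-Young). The following is a textbook sketch under that assumption, not the paper's exact formulation.

```python
import numpy as np

def rank1_factor(M):
    """Best rank-1 approximation M ~ outer(u, v) in the least-squares
    sense, taken from the leading singular value/vectors of M."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    u = U[:, 0] * np.sqrt(S[0])
    v = Vt[0, :] * np.sqrt(S[0])
    return u, v

# A noisy rank-1 matrix: an outer product plus small Gaussian noise.
rng = np.random.default_rng(1)
a, b = rng.standard_normal(6), rng.standard_normal(8)
M = np.outer(a, b) + 0.01 * rng.standard_normal((6, 8))
u, v = rank1_factor(M)
print("rank-1 residual:", np.linalg.norm(M - np.outer(u, v)))  # small
```

In the SLAM setting the measurement matrix would be built from feature tracks across frames; here a synthetic matrix stands in.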
Citations: 14
3D Face Hallucination from a Single Depth Frame.
Pub Date : 2014-12-01 DOI: 10.1109/3DV.2014.67
Shu Liang, Ira Kemelmacher-Shlizerman, Linda G Shapiro

We present an algorithm that takes a single frame of a person's face from a depth camera, e.g., Kinect, and produces a high-resolution 3D mesh of the input face. We leverage a dataset of 3D face meshes of 1204 distinct individuals ranging in age from 3 to 40, captured in a neutral expression. We divide the input depth frame into semantically significant regions (eyes, nose, mouth, cheeks) and search the database for the best matching shape per region. We further combine the input depth frame with the matched database shapes into a single mesh, which results in a high-resolution shape of the input person. Our system is fully automatic and uses only depth data for matching, making it invariant to imaging conditions. We evaluate our results using ground truth shapes, as well as compare to state-of-the-art shape estimation methods. We demonstrate the robustness of our local matching approach with high-quality reconstruction of faces that fall outside of the dataset span, e.g., faces older than 40 years old, facial expressions, and different ethnicities.
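A toy sketch of the per-region retrieval step described above: for each semantic region of the input depth frame, select the database subject whose corresponding depth patch has the smallest discrepancy. Region segmentation, alignment, and the final mesh blending are omitted, and all names and data are illustrative.

```python
import numpy as np

def best_match_per_region(input_regions, database_regions):
    """input_regions: dict region -> (H, W) depth patch of the query face.
    database_regions: dict region -> list of (H, W) patches, one per
    database subject. Returns the best-matching subject index per region."""
    matches = {}
    for name, query in input_regions.items():
        # Mean squared depth discrepancy, ignoring invalid (NaN) pixels.
        scores = [np.nanmean((patch - query) ** 2)
                  for patch in database_regions[name]]
        matches[name] = int(np.argmin(scores))
    return matches

# Example: 5 database subjects, 2 regions, query closest to subject 3.
rng = np.random.default_rng(2)
regions = ("nose", "mouth")
db = {r: [rng.standard_normal((16, 16)) for _ in range(5)] for r in regions}
query = {r: db[r][3] + 0.05 * rng.standard_normal((16, 16)) for r in regions}
print(best_match_per_region(query, db))  # {'nose': 3, 'mouth': 3}
```

Because each region is matched independently, the composite mesh can mix subjects, which is what lets the method generalize beyond any single database face.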

Citations: 24