Deep Depth Fusion for Black, Transparent, Reflective and Texture-Less Objects

Chun-Yu Chai, Yu-Po Wu, Shiao-Li Tsao
{"title":"深度融合黑色,透明,反射和无纹理的对象","authors":"Chun-Yu Chai, Yu-Po Wu, Shiao-Li Tsao","doi":"10.1109/ICRA40945.2020.9196894","DOIUrl":null,"url":null,"abstract":"Structured-light and stereo cameras, which are widely used to construct point clouds for robotic applications, have different limitations on estimating depth values. Structured-light cameras fail in black, transparent, and reflective objects, which influence the light path; stereo cameras fail in texture-less objects. In this work, we propose a depth fusion model that complements these two types of methods to generate high-quality point clouds for short-range robotic applications. The model first determines the fusion weights from the two input depth images and then refines the fused depth using color features. We construct a dataset containing the aforementioned challenging objects and report the performance of our proposed model. The results reveal that our method reduces the average L1 distance on depth prediction by 75% and 52% compared with the original depth output of the structured-light camera and the stereo model, respectively. A noticeable improvement on the Iterative Closest Point (ICP) algorithm can be achieved by using the refined depth images output from our method.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"2 1","pages":"6766-6772"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Deep Depth Fusion for Black, Transparent, Reflective and Texture-Less Objects\",\"authors\":\"Chun-Yu Chai, Yu-Po Wu, Shiao-Li Tsao\",\"doi\":\"10.1109/ICRA40945.2020.9196894\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Structured-light and stereo cameras, which are widely used to construct point clouds for robotic applications, have different limitations on estimating depth values. Structured-light cameras fail in black, transparent, and reflective objects, which influence the light path; stereo cameras fail in texture-less objects. In this work, we propose a depth fusion model that complements these two types of methods to generate high-quality point clouds for short-range robotic applications. The model first determines the fusion weights from the two input depth images and then refines the fused depth using color features. We construct a dataset containing the aforementioned challenging objects and report the performance of our proposed model. The results reveal that our method reduces the average L1 distance on depth prediction by 75% and 52% compared with the original depth output of the structured-light camera and the stereo model, respectively. 
A noticeable improvement on the Iterative Closest Point (ICP) algorithm can be achieved by using the refined depth images output from our method.\",\"PeriodicalId\":6859,\"journal\":{\"name\":\"2020 IEEE International Conference on Robotics and Automation (ICRA)\",\"volume\":\"2 1\",\"pages\":\"6766-6772\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA40945.2020.9196894\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA40945.2020.9196894","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Structured-light and stereo cameras, which are widely used to construct point clouds for robotic applications, have different limitations on estimating depth values. Structured-light cameras fail in black, transparent, and reflective objects, which influence the light path; stereo cameras fail in texture-less objects. In this work, we propose a depth fusion model that complements these two types of methods to generate high-quality point clouds for short-range robotic applications. The model first determines the fusion weights from the two input depth images and then refines the fused depth using color features. We construct a dataset containing the aforementioned challenging objects and report the performance of our proposed model. The results reveal that our method reduces the average L1 distance on depth prediction by 75% and 52% compared with the original depth output of the structured-light camera and the stereo model, respectively. A noticeable improvement on the Iterative Closest Point (ICP) algorithm can be achieved by using the refined depth images output from our method.
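
The abstract describes the method only at a high level: a per-pixel weight map is determined from the two input depth images, the depths are blended, and the fused result is refined with color features. As a rough illustration of the blending step and of the reported error metric, here is a minimal NumPy sketch; the weight map `w` stands in for the model's learned fusion weights, and the function names are hypothetical, not from the paper:

```python
import numpy as np

def fuse_depth(d_sl, d_stereo, w):
    """Blend two (H, W) depth maps with per-pixel weights in [0, 1].

    d_sl comes from the structured-light camera, d_stereo from the
    stereo model; 0 marks an invalid (missing) measurement. In the
    paper the weights are predicted by a network from the two depth
    inputs; here they are simply passed in.
    """
    # Fall back to whichever sensor has a reading when the other fails.
    w = np.where(d_sl == 0, 0.0, np.where(d_stereo == 0, 1.0, w))
    return w * d_sl + (1.0 - w) * d_stereo

def mean_l1(pred, gt):
    """Average L1 depth error, the metric quoted in the abstract."""
    valid = gt > 0  # evaluate only where ground truth exists
    return np.abs(pred[valid] - gt[valid]).mean()
```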
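
The final claim is that the refined depth improves ICP registration. One way to reproduce that kind of comparison is to back-project each depth image to a point cloud and register it against a reference scan, e.g. with Open3D; the sketch below assumes metric depth in meters, and the camera intrinsics are placeholders:

```python
import numpy as np
import open3d as o3d

def depth_to_cloud(depth_m, fx, fy, cx, cy, width, height):
    """Back-project a float32 depth image (meters) to a point cloud."""
    intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)
    img = o3d.geometry.Image(depth_m.astype(np.float32))
    return o3d.geometry.PointCloud.create_from_depth_image(
        img, intrinsic, depth_scale=1.0)  # depth already in meters

# Point-to-point ICP between a cloud from the refined depth and a reference:
# result = o3d.pipelines.registration.registration_icp(
#     source_cloud, target_cloud, max_correspondence_distance=0.02,
#     init=np.eye(4),
#     estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
# result.transformation then holds the estimated pose.
```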