Unsupervised deep depth completion with heterogeneous LiDAR and RGB-D camera depth information

IF 7.5 · JCR Q1, Earth and Planetary Sciences · CAS Tier 1, Earth Sciences · International Journal of Applied Earth Observation and Geoinformation · Pub Date: 2024-12-16 · DOI: 10.1016/j.jag.2024.104327
Guohua Gou, Han Li, Xuanhao Wang, Hao Zhang, Wei Yang, Haigang Sui
{"title":"Unsupervised deep depth completion with heterogeneous LiDAR and RGB-D camera depth information","authors":"Guohua Gou, Han Li, Xuanhao Wang, Hao Zhang, Wei Yang, Haigang Sui","doi":"10.1016/j.jag.2024.104327","DOIUrl":null,"url":null,"abstract":"In this work, a depth-only completion method designed to enhance perception in light-deprived environments. We achieve this through LidarDepthNet, a novel end-to-end unsupervised learning framework that fuses heterogeneous depth information captured by two distinct depth sensors: LiDAR and RGB-D cameras. This represents the first unsupervised LiDAR-depth fusion framework for depth completion, demonstrating scalability to diverse real-world subterranean and enclosed environments. To facilitate unsupervised learning, we leverage relative rigid motion transfer (RRMT) to synthesize co-visible depth maps from temporally adjacent frames. This allows us to construct a temporal depth consistency loss, constraining the fused depth to adhere to realistic metric scale. Furthermore, we introduce measurement confidence into the heterogeneous depth fusion model, further refining the fused depth and promoting synergistic complementation between the two depth modalities. Extensive evaluation on both real-world and synthetic datasets, notably a newly proposed LiDAR-depth fusion dataset, LidarDepthSet, demonstrates the significant advantages of our method compared to existing state-of-the-art approaches.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"20 1","pages":""},"PeriodicalIF":7.5000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Applied Earth Observation and Geoinformation","FirstCategoryId":"89","ListUrlMain":"https://doi.org/10.1016/j.jag.2024.104327","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Earth and Planetary Sciences","Score":null,"Total":0}
Citations: 0

Abstract

In this work, we present a depth-only completion method designed to enhance perception in light-deprived environments. We achieve this through LidarDepthNet, a novel end-to-end unsupervised learning framework that fuses heterogeneous depth information captured by two distinct depth sensors: LiDAR and RGB-D cameras. This represents the first unsupervised LiDAR-depth fusion framework for depth completion, demonstrating scalability to diverse real-world subterranean and enclosed environments. To facilitate unsupervised learning, we leverage relative rigid motion transfer (RRMT) to synthesize co-visible depth maps from temporally adjacent frames. This allows us to construct a temporal depth consistency loss, constraining the fused depth to adhere to a realistic metric scale. Furthermore, we introduce measurement confidence into the heterogeneous depth fusion model, further refining the fused depth and promoting synergistic complementation between the two depth modalities. Extensive evaluation on both real-world and synthetic datasets, notably a newly proposed LiDAR-depth fusion dataset, LidarDepthSet, demonstrates the significant advantages of our method over existing state-of-the-art approaches.
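The abstract does not spell out the RRMT formulation, but the core idea it describes, using a relative rigid motion to synthesize a co-visible depth map from a temporally adjacent frame and penalizing the disagreement, can be sketched as follows. This is a minimal PyTorch illustration; the intrinsics/pose conventions, function names, and the L1 loss form are assumptions for the sketch, not the authors' implementation.

```python
# Minimal sketch: warp depth between temporally adjacent frames via a known
# relative rigid motion, then penalize depth disagreement. All conventions
# here (K as (3,3), T as (B,4,4) target->source) are illustrative assumptions.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift a depth map (B,1,H,W) into camera-space 3-D points (B,3,H,W)."""
    B, _, H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).view(1, 3, -1)
    rays = K_inv @ pix                               # (1,3,H*W) viewing rays
    return (rays * depth.view(B, 1, -1)).view(B, 3, H, W)

def warp_depth(depth_tgt, depth_src, K, K_inv, T_tgt_to_src):
    """Synthesize a co-visible depth map: rigidly move the target frame's
    points into the source frame and sample the source depth there.

    depth_*: (B,1,H,W); K, K_inv: (3,3); T_tgt_to_src: (B,4,4).
    """
    B, _, H, W = depth_tgt.shape
    pts = backproject(depth_tgt, K_inv).view(B, 3, -1)
    ones = torch.ones(B, 1, H * W, dtype=pts.dtype, device=pts.device)
    pts_src = (T_tgt_to_src @ torch.cat([pts, ones], dim=1))[:, :3]
    z_pred = pts_src[:, 2:3].view(B, 1, H, W)        # depth the source *should* see
    pix = K @ pts_src
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)   # perspective divide
    gx = 2.0 * pix[:, 0] / (W - 1) - 1.0             # normalize to [-1,1]
    gy = 2.0 * pix[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).view(B, H, W, 2)
    z_obs = F.grid_sample(depth_src, grid, align_corners=True)
    return z_pred, z_obs

def temporal_depth_consistency_loss(z_pred, z_obs):
    """L1 disagreement between transformed and observed depth; because the
    rigid motion carries metric scale, minimizing this anchors the fused
    depth to a realistic scale."""
    valid = (z_pred > 0) & (z_obs > 0)               # ignore holes / out-of-view
    return (z_pred - z_obs).abs()[valid].mean()
```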
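Similarly, the confidence-weighted fusion of the two depth modalities can be illustrated with a hypothetical sketch. The abstract states only that measurement confidence refines the fused depth; how confidences are predicted is not described, so this block assumes per-pixel confidence logits (e.g., from a small prediction head) and shows just the weighted combination with mutual gap-filling.

```python
# Hypothetical sketch of confidence-weighted heterogeneous depth fusion.
# conf_* are assumed per-pixel confidence logits; invalid (zero) measurements
# get zero weight so each modality fills the other's gaps.
import torch

def fuse_depths(d_lidar, d_rgbd, conf_lidar, conf_rgbd):
    """Fuse sparse LiDAR depth and dense RGB-D depth, all (B,1,H,W)."""
    logits = torch.stack([conf_lidar, conf_rgbd], dim=0)
    valid = torch.stack([d_lidar > 0, d_rgbd > 0], dim=0)
    logits = logits.masked_fill(~valid, float("-inf"))
    w = torch.softmax(logits, dim=0)                 # per-pixel weights summing to 1
    fused = (w * torch.stack([d_lidar, d_rgbd], dim=0)).sum(dim=0)
    return torch.nan_to_num(fused)                   # pixels with no measurement -> 0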
Source Journal

CiteScore: 10.20
Self-citation rate: 8.00%
Articles published: 49
Review time: 7.2 months
Journal Introduction

The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.
Latest Articles in This Journal

Modeling the impact of pandemic on the urban thermal environment over megacities in China: Spatiotemporal analysis from the perspective of heat anomaly variations
BSG-WSL: BackScatter-guided weakly supervised learning for water mapping in SAR images
Identification of standing dead trees in Robinia pseudoacacia plantations across China's Loess Plateau using multiple deep learning models
Detecting glacial lake water quality indicators from RGB surveillance images via deep learning
Synergistic mapping of urban tree canopy height using ICESat-2 data and GF-2 imagery