Depth-Guided Robust and Fast Point Cloud Fusion NeRF for Sparse Input Views

arXiv publication date: 2024-03-04 · DOI: 10.1609/aaai.v38i3.27968
Shuai Guo, Q. Wang, Yijie Gao, Rong Xie, Li Song
{"title":"Depth-Guided Robust and Fast Point Cloud Fusion NeRF for Sparse Input Views","authors":"Shuai Guo, Q. Wang, Yijie Gao, Rong Xie, Li Song","doi":"10.1609/aaai.v38i3.27968","DOIUrl":null,"url":null,"abstract":"Novel-view synthesis with sparse input views is important for real-world applications like AR/VR and autonomous driving. Recent methods have integrated depth information into NeRFs for sparse input synthesis, leveraging depth prior for geometric and spatial understanding. However, most existing works tend to overlook inaccuracies within depth maps and have low time efficiency. To address these issues, we propose a depth-guided robust and fast point cloud fusion NeRF for sparse inputs. We perceive radiance fields as an explicit voxel grid of features. A point cloud is constructed for each input view, characterized within the voxel grid using matrices and vectors. We accumulate the point cloud of each input view to construct the fused point cloud of the entire scene. Each voxel determines its density and appearance by referring to the point cloud of the entire scene. Through point cloud fusion and voxel grid fine-tuning, inaccuracies in depth values are refined or substituted by those from other views. Moreover, our method can achieve faster reconstruction and greater compactness through effective vector-matrix decomposition. Experimental results underline the superior performance and time efficiency of our approach compared to state-of-the-art baselines.","PeriodicalId":513202,"journal":{"name":"ArXiv","volume":"115 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaai.v38i3.27968","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Novel-view synthesis with sparse input views is important for real-world applications like AR/VR and autonomous driving. Recent methods have integrated depth information into NeRFs for sparse-input synthesis, leveraging depth priors for geometric and spatial understanding. However, most existing works tend to overlook inaccuracies within depth maps and have low time efficiency. To address these issues, we propose a depth-guided, robust, and fast point cloud fusion NeRF for sparse inputs. We model the radiance field as an explicit voxel grid of features. A point cloud is constructed for each input view and characterized within the voxel grid using matrices and vectors. We accumulate the point clouds of all input views to construct a fused point cloud of the entire scene. Each voxel determines its density and appearance by referring to this scene-level point cloud. Through point cloud fusion and voxel-grid fine-tuning, inaccurate depth values are refined or replaced by values from other views. Moreover, our method achieves faster reconstruction and greater compactness through an effective vector-matrix decomposition. Experimental results underline the superior performance and time efficiency of our approach compared to state-of-the-art baselines.
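To make the pipeline the abstract describes more concrete, below is a minimal NumPy sketch of two of its ingredients: back-projecting each view's depth map into a world-space point cloud that is accumulated into a shared voxel grid (so voxels observed from several views can outvote a single inaccurate depth value), and a vector-matrix (VM) decomposed grid lookup in the style that the abstract's "vector-matrix decomposition" suggests (as popularized by TensoRF). All function names, the simple count-based fusion rule, and the resolutions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Lift a depth map to a world-space point cloud (pinhole camera model)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                        # pixel row/column grids
    z = depth.reshape(-1)
    valid = z > 0                                    # skip holes in the depth map
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]      # pixel -> camera coordinates
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=-1)[valid]
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=-1)
    return (pts_h @ c2w.T)[:, :3]                    # camera -> world via 4x4 pose

def fuse_into_grid(point_clouds, bbox_min, bbox_max, res=128):
    """Accumulate per-view point clouds into one shared density grid
    (a hypothetical stand-in for the paper's fusion step)."""
    grid = np.zeros((res, res, res), dtype=np.float32)
    for pts in point_clouds:
        idx = ((pts - bbox_min) / (bbox_max - bbox_min) * res).astype(int)
        idx = idx[np.all((idx >= 0) & (idx < res), axis=1)]
        np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    # voxels supported by several views outvote a single bad depth estimate
    return grid / max(len(point_clouds), 1)

def vm_lookup(vecs, mats, i, j, k):
    """Query a VM-decomposed 3D grid without ever storing it densely:
    T[i,j,k] ~= sum_r vX_r[i]*MYZ_r[j,k] + vY_r[j]*MXZ_r[i,k] + vZ_r[k]*MXY_r[i,j],
    with vecs = (vX, vY, vZ), each (R, res), and mats = (MYZ, MXZ, MXY), each (R, res, res)."""
    (vX, vY, vZ), (MYZ, MXZ, MXY) = vecs, mats
    return float((vX[:, i] * MYZ[:, j, k]
                  + vY[:, j] * MXZ[:, i, k]
                  + vZ[:, k] * MXY[:, i, j]).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0, 64], [0, 500.0, 64], [0, 0, 1]])
    depth = rng.uniform(1.0, 3.0, size=(128, 128))   # synthetic depth map
    cloud = backproject_depth(depth, K, np.eye(4))
    grid = fuse_into_grid([cloud], cloud.min(0), cloud.max(0) + 1e-6, res=64)
    print("occupied voxels:", int((grid > 0).sum()))

    R, res = 8, 64                                   # R rank-components per axis
    vecs = tuple(rng.normal(size=(R, res)) for _ in range(3))
    mats = tuple(rng.normal(size=(R, res, res)) for _ in range(3))
    print("decomposed density at (1, 2, 3):", vm_lookup(vecs, mats, 1, 2, 3))
```

Note the storage argument behind the VM lookup: the decomposition keeps O(R·res²) values instead of the O(res³) of a dense grid, which is presumably where the abstract's compactness and speed claims come from.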