{"title":"MT-NeRF: Neural implicit representation based on multi-resolution geometric feature planes","authors":"Wanqi Jiang, Yafei Liu, Mujiao Ouyang, Xiaoguo Zhang","doi":"10.1016/j.cag.2024.104157","DOIUrl":null,"url":null,"abstract":"<div><div>Reconstructing an indoor-scale scene from scratch is a difficult task when the camera pose is unknown. If it is also required to achieve fast convergence without sacrificing quality and ensure low memory usage at the same time, this work will be even more challenging. In this paper, we propose MT-NeRF, a novel radiance field rendering method based on RGB-D inputs without pre-computed camera poses. MT-NeRF maps indoor scenes at real-world scales to multi-resolution geometric feature planes, which greatly reduces memory footprint and enhances detailed scene fitting. In addition, MT-NeRF significantly enhances the localization accuracy of the system by introducing a photometric distortion loss based on interframe surface pixels. For keyframe selection, MT-NeRF employs a global-to-local keyframe selection strategy, which markedly enhances the global consistency of scene reconstruction. Experiments are designed and conducted to validate the effectiveness of MT-NeRF in scenarios involving complex motion or noisy depth map inputs. The results demonstrate remarkable improvements in scene reconstruction quality and pose estimation accuracy, all while ensuring a low memory footprint. At the same time, our method achieves a speedup of approximately fivefold.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104157"},"PeriodicalIF":2.5000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Graphics-Uk","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0097849324002929","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Reconstructing an indoor-scale scene from scratch is difficult when camera poses are unknown, and the task becomes even more challenging when fast convergence, high reconstruction quality, and low memory usage must be achieved simultaneously. In this paper, we propose MT-NeRF, a novel radiance field rendering method based on RGB-D inputs without pre-computed camera poses. MT-NeRF maps indoor scenes at real-world scale to multi-resolution geometric feature planes, which greatly reduces the memory footprint and improves the fitting of scene detail. In addition, MT-NeRF significantly improves the localization accuracy of the system by introducing a photometric distortion loss based on inter-frame surface pixels. For keyframe selection, MT-NeRF employs a global-to-local strategy that markedly improves the global consistency of scene reconstruction. Experiments are designed and conducted to validate the effectiveness of MT-NeRF in scenarios involving complex motion or noisy depth-map inputs. The results demonstrate remarkable improvements in scene reconstruction quality and pose estimation accuracy while maintaining a low memory footprint, and our method achieves a speedup of approximately fivefold.
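The representation described in the abstract stores scene geometry in learnable 2D feature planes at several resolutions rather than in a dense 3D grid. As a rough illustration only (not the authors' implementation), the sketch below shows one common way such plane-factorized features can be queried in PyTorch: each 3D sample point is projected onto axis-aligned planes, bilinearly interpolated at every resolution level, and the per-level features are concatenated for a downstream decoder MLP. The plane axes, resolution levels, channel counts, and scene bound are all assumptions.

```python
# Minimal sketch (assumptions throughout): multi-resolution 2D feature planes
# queried for a batch of 3D points, in the spirit of plane-factorized NeRF
# representations. Not the MT-NeRF code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFeaturePlanes(nn.Module):
    def __init__(self, resolutions=(32, 64, 128), channels=8, bound=5.0):
        super().__init__()
        self.bound = bound  # assumed half-extent of the scene, in meters
        # One learnable feature plane per resolution level and per
        # axis-aligned projection (xy, xz, yz).
        self.planes = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(3, channels, r, r)) for r in resolutions
        ])

    def forward(self, xyz):                       # xyz: (N, 3) world-space points
        coords = (xyz / self.bound).clamp(-1, 1)  # normalize to [-1, 1] for grid_sample
        # Project each point onto the three axis-aligned planes.
        uv = torch.stack([coords[:, [0, 1]],      # xy
                          coords[:, [0, 2]],      # xz
                          coords[:, [1, 2]]])     # yz  -> (3, N, 2)
        uv = uv.unsqueeze(2)                      # (3, N, 1, 2) as a sampling grid
        feats = []
        for plane in self.planes:                 # one resolution level at a time
            # plane: (3, C, r, r); bilinear sampling -> (3, C, N, 1)
            f = F.grid_sample(plane, uv, mode='bilinear', align_corners=True)
            feats.append(f.squeeze(-1).sum(dim=0).t())  # fuse the 3 planes -> (N, C)
        return torch.cat(feats, dim=-1)           # (N, C * num_levels)

# Example: features for 4096 sampled points, ready for a small decoder MLP.
planes = MultiResFeaturePlanes()
pts = (torch.rand(4096, 3) - 0.5) * 10.0
features = planes(pts)                            # shape (4096, 24)
```

Because the trainable parameters grow with the square of the plane resolution rather than its cube, a plane-based layout of this kind keeps the memory footprint low while the finer levels still capture scene detail, which matches the trade-off the abstract emphasizes.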
About the journal:
Computers & Graphics is dedicated to disseminate information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.