PVP-Recon: Progressive View Planning via Warping Consistency for Sparse-View Surface Reconstruction

Sheng Ye, Yuze He, Matthieu Lin, Jenny Sheng, Ruoyu Fan, Yiheng Han, Yubin Hu, Ran Yi, Yu-Hui Wen, Yong-Jin Liu, Wenping Wang

arXiv:2409.05474 · arXiv - CS - Graphics · 2024-09-09
Neural implicit representations have revolutionized dense multi-view surface reconstruction, yet their performance degrades significantly with sparse input views. A few pioneering works have sought to tackle sparse-view reconstruction by leveraging additional geometric priors or multi-scene generalizability. However, they are still hindered by imperfect input-view selection: they rely on images captured from empirically chosen viewpoints that must overlap considerably. We propose PVP-Recon, a novel and effective sparse-view surface reconstruction method that progressively plans the next best view, building an optimal set of sparse viewpoints for image capture.
PVP-Recon starts its initial surface reconstruction from as few as three views and progressively adds new views, each selected according to a novel warping score that reflects the information gain of the newly added view.
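To make the idea concrete, here is a minimal sketch of such a warping-score-driven planning loop. This is not the authors' code: the score below is a generic photometric-consistency proxy (render depth at a candidate pose from the current reconstruction, warp the captured images into that pose, and treat their disagreement as information gain), and `model.render_depth`, `model.fit`, `warp`, and `capture` are hypothetical stand-ins for the paper's components.

```python
# Hedged sketch of warping-score-based progressive view planning.
# NOT the authors' code: warping score and all helpers are assumptions.
import numpy as np

def warping_score(model, candidate_pose, captured):
    """Proxy for the information gain of a candidate view: warp every
    captured image into the candidate pose using depth rendered from the
    current reconstruction, then measure how much the warps disagree.
    High disagreement suggests the geometry there is poorly constrained."""
    depth = model.render_depth(candidate_pose)            # hypothetical renderer
    warps = np.stack([warp(img, pose, candidate_pose, depth)  # hypothetical warp
                      for pose, img in captured])
    return float(np.var(warps, axis=0).mean())            # pixelwise variance

def plan_views(model, candidates, init_poses, budget, capture):
    """Greedy progressive planning: start from ~3 views, repeatedly capture
    the candidate with the highest warping score, and re-fit the model so
    the next score reflects the updated reconstruction. Poses are treated
    as hashable identifiers for brevity."""
    captured = [(p, capture(p)) for p in init_poses]
    model.fit(captured)                                   # hypothetical training
    remaining = [p for p in candidates if p not in init_poses]
    while len(captured) < budget and remaining:
        best = max(remaining, key=lambda p: warping_score(model, p, captured))
        remaining.remove(best)
        captured.append((best, capture(best)))
        model.fit(captured)                               # interleave with planning
    return captured
```

The greedy maximization here is one plausible reading of "next best view"; the paper's actual criterion is defined by its warping-consistency formulation.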
This progressive view planning process is interleaved with a neural SDF-based reconstruction module that utilizes multi-resolution hash features, enhanced by a progressive training scheme and a directional Hessian loss.
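The abstract does not define the directional Hessian loss, but a generic version can be written as a finite-difference second derivative of the SDF along a chosen direction (for instance, the SDF gradient direction), penalizing high-frequency wiggles in under-observed regions. The PyTorch sketch below is an assumption-labeled illustration of that generic form, not the paper's definition.

```python
# Hedged sketch of a "directional Hessian" regularizer for a neural SDF.
# A generic reading, not the paper's exact loss: it penalizes the
# finite-difference second derivative of the SDF along direction d, i.e.
# (f(x + eps*d) - 2 f(x) + f(x - eps*d)) / eps**2  ~  d^T H(x) d.
import torch

def directional_hessian_loss(sdf, x, d, eps=1e-2):
    """sdf: callable mapping (N,3) points to (N,) signed distances.
    x: (N,3) sample points; d: (N,3) unit directions (e.g. SDF gradients).
    Returns the mean squared second directional derivative."""
    f0 = sdf(x)
    f_plus = sdf(x + eps * d)
    f_minus = sdf(x - eps * d)
    hess = (f_plus - 2.0 * f0 + f_minus) / (eps ** 2)
    return (hess ** 2).mean()

# Toy usage with an analytic sphere SDF (radius 0.5):
if __name__ == "__main__":
    sphere = lambda p: p.norm(dim=-1) - 0.5
    pts = torch.randn(1024, 3) * 0.3
    dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    print(directional_hessian_loss(sphere, pts, dirs).item())
```

For a hash-grid SDF, a finite-difference form like this is a natural fit, since analytic second derivatives of interpolated hash features are poorly behaved; that said, the choice of sampling points, directions, and step size here is illustrative only.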
Quantitative and qualitative experiments on three benchmark datasets show that our framework achieves high-quality reconstruction with a constrained input budget and outperforms existing baselines.