Pano2Room: Novel View Synthesis from a Single Indoor Panorama

Guo Pu, Yiming Zhao, Zhouhui Lian
arXiv:2408.11413 · arXiv - CS - Graphics · Published 2024-08-21

Abstract

Recent single-view 3D generative methods have made significant advancements by leveraging knowledge distilled from extensive 3D object datasets. However, challenges persist in the synthesis of 3D scenes from a single view, primarily due to the complexity of real-world environments and the limited availability of high-quality prior resources. In this paper, we introduce a novel approach called Pano2Room, designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image. These panoramic images can be easily generated using a panoramic RGBD inpainter from captures at a single location with any camera. The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter while collecting photo-realistic 3D-consistent pseudo novel views. Finally, the refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views. This pipeline enables the reconstruction of real-world 3D scenes, even in the presence of large occlusions, and facilitates the synthesis of photo-realistic novel views with detailed geometry. Extensive qualitative and quantitative experiments have been conducted to validate the superiority of our method in single-panorama indoor novel view synthesis compared to the state-of-the-art. Our code and data are available at https://github.com/TrickyGo/Pano2Room.
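The abstract outlines a three-stage pipeline: build a preliminary mesh from the panorama, iteratively inpaint and refine it while collecting pseudo novel views, then convert the mesh into a 3D Gaussian Splatting field trained on those views. Below is a minimal control-flow sketch of that description, not the repository's actual API: every helper name (build_mesh, render_mesh, inpaint_rgbd, fuse_into_mesh, mesh_to_gaussians, train_gaussians) is a hypothetical placeholder standing in for the corresponding stage.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# None of these helper names come from the Pano2Room repository;
# they are passed in as callables so the sketch stays self-contained.

def pano2room_pipeline(panorama_rgb, camera_poses, num_refine_iters,
                       build_mesh, render_mesh, inpaint_rgbd,
                       fuse_into_mesh, mesh_to_gaussians, train_gaussians):
    """Reconstruct a 3D indoor scene from a single panorama.

    Stage 1: build a preliminary mesh from the input panorama.
    Stage 2: iteratively render novel views, inpaint disoccluded
             regions with a panoramic RGBD inpainter, fuse the result
             back into the mesh, and keep each inpainted view as
             3D-consistent pseudo supervision.
    Stage 3: convert the refined mesh into a 3D Gaussian Splatting
             field and train it on the collected pseudo views.
    """
    # Stage 1: preliminary mesh from the panorama (RGB plus estimated depth).
    mesh = build_mesh(panorama_rgb)

    # Stage 2: iterative refinement while collecting pseudo novel views.
    pseudo_views = []
    for _ in range(num_refine_iters):
        for pose in camera_poses:
            rgb, depth, hole_mask = render_mesh(mesh, pose)
            rgb, depth = inpaint_rgbd(rgb, depth, hole_mask)  # fill occluded regions
            mesh = fuse_into_mesh(mesh, rgb, depth, pose)     # refine geometry
            pseudo_views.append((pose, rgb))                  # pseudo supervision

    # Stage 3: mesh -> 3D Gaussian Splatting field, trained on pseudo views.
    gaussians = mesh_to_gaussians(mesh)
    return train_gaussians(gaussians, pseudo_views)
```

The callable-based structure is only a convenience for keeping the sketch runnable; the actual implementation and stage interfaces are defined in the linked repository.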