Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai
{"title":"超级 NeRF:针对 NeRF 超级分辨率的视图一致性细节生成。","authors":"Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai","doi":"10.1109/TVCG.2024.3490840","DOIUrl":null,"url":null,"abstract":"<p><p>The neural radiance field (NeRF) achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus more on making full use of high-resolution images to generate high-resolution novel views, but less considering the generation of high-resolution details given only low-resolution images. In analogy to the extensive usage of image super-resolution, NeRF super-resolution is an effective way to generate low-resolution-guided high-resolution 3D scenes and holds great potential applications. Up to now, such an important topic is still under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a multi-view consistency-controlling super-resolution module to generate various view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each input view to control the generated reasonable high-resolution 2D images satisfying view consistency. The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to utilize the view consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and even AI-generated NeRFs. Super-NeRF achieves state-of-the-art NeRF super-resolution performance on high-resolution detail generation and cross-view consistency.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Super-NeRF: View-consistent Detail Generation for NeRF Super-resolution.\",\"authors\":\"Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai\",\"doi\":\"10.1109/TVCG.2024.3490840\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The neural radiance field (NeRF) achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus more on making full use of high-resolution images to generate high-resolution novel views, but less considering the generation of high-resolution details given only low-resolution images. In analogy to the extensive usage of image super-resolution, NeRF super-resolution is an effective way to generate low-resolution-guided high-resolution 3D scenes and holds great potential applications. Up to now, such an important topic is still under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a multi-view consistency-controlling super-resolution module to generate various view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each input view to control the generated reasonable high-resolution 2D images satisfying view consistency. 
The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to utilize the view consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and even AI-generated NeRFs. Super-NeRF achieves state-of-the-art NeRF super-resolution performance on high-resolution detail generation and cross-view consistency.</p>\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2024.3490840\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2024.3490840","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Super-NeRF: View-consistent Detail Generation for NeRF Super-resolution.
The neural radiance field (NeRF) has achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus on exploiting high-resolution images to generate high-resolution novel views, and pay little attention to generating high-resolution details when only low-resolution images are available. By analogy with the widespread use of image super-resolution, NeRF super-resolution is an effective way to generate high-resolution 3D scenes guided by low-resolution inputs, and it holds great application potential. To date, this important topic remains under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, that generates a high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a multi-view consistency-controlling super-resolution module that generates view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each input view to control the generation of plausible high-resolution 2D images that satisfy view consistency. The latent code of each low-resolution image is optimized jointly with the target Super-NeRF representation, exploiting the view-consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and even AI-generated NeRFs. Super-NeRF achieves state-of-the-art NeRF super-resolution performance in terms of high-resolution detail generation and cross-view consistency.
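To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) of joint optimization of per-view latent codes with a shared high-resolution representation. The TinySRModule, TinyNeRFImage, the latent dimension, and the loss weights are illustrative stand-ins: the paper's actual consistency-controlling super-resolution module and volumetric NeRF are replaced here by toy components so the sketch stays self-contained.

```python
# Hypothetical sketch of the abstract's idea: each low-resolution view has an
# optimizable latent code controlling its super-resolved details, and the codes
# are optimized jointly with the shared high-resolution representation so the
# generated details agree across views. All components are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRModule(nn.Module):
    """Latent-conditioned 2x super-resolution of an RGB image (toy stand-in)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + latent_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * 4, 3, padding=1),  # 4 channels-per-color for 2x2 upsampling
            nn.PixelShuffle(2),                  # rearrange channels into 2x spatial resolution
        )

    def forward(self, lr_img, latent):
        # Broadcast the per-view latent code over the spatial grid and condition on it.
        b, _, h, w = lr_img.shape
        z = latent.view(b, -1, 1, 1).expand(b, latent.shape[-1], h, w)
        return self.net(torch.cat([lr_img, z], dim=1))

class TinyNeRFImage(nn.Module):
    """Toy 'NeRF': one learnable high-resolution image per view, standing in for
    volume rendering (which is what enforces true 3D view consistency)."""
    def __init__(self, num_views, h, w):
        super().__init__()
        self.imgs = nn.Parameter(torch.rand(num_views, 3, h, w))

    def forward(self, view_idx):
        return self.imgs[view_idx]

# --- joint optimization of per-view latent codes and the shared representation ---
num_views, lr_h, lr_w = 4, 32, 32
lr_views = torch.rand(num_views, 3, lr_h, lr_w)      # fake low-resolution inputs
latents = nn.Parameter(torch.zeros(num_views, 32))   # one optimizable code per view
sr_module = TinySRModule(latent_dim=32)
nerf = TinyNeRFImage(num_views, lr_h * 2, lr_w * 2)

opt = torch.optim.Adam([latents, *sr_module.parameters(), *nerf.parameters()], lr=1e-3)
for step in range(100):
    idx = torch.randint(num_views, (1,)).item()
    sr_img = sr_module(lr_views[idx:idx + 1], latents[idx:idx + 1])  # SR proposal for this view
    hr_render = nerf(torch.tensor([idx]))                            # shared-model rendering of the same view
    # Consistency: the super-resolved details must agree with what the shared
    # representation renders, and the rendering downsampled must match the input.
    loss = F.mse_loss(sr_img, hr_render) + \
           F.mse_loss(F.avg_pool2d(hr_render, 2), lr_views[idx:idx + 1])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup, the latent codes can only drift toward details that every view's rendering agrees with, which is the mechanism the abstract describes for resolving the ambiguity of 2D super-resolution under a 3D consistency constraint.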