Zhao Liu, Zhongliang Fu, Gang Li, Jie Hu, Yang Yang
Applied Intelligence, vol. 55, no. 5. Published 2025-01-14. DOI: 10.1007/s10489-024-06005-9. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10489-024-06005-9.pdf
STNeRF: symmetric triplane neural radiance fields for novel view synthesis from single-view vehicle images
This paper presents STNeRF, a method for synthesizing novel views of vehicles from single-view 2D images without requiring 3D ground-truth data, such as point clouds, depth maps, or CAD models, as prior knowledge. A significant challenge in this task arises from the characteristics of CNNs: relying on local features can lead to a flattened representation of the synthesized image when training and validating with images from a single viewpoint. Many current methods instead overlook local features and rely on global features throughout the reconstruction process, potentially losing fine-grained detail in the synthesized image. To tackle this issue, we introduce Symmetric Triplane Neural Radiance Fields (STNeRF). STNeRF employs a triplane feature extractor with spatially aware convolution to lift 2D image features into 3D. This decouples the appearance component, which carries local features, from the shape component, which carries global features, and uses both to construct a neural radiance field. These neural priors are then employed to render novel views. Furthermore, STNeRF leverages the symmetry of vehicles to free the appearance component from dependence on the original viewpoint and to align it with the symmetry of the target space, thereby improving the radiance field network's ability to represent invisible regions. Qualitative and quantitative evaluations demonstrate that STNeRF outperforms existing solutions in both geometry and appearance reconstruction. Supplementary materials and the implementation code are available at: https://github.com/ll594282475/STNeRF.
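The core symmetry idea described in the abstract — querying a triplane representation at both a 3D point and its mirror image across the vehicle's symmetry plane — can be illustrated with a minimal sketch. All function names, the nearest-neighbour lookup, and the additive feature-fusion scheme below are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def sample_plane(plane, u, v):
    """Nearest-neighbour lookup of a feature vector from a 2D feature plane.
    plane: (H, W, C) feature grid; u, v are normalized coordinates in [-1, 1]."""
    H, W, _ = plane.shape
    i = int(round((v * 0.5 + 0.5) * (H - 1)))
    j = int(round((u * 0.5 + 0.5) * (W - 1)))
    return plane[np.clip(i, 0, H - 1), np.clip(j, 0, W - 1)]

def symmetric_triplane_features(planes, p):
    """Query triplane features at 3D point p = (x, y, z) and at its mirror
    image across the assumed x = 0 symmetry plane, then average the two.
    planes: dict with 'xy', 'xz', 'yz' feature grids, each of shape (H, W, C)."""
    def query(x, y, z):
        f_xy = sample_plane(planes['xy'], x, y)
        f_xz = sample_plane(planes['xz'], x, z)
        f_yz = sample_plane(planes['yz'], y, z)
        return f_xy + f_xz + f_yz          # fuse the three plane samples

    x, y, z = p
    f = query(x, y, z)
    f_mirror = query(-x, y, z)             # reflected query exploits symmetry
    return 0.5 * (f + f_mirror)
```

By construction, a point and its reflection yield identical feature vectors, so appearance information observed on the visible side of the vehicle is shared with the occluded side — the property the abstract attributes to the symmetric appearance component.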
About the journal:
With a focus on research in artificial intelligence and neural networks, this journal addresses real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, the application of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments addressing real, complex, and difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.