{"title":"VPRF:用于有焦点图像合成的视觉感知辐射场","authors":"Zijun Wang;Jian Wu;Runze Fan;Wei Ke;Lili Wang","doi":"10.1109/TVCG.2024.3456184","DOIUrl":null,"url":null,"abstract":"Neural radiance fields (NeRF) has achieved revolutionary breakthrough in the novel view synthesis task for complex 3D scenes. However, this new paradigm struggles to meet the requirements for real-time rendering and high perceptual quality in virtual reality. In this paper, we propose VPRF, a novel visual perceptual based radiance fields representation method, which for the first time integrates the visual acuity and contrast sensitivity models of human visual system (HVS) into the radiance field rendering framework. Initially, we encode both the appearance and visual sensitivity information of the scene into our radiance field representation. Then, we propose a visual perceptual sampling strategy, allocating computational resources according to the HVS sensitivity of different regions. Finally, we propose a sampling weight-constrained training scheme to ensure the effectiveness of our sampling strategy and improve the representation of the radiance field based on the scene content. Experimental results demonstrate that our method renders more efficiently, with higher PSNR and SSIM in the foveal and salient regions compared to the state-of-the-art FoV-NeRF. The results of the user study confirm that our rendering results exhibit high-fidelity visual perception.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7183-7192"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VPRF: Visual Perceptual Radiance Fields for Foveated Image Synthesis\",\"authors\":\"Zijun Wang;Jian Wu;Runze Fan;Wei Ke;Lili Wang\",\"doi\":\"10.1109/TVCG.2024.3456184\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural radiance fields (NeRF) has achieved revolutionary breakthrough in the novel view synthesis task for complex 3D scenes. However, this new paradigm struggles to meet the requirements for real-time rendering and high perceptual quality in virtual reality. In this paper, we propose VPRF, a novel visual perceptual based radiance fields representation method, which for the first time integrates the visual acuity and contrast sensitivity models of human visual system (HVS) into the radiance field rendering framework. Initially, we encode both the appearance and visual sensitivity information of the scene into our radiance field representation. Then, we propose a visual perceptual sampling strategy, allocating computational resources according to the HVS sensitivity of different regions. Finally, we propose a sampling weight-constrained training scheme to ensure the effectiveness of our sampling strategy and improve the representation of the radiance field based on the scene content. Experimental results demonstrate that our method renders more efficiently, with higher PSNR and SSIM in the foveal and salient regions compared to the state-of-the-art FoV-NeRF. 
The results of the user study confirm that our rendering results exhibit high-fidelity visual perception.\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":\"30 11\",\"pages\":\"7183-7192\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10678880/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10678880/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Neural radiance fields (NeRF) have achieved a revolutionary breakthrough in novel view synthesis for complex 3D scenes. However, this new paradigm struggles to meet the real-time rendering and high perceptual quality requirements of virtual reality. In this paper, we propose VPRF, a novel visual-perception-based radiance field representation method that, for the first time, integrates the visual acuity and contrast sensitivity models of the human visual system (HVS) into the radiance field rendering framework. First, we encode both the appearance and the visual sensitivity information of the scene into our radiance field representation. Then, we propose a visual perceptual sampling strategy that allocates computational resources according to the HVS sensitivity of different regions. Finally, we propose a sampling weight-constrained training scheme that ensures the effectiveness of our sampling strategy and improves the radiance field representation based on the scene content. Experimental results demonstrate that our method renders more efficiently, with higher PSNR and SSIM in the foveal and salient regions, than the state-of-the-art FoV-NeRF. A user study confirms that our rendering results exhibit high-fidelity visual perception.
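The core of the perceptual sampling strategy described in the abstract is to spend more ray samples where the HVS is most sensitive. Since the abstract does not give VPRF's exact acuity or contrast sensitivity formulations, the sketch below substitutes the linear minimum-angle-of-resolution (MAR) acuity model widely used in foveated rendering, MAR(e) = w0 + m * e; the constants w0 and m, the per-ray sample budget, and the gaze handling are all illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of eccentricity-driven sample allocation, assuming the
# standard linear MAR acuity model; all constants are illustrative.
import numpy as np

def eccentricity_deg(px, py, gaze, fov_deg, width):
    """Approximate angular eccentricity (degrees) of pixel (px, py)
    from the gaze point, assuming a pinhole camera."""
    deg_per_px = fov_deg / width
    return np.hypot(px - gaze[0], py - gaze[1]) * deg_per_px

def samples_per_ray(ecc_deg, n_max=128, n_min=8, w0=1.0 / 48.0, m=0.022):
    """Allocate ray samples in proportion to relative acuity
    MAR(0) / MAR(e), which is 1 at the fovea and falls off with e."""
    acuity = w0 / (w0 + m * ecc_deg)
    return np.round(n_min + (n_max - n_min) * acuity).astype(int)

# Usage: budget samples over a 256x256 image plane, gaze at the center.
H = W = 256
ys, xs = np.mgrid[0:H, 0:W]
ecc = eccentricity_deg(xs, ys, gaze=(W / 2, H / 2), fov_deg=90.0, width=W)
n_samples = samples_per_ray(ecc)
print(n_samples[H // 2, W // 2])  # dense sampling at the fovea
print(n_samples[0, 0])            # sparse sampling in the periphery
```

Under this model the sample count decays smoothly from the fovea outward, which is the behavior the abstract attributes to VPRF's HVS-driven resource allocation; the paper additionally weights salient regions and constrains training by the sampling weights, neither of which is modeled in this sketch.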