{"title":"Light Field Synthesis from a Monocular Video Using Neural Radiance Fields","authors":"Hyungsun Baek, In Kyu Park","doi":"10.1109/ICEIC61013.2024.10457235","DOIUrl":null,"url":null,"abstract":"Light field, known for capturing directional light rays, has garnered substantial interest owing to the growing demand for view synthesis in immersive media and recent advancements in deep learning techniques. However, existing light field synthesis methods focus on generating views with a limited baseline, which is the distance between sub-aperture images (SAIs). In this paper, we propose a novel method to compose a light field with an expanded baseline using successive frames from a monocular video. We create a synthetic light field dataset with a wide baseline derived from a video game, employing photorealistic rendering. This dataset consists of continuous light field frames and depth maps of the central sub-aperture images. The proposed network consists of two key steps, a preprocessing step that generates visible SAIs using RGBD images and a synthesis step that constructs a Neural Radiance Field with RGBD supervision.","PeriodicalId":518726,"journal":{"name":"2024 International Conference on Electronics, Information, and Communication (ICEIC)","volume":"229 3","pages":"1-4"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 International Conference on Electronics, Information, and Communication (ICEIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEIC61013.2024.10457235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The light field, which captures the direction of light rays, has garnered substantial interest owing to the growing demand for view synthesis in immersive media and recent advances in deep learning. However, existing light field synthesis methods focus on generating views with a limited baseline, i.e., the distance between sub-aperture images (SAIs). In this paper, we propose a novel method to compose a light field with an expanded baseline using successive frames from a monocular video. We create a wide-baseline synthetic light field dataset derived from a video game using photorealistic rendering; it consists of continuous light field frames and depth maps of the central sub-aperture images. The proposed network consists of two key steps: a preprocessing step that generates visible SAIs from RGBD images, and a synthesis step that constructs a Neural Radiance Field with RGBD supervision.
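The preprocessing step can be understood as reprojecting the central RGBD frame to each SAI position, leaving occluded regions as holes so that each generated SAI contains only the content visible from the central view. Below is a minimal NumPy sketch of such depth-based forward warping for a purely horizontal baseline shift; the intrinsics K, the baseline argument, and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def warp_center_to_sai(rgb, depth, K, baseline):
    """Forward-warp the central SAI to a horizontally shifted SAI
    position using its depth map (splatting, no hole filling).

    rgb:      (H, W, 3) float array, central view
    depth:    (H, W)    float array, metric depth of the central view
    K:        (3, 3)    camera intrinsics (hypothetical pinhole model)
    baseline: horizontal camera shift in metres (positive = right)
    """
    H, W = depth.shape
    fx = K[0, 0]

    # For a purely horizontal translation, reprojection reduces to a
    # per-pixel disparity: d = fx * baseline / depth.
    disparity = fx * baseline / np.clip(depth, 1e-6, None)

    u, v = np.meshgrid(np.arange(W), np.arange(H))
    u_tgt = np.round(u - disparity).astype(int)

    out = np.zeros_like(rgb)            # holes stay zero (occlusions)
    z_buf = np.full((H, W), np.inf)

    valid = (u_tgt >= 0) & (u_tgt < W)
    for y, x, xt in zip(v[valid], u[valid], u_tgt[valid]):
        # Z-buffer: keep the nearest surface when pixels collide.
        if depth[y, x] < z_buf[y, xt]:
            z_buf[y, xt] = depth[y, x]
            out[y, xt] = rgb[y, x]
    return out, z_buf
```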
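For the synthesis step, a common way to realize RGBD supervision of a NeRF is to add a loss on the expected ray-termination depth alongside the usual photometric term. The PyTorch sketch below shows this standard formulation only; the weight lambda_d and all tensor names are assumptions, not the paper's exact loss.

```python
import torch

def rgbd_nerf_loss(rgb_pred, rgb_gt, weights, z_vals, depth_gt, lambda_d=0.1):
    """NeRF training loss with RGBD supervision (a common formulation,
    not necessarily the paper's exact one).

    rgb_pred: (R, 3)  colors produced by volume rendering
    rgb_gt:   (R, 3)  ground-truth pixel colors
    weights:  (R, S)  per-sample compositing weights along each ray
    z_vals:   (R, S)  sample depths along each ray
    depth_gt: (R,)    ground-truth depth per ray (from the dataset's
                      central-view depth maps)
    """
    # Standard photometric term.
    color_loss = torch.mean((rgb_pred - rgb_gt) ** 2)

    # Expected ray-termination depth under the rendering weights.
    depth_pred = torch.sum(weights * z_vals, dim=-1)
    depth_loss = torch.mean((depth_pred - depth_gt) ** 2)

    return color_loss + lambda_d * depth_loss
```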