{"title":"Frame-Wise CNN-Based View Synthesis for Light Field Camera Arrays","authors":"I. Schiopu, Patrice Rondao-Alface, A. Munteanu","doi":"10.1109/IC3D48390.2019.8975901","DOIUrl":null,"url":null,"abstract":"The paper proposes a novel frame-wise view synthesis method based on convolutional neural networks (CNNs) for wide-baseline light field (LF) camera arrays. A novel neural network architecture that follows a multi-resolution processing paradigm is employed to synthesize an entire view. A novel loss function formulation based on the structural similarity index (SSIM) is proposed. A wide-baseline LF image dataset is generated and employed to train the proposed deep model. The proposed method synthesizes each subaperture image (SAI) from a LF image based on corresponding SAIs from two reference LF images. Experimental results show that the proposed method yields promising results with an average PSNR and SSIM of 34.71 dB and 0.9673 respectively for wide baselines.","PeriodicalId":344706,"journal":{"name":"2019 International Conference on 3D Immersion (IC3D)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on 3D Immersion (IC3D)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3D48390.2019.8975901","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The paper proposes a novel frame-wise view synthesis method based on convolutional neural networks (CNNs) for wide-baseline light field (LF) camera arrays. A novel neural network architecture that follows a multi-resolution processing paradigm is employed to synthesize an entire view. A novel loss function formulation based on the structural similarity index (SSIM) is proposed. A wide-baseline LF image dataset is generated and employed to train the proposed deep model. The proposed method synthesizes each subaperture image (SAI) of an LF image based on the corresponding SAIs from two reference LF images. Experimental results show that the proposed method yields promising results, with an average PSNR and SSIM of 34.71 dB and 0.9673, respectively, for wide baselines.
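The abstract states that the training loss is based on SSIM but does not give its exact formulation. As an illustration only, the sketch below shows a common DSSIM-style objective (1 − SSIM) between a synthesized SAI and its ground truth, written in PyTorch; the uniform averaging window (rather than the usual Gaussian), the constants c1/c2, and the function names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over a batch of images in [0, 1].

    Local statistics are estimated with a uniform window via average
    pooling; standard SSIM uses a Gaussian window instead.
    """
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
    # Local variances and covariance: E[x^2] - E[x]^2, etc.
    sigma_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    return ssim_map.mean()

def ssim_loss(pred_sai, target_sai):
    """Loss decreases as the synthesized SAI becomes structurally closer to the ground truth."""
    return 1.0 - ssim(pred_sai, target_sai)

# Example usage with a hypothetical network output (shapes: N x C x H x W, values in [0, 1]).
if __name__ == "__main__":
    pred = torch.rand(1, 3, 64, 64)
    target = torch.rand(1, 3, 64, 64)
    print(ssim_loss(pred, target).item())
```

In practice such a term is often weighted and combined with a pixel-wise loss (e.g., L1); whether the paper does so is not specified in the abstract.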