{"title":"Enhanced light field depth estimation through occlusion refinement and feature fusion","authors":"Yuxuan Gao , Haiwei Zhang , Zhihong Chen, Lifang Xue, Yinping Miao, Jiamin Fu","doi":"10.1016/j.optlaseng.2024.108655","DOIUrl":null,"url":null,"abstract":"<div><div>Light field depth estimation is crucial for various applications, but current algorithms often falter when dealing with complex textures and edges. To address this, we propose a light field depth estimation network based on multi-scale fusion and channel attention (LFMCNet). It incorporates a convolutional multi-scale fusion module to enhance feature extraction and utilizes a channel attention mechanism to refine depth map accuracy. Additionally, LFMCNet integrates the Transformer Feature Fusion Module (TFFM) and Channel Attention-Based Perspective Fusion (CAPF) module for improved occlusion refinement, effectively handling challenges in occluded regions. Testing on the 4D HCI and real-world datasets demonstrates that LFMCNet significantly reduces the Bad Pixel (BP) rate and Mean Square Error (MSE).</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108655"},"PeriodicalIF":3.5000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Lasers in Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S014381662400633X","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0
Abstract
Light field depth estimation is crucial for various applications, but current algorithms often falter when dealing with complex textures and edges. To address this, we propose a light field depth estimation network based on multi-scale fusion and channel attention (LFMCNet). It incorporates a convolutional multi-scale fusion module to enhance feature extraction and utilizes a channel attention mechanism to refine depth map accuracy. Additionally, LFMCNet integrates the Transformer Feature Fusion Module (TFFM) and Channel Attention-Based Perspective Fusion (CAPF) module for improved occlusion refinement, effectively handling challenges in occluded regions. Testing on the 4D HCI and real-world datasets demonstrates that LFMCNet significantly reduces the Bad Pixel (BP) rate and Mean Square Error (MSE).
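The abstract does not give implementation details for the channel attention mechanism, TFFM, or CAPF modules. As a rough illustration only, a squeeze-and-excitation style channel attention block (a common way such a mechanism is realized) is sketched below in PyTorch; the class name, reduction ratio, and tensor shapes are assumptions for the example, not taken from the paper.

```python
# Hypothetical minimal sketch of a channel attention block in the
# squeeze-and-excitation style; the paper's exact CAPF/TFFM designs
# are not specified in the abstract.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels via global pooling and a small MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: rescale each channel

# Example usage: refine per-view feature maps before fusing them.
feats = torch.randn(2, 64, 32, 32)                  # B x C x H x W (assumed shape)
refined = ChannelAttention(64)(feats)
```

In such designs the learned per-channel weights suppress uninformative channels and emphasize those that carry depth cues, which is consistent with the stated goal of refining depth map accuracy.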
Journal introduction:
Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques