Disparity-Guided Multi-View Interaction Network for Light Field Reflection Removal

Authors: Yutong Liu; Wenming Weng; Ruisheng Gao; Zeyu Xiao; Yueyi Zhang; Zhiwei Xiong
DOI: 10.1109/TCI.2024.3394773
Journal: IEEE Transactions on Computational Imaging, vol. 10, pp. 726-741
Published: 2024-04-29
URL: https://ieeexplore.ieee.org/document/10510261/
Cited by: 0
Abstract
Light field (LF) imaging presents a promising avenue for reflection removal, owing to its reliable depth perception and its ability to exploit complementary texture details from multiple sub-aperture images (SAIs). However, the domain shift between real-world and synthetic scenes, as well as the challenge of embedding transmission information across SAIs, pose the main obstacles in this task. In this paper, we address these challenges from the perspectives of data and network design, respectively. To mitigate the domain shift, we propose an efficient data synthesis strategy for simulating realistic reflection scenes and build the largest LF reflection dataset to date, containing 420 synthetic scenes and 70 real-world scenes. To enable transmission information embedding across SAIs, we propose a novel Disparity-guided Multi-view Interaction Network (DMINet) for LF reflection removal. DMINet mainly consists of a transmission disparity estimation (TDE) module and a center-side interaction (CSI) module. The TDE module predicts transmission disparity by filtering out reflection disturbances, while the CSI module performs transmission integration, adopting the central view as the bridge for propagation between different SAIs. Compared with existing reflection removal methods for LF input, DMINet achieves a distinct performance boost with the merits of efficiency and robustness, especially for scenes with complex depth variations.
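The center-bridged propagation idea behind the CSI module can be illustrated with a minimal sketch: warp each side-view feature map toward the central view using its (transmission) disparity, fuse the aligned views at the center, then propagate the fused feature back out to every side view. This is an assumption-laden simplification, not the paper's implementation: the function name, integer-pixel shifts via `np.roll`, and mean fusion are all stand-ins for the network's learned warping and interaction.

```python
import numpy as np

def center_side_interaction(sai_feats, disparities):
    """Hedged sketch of center-bridged propagation across SAIs.

    sai_feats:   (V, H, W) array, one feature map per sub-aperture view.
    disparities: (V, 2) integer (dy, dx) shifts aligning each view to the
                 central view (the central view itself has shift (0, 0)).
    """
    V, H, W = sai_feats.shape
    # Step 1: warp every side view toward the central view
    # (nearest-pixel circular shift as a crude stand-in for learned warping).
    warped = np.empty_like(sai_feats)
    for v in range(V):
        dy, dx = disparities[v]
        warped[v] = np.roll(sai_feats[v], shift=(dy, dx), axis=(0, 1))
    # Step 2: fuse the aligned views at the center (simple mean here).
    fused_center = warped.mean(axis=0)
    # Step 3: propagate the fused transmission feature back to each view
    # by inverting its shift, so every SAI receives the shared information.
    out = np.empty_like(sai_feats)
    for v in range(V):
        dy, dx = disparities[v]
        out[v] = np.roll(fused_center, shift=(-dy, -dx), axis=(0, 1))
    return out
```

With zero disparities the round trip reduces to broadcasting the plain per-pixel mean of all views back to each SAI, which makes the center's role as a shared bridge explicit.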
About the Journal
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.