{"title":"NLCMR: Indoor Depth Recovery Model With Non-Local Cross-Modality Prior","authors":"Junkang Zhang;Zhengkai Qi;Faming Fang;Tingting Wang;Guixu Zhang","doi":"10.1109/TCI.2025.3545358","DOIUrl":null,"url":null,"abstract":"Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posterior (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information. Additionally, the deep image prior captures local characteristics between the depth and RGB domains effectively. To counter the challenge of high heterogeneity leading to degenerate operators, we have integrated an implicit data consistency term into our model. Our model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"265-276"},"PeriodicalIF":4.2000,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Imaging","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10906528/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posteriori (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information, while the deep image prior effectively captures local characteristics shared between the depth and RGB domains. To counter the degenerate operators that arise from the high heterogeneity between the two modalities, we integrate an implicit data consistency term into the model. The model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.
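To make the optimization-unrolling idea concrete, the following is a minimal sketch of how a MAP depth-recovery objective of this kind can be split with half-quadratic splitting (HQS). The notation is assumed for illustration and is not the paper's exact model: $d$ is the dense depth to recover, $s$ the sparse depth input, $g$ the RGB guide, $M$ the mask of observed sparse samples, $\mathcal{R}_{\mathrm{nl}}$ a non-local cross-modality regularizer, $\mathcal{P}_{\theta}$ a deep prior, $z$ an auxiliary variable, and $\lambda,\mu,\beta$ weights.

\[
\hat{d} \;=\; \arg\min_{d}\; \tfrac{1}{2}\,\|M \odot (d - s)\|_2^2
\;+\; \lambda\,\mathcal{R}_{\mathrm{nl}}(d, g)
\;+\; \mu\,\mathcal{P}_{\theta}(d, g).
\]

HQS introduces $z \approx d$ and alternates two easier subproblems:

\[
d^{k+1} = \arg\min_{d}\; \tfrac{1}{2}\,\|M \odot (d - s)\|_2^2 + \tfrac{\beta}{2}\,\|d - z^{k}\|_2^2,
\qquad
z^{k+1} = \arg\min_{z}\; \lambda\,\mathcal{R}_{\mathrm{nl}}(z, g) + \mu\,\mathcal{P}_{\theta}(z, g) + \tfrac{\beta}{2}\,\|d^{k+1} - z\|_2^2.
\]

In an unrolled realization, the $d$-update has a closed form (a masked weighted average of $s$ and $z^{k}$), while the $z$-update is typically implemented by a learned module; how the paper's implicit data consistency term enters these steps is detailed in the full text rather than the abstract.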
About the Journal:
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.