CDF-DSR: Learning continuous depth field for self-supervised RGB-guided depth map super resolution

Siyuan Zhang, Jingxian Dong, Yan Ma, Hongsen Cai, Meijie Wang, Yan Li, Twaha B. Kabika, Xin Li, Wenguang Hou

Information Fusion (Journal Article, published 2024-12-19). DOI: https://doi.org/10.1016/j.inffus.2024.102884
RGB-guided depth map super-resolution (GDSR) is a pivotal multimodal fusion task aimed at enhancing low-resolution (LR) depth maps using corresponding high-resolution (HR) RGB images as guidance. Existing approaches largely rely on supervised deep learning techniques, which are often hampered by limited generalization capabilities due to the challenges in collecting varied RGB-D datasets. To address this, we introduce a novel self-supervised paradigm that achieves depth map super-resolution utilizing just a single RGB-D sample, without any additional training data. Considering that scene depths are typically continuous, the proposed method conceptualizes the GDSR task as reconstructing a continuous depth field for each RGB-D sample. The depth field is represented as a neural network-based mapping from image coordinates to depth values, and optimized by leveraging the available HR RGB image and the LR depth map. Meanwhile, a novel cross-modal geometric consistency loss is proposed to enhance the detail accuracy of the depth field. Experimental results across multiple datasets demonstrate that the proposed method offers superior generalization compared to state-of-the-art GDSR methods and shows remarkable performance in practical applications. The test code is available at: https://github.com/zsy950116/CDF-DSR.
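The abstract describes the depth field as a neural mapping from image coordinates to depth values, fitted to a single RGB-D sample with no external training data. As a rough illustration of that idea, the sketch below fits a coordinate MLP so that its high-resolution prediction, once downsampled, reproduces the observed LR depth map. All names, the network size, the average-pooling degradation model, and the plain L1 loss are illustrative assumptions, not the authors' implementation; the RGB guidance and the paper's cross-modal geometric consistency loss are not reproduced here.

```python
# Minimal sketch of a per-sample continuous depth field (assumptions labeled above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthField(nn.Module):
    """Coordinate MLP: (x, y) in [-1, 1]^2 -> scalar depth."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        mods, in_dim = [], 2
        for _ in range(layers):
            mods += [nn.Linear(in_dim, hidden), nn.ReLU(inplace=True)]
            in_dim = hidden
        mods.append(nn.Linear(hidden, 1))
        self.net = nn.Sequential(*mods)

    def forward(self, coords):           # coords: (N, 2)
        return self.net(coords)          # depths: (N, 1)

def pixel_grid(h, w, device):
    """Normalized pixel coordinates for an h x w image, shape (h*w, 2)."""
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)

def fit_depth_field(lr_depth, hr_size, steps=2000, lr=1e-4, device="cpu"):
    """Fit a depth field to one LR depth map; lr_depth: (1, 1, h, w)."""
    H, W = hr_size
    lr_depth = lr_depth.to(device)
    field = DepthField().to(device)
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    coords = pixel_grid(H, W, device)
    for _ in range(steps):
        hr_pred = field(coords).reshape(1, 1, H, W)
        # Self-supervision: the HR prediction, downsampled, must agree with
        # the observed LR depth map (area interpolation stands in for the
        # true degradation model, which is an assumption here).
        lr_pred = F.interpolate(hr_pred, size=lr_depth.shape[-2:], mode="area")
        loss = F.l1_loss(lr_pred, lr_depth)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return field(coords).reshape(H, W)

# Usage (hypothetical shapes): hr = fit_depth_field(lr_depth, hr_size=(480, 640))
```

Fitting a small network per sample is what makes the paradigm training-data-free: the only supervision signal is the sample's own LR depth map, so generalization reduces to per-sample optimization rather than dataset coverage.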
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.