{"title":"Non-Guided Depth Completion with Adversarial Networks","authors":"Yuki Tsuji, Hiroyuki Chishiro, S. Kato","doi":"10.1109/ITSC.2018.8569389","DOIUrl":null,"url":null,"abstract":"Depth completion, which interpolates dense depth maps based on sparse inputs acquired from 3D LiDAR sensors, enhances perception capabilities of autonomous driving using object detection and 3D mapping. Recent studies on depth completion have leveraged deep learning approaches applying traditional convolutional neural networks to prediction of invisible information in sparse and irregular inputs. Due to the lack of local and global structures such as object boundary cues, however, the predicted information results in unstructured and noisy depth maps. This paper presents a supervised depth completion method using an adversarial network based only on sparse inputs. In the presented method, a fully convolutional depth completion network, along with the adversarial network, is designed to find and correct inconsistencies between ground truth distributions and the resulting depth maps interpolated by the depth completion network. This leads to more realistic and structured depth images without compromising runtime performance of inference. Experimental results based on the KITTI depth completion benchmark show that the presented adversarial network method achieves about 60% improvements for the accuracy of inference and increases the rate of convergence during training.","PeriodicalId":395239,"journal":{"name":"2018 21st International Conference on Intelligent Transportation Systems (ITSC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 21st International Conference on Intelligent Transportation Systems (ITSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITSC.2018.8569389","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Depth completion, which interpolates dense depth maps from sparse inputs acquired by 3D LiDAR sensors, enhances the perception capabilities of autonomous driving systems for tasks such as object detection and 3D mapping. Recent studies on depth completion have leveraged deep learning, applying conventional convolutional neural networks to predict the missing information in sparse and irregular inputs. However, because such inputs lack local and global structure such as object boundary cues, the predictions often result in unstructured and noisy depth maps. This paper presents a supervised depth completion method that uses an adversarial network and relies only on sparse inputs. In the presented method, a fully convolutional depth completion network is trained together with the adversarial network, which detects and corrects inconsistencies between the ground-truth distribution and the depth maps interpolated by the completion network. This yields more realistic and structured depth images without compromising inference runtime. Experimental results on the KITTI depth completion benchmark show that the presented adversarial network method achieves roughly a 60% improvement in inference accuracy and a faster rate of convergence during training.
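To make the training setup described in the abstract concrete, below is a minimal sketch (not the authors' code) of an adversarial depth completion loop in PyTorch: a fully convolutional generator maps a sparse depth map to a dense one, a discriminator judges whether a dense map looks like ground truth, and the generator is optimized with a supervised depth loss on valid LiDAR pixels plus an adversarial term. The layer choices, class names, and the `adv_weight` coefficient are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of adversarial depth completion training (PyTorch).
# Architecture depths/widths and loss weights are assumptions for illustration.
import torch
import torch.nn as nn

class CompletionNet(nn.Module):
    """Fully convolutional generator: sparse depth map -> dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sparse_depth):
        return self.net(sparse_depth)

class Discriminator(nn.Module):
    """Patch-wise classifier: does a dense depth map look like ground truth?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # real/fake logits per patch
        )

    def forward(self, depth):
        return self.net(depth)

def train_step(gen, disc, opt_g, opt_d, sparse, gt, valid_mask, adv_weight=0.01):
    """One adversarial training step; valid_mask marks pixels with LiDAR ground truth."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator: separate ground-truth depth from generated dense depth.
    fake = gen(sparse).detach()
    d_real, d_fake = disc(gt), disc(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: supervised depth error on valid pixels + adversarial term that
    # pushes the interpolated map toward the ground-truth distribution.
    pred = gen(sparse)
    loss_depth = ((pred - gt)[valid_mask] ** 2).mean()
    d_pred = disc(pred)
    loss_adv = bce(d_pred, torch.ones_like(d_pred))
    loss_g = loss_depth + adv_weight * loss_adv
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

At inference time only the completion network is run, which is consistent with the abstract's claim that the adversarial component does not add runtime cost.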