{"title":"Depth refinement on sparse-depth images using visual perception cues","authors":"Muhammad Umar Karim Khan, Asim Khan, C. Kyung","doi":"10.1109/APCCAS.2016.7803997","DOIUrl":null,"url":null,"abstract":"Numerous depth extraction schemes cannot extract depth on textureless regions, thus generating sparse depth maps. In this paper, we propose using perception cues to improve the sparse depth map. We consider the local neighborhood as well the global surface properties of objects. We use this information to complement depth extraction schemes. The method is not scene or class specific. With quantitative evaluation, the proposed method is shown to perform better compared to previous depth refinement methods. The error in terms of standard deviation of depth has been reduced down by 60%. The computational overhead of the proposed method is also very low, making it a suitable candidate for depth refinement.","PeriodicalId":6495,"journal":{"name":"2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APCCAS.2016.7803997","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Numerous depth extraction schemes cannot estimate depth in textureless regions and therefore produce sparse depth maps. In this paper, we propose using perception cues to improve such sparse depth maps. We consider the local neighborhood as well as the global surface properties of objects, and use this information to complement existing depth extraction schemes. The method is not scene- or class-specific. Quantitative evaluation shows that the proposed method outperforms previous depth refinement methods, reducing the error in terms of the standard deviation of depth by 60%. The computational overhead of the proposed method is also very low, making it a suitable candidate for depth refinement.
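The abstract does not specify the refinement algorithm itself. Purely as an illustration of the local-neighborhood cue it mentions, the sketch below fills holes in a sparse depth map with a joint-bilateral weighted average of nearby measured depths, guided by image intensity. The function name `refine_sparse_depth`, its parameters, and the filtering choice are all assumptions for demonstration, not the paper's method.

```python
import numpy as np

def refine_sparse_depth(image, depth, valid, window=5, sigma_i=10.0, sigma_s=2.0):
    """Fill unmeasured pixels of a sparse depth map with a joint-bilateral
    weighted average of valid neighbors, guided by image intensity.

    image : (H, W) float array, grayscale guidance image
    depth : (H, W) float array, sparse depth (meaningful only where valid)
    valid : (H, W) bool array, True where a depth measurement exists
    """
    H, W = depth.shape
    r = window // 2
    refined = depth.copy()
    ys, xs = np.where(~valid)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - r), min(H, y + r + 1)
        x0, x1 = max(0, x - r), min(W, x + r + 1)
        v = valid[y0:y1, x0:x1]
        if not v.any():
            continue  # no measured depth nearby; leave the hole as-is
        d = depth[y0:y1, x0:x1][v]
        gi = image[y0:y1, x0:x1][v]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dy, dx = (yy - y)[v], (xx - x)[v]
        # Local-neighborhood cue: nearby pixels with similar intensity are
        # assumed to lie on the same surface, so they get larger weights.
        w_i = np.exp(-((gi - image[y, x]) ** 2) / (2.0 * sigma_i ** 2))
        w_s = np.exp(-(dy ** 2 + dx ** 2) / (2.0 * sigma_s ** 2))
        w = w_i * w_s
        refined[y, x] = np.sum(w * d) / np.sum(w)
    return refined

# Usage on synthetic data: a depth ramp observed at only 30% of the pixels.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, (64, 64))
gt = np.tile(np.linspace(1.0, 5.0, 64), (64, 1))
mask = rng.uniform(size=gt.shape) < 0.3
dense = refine_sparse_depth(img, np.where(mask, gt, 0.0), mask)
```

A global surface cue, as the abstract also mentions, would go beyond this purely local fill, e.g. by fitting surface models over whole objects; that step is not sketched here.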