{"title":"Deep Learning-Driven Depth from Defocus via Active Multispectral Quasi-Random Projections with Complex Subpatterns","authors":"A. Ma, A. Wong, David A Clausi","doi":"10.1109/CRV.2018.00048","DOIUrl":null,"url":null,"abstract":"A promising approach to depth from defocus (DfD) involves actively projecting a quasi-random point pattern onto an object and assessing the blurriness of the point projection as captured by a camera to recover the depth of the scene. Recently, it was found that the depth inference can be made not only faster but also more accurate by leveraging deep learning approaches to computationally model and predict depth based on the quasi-random point projections as captured by a camera. Motivated by the fact that deep learning techniques can automatically learn useful features from the captured image of the projection, in this paper we present an extension of this quasi-random projection approach to DfD by introducing the use of a new quasi-random projection pattern consisting of complex subpatterns instead of points. The design and choice of the subpattern used in the quasi-random projection is a key factor in the ability to achieve improved depth recovery with high fidelity. Experimental results using quasi-random projection patterns composed of a variety of non-conventional subpattern designs on complex surfaces showed that the use of complex subpatterns in the quasi-random projection pattern can significantly improve depth reconstruction quality compared to a point pattern.","PeriodicalId":281779,"journal":{"name":"2018 15th Conference on Computer and Robot Vision (CRV)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 15th Conference on Computer and Robot Vision (CRV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV.2018.00048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
A promising approach to depth from defocus (DfD) involves actively projecting a quasi-random point pattern onto an object and assessing the blurriness of the projected points, as captured by a camera, to recover the depth of the scene. Recently, it was found that depth inference can be made both faster and more accurate by leveraging deep learning approaches to computationally model and predict depth from the captured quasi-random point projections. Motivated by the fact that deep learning techniques can automatically learn useful features from the captured image of the projection, in this paper we extend this quasi-random projection approach to DfD by introducing a new quasi-random projection pattern composed of complex subpatterns rather than points. The design and choice of the subpattern used in the quasi-random projection is a key factor in achieving improved, high-fidelity depth recovery. Experimental results on complex surfaces, using quasi-random projection patterns composed of a variety of non-conventional subpattern designs, show that complex subpatterns in the quasi-random projection pattern can significantly improve depth reconstruction quality compared to a point pattern.
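To make the idea of a quasi-random projection pattern built from subpatterns concrete, below is a minimal Python sketch. It is not the paper's implementation: the Halton-sequence site placement, the `cross_subpattern` stamp, and all sizes are illustrative assumptions standing in for the quasi-random site selection and the complex subpattern designs described above.

```python
import numpy as np

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`, in [0, 1)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def quasi_random_sites(n, width, height):
    """Low-discrepancy (quasi-random) 2-D sites from a Halton sequence (bases 2 and 3)."""
    return [(int(halton(i, 2) * width), int(halton(i, 3) * height))
            for i in range(1, n + 1)]

def cross_subpattern(size=7):
    """Hypothetical 'complex' subpattern: a cross-shaped stamp used in place of a single point."""
    stamp = np.zeros((size, size), dtype=np.float32)
    stamp[size // 2, :] = 1.0
    stamp[:, size // 2] = 1.0
    return stamp

def render_pattern(width=640, height=480, n_sites=300):
    """Stamp the subpattern at each quasi-random site to form the projection pattern."""
    canvas = np.zeros((height, width), dtype=np.float32)
    stamp = cross_subpattern()
    h, w = stamp.shape
    for x, y in quasi_random_sites(n_sites, width - w, height - h):
        canvas[y:y + h, x:x + w] = np.maximum(canvas[y:y + h, x:x + w], stamp)
    return canvas

if __name__ == "__main__":
    pattern = render_pattern()
    print(pattern.shape, float(pattern.sum()))  # pattern ready to be sent to the projector
```

In the deep-learning stage summarized in the abstract, the camera's defocus-blurred capture of such a projected pattern would then be fed (for example, patch-wise around each subpattern) to a trained network that predicts depth; that stage is omitted from this sketch.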