Modeling Defocus-Disparity in Dual-Pixel Sensors
Abhijith Punnappurath, Abdullah Abuolaim, M. Afifi, M. S. Brown
2020 IEEE International Conference on Computational Photography (ICCP), April 2020
DOI: 10.1109/ICCP48838.2020.9105278 (https://doi.org/10.1109/ICCP48838.2020.9105278)
Cited by: 26
Abstract
Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity in the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo, but instead is attributed to defocus blur that occurs in out-of-focus regions in the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry property to formulate an unsupervised loss function that does not require ground truth depth. We demonstrate our method's effectiveness on both DSLR and smartphone DP data.
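To make the symmetry-based unsupervised loss concrete, the sketch below illustrates the identity it rests on: if, in a locally constant-depth region, the left and right DP views are the latent sharp image convolved with a blur kernel and its mirror image, then convolving the left view with the mirrored kernel and the right view with the original kernel must give the same result, because convolution commutes. This consistency can be checked without the sharp image or ground-truth depth. The one-sided box kernel here is a hypothetical stand-in; the paper proposes its own parametric point spread function.

```python
# Minimal sketch of the DP symmetry identity, assuming a toy
# one-sided box kernel (not the paper's actual parametric PSF).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))   # stand-in for the latent sharp image

# Hypothetical half-aperture kernel: all mass on one side of center.
h = np.zeros((1, 7))
h[0, 4:] = 1.0
h /= h.sum()
h_flip = h[:, ::-1]            # right-view kernel is the horizontal mirror

# DP image formation in a locally constant-depth region.
left = convolve2d(sharp, h, mode="full")
right = convolve2d(sharp, h_flip, mode="full")

# Symmetry identity: left * flip(h) == right * h, since convolution commutes.
lhs = convolve2d(left, h_flip, mode="full")
rhs = convolve2d(right, h, mode="full")
print(np.max(np.abs(lhs - rhs)))  # ~1e-16, i.e. equal up to float error
```

In a depth-estimation setting, one would presumably evaluate this mismatch per pixel over a bank of candidate kernels of varying defocus size and sign, and take the best-matching kernel as the local defocus-disparity estimate; the sketch above only verifies the underlying identity on a toy image.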