Modeling Defocus-Disparity in Dual-Pixel Sensors

Abhijith Punnappurath, Abdullah Abuolaim, M. Afifi, M. S. Brown
{"title":"双像素传感器离焦视差建模","authors":"Abhijith Punnappurath, Abdullah Abuolaim, M. Afifi, M. S. Brown","doi":"10.1109/ICCP48838.2020.9105278","DOIUrl":null,"url":null,"abstract":"Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity in the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo, but instead is attributed to defocus blur that occurs in out-of-focus regions in the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry property to formulate an unsupervised loss function that does not require ground truth depth. We demonstrate our method's effectiveness on both DSLR and smartphone DP data.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":"{\"title\":\"Modeling Defocus-Disparity in Dual-Pixel Sensors\",\"authors\":\"Abhijith Punnappurath, Abdullah Abuolaim, M. Afifi, M. S. Brown\",\"doi\":\"10.1109/ICCP48838.2020.9105278\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity in the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo, but instead is attributed to defocus blur that occurs in out-of-focus regions in the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry property to formulate an unsupervised loss function that does not require ground truth depth. 
We demonstrate our method's effectiveness on both DSLR and smartphone DP data.\",\"PeriodicalId\":406823,\"journal\":{\"name\":\"2020 IEEE International Conference on Computational Photography (ICCP)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"26\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Computational Photography (ICCP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCP48838.2020.9105278\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCP48838.2020.9105278","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 26

Abstract

Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity in the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo, but instead is attributed to defocus blur that occurs in out-of-focus regions in the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry property to formulate an unsupervised loss function that does not require ground truth depth. We demonstrate our method's effectiveness on both DSLR and smartphone DP data.
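The abstract's central idea, that the left and right DP blur kernels at each pixel are mirror images of each other, lends itself to a short illustration. The sketch below is not the authors' implementation: the half-disk kernel shape, the brute-force radius search, and the function names (half_disk_kernel, symmetry_loss, estimate_defocus) are assumptions made for demonstration. It shows why mirror-symmetric kernels permit an unsupervised consistency check: if the left view is the sharp image blurred by a kernel K and the right view is the sharp image blurred by flip(K), then, because convolution commutes, blurring the left view with flip(K) and the right view with K must produce the same image at the true defocus amount, with no ground-truth depth needed.

    # A minimal sketch of the mirror-symmetry idea, assuming half-disk DP kernels.
    # Not the authors' code: kernel shape and search strategy are illustrative.
    import numpy as np
    from scipy.signal import convolve2d

    def half_disk_kernel(radius, side="left"):
        """Half of a circular disk PSF; the two DP views see mirrored halves."""
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        disk = (x ** 2 + y ** 2) <= radius ** 2
        half = disk & ((x <= 0) if side == "left" else (x >= 0))
        k = half.astype(np.float64)
        return k / k.sum()

    def symmetry_loss(left_view, right_view, radius):
        """Blur each view with the *other* view's kernel and compare.

        If left = sharp (*) k_l and right = sharp (*) k_r with k_r = flip(k_l),
        then left (*) k_r == right (*) k_l, so the residual vanishes at the
        true defocus radius (for locally constant depth)."""
        k_left = half_disk_kernel(radius, "left")
        k_right = k_left[:, ::-1]  # mirror symmetry of the DP kernel pair
        a = convolve2d(left_view, k_right, mode="valid")
        b = convolve2d(right_view, k_left, mode="valid")
        return float(np.mean((a - b) ** 2))

    def estimate_defocus(left_view, right_view, max_radius=8):
        """Exhaustive search over candidate radii; smallest residual wins."""
        losses = [symmetry_loss(left_view, right_view, r)
                  for r in range(1, max_radius + 1)]
        return 1 + int(np.argmin(losses))

In the paper this symmetry term serves as a loss for estimating per-pixel defocus; the exhaustive search over a patch here is only to keep the example self-contained.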