FoveaSPAD: Exploiting Depth Priors for Adaptive and Efficient Single-Photon 3D Imaging

IEEE Transactions on Computational Imaging, vol. 10, pp. 1722–1735. Published: 2024-12-09. DOI: 10.1109/TCI.2024.3503360
Justin Folden; Atul Ingle; Sanjeev J. Koppal
Impact Factor: 4.8 · JCR Q2 (Engineering, Electrical & Electronic) · CAS Region 2 (Computer Science) · Citations: 0

Abstract

Fast, efficient, and accurate depth sensing is important for safety-critical applications such as autonomous vehicles. Direct time-of-flight LiDAR has the potential to fulfill these demands thanks to its ability to provide high-precision depth measurements at long standoff distances. While conventional LiDAR relies on avalanche photodiodes (APDs), single-photon avalanche diodes (SPADs) are an emerging image-sensing technology that offers advantages such as extreme sensitivity and time resolution. In this paper, we remove the key challenges to widespread adoption of SPAD-based LiDARs: their susceptibility to ambient light and the large amount of raw photon data that must be processed to obtain in-pixel depth estimates. We propose new algorithms and sensing policies that improve signal-to-noise ratio (SNR) and increase computing and memory efficiency for SPAD-based LiDARs. During capture, we use external signals to foveate, i.e., guide how the SPAD system estimates scene depths. This foveated approach allows our method to "zoom into" the signal of interest, reducing the amount of raw photon data that needs to be stored and transferred from the SPAD sensor, while also improving resilience to ambient light. We show results both in simulation and with real hardware emulation, with specific implementations achieving a 1548-fold reduction in memory usage, and our algorithms can be applied to newly available and future SPAD arrays.
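The foveated capture described in the abstract can be illustrated with a small sketch (a toy model, not the paper's implementation): a depth prior lets a SPAD pixel allocate fine histogram bins only in a narrow window around the expected photon arrival time, instead of across the full unambiguous range. The bin width, maximum range, window size, and photon counts below are assumed values for illustration only.

```python
import numpy as np

C = 3e8                # speed of light (m/s)
BIN_WIDTH = 100e-12    # 100 ps timing resolution (assumed)
MAX_RANGE = 150.0      # unambiguous range in meters (assumed)

def full_histogram(timestamps):
    """Baseline: one fine bin per 100 ps over the entire range."""
    n_bins = int(round(2 * MAX_RANGE / C / BIN_WIDTH))
    hist, _ = np.histogram(timestamps, bins=n_bins,
                           range=(0.0, n_bins * BIN_WIDTH))
    return hist

def foveated_histogram(timestamps, prior_depth_m, window_m=1.0):
    """Foveated: fine bins only within +/- window_m of the prior depth.
    Photons outside the window are simply discarded by np.histogram."""
    t_center = 2 * prior_depth_m / C          # expected round-trip time
    t_half = 2 * window_m / C                 # half-width of the zoom window
    n_bins = int(round(2 * t_half / BIN_WIDTH))
    hist, _ = np.histogram(timestamps, bins=n_bins,
                           range=(t_center - t_half, t_center + t_half))
    return hist

# Simulate photon timestamps: a laser return at 42 m plus uniform ambient noise.
rng = np.random.default_rng(0)
true_depth = 42.0
signal = 2 * true_depth / C + rng.normal(0.0, BIN_WIDTH, 200)
ambient = rng.uniform(0.0, 2 * MAX_RANGE / C, 2000)
timestamps = np.concatenate([signal, ambient])

full = full_histogram(timestamps)
fov = foveated_histogram(timestamps, prior_depth_m=41.5)
print(f"full bins: {full.size}, foveated bins: {fov.size}, "
      f"memory reduction: {full.size / fov.size:.0f}x")
```

With these toy parameters the foveated histogram needs roughly 75x fewer bins than the full-range one while still capturing the signal peak; the paper's reported 1548-fold reduction comes from its actual hardware-oriented implementations, not from this sketch.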
Source journal: IEEE Transactions on Computational Imaging (Mathematics: Computational Mathematics)
CiteScore: 8.20 · Self-citation rate: 7.40% · Articles per year: 59
Journal description: The IEEE Transactions on Computational Imaging publishes articles where computation plays an integral role in the image formation process. Papers cover all areas of computational imaging, ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest include advanced algorithms and mathematical techniques; model-based data inversion; methods for image and signal recovery from sparse and incomplete data; techniques for non-traditional sensing of image data; methods for dynamic information acquisition and extraction from imaging sensors; software and hardware for efficient computation in imaging systems; and highly novel imaging system design.