DeepVID v2: self-supervised denoising with decoupled spatiotemporal enhancement for low-photon voltage imaging.

Neurophotonics | IF 4.8 | JCR Q1 (Neurosciences) | CAS Tier 2 (Medicine) | Pub Date: 2024-10-01 | Epub Date: 2024-10-29 | DOI: 10.1117/1.NPh.11.4.045007
Chang Liu, Jiayu Lu, Yicun Wu, Xin Ye, Allison M Ahrens, Jelena Platisa, Vincent A Pieribone, Jerry L Chen, Lei Tian
{"title":"DeepVID v2: self-supervised denoising with decoupled spatiotemporal enhancement for low-photon voltage imaging.","authors":"Chang Liu, Jiayu Lu, Yicun Wu, Xin Ye, Allison M Ahrens, Jelena Platisa, Vincent A Pieribone, Jerry L Chen, Lei Tian","doi":"10.1117/1.NPh.11.4.045007","DOIUrl":null,"url":null,"abstract":"<p><strong>Significance: </strong>Voltage imaging is a powerful tool for studying the dynamics of neuronal activities in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activities. Self-supervised deep learning denoising methods have shown great potential in addressing the challenges in low-photon voltage imaging without the need for ground-truth but usually suffer from the trade-off between spatial and temporal performances.</p><p><strong>Aim: </strong>We present DeepVID v2, a self-supervised denoising framework with decoupled spatial and temporal enhancement capability to significantly augment low-photon voltage imaging.</p><p><strong>Approach: </strong>DeepVID v2 is built on our original DeepVID framework, which performs frame-based denoising by utilizing a sequence of frames around the central frame targeted for denoising to leverage temporal information and ensure consistency. Similar to DeepVID, the network further integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. In addition, DeepVID v2 introduces a new spatial prior extraction branch to capture fine structural details to learn high spatial resolution information. Two variants of DeepVID v2 are introduced to meet specific denoising needs: an online version tailored for real-time inference with a limited number of frames and an offline version designed to leverage the full dataset, achieving optimal temporal and spatial performances.</p><p><strong>Results: </strong>We demonstrate that DeepVID v2 is able to overcome the trade-off between spatial and temporal performances and achieve superior denoising capability in resolving both high-resolution spatial structures and rapid temporal neuronal activities. We further show that DeepVID v2 can generalize to different imaging conditions, including time-series measurements with various signal-to-noise ratios and extreme low-photon conditions.</p><p><strong>Conclusions: </strong>Our results underscore DeepVID v2 as a promising tool for enhancing voltage imaging. This framework has the potential to generalize to other low-photon imaging modalities and greatly facilitate the study of neuronal activities in the brain.</p>","PeriodicalId":54335,"journal":{"name":"Neurophotonics","volume":"11 4","pages":"045007"},"PeriodicalIF":4.8000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519979/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurophotonics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1117/1.NPh.11.4.045007","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/10/29 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

Significance: Voltage imaging is a powerful tool for studying the dynamics of neuronal activities in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activities. Self-supervised deep learning denoising methods have shown great potential in addressing the challenges in low-photon voltage imaging without the need for ground-truth but usually suffer from the trade-off between spatial and temporal performances.
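To make the noise model concrete, the minimal sketch below (not from the paper) simulates a photon-limited measurement: each pixel's recorded value is a Poisson draw around the true fluorescence signal, so the signal-to-noise ratio scales only with the square root of the expected photon count. The frame size and 5-photon budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a normalized, noise-free fluorescence frame (values in [0, 1]).
clean = rng.random((64, 64))

# In the low-photon regime only a handful of photons reach each pixel per frame,
# so the recorded counts follow a Poisson distribution around the true signal.
photons_per_pixel = 5                 # illustrative photon budget, not from the paper
expected = clean * photons_per_pixel
noisy = rng.poisson(expected)         # shot-noise-corrupted measurement

# For Poisson data the SNR is sqrt(expected counts), so dim pixels are dominated by noise.
print(f"mean counts: {expected.mean():.2f}, approximate SNR: {np.sqrt(expected.mean()):.2f}")
```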

Aim: We present DeepVID v2, a self-supervised denoising framework with decoupled spatial and temporal enhancement capability to significantly augment low-photon voltage imaging.

Approach: DeepVID v2 is built on our original DeepVID framework, which performs frame-based denoising by utilizing a sequence of frames around the central frame targeted for denoising to leverage temporal information and ensure consistency. Similar to DeepVID, the network further integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. In addition, DeepVID v2 introduces a new spatial prior extraction branch that captures fine structural details to learn high-spatial-resolution information. Two variants of DeepVID v2 are introduced to meet specific denoising needs: an online version tailored for real-time inference with a limited number of frames and an offline version designed to leverage the full dataset, achieving optimal temporal and spatial performances.
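As a rough illustration of the blind-pixel self-supervision described above, the generic PyTorch sketch below hides a random subset of central-frame pixels from the network input and computes the loss only at those pixels against their original noisy values, so the network must infer them from the surrounding frames and neighboring pixels. The TinyDenoiser architecture, masking scheme, and hyperparameters are illustrative assumptions, not the authors' DeepVID v2 implementation, which additionally includes the spatial prior extraction branch and the online/offline variants.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    # Toy stand-in for a denoising network: maps a stack of 2T+1 noisy
    # frames to a denoised estimate of the central frame.
    def __init__(self, n_frames: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def blind_pixel_step(model, stack, optimizer, blind_frac=0.01):
    # One self-supervised step: hide a random subset of central-frame pixels
    # from the network input and use their original noisy values as targets,
    # so the model must infer them from temporal and spatial context.
    b, n, h, w = stack.shape
    center = n // 2
    target = stack[:, center:center + 1]                         # noisy central frame
    mask = torch.rand(b, 1, h, w, device=stack.device) < blind_frac

    corrupted = stack.clone()
    neighbor = torch.roll(target, shifts=(1, 1), dims=(-2, -1))  # neighbor-pixel fill-in
    corrupted[:, center:center + 1] = torch.where(mask, neighbor, target)

    pred = model(corrupted)
    loss = ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random data standing in for a (batch, frames, H, W) movie.
model = TinyDenoiser(n_frames=7)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fake_stack = torch.rand(4, 7, 64, 64)
print(blind_pixel_step(model, fake_stack, opt))
```

Restricting the loss to the blinded pixels is what allows training without ground truth: for Poisson-like noise, the noisy value at a hidden pixel is an unbiased target for the underlying signal, so the network learns to predict the clean signal from context.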

Results: We demonstrate that DeepVID v2 is able to overcome the trade-off between spatial and temporal performances and achieve superior denoising capability in resolving both high-resolution spatial structures and rapid temporal neuronal activities. We further show that DeepVID v2 can generalize to different imaging conditions, including time-series measurements with various signal-to-noise ratios and extreme low-photon conditions.

Conclusions: Our results underscore DeepVID v2 as a promising tool for enhancing voltage imaging. This framework has the potential to generalize to other low-photon imaging modalities and greatly facilitate the study of neuronal activities in the brain.

Source journal: Neurophotonics (Neuroscience: Neuroscience, miscellaneous)
CiteScore: 7.20
Self-citation rate: 11.30%
Articles per year: 114
Review time: 21 weeks
About the journal: At the interface of optics and neuroscience, Neurophotonics is a peer-reviewed journal that covers advances in optical technology applicable to the study of the brain and their impact on basic and clinical neuroscience applications.