Interpretable deep learning for deconvolutional analysis of neural signals.

Neuron · IF 15.0 · Q1 (Neurosciences) · CAS Region 1 (Medicine) · Pub Date: 2025-04-16 · Epub: 2025-03-12 · DOI: 10.1016/j.neuron.2025.02.006 · pp. 1151-1168.e13
Bahareh Tolooshams, Sara Matias, Hao Wu, Simona Temereanca, Naoshige Uchida, Venkatesh N Murthy, Paul Masset, Demba Ba
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12006907/pdf/
Citations: 0

Abstract

The widespread adoption of deep learning to model neural activity often relies on "black-box" approaches that lack an interpretable connection between neural activity and network parameters. Here, we propose using algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We introduce our method, deconvolutional unrolled neural learning (DUNL), and demonstrate its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. We uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and across striatum during unstructured, naturalistic experiments. Our work leverages advances in interpretable deep learning to provide a mechanistic understanding of neural activity.

Source journal
Neuron (Medicine - Neuroscience)
CiteScore: 24.50
Self-citation rate: 3.10%
Articles per year: 382
Review time: 1 month
Journal description: Established as a highly influential journal in neuroscience, Neuron is widely relied upon in the field. The editors adopt interdisciplinary strategies, integrating biophysical, cellular, developmental, and molecular approaches alongside a systems approach to sensory, motor, and higher-order cognitive functions. Serving as a premier intellectual forum, Neuron holds a prominent position in the entire neuroscience community.
Latest articles in this journal
- Oligodendrocyte-encoded lactate dehydrogenase A couples glycolysis to remyelination via protein lactylation
- Two translocation mechanisms drive neural stem cell dissemination into the human fetal cortex
- GABAergic Gbx1 neurons of the superficial dorsal horn are critical elements of a spinal circuit for stress-induced analgesia
- A population approach to cortical GABAergic interneuron function
- The layer 6b theory of attention