Interpretable deep learning for deconvolutional analysis of neural signals

Bahareh Tolooshams, Sara Matias, Hao Wu, Simona Temereanca, Naoshige Uchida, Venkatesh N Murthy, Paul Masset, Demba Ba

Neuron, published 2025-03-12. DOI: 10.1016/j.neuron.2025.02.006
Citations: 0
Abstract
The widespread adoption of deep learning to model neural activity often relies on "black-box" approaches that lack an interpretable connection between neural activity and network parameters. Here, we propose using algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We introduce our method, deconvolutional unrolled neural learning (DUNL), and demonstrate its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. We uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and across the striatum during unstructured, naturalistic experiments. Our work leverages advances in interpretable deep learning to provide a mechanistic understanding of neural activity.
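The method's core ingredient, algorithm unrolling, maps the iterations of a sparse-coding optimizer onto the layers of a neural network, so that the learned weights keep a direct interpretation under the generative model. The sketch below is a minimal illustration in plain NumPy of this idea for 1D sparse deconvolution; it is not the authors' DUNL implementation (whose architecture and training are described in the paper), and all function names, the kernel shape, and parameter values are illustrative assumptions. Each "layer" is one ISTA iteration; in a trained unrolled network, the kernel, step size, and threshold would be learnable parameters optimized by backpropagation.

    import numpy as np

    def soft_threshold(z, thresh):
        # Proximal operator of the L1 norm; this is what makes the codes sparse.
        return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

    def unrolled_ista_deconv(y, kernel, n_layers=100, lam=0.05, step=None):
        # Generative model: y ≈ conv(x, kernel) + noise, with x sparse.
        # Each loop iteration corresponds to one layer of the unrolled network.
        if step is None:
            # Conservative step below 1/L; L (the Lipschitz constant of the
            # data-fidelity gradient) is bounded by the squared L1 norm of
            # the kernel, a loose bound on the operator norm of HᵀH.
            step = 1.0 / (np.sum(np.abs(kernel)) ** 2 + 1e-8)
        x = np.zeros(len(y) - len(kernel) + 1)  # code length for 'full' conv
        for _ in range(n_layers):
            residual = y - np.convolve(x, kernel, mode="full")   # y - Hx
            grad = np.correlate(residual, kernel, mode="valid")  # Hᵀ(y - Hx)
            x = soft_threshold(x + step * grad, lam * step)      # ISTA step
        return x

    # Hypothetical demo: recover sparse event times from a noisy trace.
    rng = np.random.default_rng(0)
    kernel = np.exp(-np.arange(30) / 5.0)  # assumed transient-like kernel
    x_true = np.zeros(200)
    x_true[[40, 90, 150]] = [1.0, 0.7, 1.2]
    y = np.convolve(x_true, kernel, mode="full") + 0.02 * rng.standard_normal(229)
    x_hat = unrolled_ista_deconv(y, kernel)

With a fixed, known kernel this reduces to classical sparse deconvolution; the unrolling perspective matters once the kernels themselves are learned from data, because the trained weights remain readable as event waveforms rather than opaque network parameters.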
Journal Introduction:
Neuron is a highly influential and widely cited journal in neuroscience. Its editors take an interdisciplinary approach, integrating biophysical, cellular, developmental, and molecular perspectives with a systems-level view of sensory, motor, and higher-order cognitive functions, making the journal a premier intellectual forum for the neuroscience community.