Expressive architectures enhance interpretability of dynamics-based neural population models

Andrew R. Sedler, Christopher Versteeg, Chethan Pandarinath
Journal: Neurons, Behavior, Data Analysis, and Theory
DOI: 10.51628/001c.73987
Published: 2023-03-28

Abstract

Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed-point structure. Ablations reveal that this is mainly because NODEs (1) allow use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-dimensional dynamics where RNN cells fail. Additionally, the fact that the NODE predicts derivatives imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely used RNN-based dynamics may motivate substitution of alternative architectures, such as NODEs, that enable learning of accurate dynamics in low-dimensional latent spaces.
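The architectural distinction the abstract describes can be sketched in a few lines. The following NumPy illustration is not the authors' implementation; the layer widths, step size, and Euler integrator are illustrative assumptions. It shows how a NODE parameterizes the *derivative* of the latent state with an MLP whose hidden width is decoupled from the latent dimensionality, whereas a vanilla RNN cell predicts the *next state* directly with weights tied to that dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, hidden = 3, 64  # low-dimensional latents, high-capacity vector field


def mlp_vector_field(z, W1, b1, W2, b2):
    """MLP f(z) approximating dz/dt; hidden width is independent of latent_dim."""
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2


def node_step(z, params, dt=0.1):
    """NODE-style update: integrate the derivative (forward Euler for brevity).
    The next state is z plus a small correction, an autoregressive prior."""
    return z + dt * mlp_vector_field(z, *params)


def rnn_step(z, W, b):
    """Vanilla-RNN-style update: predict the next state directly.
    W is (latent_dim, latent_dim), so capacity is tied to dimensionality."""
    return np.tanh(z @ W + b)


node_params = (
    rng.normal(0, 0.1, (latent_dim, hidden)), np.zeros(hidden),
    rng.normal(0, 0.1, (hidden, latent_dim)), np.zeros(latent_dim),
)

z0 = rng.normal(size=latent_dim)
z1 = node_step(z0, node_params)
print(z1.shape)  # (3,)
```

With `dt -> 0` the NODE update reduces to the identity map, so successive latent states stay close; an RNN cell has no analogous built-in continuity, which is one way to read the "autoregressive prior" claim above.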