
Latest articles from Neurons, behavior, data analysis and theory

Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking Neural Network with Random and Local Connectivity
Pub Date : 2023-06-26 DOI: 10.51628/001c.82127
Lysea Haggie, Thor Besier, Angus JC McMorland
Computational models of cortical activity can provide insight into the mechanisms of higher-order processing in the human brain, including planning, perception and the control of movement. Activity in the cortex is ongoing even in the absence of sensory input or discernible movements, and is thought to be linked to the topology of the underlying cortical circuitry. However, the connectivity and its functional role in the generation of spatio-temporal firing patterns and cortical computations are still largely unknown. Movement of the body is a key function of the brain, with the motor cortex the main cortical area implicated in the generation of movement. We built a spiking neural network model of the motor cortex which incorporates a laminar structure and circuitry based on a previous cortical model by Potjans & Diesmann (2014). A local connectivity scheme was implemented to introduce more physiological plausibility into the cortex model, and its effect on the rates, distributions and irregularity of neuronal firing was compared to the original random connectivity method and to experimental data. Local connectivity broadened the distribution of firing rates and increased the overall rate of neuronal firing. It also made the irregularity of firing more similar to that observed in experimental measurements, and reduced the variability in power spectrum measures. The larger variability in the dynamical behaviour of the local connectivity model suggests that the topological structure of the connections in a neuronal population plays a significant role in firing patterns during spontaneous activity. This model takes steps towards replicating the macroscopic network of the motor cortex, reproducing realistic firing in order to shed light on information coding in the cortex. Large-scale computational models such as this one can capture how structure and function relate to observable neuronal firing behaviour, and can probe the underlying computational mechanisms of the brain.
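The contrast between the two wiring schemes described in this abstract can be illustrated with a minimal sketch (this is not the authors' actual model; the population size, connection probability, and Gaussian decay constant are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400           # neurons (illustrative size, far smaller than the real model)
p_connect = 0.1   # overall connection probability (assumed value)
sigma = 0.15      # spatial decay constant of local connectivity (assumed value)

# Place neurons on a unit square, as in spatially organized cortex models.
pos = rng.uniform(0.0, 1.0, size=(n, 2))

# Random scheme: every pair connects with the same probability.
random_mask = rng.uniform(size=(n, n)) < p_connect

# Local scheme: connection probability decays with distance
# (a Gaussian profile is a common choice; parameters are assumptions).
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p_local = np.exp(-(d ** 2) / (2 * sigma ** 2))
# Rescale so both schemes have roughly the same expected connection count.
p_local *= p_connect / p_local.mean()
local_mask = rng.uniform(size=(n, n)) < np.clip(p_local, 0.0, 1.0)

print("random connections:", random_mask.sum())
print("local connections:", local_mask.sum())
```

Both schemes yield about the same number of synapses, but the local scheme concentrates them among nearby neurons, which is what alters the firing statistics in the study.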
Citations: 0
Expressive architectures enhance interpretability of dynamics-based neural population models
Pub Date : 2023-03-28 DOI: 10.51628/001c.73987
Andrew R. Sedler, Christopher Versteeg, Chethan Pandarinath
Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed-point structure. Ablations reveal that this is mainly because NODEs (1) allow the use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-dimensional dynamics where RNN cells fail. Additionally, the fact that the NODE predicts derivatives imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely used RNN-based dynamics may motivate substituting alternative architectures, such as NODEs, that enable learning of accurate dynamics in low-dimensional latent spaces.
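The core architectural difference this abstract hinges on (predicting the derivative vs. predicting the next state) can be sketched with a hand-wired linear vector field standing in for the learned MLP; the 2-D rotation, Euler integrator, and step size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# True latent dynamics: a 2-D rotation, a stand-in for simple low-D dynamics
# (the paper's NODE uses a learned MLP vector field; this one is hand-wired).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def node_vector_field(x):
    """NODE-style model: predicts the derivative dx/dt at the current state."""
    return A @ x

def rnn_step(x, dt=0.01):
    """RNN-style model: predicts the next state directly, so the step size
    is baked into the learned map rather than handled by the integrator."""
    return x + dt * (A @ x)

def integrate_node(x0, dt=0.01, steps=628):
    # Euler integration of the predicted derivative; predicting derivatives
    # rather than states imposes an autoregressive prior on the trajectory.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * node_vector_field(x)
    return x

x0 = np.array([1.0, 0.0])
x_final = integrate_node(x0)   # 628 * 0.01 is roughly one period (2*pi)
print(x_final)
```

After roughly one full period the integrated trajectory returns close to its starting point, illustrating how a low-dimensional vector field suffices to capture the dynamics.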
Citations: 0
Probabilistic representations as building blocks for higher-level vision
Pub Date : 2023-01-31 DOI: 10.51628/001c.55730
Andrey Chetverikov, Arni Kristjansson
Current theories of perception suggest that the brain represents features of the world as probability distributions, but can such uncertain foundations provide the basis for everyday vision? Perceiving objects and scenes requires knowing not just how features (e.g., colors) are distributed but also where they are and which other features they are combined with. Using a Bayesian computational model, we recovered the probabilistic representations used by human observers to search for odd stimuli among distractors. Importantly, we found that the brain integrates information across feature dimensions and spatial locations, leading to more precise representations than when information integration is not possible. We also uncovered representational asymmetries and biases, showing their spatial organization and explaining how this structure argues against “summary statistics” accounts of visual representations. Our results confirm that probabilistically encoded visual features are bound with other features and to particular locations, providing a powerful demonstration of how probabilistic representations can be a foundation for higher-level vision.
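The precision gain from integrating information across cues follows standard Gaussian fusion: combining two independent estimates always yields lower variance than either estimate alone. A toy sketch (the means and variances below are made-up numbers, not the paper's fitted values):

```python
import numpy as np

def combine_gaussians(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian estimates."""
    prec1, prec2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (prec1 + prec2)            # combined variance is always smaller
    mu = var * (prec1 * mu1 + prec2 * mu2)  # mean pulled toward the sharper cue
    return mu, var

# Hypothetical estimates of the same feature value (arbitrary units):
# one from the feature dimension itself, one implied by spatial context.
mu_feat, var_feat = 0.60, 0.04
mu_space, var_space = 0.50, 0.09

mu_c, var_c = combine_gaussians(mu_feat, var_feat, mu_space, var_space)
print(mu_c, var_c)
```

The combined variance (about 0.028) is below both input variances, mirroring the finding that representations sharpen when integration across dimensions and locations is possible.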
Citations: 0
Deep Direct Discriminative Decoders for High-dimensional Time-series Data Analysis
Pub Date : 2022-05-22 DOI: 10.51628/001c.85131
Mohammadreza Rezaei, Milos Popovic, M. Lankarany, A. Yousefi
State-space models (SSMs) are widely used in the analysis of time-series data. SSMs rely on an explicit definition of the state and observation processes. Characterizing these processes is not always easy, and it becomes a modeling challenge when the dimension of the observed data grows or the observed data distribution deviates from the normal distribution. Here, we propose a new formulation of the SSM for high-dimensional observation processes with heavy-tailed distributions. We call this solution the deep direct discriminative decoder (D4). The D4 brings the expressiveness and scalability of deep neural networks to the SSM formulation, letting us build a novel solution that efficiently estimates the underlying state processes from a high-dimensional observation signal. We demonstrate D4 solutions on simulated and real data, including Lorenz attractors, Langevin dynamics, random-walk dynamics, and rat hippocampus spiking neural data, and show that the D4's performance surpasses that of traditional SSMs and RNNs. The D4 can be applied to a broader class of time-series data where the connection between the high-dimensional observations and the underlying latent process is hard to characterize.
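The modeling regime that D4 targets, high-dimensional observations with heavy-tailed noise, can be simulated in a few lines. The random-walk state, the loading vector, and the naive per-timestep least-squares decode below are illustrative stand-ins, not the D4 architecture itself:

```python
import numpy as np

rng = np.random.default_rng(1)
T, obs_dim = 500, 20          # time steps and observation dimensionality

# State process: a 1-D latent random walk (the "state equation").
x = np.cumsum(0.1 * rng.standard_normal(T))

# Observation process: high-dimensional projection corrupted by heavy-tailed
# Student-t noise (df=3), the regime where Gaussian assumptions break down.
C = rng.standard_normal(obs_dim)                  # loading vector (assumed known)
noise = rng.standard_t(df=3, size=(T, obs_dim))
y = x[:, None] * C[None, :] + noise

# Naive per-timestep least-squares decode of the latent state, ignoring the
# dynamics entirely: the kind of baseline a discriminative decoder improves on.
x_hat = y @ C / (C @ C)
print("decode RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```

Because the t(3) noise produces frequent outliers, a decoder that also exploits the temporal state process (as D4 does) can substantially reduce this error.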
Citations: 0
Frontal effective connectivity increases with task demands and time on task: a Dynamic Causal Model of electrocorticogram in macaque monkeys
Pub Date : 2022-02-21 DOI: 10.51628/001c.68433
K. Wegner, C. R. Wilson, E. Procyk, K. Friston, Frederik Van de Steen, D. Pinotsis, Daniele Marinazzo
We apply Dynamic Causal Models to electrocorticogram recordings from two macaque monkeys performing a problem-solving task that engages working memory and induces time-on-task effects. We thus provide a computational account of changes in effective connectivity within two regions of the fronto-parietal network, the dorsolateral prefrontal cortex and the pre-supplementary motor area. We find that forward connections between the two regions increased in strength as task demands increased and as the experimental session progressed. Similarities between the effects of task demands and time on task allow us to interpret the changes in frontal connectivity in terms of increased attentional effort allocation that compensates for cognitive fatigue.
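The kind of effect reported here, a forward connection whose strength grows with task demand, can be sketched with a minimal bilinear model in the spirit of DCM, dx/dt = (A + uB)x + Cu. All coupling values below are made-up illustrations, not the fitted parameters from the study:

```python
import numpy as np

A = np.array([[-0.5,  0.0],     # intrinsic (self-decaying) dynamics per region
              [ 0.3, -0.5]])    # fixed forward connection, region 1 -> region 2
B = np.array([[0.0, 0.0],
              [0.2, 0.0]])      # modulation of the forward connection by demand
C = np.array([1.0, 0.0])        # driving input enters region 1 only

def simulate(u_mod, dt=0.01, steps=2000):
    """Euler-integrate the bilinear system under a constant drive of 1
    and a constant modulatory input u_mod (task demand)."""
    x = np.zeros(2)
    for _ in range(steps):
        dx = (A + u_mod * B) @ x + C * 1.0
        x = x + dt * dx
    return x

low_demand = simulate(u_mod=0.0)
high_demand = simulate(u_mod=1.0)
print("region 2 activity, low demand: ", low_demand[1])
print("region 2 activity, high demand:", high_demand[1])
```

At steady state the downstream region's activity rises from 1.2 to 2.0 as the modulatory input strengthens the forward connection, the qualitative signature the DCM analysis quantifies.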
Citations: 0
Golden rhythms as a theoretical framework for cross-frequency organization.
Pub Date : 2022-01-01 DOI: 10.51628/001c.38960
Mark A Kramer

While brain rhythms appear fundamental to brain function, why brain rhythms consistently organize into the small set of discrete frequency bands observed remains unknown. Here we propose that rhythms separated by factors of the golden ratio (ϕ = (1 + √5)/2) optimally support segregation and cross-frequency integration of information transmission in the brain. Organized by the golden ratio, pairs of transient rhythms support multiplexing by reducing interference between separate communication channels, and triplets of transient rhythms support integration of signals to establish a hierarchy of cross-frequency interactions. We illustrate this framework in simulation and apply this framework to propose four hypotheses.

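The proposed organization is easy to reproduce numerically: starting from a low-frequency base and multiplying by ϕ generates a hierarchy of bands whose neighboring ratios are all golden (the 2 Hz base is an arbitrary choice for illustration, not a value from the paper):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2   # the golden ratio, about 1.618

# A hierarchy of center frequencies separated by factors of phi.
base = 2.0                    # Hz, illustrative base frequency
bands = base * phi ** np.arange(6)
print(np.round(bands, 2))     # a delta-to-gamma-like progression

# Every neighboring pair (f, phi*f) has the same irrational frequency ratio;
# phi is the irrational hardest to approximate by rationals, so such pairs
# revisit the same phase relation as rarely as possible.
ratios = bands[1:] / bands[:-1]
print(np.round(ratios, 3))
```

Running this yields center frequencies near 2, 3.2, 5.2, 8.5, 13.7 and 22.2 Hz, a progression resembling the classical delta through gamma bands.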
Citations: 4
Explaining the effectiveness of fear extinction through latent-cause inference
Pub Date : 2021-10-04 DOI: 10.31234/osf.io/2fhr7
Mingyu Song, Carolyn E. Jones, M. Monfils, Y. Niv
Acquiring fear responses to predictors of aversive outcomes is crucial for survival. At the same time, it is important to be able to modify such associations when they are maladaptive, for instance in treating anxiety and trauma-related disorders. Standard extinction procedures can reduce fear temporarily, but with sufficient delay or with reminders of the aversive experience, fear often returns. The latent-cause inference framework explains the return of fear by presuming that animals learn a rich model of the environment, in which the standard extinction procedure triggers the inference of a new latent cause, preventing the extinguishing of the original aversive associations. This computational framework had previously inspired an alternative extinction paradigm -- gradual extinction -- which indeed was shown to be more effective in reducing fear. However, the original framework was not sufficient to explain the pattern of results seen in the experiments. Here, we propose a formal model to explain the effectiveness of gradual extinction, in contrast to the ineffectiveness of standard extinction and a gradual reverse control procedure. We demonstrate through quantitative simulation that our model can explain qualitative behavioral differences across different extinction procedures as seen in the empirical study. We verify the necessity of several key assumptions added to the latent-cause framework, which suggest potential general principles of animal learning and provide novel predictions for future experiments.
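The latent-cause account can be sketched with a one-step assignment rule under a Chinese-restaurant-process prior: does a new observation join the old cause or spawn a new one? All numbers below (noise widths, counts, concentration alpha) are illustrative choices, not the paper's fitted model:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian density, used as the likelihood of an observed shock level."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_old_cause(obs, mu_old, n_old=10, alpha=1.0, sigma=0.2, mu_prior=0.5):
    """Posterior probability that the observation belongs to the old latent
    cause, under a Chinese-restaurant-process prior over cause assignments."""
    # CRP prior: the old cause is weighted by its count, a new cause by alpha.
    lik_old = (n_old / (n_old + alpha)) * gauss_pdf(obs, mu_old, sigma)
    lik_new = (alpha / (n_old + alpha)) * gauss_pdf(obs, mu_prior, sigma=1.0)
    return lik_old / (lik_old + lik_new)

# Conditioning established a cause with mean shock intensity 1.0.
# Standard extinction: the shock suddenly drops to 0, so a new cause is
# inferred and the old fear association survives intact.
# Gradual extinction: the shock drops only slightly, so the old cause
# absorbs the observation and its association is updated downward.
print("standard:", p_old_cause(obs=0.0, mu_old=1.0))
print("gradual: ", p_old_cause(obs=0.8, mu_old=1.0))
```

With these toy numbers the abrupt drop is assigned to the old cause with near-zero probability, while the gradual drop stays with the old cause with probability above 0.9, reproducing the qualitative logic behind gradual extinction.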
Citations: 2
How do we generalize?
Pub Date : 2021-08-30 DOI: 10.51628/001c.27687
Jessica Elizabeth Taylor, Aurelio Cortese, Helen C Barron, Xiaochuan Pan, Masamichi Sakagami, Dagmar Zeithamova

Humans and animals are able to generalize or transfer information from previous experience so that they can behave appropriately in novel situations. What mechanisms (computations, representations, and neural systems) give rise to this remarkable ability? The members of this Generative Adversarial Collaboration (GAC) come from a range of academic backgrounds but are all interested in uncovering the mechanisms of generalization. We started out this GAC with the aim of arbitrating between two alternative conceptual accounts: (1) generalization stems from integration of multiple experiences into summary representations that reflect generalized knowledge, and (2) generalization is computed on the fly using separately stored individual memories. Across the course of this collaboration, we found that, despite using different terminology and techniques, and although some of our specific papers may provide evidence one way or the other, we in fact largely agree that both of these broad accounts (as well as several others) are likely valid. We believe that future research and theoretical synthesis across multiple lines of research are necessary to help determine the degree to which different candidate generalization mechanisms may operate simultaneously, operate on different scales, or be employed under distinct conditions. Here, as a first step, we introduce some of these candidate mechanisms and discuss the issues currently hindering a better synthesis of generalization research. Finally, we introduce some of our own research questions that have arisen over the course of this GAC, which we believe would benefit from future collaborative efforts.

Citations: 0
A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models 通过结合自然主义范例、深度采样和预测计算模型,实现逆向工程现实世界泛化的路线图
Pub Date : 2021-08-23 DOI: 10.51628/001c.67879
P. Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, V. Borghesani
Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences. Such a seemingly easy behavioral choice requires the interplay of multiple neural mechanisms, from integrative encoding to category-based inference, weighted differently according to the circumstances. Here, we argue that a comprehensive theory of the neuro-cognitive substrates of real-world generalization will greatly benefit from empirical research with three key elements. First, the ecological validity provided by multimodal, naturalistic paradigms. Second, the model stability afforded by deep sampling. Finally, the statistical rigor granted by predictive modeling and computational controls.
Citations: 0
Estimating smooth and sparse neural receptive fields with a flexible spline basis 基于柔性样条基的平滑稀疏神经感受野估计
Pub Date : 2021-08-18 DOI: 10.51628/001c.27578
Ziwei Huang, Yanli Ran, Jonathan Oesterle, Thomas Euler, Philipp Berens
Spatio-temporal receptive field (STRF) models are frequently used to approximate the computation implemented by a sensory neuron. Typically, such STRFs are assumed to be smooth and sparse. Current state-of-the-art approaches for estimating STRFs, based on empirical Bayes estimation, encode such prior knowledge into a prior covariance matrix whose hyperparameters are learned from the data, and thus provide STRF estimates with the desired properties even with little or noisy data. However, empirical Bayes methods are often not computationally efficient in the high-dimensional settings encountered in sensory neuroscience. Here we pursued an alternative approach and encoded prior knowledge for the estimation of STRFs by choosing a set of basis functions with the desired properties: a natural cubic spline basis. Our method is computationally efficient, and can be easily applied to Linear-Gaussian and Linear-Nonlinear-Poisson models as well as to more complicated Linear-Nonlinear-Linear-Nonlinear cascade models or spike-triggered clustering methods. We compared the performance of spline-based methods to no-spline ones on simulated and experimental data, showing that spline-based methods consistently outperformed the no-spline versions. We provide a Python toolbox for all suggested methods (https://github.com/berenslab/RFEst/).
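The core idea, fitting a few spline-basis weights instead of one free weight per time lag, can be illustrated with a minimal sketch. This is not the RFEst API: it uses a simplified truncated-power cubic basis (rather than a true natural cubic spline basis) and a Linear-Gaussian model fit by ordinary least squares, on simulated data.

```python
import numpy as np

def cubic_spline_basis(n_lags, n_knots):
    """Truncated-power cubic basis over [0, 1], shape (n_lags, n_knots + 4)."""
    x = np.linspace(0.0, 1.0, n_lags)
    knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]           # interior knots only
    poly = np.vstack([np.ones(n_lags), x, x**2, x**3]).T       # global cubic terms
    trunc = np.maximum(x[:, None] - knots[None, :], 0.0) ** 3  # one piece per knot
    return np.hstack([poly, trunc])

rng = np.random.default_rng(0)
n_lags, n_samples = 25, 2000
t = np.arange(n_lags)
true_rf = np.exp(-t / 5.0) * np.sin(t / 2.0)            # smooth ground-truth filter

X = rng.standard_normal((n_samples, n_lags))            # white-noise stimulus
y = X @ true_rf + 0.5 * rng.standard_normal(n_samples)  # Linear-Gaussian response

S = cubic_spline_basis(n_lags, n_knots=7)               # (n_lags, 11) basis matrix
w, *_ = np.linalg.lstsq(X @ S, y, rcond=None)           # fit only 11 spline weights
rf_spline = S @ w                                       # map back to the lag space

rf_mle, *_ = np.linalg.lstsq(X, y, rcond=None)          # 25-parameter baseline fit
err_spline = np.linalg.norm(rf_spline - true_rf)
err_mle = np.linalg.norm(rf_mle - true_rf)
```

Because a smooth filter is well captured by the 11 spline weights, the back-projected estimate typically has lower error than the unregularized per-lag fit, and the advantage grows as the data get scarcer or noisier, which is the setting the abstract targets.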
Citations: 8
Journal
Neurons, behavior, data analysis and theory