
Latest Articles from Neural Computation

A General, Noise-Driven Mechanism for the 1/f-Like Behavior of Neural Field Spectra
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-19 | DOI: 10.1162/neco_a_01682
Mark A. Kramer;Catherine J. Chu
Consistent observations across recording modalities, experiments, and neural systems find neural field spectra with 1/f-like scaling, eliciting many alternative theories to explain this universal phenomenon. We show that a general dynamical system with stochastic drive and minimal assumptions generates 1/f-like spectra consistent with the range of values observed in vivo without requiring a specific biological mechanism or collective critical behavior.
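The paper's point is that noise plus generic damped dynamics suffices. As an illustration only (not the authors' model), a first-order linear system driven by white noise already produces a power spectrum with a 1/f-like high-frequency tail; here the tail is Lorentzian with log-log slope near −2, and all constants are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0          # sampling rate (Hz); arbitrary
dt = 1.0 / fs
n = 2 ** 17
tau = 0.02           # relaxation time constant (s); arbitrary

# Euler-Maruyama simulation of a noise-driven linear system:
# dx/dt = -x / tau + xi(t)  (an Ornstein-Uhlenbeck process)
x = np.zeros(n)
kick = rng.normal(0.0, 1.0, n) * np.sqrt(dt)
for i in range(1, n):
    x[i] = x[i - 1] - dt * x[i - 1] / tau + kick[i]

# Raw periodogram, then a log-log slope fit above the corner
# frequency 1/(2*pi*tau) ~ 8 Hz.
f = np.fft.rfftfreq(n, d=dt)
pxx = np.abs(np.fft.rfft(x)) ** 2 / n
band = (f > 20) & (f < 200)
slope = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"log-log slope: {slope:.2f}")  # near -2 for this Lorentzian example
```

Varying the dynamics (e.g., the number of relaxation time scales) shifts the apparent exponent, which is the kind of range the abstract refers to.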
Citations: 0
Promoting the Shift From Pixel-Level Correlations to Object Semantics Learning by Rethinking Computer Vision Benchmark Data Sets
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-19 | DOI: 10.1162/neco_a_01677
Maria Osório;Andreas Wichert
In computer vision research, convolutional neural networks (CNNs) have demonstrated remarkable capabilities at extracting patterns from raw pixel data, achieving state-of-the-art recognition accuracy. However, they differ significantly from human visual perception, prioritizing pixel-level correlations and statistical patterns and often overlooking object semantics. To explore this difference, we propose an approach that isolates core visual features crucial for human perception and object recognition: color, texture, and shape. In experiments on three benchmarks—Fruits 360, CIFAR-10, and Fashion MNIST—each visual feature is individually input into a neural network. Results reveal data set–dependent variations in classification accuracy, highlighting that deep learning models tend to learn pixel-level correlations instead of fundamental visual features. To validate this observation, we used various combinations of concatenated visual features as input for a neural network on the CIFAR-10 data set. CNNs excel at learning statistical patterns in images, achieving exceptional performance when training and test data share similar distributions. To substantiate this point, we trained a CNN on the CIFAR-10 data set and evaluated its performance on the “dog” class from CIFAR-10 and on an equivalent number of examples from the Stanford Dogs data set. The CNN's poor performance on Stanford Dogs images underlines the disparity between deep learning and human visual perception, highlighting the need for models that learn object semantics. Specialized benchmark data sets with controlled variations hold promise for aligning learned representations with human cognition in computer vision research.
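One hedged sketch of what "isolating color, texture, and shape" could look like as preprocessing. The paper does not specify these exact operators; the pooling size, gradient-based edge map, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))   # stand-in for a CIFAR-10-sized image (random data)

# "Color": keep channel statistics but destroy spatial detail via 4x4 pooling.
color = img.reshape(8, 4, 8, 4, 3).mean(axis=(1, 3))

# "Shape": grayscale edge map from finite differences, thresholded to a silhouette.
gray = img.mean(axis=2)
gy, gx = np.gradient(gray)
edges = np.hypot(gx, gy)
shape = (edges > edges.mean()).astype(np.float32)

# "Texture": local gradient energy, normalized, discarding absolute hue/intensity.
texture = edges / (edges.max() + 1e-8)

print(color.shape, shape.shape, texture.shape)
```

Each array would then be fed to the network on its own, or concatenated with the others, as in the CIFAR-10 combination experiment described above.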
Citations: 0
Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-19 | DOI: 10.1162/neco_a_01680
Bastian Pietras
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse coupling in networks of QIF and θ-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.
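To make the setup concrete, here is a minimal sketch (not the paper's derivation): a single QIF neuron, dV/dt = V² + I, integrated with Euler steps and reset at a peak value, together with one possible smooth finite-width pulse kernel. The symmetric Gaussian used here is just one member of the kind of pulse family the paper analyzes; its asymmetric (skewed) variants are what the abstract says synchronize most readily:

```python
import numpy as np

def simulate_qif(i_ext=1.0, dt=1e-4, t_max=2.0, v_peak=100.0, v_reset=-100.0):
    """Quadratic integrate-and-fire neuron: dV/dt = V^2 + I, with reset."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (v * v + i_ext)
        if v >= v_peak:
            spikes.append(t)
            v = v_reset
        t += dt
    return np.array(spikes)

def pulse(t, t_spike, sigma=5e-3):
    """Smooth, unit-area pulse of finite width sigma centered on the spike;
    sigma -> 0 recovers the usual delta-spike coupling."""
    return np.exp(-0.5 * ((t - t_spike) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

spikes = simulate_qif()
ts = np.arange(-0.05, 0.05, 1e-4)
area = pulse(ts, 0.0).sum() * 1e-4   # numerical check that the pulse has unit area
print(len(spikes), round(area, 3))
```

In a network, each neuron's input current would include a sum of such pulses from its presynaptic partners, optionally gated by a voltage-dependent conductance term as argued in the abstract.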
Citations: 0
Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-19 | DOI: 10.1162/neco_a_01681
Vicky Zhu;Robert Rosenbaum
In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
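The baseline the paper critiques, naive Euclidean gradient descent on a loss evaluated at the fixed point, can be sketched in one dimension using implicit differentiation at the fixed point. This toy example is mine, not the authors'; their reparameterized learning rules are not reproduced here:

```python
import numpy as np

def fixed_point(w, b, iters=200):
    """Fixed point r* of the scalar recurrent model r = tanh(w * r + b)."""
    r = 0.0
    for _ in range(iters):
        r = np.tanh(w * r + b)
    return r

def grad_w(w, b, target):
    """Euclidean gradient of L = (r* - target)^2 with respect to w.
    Implicit differentiation at r* = tanh(w r* + b):
    dr*/dw = f' * r* / (1 - w * f'), with f' = 1 - tanh^2(w r* + b)."""
    r = fixed_point(w, b)
    fp = 1.0 - r * r
    dr_dw = fp * r / (1.0 - w * fp)
    return 2.0 * (r - target) * dr_dw

# Plain gradient descent on the fixed-point loss (the baseline approach).
w, b, target = 0.3, 0.5, 0.8
for _ in range(500):
    w -= 0.5 * grad_w(w, b, target)
print(round(fixed_point(w, b), 2))   # approaches the target 0.8
```

In the vector case the denominator becomes (I − W diag(f')), and the singularities the abstract mentions arise where that Jacobian loses invertibility.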
Citations: 0
A Mean Field to Capture Asynchronous Irregular Dynamics of Conductance-Based Networks of Adaptive Quadratic Integrate-and-Fire Neuron Models
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01670
Christoffer G. Alexandersen;Chloé Duprat;Aitakin Ezzati;Pierre Houzelstein;Ambre Ledoux;Yuhong Liu;Sandra Saghir;Alain Destexhe;Federico Tesler;Damien Depannemaecker
Mean-field models are a class of models used in computational neuroscience to study the behavior of large populations of neurons. These models are based on the idea of representing the activity of a large number of neurons as the average behavior of mean-field variables. This abstraction allows the study of large-scale neural dynamics in a computationally efficient and mathematically tractable manner. One of these methods, based on a semianalytical approach, has previously been applied to different types of single-neuron models, but never to models based on a quadratic form. In this work, we adapted this method to quadratic integrate-and-fire neuron models with adaptation and conductance-based synaptic interactions. We validated the mean-field model by comparing it to the spiking network model. This mean-field model should be useful to model large-scale activity based on quadratic neurons interacting with conductance-based synapses.
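For orientation, the best-known exact mean field for QIF networks, with current-based coupling and Lorentzian-distributed input currents (the Montbrió–Pazó–Roxin form), can be integrated in a few lines. The paper's model additionally includes adaptation and conductance-based synapses, which this sketch omits; the parameter values are arbitrary:

```python
import numpy as np

def mpr_rhs(r, v, eta_bar=1.0, delta=1.0, j=2.0):
    """Mean-field ODEs for the population firing rate r and mean voltage v:
    dr/dt = delta/pi + 2 r v
    dv/dt = v^2 + eta_bar + j r - (pi r)^2
    (current-based coupling; no adaptation or conductances)."""
    dr = delta / np.pi + 2.0 * r * v
    dv = v * v + eta_bar + j * r - (np.pi * r) ** 2
    return dr, dv

# Euler integration toward the asynchronous steady state.
r, v, dt = 0.1, 0.0, 1e-3
for _ in range(20000):
    dr, dv = mpr_rhs(r, v)
    r, v = r + dt * dr, v + dt * dv

print(round(r, 2), round(v, 2))   # settles to a stable focus
```

Validation as described in the abstract would compare trajectories of these two variables against the empirical rate and mean voltage of a large simulated spiking network.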
Citations: 0
Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01668
Sören Christensen;Jan Kallsen
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this note, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
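The flavor of the argument can be illustrated with a toy model of my own (not the authors' stochastic-process analysis): updates that each use only one coordinate's local change in loss, with no global gradient signal, nevertheless accumulate into a gradient-descent step:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return 0.5 * np.sum((w - 1.0) ** 2)   # toy quadratic; minimum at w = 1

# Many purely local updates: each step perturbs ONE random coordinate and
# uses only whether/how much the loss changed for that single perturbation.
w = np.zeros(4)
eps, lr = 0.01, 0.05
for _ in range(5000):
    i = rng.integers(4)
    delta = rng.choice([-eps, eps])
    dl = loss(w + delta * np.eye(4)[i]) - loss(w)
    w[i] -= lr * dl / delta   # finite-difference estimate of one partial derivative

print(np.round(w, 2))   # all coordinates approach the minimizer 1.0
```

Averaged over many such local events, the expected update per coordinate is proportional to the corresponding partial derivative, which is the sense in which an approximate continuous gradient step emerges.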
Citations: 0
Desiderata for Normative Models of Synaptic Plasticity
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01671
Colin Bredenberg;Cristina Savin
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
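Since the review uses REINFORCE as its prototype, a minimal REINFORCE implementation on a two-armed bandit may help fix ideas. The reward probabilities, learning rate, and absence of a baseline are arbitrary choices here, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.2, 0.8])   # two-armed bandit; arm 1 pays off more often
theta = np.zeros(2)               # policy logits
lr = 0.1

for _ in range(5000):
    probs = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    a = rng.choice(2, p=probs)                    # sample an action
    r = float(rng.random() < p_reward[a])         # binary reward

    # REINFORCE update: theta += lr * r * d log pi(a) / d theta
    grad_logp = -probs
    grad_logp[a] += 1.0
    theta += lr * r * grad_logp

probs = np.exp(theta) / np.exp(theta).sum()
print(round(probs[1], 2))   # the policy concentrates on the better arm
```

The update uses only a scalar reward and locally computable policy statistics, which is why REINFORCE-style rules are a natural test case for the biological-plausibility desiderata discussed above.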
Citations: 0
Data Efficiency, Dimensionality Reduction, and the Generalized Symmetric Information Bottleneck
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01667
K. Michael Martini;Ilya Nemenman
The symmetric information bottleneck (SIB), an extension of the more familiar information bottleneck, is a dimensionality-reduction technique that simultaneously compresses two random variables to preserve information between their compressed versions. We introduce the generalized symmetric information bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction. We then explore the data set size requirements of such simultaneous compression. We do this by deriving bounds and root-mean-squared estimates of statistical fluctuations of the involved loss functions. We show that in typical situations, the simultaneous GSIB compression requires qualitatively less data to achieve the same errors compared to compressing variables one at a time. We suggest that this is an example of a more general principle that simultaneous compression is more data efficient than independent compression of each of the input variables.
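For intuition about simultaneous compression, consider a toy discrete example (mine, not from the paper): when the joint distribution is block structured, compressing X and Y together onto their block labels preserves all of I(X;Y):

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (bits) of a discrete joint distribution."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# Toy joint distribution over (X, Y) with two independent blocks.
pxy = np.array([[0.20, 0.20, 0.00, 0.00],
                [0.20, 0.20, 0.00, 0.00],
                [0.00, 0.00, 0.05, 0.05],
                [0.00, 0.00, 0.05, 0.05]])

# Simultaneously compress X -> Zx and Y -> Zy by merging each block.
assign = np.array([0, 0, 1, 1])
pzz = np.zeros((2, 2))
for i in range(4):
    for j in range(4):
        pzz[assign[i], assign[j]] += pxy[i, j]

i_zz = mutual_info(pzz)
print(round(i_zz, 3), round(mutual_info(pxy), 3))  # compression here is lossless
```

A (G)SIB objective would trade this preserved term I(Zx;Zy) against compression costs such as I(X;Zx) and I(Y;Zy); the paper's contribution concerns how the functional form of that tradeoff and the data set size interact, which this sketch does not address.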
Citations: 0
A Multimodal Fitting Approach to Construct Single-Neuron Models With Patch Clamp and High-Density Microelectrode Arrays
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1162/neco_a_01672
Alessio Paolo Buccino;Tanguy Damart;Julian Bartram;Darshan Mandge;Xiaohan Xue;Mickael Zbili;Tobias Gänswein;Aurélien Jaquier;Vishalini Emmenegger;Henry Markram;Andreas Hierlemann;Werner Van Geit
In computational neuroscience, multicompartment models are among the most biophysically realistic representations of single neurons. Constructing such models usually involves the use of the patch-clamp technique to record somatic voltage signals under different experimental conditions. The experimental data are then used to fit the many parameters of the model. While patching of the soma is currently the gold-standard approach to build multicompartment models, several studies have also evidenced a richness of dynamics in dendritic and axonal sections. Recording from the soma alone makes it hard to observe and correctly parameterize the activity of nonsomatic compartments. In order to provide a richer set of data as input to multicompartment models, we here investigate the combination of somatic patch-clamp recordings with recordings of high-density microelectrode arrays (HD-MEAs). HD-MEAs enable the observation of extracellular potentials and neural activity of neuronal compartments at subcellular resolution. In this work, we introduce a novel framework to combine patch-clamp and HD-MEA data to construct multicompartment models. We first validate our method on a ground-truth model with known parameters and show that the use of features extracted from extracellular signals, in addition to intracellular ones, yields models enabling better fits than using intracellular features alone. We also demonstrate our procedure using experimental data by constructing cell models from in vitro cell cultures. The proposed multimodal fitting procedure has the potential to augment the modeling efforts of the computational neuroscience community and provide the field with neuronal models that are more realistic and can be better validated.
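Feature-based model fitting, scoring candidate parameters by summary statistics extracted from the simulated trace rather than by the raw trace, can be sketched with a deliberately tiny stand-in model. The leaky single compartment, the two features, and the grid search below are all hypothetical simplifications, nothing like the paper's multicompartment models or optimizer:

```python
import numpy as np

def simulate(g_leak, i_ext, dt=1e-4, t_max=0.5):
    """Toy single-compartment leaky model (hypothetical stand-in for a
    full multicompartment simulation): dV/dt = -g_leak * V + I."""
    v = np.zeros(int(t_max / dt))
    for i in range(1, v.size):
        v[i] = v[i - 1] + dt * (-g_leak * v[i - 1] + i_ext)
    return v

def features(v):
    """Summary statistics of a voltage trace; real pipelines use many more
    (spike counts, widths, after-hyperpolarization depths, ...)."""
    return np.array([v.mean(), v.max()])

# "Experimental" target features come from a known ground-truth parameter.
target = features(simulate(g_leak=10.0, i_ext=2.0))

# Grid-search the leak conductance that best matches the target features.
grid = np.linspace(5.0, 20.0, 61)
errs = [np.sum((features(simulate(g, 2.0)) - target) ** 2) for g in grid]
best = grid[int(np.argmin(errs))]
print(best)   # recovers g_leak = 10.0
```

The multimodal idea in the abstract amounts to enlarging the feature vector with statistics computed from extracellular HD-MEA signals, so that nonsomatic compartments also constrain the fit.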
"A Multimodal Fitting Approach to Construct Single-Neuron Models With Patch Clamp and High-Density Microelectrode Arrays," Alessio Paolo Buccino, Tanguy Damart, Julian Bartram, Darshan Mandge, Xiaohan Xue, Mickael Zbili, Tobias Gänswein, Aurélien Jaquier, Vishalini Emmenegger, Henry Markram, Andreas Hierlemann, Werner Van Geit. Neural Computation. Pub Date: 2024-06-07. DOI: 10.1162/neco_a_01672.
Citations: 0
Associative Learning of an Unnormalized Successor Representation
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-06-07. DOI: 10.1162/neco_a_01675
Niels J. Verosky
The successor representation is known to relate to temporal associations learned in the temporal context model (Gershman et al., 2012), and subsequent work suggests a wide relevance of the successor representation across spatial, visual, and abstract relational tasks. I demonstrate that the successor representation and purely associative learning have an even deeper relationship than initially indicated: Hebbian temporal associations are an unnormalized form of the successor representation, such that the two converge on an identical representation whenever all states are equally frequent and can correlate highly in practice even when the state distribution is nonuniform.
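The claimed relationship — Hebbian temporal associations as an unnormalized successor representation, with exact agreement (up to scale) when all states are equally frequent — can be checked numerically in a few lines. The four-state ring-walk chain, discount factor, and uniform stationary distribution below are illustrative choices, not taken from the paper:

```python
import numpy as np

# A doubly stochastic transition matrix, so the stationary
# distribution pi is uniform — the "all states equally frequent" case.
T = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
gamma = 0.9

# Successor representation: M = sum_k gamma^k T^k = (I - gamma T)^-1,
# i.e. discounted *conditional* future-occupancy probabilities.
M = np.linalg.inv(np.eye(4) - gamma * T)

# Discounted Hebbian co-occurrence uses *joint* probabilities instead:
# each row of M is weighted by the frequency pi of its start state.
pi = np.full(4, 0.25)   # uniform stationary distribution
H = np.diag(pi) @ M     # the "unnormalized" successor representation

# With uniform pi, H is exactly M scaled by a constant (here 1/4).
assert np.allclose(H, M / 4)
```

For a nonuniform `pi`, `H` and `M` differ by a row-wise rescaling, which is why the two can still correlate highly in practice even when the state distribution is not uniform.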
Citations: 0