
Latest Publications in Neural Computation

Learning in Wilson-Cowan Model for Metapopulation
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-18 | DOI: 10.1162/neco_a_01744
Raffaele Marino;Lorenzo Buffoni;Lorenzo Chicchi;Francesca Di Patti;Diego Febbe;Lorenzo Giambagli;Duccio Fanelli
The Wilson-Cowan model for metapopulation, a neural mass network model, treats different subcortical regions of the brain as connected nodes, with connections representing various types of structural, functional, or effective neuronal connectivity between these regions. Each region comprises interacting populations of excitatory and inhibitory cells, consistent with the standard Wilson-Cowan model. In this article, we show how to incorporate stable attractors into such a metapopulation model’s dynamics. By doing so, we transform the neural mass network model into a biologically inspired learning algorithm capable of solving different classification tasks. We test it on MNIST and Fashion MNIST in combination with convolutional neural networks, as well as on CIFAR-10 and TF-FLOWERS, and in combination with a transformer architecture (BERT) on IMDB, consistently achieving high classification accuracy.
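Each node of such a metapopulation model builds on the standard Wilson-Cowan rate equations for one excitatory/inhibitory pair. A minimal sketch of a single node follows; the sigmoid gain, threshold, and all weight and drive values here are illustrative choices, not those of the paper.

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    # Logistic response function with gain a and threshold theta.
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate_wilson_cowan(T=200.0, dt=0.1, P=1.25, Q=0.0,
                          w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                          tau_e=1.0, tau_i=1.0):
    # Euler integration of one excitatory (E) / inhibitory (I) node.
    # All parameter values are illustrative, not taken from the paper.
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    for t in range(n - 1):
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + P)) / tau_e
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + Q)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

E, I = simulate_wilson_cowan()
print("final E, I:", E[-1], I[-1])
```

A metapopulation version couples many such nodes by adding, to each node's drive, a weighted sum of the other nodes' excitatory activities through a connectivity matrix.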
Neural Computation, vol. 37, no. 4, pp. 701–741.
Citations: 0
Nearly Optimal Learning Using Sparse Deep ReLU Networks in Regularized Empirical Risk Minimization With Lipschitz Loss
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-18 | DOI: 10.1162/neco_a_01742
Ke Huang;Mingming Liu;Shujie Ma
We propose a sparse deep ReLU network (SDRN) estimator of the regression function obtained from regularized empirical risk minimization with a Lipschitz loss function. Our framework can be applied to a variety of regression and classification problems. We establish novel nonasymptotic excess risk bounds for our SDRN estimator when the regression function belongs to a Sobolev space with mixed derivatives. We obtain a new, nearly optimal, risk rate in the sense that the SDRN estimator can achieve nearly the same optimal minimax convergence rate as one-dimensional nonparametric regression, with the dimension entering only through a logarithm term, when the feature dimension is fixed. The estimator has a slightly slower rate when the dimension grows with the sample size. We show that the depth of the SDRN estimator grows with the sample size in logarithmic order, and the total number of nodes and weights grows in polynomial order of the sample size, so as to achieve the nearly optimal risk rate. The proposed SDRN can go deeper with fewer parameters to estimate the regression function well and to overcome the overfitting problem encountered by conventional feedforward neural networks.
Neural Computation, vol. 37, no. 4, pp. 815–870.
Citations: 0
Context-Sensitive Processing in a Model Neocortical Pyramidal Cell With Two Sites of Input Integration
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-18 | DOI: 10.1162/neco_a_01739
Bruce P. Graham;Jim W. Kay;William A. Phillips
Neocortical layer 5 thick-tufted pyramidal cells are prone to exhibiting burst firing on receipt of coincident basal and apical dendritic inputs. These inputs carry different information, with basal inputs coming from feedforward sensory pathways and apical inputs coming from diverse sources that provide context in the cortical hierarchy. We explore the information processing possibilities of this burst firing using computer simulations of a noisy compartmental cell model. Simulated data on stochastic burst firing due to brief, simultaneously injected basal and apical currents allow estimation of burst firing probability for different stimulus current amplitudes. Information-theory-based partial information decomposition (PID) is used to quantify the contributions of the apical and basal input streams to the information in the cell output bursting probability. Four different operating regimes are apparent, depending on the relative strengths of the input streams, with output burst probability carrying more or less information that is uniquely contributed by either the basal or apical input, or shared and synergistic information due to the combined streams. We derive and fit transfer functions for these different regimes that describe burst probability over the different ranges of basal and apical input amplitudes. The operating regimes can be classified into distinct modes of information processing, depending on the contribution of apical input to output bursting: apical cooperation, in which both basal and apical inputs are required to generate a burst; apical amplification, in which basal input alone can generate a burst but the burst probability is modulated by apical input; apical drive, in which apical input alone can produce a burst; and apical integration, in which strong apical or basal inputs alone, as well as their combination, can generate bursting. In particular, PID and the transfer function clarify that the apical amplification mode has the features required for contextually modulated information processing.
Neural Computation, vol. 37, no. 4, pp. 588–634.
Citations: 0
Enhanced EEG Forecasting: A Probabilistic Deep Learning Approach
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-18 | DOI: 10.1162/neco_a_01743
Hanna Pankka;Jaakko Lehtinen;Risto J. Ilmoniemi;Timo Roine
Forecasting electroencephalography (EEG) signals, that is, estimating future values of the time series based on the past ones, is essential in many real-time EEG-based applications, such as brain–computer interfaces and closed-loop brain stimulation. As these applications are becoming more and more common, the importance of a good prediction model has increased. Previously, the autoregressive model (AR) has been employed for this task; however, its prediction accuracy tends to fade quickly as multiple steps are predicted. We aim to improve on this by applying probabilistic deep learning to make robust longer-range forecasts. For this, we applied the probabilistic deep neural network model WaveNet to forecast resting-state EEG in theta- (4–7.5 Hz) and alpha-frequency (8–13 Hz) bands and compared it to the AR model. WaveNet reliably predicted EEG signals in both theta and alpha frequencies 150 ms ahead, with mean absolute errors of 1.0 ± 1.1 µV (theta) and 0.9 ± 1.1 µV (alpha), and outperformed the AR model in estimating the signal amplitude and phase. Furthermore, we found that the probabilistic approach offers a way of forecasting even more accurately while effectively discarding uncertain predictions. We demonstrate for the first time that probabilistic deep learning can be used to forecast resting-state EEG time series. In the future, the developed model can enhance the real-time estimation of brain states in brain–computer interfaces and brain stimulation protocols. It may also be useful for answering neuroscientific questions and for diagnostic purposes.
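The AR baseline mentioned above makes multi-step forecasts by feeding each prediction back as input, which is why its accuracy fades over the horizon. A minimal sketch of that iterated AR scheme follows; the order p = 8 and the toy noiseless 10 Hz alpha-band-like signal are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of AR(p): x[t] ≈ sum_k coef[k] * x[t - 1 - k].
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(x, coef, steps):
    # Iterated multi-step prediction: each prediction is fed back as input,
    # so errors compound as the horizon grows.
    hist = list(x[-len(coef):])          # most recent p samples, oldest first
    out = []
    for _ in range(steps):
        nxt = sum(c * v for c, v in zip(coef, hist[::-1]))
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)

# Toy noiseless 10 Hz (alpha-band-like) signal sampled at 1 kHz.
t = np.arange(0, 1, 0.001)
x = np.sin(2 * np.pi * 10 * t)
coef = fit_ar(x, p=8)
pred = forecast(x, coef, steps=150)      # 150 ms ahead at 1 kHz
```

On a noiseless sinusoid this forecast is accurate; on real EEG, noise makes the fed-back errors compound, which is the degradation the probabilistic deep learning approach targets.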
Neural Computation, vol. 37, no. 4, pp. 793–814.
Citations: 0
Spiking Neuron-Astrocyte Networks for Image Recognition
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-18 | DOI: 10.1162/neco_a_01740
Jhunlyn Lorenzo;Juan-Antonio Rico-Gallego;Stéphane Binczak;Sabir Jacquir
From biological and artificial network perspectives, researchers have started acknowledging astrocytes as computational units mediating neural processes. Here, we propose a novel biologically inspired neuron-astrocyte network model for image recognition, one of the first attempts at implementing astrocytes in spiking neuron networks (SNNs) using a standard data set. The architecture for image recognition has three primary units: the preprocessing unit for converting the image pixels into spiking patterns, the neuron-astrocyte network forming bipartite (neural connections) and tripartite synapses (neural and astrocytic connections), and the classifier unit. In the astrocyte-mediated SNNs, an astrocyte integrates neural signals following the simplified Postnov model. It then modulates the integrate-and-fire (IF) neurons via gliotransmission, thereby strengthening the synaptic connections of the neurons within the astrocytic territory. We develop an architecture derived from a baseline SNN model for unsupervised digit classification. The spiking neuron-astrocyte networks (SNANs) achieve better network performance, with an optimal variance-bias trade-off, than the SNN alone. We demonstrate that astrocytes promote faster learning, support memory formation and recognition, and provide a simplified network architecture. Owing to its simplified design, our proposed SNAN can serve as a benchmark for future research on astrocyte implementation in artificial networks, particularly in neuromorphic systems.
Neural Computation, vol. 37, no. 4, pp. 635–665.
Citations: 0
Dynamics of Continuous Attractor Neural Networks With Spike Frequency Adaptation
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-14 | DOI: 10.1162/neco_a_01757
Yujun Li;Tianhao Chu;Si Wu
Attractor neural networks posit that neural information is stored as stationary states of a dynamical system formed by a large number of interconnected neurons. The attractor property empowers a neural system to encode information robustly, but it also incurs the difficulty of rapid update of network states, which can impair information update and search in the brain. To overcome this difficulty, a solution is to include adaptation in the attractor network dynamics, whereby the adaptation serves as a slow negative feedback mechanism to destabilize what are otherwise permanently stable states. In such a way, the neural system can, on one hand, represent information reliably using attractor states, and on the other hand, perform computations wherever rapid state updating is involved. Previous studies have shown that continuous attractor neural networks with adaptation (A-CANNs) exhibit rich dynamical behaviors accounting for various brain functions. In this review, we present a comprehensive view of the rich and diverse dynamics of A-CANNs.
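The core mechanism described above, adaptation acting as a slow negative feedback on an otherwise stable activity bump, can be sketched with a 1D ring CANN. All parameter values (coupling width, adaptation strength m, time constants, normalization) are illustrative choices, not taken from the review.

```python
import numpy as np

def simulate_cann_sfa(N=128, steps=2000, dt=0.05, tau=1.0, tau_v=50.0,
                      m=0.3, k=0.5, a=0.5):
    # 1D ring CANN with divisive normalization plus a slow adaptation
    # variable v (spike frequency adaptation). With m > 0, v acts as the
    # slow negative feedback described above and, depending on parameters,
    # can destabilize a static bump and set it moving.
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    d = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi  # ring distance
    J = np.exp(-d ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)
    rho = 2 * np.pi / N                      # neuron density factor
    u = np.exp(-x ** 2 / (2 * a ** 2))       # initial bump centered at 0
    v = np.zeros(N)
    for _ in range(steps):
        r = np.maximum(u, 0.0) ** 2
        r = r / (1.0 + k * rho * r.sum())    # divisive normalization
        u += dt * (-u + rho * (J @ r) - v) / tau
        v += dt * (-v + m * u) / tau_v
    return x, u

x, u = simulate_cann_sfa()
print("bump peak position:", x[np.argmax(u)])
```

Setting m = 0 recovers the plain CANN with a stationary bump; increasing m strengthens the negative feedback that drives the richer dynamics (traveling bumps, oscillations) the review surveys.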
Neural Computation, vol. 37, no. 6, pp. 1057–1101.
Citations: 0
Neural Code Translation With LIF Neuron Microcircuits
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-14 | DOI: 10.1162/neco_a_01754
Ville Karlsson;Joni Kämäräinen
Spiking neural networks (SNNs) provide an energy-efficient alternative to traditional artificial neural networks, leveraging diverse neural encoding schemes such as rate, time-to-first-spike (TTFS), and population-based binary codes. Each encoding method offers distinct advantages: TTFS enables rapid and precise transmission with minimal energy use, rate encoding provides robust signal representation, and binary population encoding aligns well with digital hardware implementations. This letter introduces a set of neural microcircuits based on leaky integrate-and-fire neurons that enable translation between these encoding schemes. We propose two applications showcasing the utility of these microcircuits. First, we demonstrate a number comparison operation that significantly reduces spike transmission by switching from rate to TTFS encoding. Second, we present a high-bandwidth neural transmitter capable of encoding and transmitting binary population-encoded data through a single axon and reconstructing it at the target site. Additionally, we conduct a detailed analysis of these microcircuits, providing quantitative metrics to assess their efficiency in terms of neuron count, synaptic complexity, spike overhead, and runtime. Our findings highlight the potential of LIF neuron microcircuits in computational neuroscience and neuromorphic computing, offering a pathway to more interpretable and efficient SNN designs.
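The two codes being translated between can be illustrated without any LIF machinery. The sketch below encodes a scalar in [0, 1] as a rate code and as a TTFS code; it is a conceptual illustration of the encoding schemes only, not the paper's LIF microcircuits, and the window length is an arbitrary choice.

```python
import numpy as np

def rate_encode(value, window=100, rng=None):
    # Rate code: Bernoulli spike train whose firing rate tracks value in [0, 1].
    rng = rng or np.random.default_rng(0)
    return (rng.random(window) < value).astype(int)

def ttfs_encode(value, window=100):
    # Time-to-first-spike: larger values spike earlier; one spike per window.
    train = np.zeros(window, dtype=int)
    t = int(round((1.0 - value) * (window - 1)))
    train[t] = 1
    return train

def ttfs_decode(train):
    # Invert the TTFS code: map the first spike time back to [0, 1].
    t = int(np.argmax(train))
    return 1.0 - t / (len(train) - 1)

v = 0.75
print("rate code spikes:", rate_encode(v).sum(),
      "| TTFS spikes:", ttfs_encode(v).sum())
```

The contrast makes the paper's energy argument concrete: the rate code emits on the order of `value * window` spikes per value, while TTFS emits exactly one, which is why switching from rate to TTFS encoding can cut spike transmission so sharply.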
Neural Computation, vol. 37, no. 6, pp. 1124–1153.
Citations: 0
Dynamics and Bifurcation Structure of a Mean-Field Model of Adaptive Exponential Integrate-and-Fire Networks
IF 2.7 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-14 | DOI: 10.1162/neco_a_01758
Lionel Kusch;Damien Depannemaecker;Alain Destexhe;Viktor Jirsa
The study of brain activity spans diverse scales and levels of description and requires the development of computational models alongside experimental investigations to explore integrations across scales. The high dimensionality of spiking networks presents challenges for understanding their dynamics. To tackle this, a mean-field formulation offers a potential approach for dimensionality reduction while retaining essential elements. Here, we focus on a previously developed mean-field model of adaptive exponential integrate-and-fire (AdEx) networks used in various studies. We observe qualitative similarities in the bifurcation structure but quantitative differences in mean firing rates between the mean-field model and AdEx spiking network simulations. Even if the mean-field model does not accurately predict phase shift during transients and oscillatory input, it generally captures the qualitative dynamics of the spiking network’s response to both constant and varying inputs.
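For reference, the underlying network is built from the standard single-neuron AdEx model, whose membrane potential $V$ and adaptation current $w$ obey:

```latex
C \frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I(t),
\qquad
\tau_w \frac{dw}{dt} = a (V - E_L) - w,
```

with the reset $V \to V_r$, $w \to w + b$ applied when $V$ crosses a spiking threshold. The mean-field formulation discussed above reduces a large network of such units to low-dimensional equations for population firing rates and mean adaptation.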
Neural Computation, 37(6), 1102–1123.
Citations: 0
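For readers unfamiliar with the single-neuron dynamics underlying the network studied above, here is a minimal Euler-integration sketch of one AdEx neuron. The parameter values are illustrative defaults in the range popularized by Brette and Gerstner, not the ones used in the paper, and this is a single cell rather than the paper's network or its mean-field reduction.

```python
import numpy as np

def simulate_adex(i_ext, dt=1e-4,
                  c=281e-12, g_l=30e-9, e_l=-70.6e-3,
                  v_t=-50.4e-3, delta_t=2e-3,
                  tau_w=144e-3, a=4e-9, b=80.5e-12,
                  v_reset=-70.6e-3, v_cut=0.0):
    """Euler integration of a single AdEx neuron (SI units).

    c dV/dt     = -g_l (V - e_l) + g_l delta_t exp((V - v_t)/delta_t) - w + I
    tau_w dw/dt = a (V - e_l) - w
    On crossing v_cut: V -> v_reset and w -> w + b (spike-triggered adaptation).
    """
    v, w = e_l, 0.0
    spike_times = []
    for step, i in enumerate(i_ext):
        dv = (-g_l * (v - e_l)
              + g_l * delta_t * np.exp((v - v_t) / delta_t)
              - w + i) / c
        dw = (a * (v - e_l) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= v_cut:
            spike_times.append(step * dt)
            v = v_reset
            w += b
    return spike_times

# 500 ms of a 1 nA step current: the adaptation variable w builds up
# with each spike, so inter-spike intervals lengthen over the train.
spikes = simulate_adex(np.full(5000, 1e-9))
```

The exponential term produces the sharp spike upstroke, and the adaptation variable w produces the spike-frequency adaptation that the mean-field model aggregates at the population level.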
Memory States From Almost Nothing: Representing and Computing in a Nonassociative Algebra
IF 2.7 Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-03-14 DOI: 10.1162/neco_a_01755
Stefan Reimann
This letter presents a nonassociative algebraic framework for the representation and computation of information items in high-dimensional space. The framework is consistent with the principles of spatial computing and with empirical findings about memory in cognitive science. Computations are performed through a process of multiplication-like binding and nonassociative, interference-like bundling. Models that rely on associative bundling typically lose order information, which necessitates auxiliary order structures, such as position markers, to represent the sequential information that is important for cognitive tasks. In contrast, the proposed nonassociative bundling allows the construction of sparse representations of arbitrarily long sequences that maintain their temporal structure across arbitrary lengths. In this operation, noise is a constituent element of the representation of order information rather than a means of obscuring it. The nonassociative nature of the framework results in a single sequence being represented by two distinct states. The L-state, generated through left-associative bundling, is continuously updated and emphasizes a recency effect, while the R-state, formed through right-associative bundling, encodes finite sequences or chunks, capturing a primacy effect. The construction of these states may be associated with prefrontal cortical activity in short-term memory and hippocampal encoding in long-term memory, respectively. Retrieval accuracy is contingent on a decision-making process based on the mutual information between the memory states and the cue. The model is able to replicate the serial position curve, which reflects the empirical recency and primacy effects observed in cognitive experiments.
Neural Computation, 37(6), 1154–1170.
Citations: 0
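The binding and bundling operations in the abstract can be illustrated with a toy construction in the spirit of vector-symbolic architectures. This is not the paper's algebra: it uses elementwise multiplication of random bipolar vectors for binding and a renormalized superposition for bundling. The renormalization between applications makes bundling nonassociative, so left and right folds of the same sequence yield two distinct states, with recency-like and primacy-like weighting respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimension of the representation space

def rand_item():
    # random bipolar vector; in high dimension these are nearly orthogonal
    return rng.choice([-1.0, 1.0], size=D)

def bind(x, y):
    # multiplication-like binding: elementwise product, exactly invertible
    # for bipolar vectors since x * x is the all-ones vector
    return x * y

def bundle(x, y):
    # superposition rescaled back to the common item norm sqrt(D); this
    # renormalization between applications makes bundling nonassociative
    s = x + y
    return s * (np.sqrt(D) / np.linalg.norm(s))

def left_state(items):
    # L-state: left fold b(b(b(x1, x2), x3), ...); each rescaling shrinks
    # earlier contributions, so recent items keep the largest weight
    s = items[0]
    for v in items[1:]:
        s = bundle(s, v)
    return s

def right_state(items):
    # R-state: right fold b(x1, b(x2, b(x3, ...))); here early items
    # keep the largest weight
    s = items[-1]
    for v in reversed(items[:-1]):
        s = bundle(v, s)
    return s

items = [rand_item() for _ in range(5)]
L, R = left_state(items), right_state(items)
sims_L = [float(L @ v) for v in items]  # largest for the last item (recency)
sims_R = [float(R @ v) for v in items]  # largest for the first item (primacy)
```

Retrieval here is by dot-product similarity rather than the paper's mutual-information decision process, but the two-state recency/primacy asymmetry it demonstrates is the same qualitative effect.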
Low-Rank, High-Order Tensor Completion via t-Product-Induced Tucker (tTucker) Decomposition
IF 2.7 Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-03-14 DOI: 10.1162/neco_a_01756
Yaodong Li;Jun Tan;Peilin Yang;Guoxu Zhou;Qibin Zhao
Recently, tensor singular value decomposition (t-SVD)–based methods were proposed to solve the low-rank tensor completion (LRTC) problem and have achieved unprecedented success on image and video inpainting tasks. The t-SVD, however, is limited to third-order tensors. When faced with higher-order tensors, it reshapes them into third-order tensors, destroying interdimensional correlations. To address this limitation, this letter introduces a t-product-induced Tucker (tTucker) decomposition model that replaces the mode product in the Tucker decomposition with the t-product, jointly extending the ideas of the t-SVD and high-order SVD. The letter defines the rank of the tTucker decomposition and presents an LRTC model that minimizes the induced Schatten-p norm. An efficient alternating direction method of multipliers (ADMM) algorithm is developed to optimize the proposed LRTC model, and its effectiveness is demonstrated through experiments on both synthetic and real data sets, showcasing excellent performance.
Neural Computation, 37(6), 1171–1192.
Citations: 0
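The t-product underlying the t-SVD (and hence the tTucker construction above) can be computed by taking a DFT along the third mode, multiplying corresponding frontal slices, and inverting the transform. The following sketch implements only that basic operation, not the paper's tTucker decomposition or its ADMM solver.

```python
import numpy as np

def t_product(a, b):
    """t-product of third-order tensors a (n1 x n2 x n3) and
    b (n2 x n4 x n3): DFT along the third mode, facewise matrix
    products, inverse DFT. Result has shape (n1, n4, n3)."""
    assert a.shape[1] == b.shape[0] and a.shape[2] == b.shape[2]
    a_hat = np.fft.fft(a, axis=2)
    b_hat = np.fft.fft(b, axis=2)
    c_hat = np.empty((a.shape[0], b.shape[1], a.shape[2]), dtype=complex)
    for k in range(a.shape[2]):
        c_hat[:, :, k] = a_hat[:, :, k] @ b_hat[:, :, k]
    # real inputs yield a real t-product; drop the round-off imaginary part
    return np.fft.ifft(c_hat, axis=2).real

def t_identity(n, n3):
    """t-product identity tensor: first frontal slice is I_n, rest zero."""
    e = np.zeros((n, n, n3))
    e[:, :, 0] = np.eye(n)
    return e
```

Because the frontal-slice products in the Fourier domain are ordinary matrix multiplications, the t-product inherits associativity and an identity element, which is what lets t-SVD-style factorizations mimic matrix algebra slice by slice.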