
Latest Publications in Neural Computation

Inhibitory Feedback Enables Predictive Learning of Multiple Sequences in Neural Networks.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1504
Matteo Saponati, Martin Vinck

Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced postsynaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.

Neural Computation, pp. 471-498.
Citations: 0
Multiclass Linear Perceptrons With Multiplicative Margins.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1502
Dmitri Rachkovskij, Evgeny Osipov, Olexander Volkov, Daswin De Silva, Denis Kleyko

This article introduces a family of multiclass linear perceptron classifiers with a multiplicative margin mechanism (MMPerc), as an alternative to standard margin-free and additive margin perceptrons. The multiplicative formulation enforces classification confidence by requiring the true class score to exceed that of competing classes by a specified fraction of itself rather than by a fixed additive threshold. This avoids dependence on score magnitudes arising from varied norms of data and class weight vectors. We propose several architectural and algorithmic variants of MMPerc, derive associated loss functions and mistake bounds for both linearly separable and nonseparable data, and analyze key design considerations, including bias, margin threshold selection, and training modes. Extensive experiments on synthetic and real data sets show that MMPerc classifiers typically outperform the standard perceptron, as well as classic baselines such as support vector machines and ridge classifiers. Owing to their simplicity, minimalistic design, and computational efficiency, MMPerc classifiers are promising candidates for conventional machine learning tasks, linear evaluation of deep neural networks, integration with hyperdimensional computing and vector symbolic architecture representations, and deployment in resource-constrained applications.
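The multiplicative margin test described above can be sketched in a few lines. The toy implementation below is our own hedged illustration, not the authors' MMPerc: it reads "a specified fraction of itself" as requiring the true-class score to beat the best rival by `margin` times the true score's own magnitude, and applies a plain perceptron-style update on violations. All names (`mmperc_fit`, `margin`) are ours.

```python
import numpy as np

def mmperc_fit(X, y, n_classes, margin=0.2, lr=1.0, epochs=100):
    """Toy multiclass perceptron with a multiplicative margin.

    Violation test (one plausible reading of the abstract): the
    true-class score must exceed the best rival's score by a fraction
    `margin` of the true score's own magnitude.
    """
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = W @ xi
            # best competing class (the true class is excluded)
            rival = int(np.argmax(np.where(np.arange(n_classes) == yi, -np.inf, s)))
            if s[yi] <= s[rival] + margin * abs(s[yi]):
                W[yi] += lr * xi      # pull the true class toward xi
                W[rival] -= lr * xi   # push the best rival away
    return W

def mmperc_predict(W, X):
    return np.argmax(X @ W.T, axis=1)
```

On linearly separable data (e.g., three well-separated Gaussian blobs with a bias feature appended), this sketch reaches high training accuracy, since the margin condition eventually holds for every example and updates stop.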

Neural Computation, pp. 602-650.
Citations: 0
The Cooperative Network Architecture: Learning Structured Networks as Representation of Sensory Patterns.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1505
Pascal J Sager, Jan M Deriu, Benjamin F Grewe, Thilo Stadelmann, Christoph von der Malsburg

We introduce the cooperative network architecture (CNA), a model that represents sensory signals using structured, recurrently connected networks of neurons, termed "nets." Nets are dynamically assembled from overlapping net fragments, which are learned based on statistical regularities in sensory input. This architecture offers robustness to noise, deformation, and generalization to out-of-distribution data, addressing challenges in current vision systems from a novel perspective. We demonstrate that net fragments can be learned without supervision and flexibly recombined to encode novel patterns, enabling figure completion and resilience to noise. Our findings establish CNA as a promising paradigm for developing neural representations that integrate local feature processing with global structure formation, providing a foundation for future research on invariant object recognition.

Neural Computation, pp. 538-572.
Citations: 0
Potential for Reinforcement Learning in the Cerebellum.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1507
Richard W Prager, Richard Apps

This article explores how simple reinforcement learning algorithms might be implemented by the anatomy of the cerebellum. In doing this, we highlight which anatomical and physiological details are most important for assessing algorithmic fit, and we discuss which algorithm components are easiest to accommodate in a neural system. We describe hypothetical cerebellar implementations of four reinforcement learning algorithms and discuss the anatomical plausibility of the various components required. We show how one of the algorithms can learn to generate short sequences of actions without continuous information on the resulting changes to the environment. We finish with simulations that illustrate the way that the algorithms learn to solve the problem of balancing an inverted pendulum, commonly known as the cart-pole problem. We highlight two physiological features, reward signals and the combining of information across time, that indicate some sort of reinforcement learning adaptation may be taking place. We also describe why a commonly used algorithmic feature, the eligibility trace, is particularly difficult to implement in known neural anatomy.
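The eligibility trace the abstract flags as anatomically problematic is, algorithmically, straightforward. As a generic textbook sketch (not the paper's cerebellar model or its cart-pole setup), here is TD(lambda) value learning with an accumulating trace on a deterministic five-state chain:

```python
import numpy as np

def td_lambda_chain(n_states=5, lam=0.8, alpha=0.1, gamma=0.9, episodes=2000):
    """TD(lambda) with an accumulating eligibility trace on a chain.

    States 0..n_states-1; the agent walks deterministically rightward
    and receives reward 1 on entering the terminal state.  Generic
    textbook sketch, not the paper's cerebellar implementation.
    """
    V = np.zeros(n_states)
    for _ in range(episodes):
        e = np.zeros(n_states)                 # eligibility trace
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            terminal = s_next == n_states - 1
            r = 1.0 if terminal else 0.0
            bootstrap = 0.0 if terminal else V[s_next]
            delta = r + gamma * bootstrap - V[s]   # TD error
            e *= gamma * lam                       # decay all traces
            e[s] += 1.0                            # mark the visited state
            V += alpha * delta * e                 # credit recent states
            s = s_next
    return V
```

For this chain the exact discounted values of states 0-3 are gamma^3, gamma^2, gamma, 1. The trace lets a single TD error update several recently visited states at once, which is precisely the temporally extended bookkeeping that is hard to map onto known neural anatomy.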

Neural Computation, pp. 499-537.
Citations: 0
Force Learning in Balanced Cortical E-I Networks.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1503
Takashi Kanamaru, Kazuyuki Aihara

Force learning is a method for training recurrent neural networks (RNNs) to generate various types of complex dynamics, and it is related to reservoir computing (RC). RC uses an RNN called a reservoir whose synaptic weights are randomly generated and fixed during learning; force learning, in contrast, trains the synaptic weights inside the reservoir network. Although force learning can be used as an effective tool for machine learning, the possibility of its realization in the brain is not often discussed. Here, to consider that possibility, force learning is applied to an excitatory-inhibitory (E-I) network that models the cerebral cortex. A multimodule network composed of excitatory and inhibitory neurons is defined, and a readout is placed outside it, as in a conventional reservoir. The output of this network is computed at the readout as a linear combination of the filtered average firing rates of the excitatory neurons in the modules. Feedback connections, which return the output to the excitatory neurons in the modules with random strengths, are also added to this network. This network typically shows transitive chaotic synchronization, in which synchronizing modules are rearranged chaotically and intermittently. Under such conditions, our E-I network is trained with force learning to generate, for simplicity, sinusoidal periodic signals. When the E-I activity is adjusted, the efficiency of force learning is maximized at an optimal E-I balance near an edge of chaos. These results imply that the cooperation of excitatory and inhibitory neurons is required for force learning to work effectively in the brain, although usual reservoir networks do not distinguish these two kinds of neurons.
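For readers unfamiliar with force learning, a minimal rate-based sketch in the style of Sussillo and Abbott's original FORCE algorithm (recursive least squares on a readout whose output is fed back into the network) looks like the following. This is the generic reservoir version, not the authors' E-I cortical model, and every parameter value here is illustrative:

```python
import numpy as np

def force_sine_demo(N=200, g=1.5, dt=0.1, train_steps=5000, test_steps=1000,
                    alpha=1.0, seed=0):
    """Train a chaotic rate network to output a sine wave with FORCE.

    Generic Sussillo-Abbott-style sketch, not the E-I model of the paper.
    Returns the test-phase RMS error between output and target.
    """
    rng = np.random.default_rng(seed)
    J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
    w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
    w = np.zeros(N)                                    # trained readout
    P = np.eye(N) / alpha                              # RLS inverse correlation
    x = 0.5 * rng.normal(size=N)
    z = 0.0

    def target(step):
        return np.sin(0.2 * step * dt)

    for step in range(train_steps):
        r = np.tanh(x)
        x = x + dt * (-x + J @ r + w_fb * z)   # feedback of previous output
        r = np.tanh(x)
        z = w @ r
        err = z - target(step)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)                # RLS gain
        P -= np.outer(k, Pr)
        w -= err * k                           # reduce the output error

    sq = 0.0                                   # autonomous test phase
    for step in range(train_steps, train_steps + test_steps):
        r = np.tanh(x)
        x = x + dt * (-x + J @ r + w_fb * z)
        z = w @ np.tanh(x)
        sq += (z - target(step)) ** 2
    return float(np.sqrt(sq / test_steps))
```

After training, the network continues to produce the sine wave autonomously, with a test error far below the target's unit amplitude.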

Neural Computation, pp. 573-601.
Citations: 0
ReBaCCA-ss: Relevance-Balanced Continuum Correlation Analysis With Smoothing and Surrogating for Quantifying Similarity Between Population Spiking Activities.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1162/NECO.a.1501
Xiang Zhang, Chenlin Xu, Zhouxiao Lu, Haonan Wang, Dong Song

Quantifying similarity between population spike patterns is essential for understanding how neural dynamics encode information. Traditional approaches, which combine kernel smoothing, principal component analysis, and canonical correlation analysis (CCA), have limitations: smoothing kernel bandwidths are often empirically chosen, CCA maximizes alignment between patterns without considering the variance explained within patterns, and baseline correlations from stochastic spiking are rarely corrected. We introduce ReBaCCA-ss (relevance-balanced continuum correlation analysis with smoothing and surrogating), a novel framework that addresses these challenges through three innovations: (1) balancing alignment and variance explanation via continuum canonical correlation, (2) correcting for noise using surrogate spike trains, and (3) selecting the optimal kernel bandwidth by maximizing the difference between true and surrogate correlations. ReBaCCA-ss is validated on both simulated data and hippocampal recordings from rats performing a delayed nonmatch-to-sample task. It reliably identifies spatiotemporal similarities between spike patterns. Combined with multidimensional scaling, ReBaCCA-ss reveals structured neural representations across trials, events, sessions, and animals, offering a powerful tool for neural population analysis.
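The three ingredients, kernel smoothing, canonical correlation, and surrogate correction, can each be sketched compactly. The code below is our own hedged illustration of the general recipe (Gaussian smoothing, top canonical correlation via orthonormal bases, rate-matched Poisson surrogates), not the authors' ReBaCCA-ss implementation; all function names are ours.

```python
import numpy as np

def smooth(spikes, sigma):
    """Gaussian-kernel smoothing of a (time, neurons) spike-count array."""
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    return np.column_stack([np.convolve(spikes[:, j], k, mode="same")
                            for j in range(spikes.shape[1])])

def top_canonical_corr(X, Y):
    """First canonical correlation between two (time, neurons) arrays."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _, _ = np.linalg.svd(X, full_matrices=False)
    Qy, _, _ = np.linalg.svd(Y, full_matrices=False)
    # canonical correlations = singular values of Qx^T Qy
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def corrected_similarity(a, b, sigma=20, n_surr=20, seed=0):
    """Similarity of two spike-train populations minus the baseline from
    rate-matched independent Poisson surrogates (hedged sketch of the
    general idea, not the ReBaCCA-ss algorithm)."""
    rng = np.random.default_rng(seed)
    raw = top_canonical_corr(smooth(a, sigma), smooth(b, sigma))
    base = 0.0
    for _ in range(n_surr):
        sa = (rng.random(a.shape) < a.mean(axis=0)).astype(float)
        sb = (rng.random(b.shape) < b.mean(axis=0)).astype(float)
        base += top_canonical_corr(smooth(sa, sigma), smooth(sb, sigma))
    return raw - base / n_surr
```

The surrogate subtraction matters because smoothing inflates chance-level canonical correlations: two completely independent smoothed Poisson populations can show a sizable raw CCA, which the baseline removes.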

Neural Computation, pp. 651-680.
Citations: 0
Perceptual Processes as Charting Operators.
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-05 DOI: 10.1162/NECO.a.1506
Peter Neri

Sensory operators are classically modeled using small circuits involving canonical computations, such as energy extraction and gain control. Notwithstanding their utility, circuit models do not provide a unified framework encompassing the variety of effects observed experimentally. We develop a novel, alternative framework that recasts sensory operators in the language of intrinsic geometry. We start from a plausible representation of perceptual processes that is akin to measuring distances over a sensory manifold. We show that this representation is sufficiently expressive to capture a wide range of empirical effects associated with elementary sensory computations. The resulting geometrical framework offers a new perspective on state-of-the-art empirical descriptors of sensory behavior, such as first-order and second-order perceptual kernels. For example, it relates these descriptors to notions of flatness and curvature in perceptual space.

Neural Computation, pp. 1-54.
Citations: 0
Attractor-Based Models for Sequences and Pattern Generation in Neural Circuits
IF 2.1 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-03 DOI: 10.1162/NECO.a.1492
Juliana Londono Alvarez, Katherine Morrison, Carina Curto
Neural circuits in the brain perform a variety of essential functions, including input classification, pattern completion, and the generation of rhythms and oscillations that support functions such as breathing and locomotion. There is also substantial evidence that the brain encodes memories and processes information via sequences of neural activity. Traditionally, rhythmic activity and pattern generation have been modeled using coupled oscillators, whereas input classification and pattern completion have been modeled using attractor neural networks. Here, we present a theoretical framework that demonstrates how attractor-based networks can also generate diverse rhythmic patterns, such as those of central pattern generator circuits (CPGs). Additionally, we propose a mechanism for transitioning between patterns. Specifically, we construct a network that can step through a sequence of five different quadruped gaits. It is composed of two dynamically distinct modules: a “counter” network that can count the number of external inputs it receives via a sequence of fixed points and a locomotion network that encodes five different quadruped gaits as limit cycles. A sequence of locomotive gaits is obtained by connecting the counter network with the locomotion network. Specifically, we introduce a new architecture for layering networks that produces fusion attractors, binding pairs of attractors from individual layers. All of this is accomplished within a unified framework of attractor-based models using threshold-linear networks.
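The threshold-linear building block behind these attractor models is simple to write down. As a hedged sketch using the standard combinatorial threshold-linear network (CTLN) conventions from the literature (not the paper's full layered gait network), the following simulates dx/dt = -x + [Wx + b]_+ for a three-neuron cycle graph, the canonical example of a graph whose CTLN produces a limit cycle rather than a stable fixed point:

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Combinatorial threshold-linear network weights from a directed graph.

    adj[i, j] = 1 means an edge i -> j.  W[i, j] is -1 + eps when the
    edge j -> i exists, -1 - delta otherwise, and 0 on the diagonal
    (standard CTLN convention; parameter values are the usual defaults).
    """
    W = np.where(adj.T == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_tln(W, b, x0, dt=0.01, steps=20000):
    """Euler integration of dx/dt = -x + relu(W x + b)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, x.size))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        traj[t] = x
    return traj
```

Running this on the 3-cycle (0 -> 1 -> 2 -> 0) from an asymmetric initial condition yields bounded activity in which the three neurons take turns peaking, a rhythmic pattern of exactly the kind the abstract describes for gait encoding.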
Neural Computation, vol. 38, no. 3, pp. 257-291.
Citations: 0
Local Glutamate-Glutamine Cycling Underlies Presynaptic ATP Homeostasis
IF 2.1 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-03-03 DOI: 10.1162/NECO.a.1490
Reinoud Maex
Presynaptic axon terminals maintain in their cytosol an almost constant level of adenosine triphosphate (ATP) to safeguard neurotransmission during varying workloads. In the study reported in this letter, it is argued that the vesicular release of neurotransmitter and the recycling of transmitter via astrocytes may itself be a mechanism of ATP homeostasis. In a minimal metabolic model of a presynaptic axon bouton, the accumulation of glutamate into vesicles and the activity-dependent supply of its precursor glutamine by astrocytes generated a steady-state level of ATP that was independent of the workload. When the workload increased, an enhanced supply of glutamine raised the rate of ATP production through the conversion of glutamate to the Krebs cycle intermediate α-ketoglutarate. The accumulation and release of glutamate, on the other hand, acted as a leak that diminished ATP production when the workload decreased. The fraction of ATP that the axon spent on the release and recycling of glutamate was small (4.7%), irrespective of the workload. Increasing this fraction enhanced the speed of ATP homeostasis and reduced the futile production of ATP. The model can be extended to axons releasing other, or coreleasing multiple, transmitters. Hence, the activity-dependent formation and release of neurotransmitter may be a universal mechanism of ATP homeostasis.
{"title":"Local Glutamate-Glutamine Cycling Underlies Presynaptic ATP Homeostasis","authors":"Reinoud Maex","doi":"10.1162/NECO.a.1490","DOIUrl":"10.1162/NECO.a.1490","url":null,"abstract":"Presynaptic axon terminals maintain in their cytosol an almost constant level of adenosine triphosphate (ATP) to safeguard neurotransmission during varying workloads. In the study reported in this letter, it is argued that the vesicular release of neurotransmitter and the recycling of transmitter via astrocytes may itself be a mechanism of ATP homeostasis. In a minimal metabolic model of a presynaptic axon bouton, the accumulation of glutamate into vesicles and the activity-dependent supply of its precursor glutamine by astrocytes generated a steady-state level of ATP that was independent of the workload. When the workload increased, an enhanced supply of glutamine raised the rate of ATP production through the conversion of glutamate to the Krebs cycle intermediate α-ketoglutarate. The accumulation and release of glutamate, on the other hand, acted as a leak that diminished ATP production when the workload decreased. The fraction of ATP that the axon spent on the release and recycling of glutamate was small (4.7%), irrespective of the workload. Increasing this fraction enhanced the speed of ATP homeostasis and reduced the futile production of ATP. The model can be extended to axons releasing other, or coreleasing multiple, transmitters. Hence, the activity-dependent formation and release of neurotransmitter may be a universal mechanism of ATP homeostasis.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"38 3","pages":"403-438"},"PeriodicalIF":2.1,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
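The feedback loop described in the abstract (activity-dependent glutamine supply fuels ATP production via oxidation of glutamate, while vesicular accumulation of glutamate acts as a leak) can be caricatured with a two-variable ODE. This is a toy sketch: all rate constants below are invented for illustration and are not the parameters of the paper's metabolic model. It only demonstrates the qualitative claim that steady-state glutamate scales with workload while steady-state ATP stays roughly flat.

```python
# Toy presynaptic metabolism model (illustrative constants, not fitted values):
K_S    = 1.0   # glutamine supplied by astrocytes per unit workload
K_PACK = 0.2   # glutamate packed into vesicles (the "leak")
K_OX   = 0.8   # glutamate oxidized via alpha-ketoglutarate
YIELD  = 1.0   # ATP produced per oxidized glutamate
C0     = 0.05  # baseline ATP consumption
C_R    = 1.0   # ATP consumed per unit workload (release and recycling)
K_M    = 1.0   # Michaelis constant of ATP-consuming processes

def simulate(workload, t_end=200.0, dt=0.01):
    """Euler-integrate the two-variable model; return steady-state (Glu, ATP)."""
    glu, atp = 0.0, 2.0
    for _ in range(int(t_end / dt)):
        d_glu = K_S * workload - (K_PACK + K_OX) * glu
        j_prod = YIELD * K_OX * glu                         # ATP from oxidation
        j_cons = (C0 + C_R * workload) * atp / (atp + K_M)  # demand, saturating in ATP
        glu += dt * d_glu
        atp += dt * (j_prod - j_cons)
    return glu, atp

glu1, atp1 = simulate(1.0)
glu5, atp5 = simulate(5.0)
print(f"workload 1: Glu={glu1:.2f}, ATP={atp1:.2f}")
print(f"workload 5: Glu={glu5:.2f}, ATP={atp5:.2f}")
# Glutamate scales roughly 5x with workload; ATP changes by only about 20%.
```

The homeostasis arises because ATP production is proportional to the glutamine influx, which itself tracks the workload, so supply and demand rise together.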
Reframing the Expected Free Energy: Four Formulations and a Unification
IF 2.1 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-03-03 DOI: 10.1162/NECO.a.1491
Théophile Champion;Howard Bowman;Dimitrije Marković;Marek Grześ
Active inference is a process theory of perception, learning, and decision making that is applied to a range of research fields, including neuroscience, robotics, psychology, and machine learning. Active inference rests on an objective function called the expected free energy, which can be justified by the intuitive plausibility of its formulations—for example, the risk plus ambiguity and information gain/pragmatic value formulations. This letter seeks to formalize the problem of deriving these formulations from a single root expected free energy definition—the unification problem. Then we analyze two approaches to defining expected free energy. More precisely, the expected free energy is either defined as (1) the risk over observations plus ambiguity or (2) the risk over states plus ambiguity. In the first setting, no rigorous mathematical justification for the expected free energy has been proposed to date, but all the formulations can be recovered from it by assuming that the likelihood of the target distribution T(o|s) is the likelihood of the generative model P(o|s). Importantly, under this likelihood constraint, if the likelihood is lossless, then prior preferences over observations can be defined arbitrarily. However, in the more general case of partially observable Markov decision processes (POMDPs), we demonstrate that the likelihood constraint effectively restricts the set of valid prior preferences over observations. Indeed, only a limited class of prior preferences over observations is compatible with the likelihood mapping of the generative model. In the second setting, a justification of the root expected free energy definition exists, but this setting only accounts for two formulations: the risk over states plus ambiguity and entropy plus expected energy formulations. We conclude with a discussion of the conditions under which a unification of expected free energy formulations has been proposed in the literature by appeal to the free energy principle in the specific context of systems without random fluctuations.
{"title":"Reframing the Expected Free Energy: Four Formulations and a Unification","authors":"Théophile Champion;Howard Bowman;Dimitrije Marković;Marek Grześ","doi":"10.1162/NECO.a.1491","DOIUrl":"10.1162/NECO.a.1491","url":null,"abstract":"Active inference is a process theory of perception, learning, and decision making that is applied to a range of research fields, including neuroscience, robotics, psychology, and machine learning. Active inference rests on an objective function called the expected free energy, which can be justified by the intuitive plausibility of its formulations—for example, the risk plus ambiguity and information gain/pragmatic value formulations. This letter seeks to formalize the problem of deriving these formulations from a single root expected free energy definition—the unification problem. Then we analyze two approaches to defining expected free energy. More precisely, the expected free energy is either defined as (1) the risk over observations plus ambiguity or (2) the risk over states plus ambiguity. In the first setting, no rigorous mathematical justification for the expected free energy has been proposed to date, but all the formulations can be recovered from it by assuming that the likelihood of target distribution T(o|s) is the likelihood of the generative model P(o|s). Importantly, under this likelihood constraint, if the likelihood is lossless,1 then prior preferences over observations can be defined arbitrarily. However, in the more general case of partially observable Markov decision processes (POMDPs), we demonstrate that the likelihood constraint effectively restricts the set of valid prior preferences over observations. Indeed, only a limited class of prior preferences over observations is compatible with the likelihood mapping of the generative model. In the second setting, a justification of the root expected free energy definition exists, but this setting only accounts for two formulations: the risk over states plus ambiguity and entropy plus expected energy formulations. We conclude with a discussion of the conditions under which a unification of expected free energy formulations has been proposed in the literature by appeal to the free energy principle in the specific context of systems without random fluctuations.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"38 3","pages":"439-469"},"PeriodicalIF":2.1,"publicationDate":"2026-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
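The risk-plus-ambiguity decomposition discussed in this abstract, G = KL[Q(o) || C(o)] + E_{Q(s)} H[P(o|s)], can be evaluated directly for a discrete model. The two-state, two-observation distributions below are hypothetical numbers chosen purely for illustration; only the decomposition itself follows the letter's first setting (risk over observations plus ambiguity).

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p)))

def kl(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical two-state, two-observation model (numbers for illustration only).
A  = np.array([[0.9, 0.2],   # P(o | s): columns are states, rows observations
               [0.1, 0.8]])
Qs = np.array([0.6, 0.4])    # predicted state distribution under a policy
Co = np.array([0.7, 0.3])    # prior preferences over observations

Qo        = A @ Qs                       # predicted observations Q(o)
risk      = kl(Qo, Co)                   # risk over observations
ambiguity = float(np.dot(Qs, [entropy(A[:, s]) for s in range(A.shape[1])]))
G         = risk + ambiguity             # expected free energy, first setting
print(f"risk={risk:.4f}  ambiguity={ambiguity:.4f}  G={G:.4f}")
```

The second setting in the abstract would instead compute the risk term as KL between Q(s) and a preference distribution over states, keeping the same ambiguity term.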
Copyright © 2023 Book学术 All rights reserved.