Spiking neural networks (SNNs) have attracted significant interest in the development of brain-inspired computing systems due to their energy efficiency and similarities to biological information processing. In contrast to continuous-valued artificial neural networks, which produce results in a single step, SNNs require multiple steps during inference to achieve a desired accuracy level, which burdens real-time responsiveness and energy efficiency. Human and animal decision making exhibits a tradeoff between speed and accuracy, with correlations among reaction time, task complexity, and decision confidence; this raises the question of how an SNN model can benefit from implementing these attributes. Here, we introduce a theory of decision making in SNNs by untangling the interplay between signal and noise. Under this theory, we introduce a new learning objective that trains an SNN not only to make correct decisions but also to shape its confidence. Numerical experiments demonstrate that SNNs trained in this way exhibit improved confidence expression, reduced trial-to-trial variability, and shorter latency to reach the desired accuracy. We then introduce a stopping policy that halts inference in a way that further enhances the time efficiency of SNNs. The stopping time can serve as an indicator of whether a decision is correct, akin to the reaction time in animal behavior experiments. By integrating stochasticity into decision making, this study opens up new possibilities for exploring the capabilities of SNNs and advancing their applications in complex decision-making scenarios where model performance is limited.
{"title":"Toward a Free-Response Paradigm of Decision Making in Spiking Neural Networks","authors":"Zhichao Zhu;Yang Qi;Wenlian Lu;Zhigang Wang;Lu Cao;Jianfeng Feng","doi":"10.1162/neco_a_01733","DOIUrl":"10.1162/neco_a_01733","url":null,"abstract":"Spiking neural networks (SNNs) have attracted significant interest in the development of brain-inspired computing systems due to their energy efficiency and similarities to biological information processing. In contrast to continuous-valued artificial neural networks, which produce results in a single step, SNNs require multiple steps during inference to achieve a desired accuracy level, resulting in a burden in real-time response and energy efficiency. Inspired by the tradeoff between speed and accuracy in human and animal decision-making processes, which exhibit correlations among reaction times, task complexity, and decision confidence, an inquiry emerges regarding how an SNN model can benefit by implementing these attributes. Here, we introduce a theory of decision making in SNNs by untangling the interplay between signal and noise. Under this theory, we introduce a new learning objective that trains an SNN not only to make the correct decisions but also to shape its confidence. Numerical experiments demonstrate that SNNs trained in this way exhibit improved confidence expression, reduced trial-to-trial variability, and shorter latency to reach the desired accuracy. We then introduce a stopping policy that can stop inference in a way that further enhances the time efficiency of SNNs. The stopping time can serve as an indicator to whether a decision is correct, akin to the reaction time in animal behavior experiments. By integrating stochasticity into decision making, this study opens up new possibilities to explore the capabilities of SNNs and advance SNNs and their applications in complex decision-making scenarios where model performance is limited.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"481-521"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The creation of future low-power neuromorphic solutions requires specialist spiking neural network (SNN) algorithms that are optimized for neuromorphic settings. One such algorithmic challenge is the ability to recall learned patterns from their noisy variants. Solutions to this problem may be required to memorize vast numbers of patterns based on limited training data and subsequently recall the patterns in the presence of noise. To solve this problem, previous work has explored sparse associative memory (SAM): associative memory neural models that exploit the principle of sparse neural coding observed in the brain. Research into a subcategory of SAM has been inspired by the biological process of adult neurogenesis, whereby new neurons are generated to facilitate adaptive and effective lifelong learning. Although these neurogenesis models have been demonstrated in previous research, they have limitations in terms of recall memory capacity and robustness to noise. In this article, we provide a unifying framework for characterizing a type of SAM network that has been pretrained using a learning strategy that incorporated a simple neurogenesis model. Using this characterization, we formally define network topology and threshold optimization methods to empirically demonstrate greater than 10⁴ times improvement in memory capacity compared to previous work. We show that these optimizations can facilitate the development of networks that have reduced interneuron connectivity while maintaining high recall efficacy. This paves the way for ongoing research into fast, effective, low-power realizations of associative memory on neuromorphic platforms.
{"title":"Improving Recall in Sparse Associative Memories That Use Neurogenesis","authors":"Katy Warr;Jonathon Hare;David Thomas","doi":"10.1162/neco_a_01732","DOIUrl":"10.1162/neco_a_01732","url":null,"abstract":"The creation of future low-power neuromorphic solutions requires specialist spiking neural network (SNN) algorithms that are optimized for neuromorphic settings. One such algorithmic challenge is the ability to recall learned patterns from their noisy variants. Solutions to this problem may be required to memorize vast numbers of patterns based on limited training data and subsequently recall the patterns in the presence of noise. To solve this problem, previous work has explored sparse associative memory (SAM)—associative memory neural models that exploit the principle of sparse neural coding observed in the brain. Research into a subcategory of SAM has been inspired by the biological process of adult neurogenesis, whereby new neurons are generated to facilitate adaptive and effective lifelong learning. Although these neurogenesis models have been demonstrated in previous research, they have limitations in terms of recall memory capacity and robustness to noise. In this article, we provide a unifying framework for characterizing a type of SAM network that has been pretrained using a learning strategy that incorporated a simple neurogenesis model. Using this characterization, we formally define network topology and threshold optimization methods to empirically demonstrate greater than 104 times improvement in memory capacity compared to previous work. We show that these optimizations can facilitate the development of networks that have reduced interneuron connectivity while maintaining high recall efficacy. This paves the way for ongoing research into fast, effective, low-power realizations of associative memory on neuromorphic platforms.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"437-480"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the real-valued combinatorial pure exploration problem in the stochastic multi-armed bandit (R-CPE-MAB), focusing on the case where the size of the action set is polynomial with respect to the number of arms. In this case, the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits. We introduce the combinatorial gap-based exploration (CombGapE) algorithm, whose sample complexity upper bound matches the lower bound up to a problem-dependent constant factor. We show numerically that the CombGapE algorithm significantly outperforms existing methods on both synthetic and real-world data sets.
{"title":"A Fast Algorithm for the Real-Valued Combinatorial Pure Exploration of the Multi-Armed Bandit","authors":"Shintaro Nakamura;Masashi Sugiyama","doi":"10.1162/neco_a_01728","DOIUrl":"10.1162/neco_a_01728","url":null,"abstract":"We study the real-valued combinatorial pure exploration problem in the stochastic multi-armed bandit (R-CPE-MAB). We study the case where the size of the action set is polynomial with respect to the number of arms. In such a case, the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits. We introduce the combinatorial gap-based exploration (CombGapE) algorithm, whose sample complexity upper-bound-matches the lower bound up to a problem-dependent constant factor. We numerically show that the CombGapE algorithm outperforms existing methods significantly in both synthetic and real-world data sets.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"294-310"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, we mainly study the depth and width of autoencoders that use rectified linear unit (ReLU) activation functions. An autoencoder is a layered neural network consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector exactly (or approximately). In a previous study, Melkman et al. (2023) examined the depth and width of autoencoders using linear threshold activation functions with binary input and output vectors. We show that similar theoretical results hold for autoencoders that use ReLU activation functions with real input and output vectors. Furthermore, we show that it is possible to compress input vectors to one-dimensional vectors using ReLU activation functions, whereas the size of compressed vectors is trivially Ω(log n) for autoencoders with linear threshold activation functions, where n is the number of input vectors. We also study the case of linear activation functions. The results suggest that the compressive power of autoencoders using linear activation functions is considerably limited compared with those using ReLU activation functions.
{"title":"On the Compressive Power of Autoencoders With Linear and ReLU Activation Functions","authors":"Liangjie Sun;Chenyao Wu;Wai-Ki Ching;Tatsuya Akutsu","doi":"10.1162/neco_a_01729","DOIUrl":"10.1162/neco_a_01729","url":null,"abstract":"In this article, we mainly study the depth and width of autoencoders consisting of rectified linear unit (ReLU) activation functions. An autoencoder is a layered neural network consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector exactly (or approximately). In a previous study, Melkman et al. (2023) studied the depth and width of autoencoders using linear threshold activation functions with binary input and output vectors. We show that similar theoretical results hold if autoencoders using ReLU activation functions with real input and output vectors are used. Furthermore, we show that it is possible to compress input vectors to one-dimensional vectors using ReLU activation functions, although the size of compressed vectors is trivially Ω(log n) for autoencoders with linear threshold activation functions, where n is the number of input vectors. We also study the cases of linear activation functions. The results suggest that the compressive power of autoencoders using linear activation functions is considerably limited compared with those using ReLU activation functions.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"235-259"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, models based on the transformer architecture have seen widespread application and have become one of the core tools in the field of deep learning. Numerous successful and efficient techniques, such as parameter-efficient fine-tuning and efficient scaling, have been proposed around their applications to further enhance performance. However, the success of these strategies has always lacked the support of rigorous mathematical theory. To study the underlying mechanisms behind transformers and related techniques, we first propose a transformer learning framework motivated by distribution regression, with distributions as inputs; we connect a two-stage sampling process with natural language processing and present a mathematical formulation of the attention mechanism, called the attention operator. We demonstrate that, through the attention operator, transformers can compress distributions into function representations without loss of information. Moreover, owing to the advantages of our novel attention operator, transformers exhibit a stronger capability to learn functionals with more complex structures than convolutional neural networks and fully connected networks. Finally, we obtain a generalization bound within the distribution regression framework. Alongside these theoretical results, we further discuss some successful techniques emerging with large language models (LLMs), such as prompt tuning, parameter-efficient fine-tuning, and efficient scaling, and provide theoretical insights behind these techniques within our novel analysis framework.
{"title":"Generalization Analysis of Transformers in Distribution Regression","authors":"Peilin Liu;Ding-Xuan Zhou","doi":"10.1162/neco_a_01726","DOIUrl":"10.1162/neco_a_01726","url":null,"abstract":"In recent years, models based on the transformer architecture have seen widespread applications and have become one of the core tools in the field of deep learning. Numerous successful and efficient techniques, such as parameter-efficient fine-tuning and efficient scaling, have been proposed surrounding their applications to further enhance performance. However, the success of these strategies has always lacked the support of rigorous mathematical theory. To study the underlying mechanisms behind transformers and related techniques, we first propose a transformer learning framework motivated by distribution regression, with distributions being inputs, connect a two-stage sampling process with natural language processing, and present a mathematical formulation of the attention mechanism called attention operator. We demonstrate that by the attention operator, transformers can compress distributions into function representations without loss of information. Moreover, with the advantages of our novel attention operator, transformers exhibit a stronger capability to learn functionals with more complex structures than convolutional neural networks and fully connected networks. Finally, we obtain a generalization bound within the distribution regression framework. Throughout theoretical results, we further discuss some successful techniques emerging with large language models (LLMs), such as prompt tuning, parameter-efficient fine-tuning, and efficient scaling. We also provide theoretical insights behind these techniques within our novel analysis framework.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"260-293"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hebbian learning theory is rooted in Pavlov's classical conditioning. While mathematical models of the former have been proposed and studied over the past decades, especially in spin glass theory, only recently has it been shown numerically that it is possible to write neural and synaptic dynamics that mirror Pavlovian conditioning mechanisms and also give rise to synaptic weights that correspond to the Hebbian learning rule. In this article, we show that the same dynamics can be derived with equilibrium statistical mechanics tools and basic, well-motivated modeling assumptions. We then show how to study the resulting system of coupled stochastic differential equations under a reasonable separation of neural and synaptic timescales. In particular, we analytically demonstrate that this synaptic evolution converges to the Hebbian learning rule in various settings and compute the variance of the stochastic process. Finally, drawing on evidence of pure memory reinforcement during sleep stages, we show how the proposed model can simulate neural networks that undergo sleep-associated memory consolidation, thereby demonstrating the compatibility of Pavlovian learning with dreaming mechanisms.
{"title":"Learning in Associative Networks Through Pavlovian Dynamics","authors":"Daniele Lotito;Miriam Aquaro;Chiara Marullo","doi":"10.1162/neco_a_01730","DOIUrl":"10.1162/neco_a_01730","url":null,"abstract":"Hebbian learning theory is rooted in Pavlov’s classical conditioning While mathematical models of the former have been proposed and studied in the past decades, especially in spin glass theory, only recently has it been numerically shown that it is possible to write neural and synaptic dynamics that mirror Pavlov conditioning mechanisms and also give rise to synaptic weights that correspond to the Hebbian learning rule. In this article we show that the same dynamics can be derived with equilibrium statistical mechanics tools and basic and motivated modeling assumptions. Then we show how to study the resulting system of coupled stochastic differential equations assuming the reasonable separation of neural and synaptic timescale. In particular, we analytically demonstrate that this synaptic evolution converges to the Hebbian learning rule in various settings and compute the variance of the stochastic process. Finally, drawing from evidence on pure memory reinforcement during sleep stages, we show how the proposed model can simulate neural networks that undergo sleep-associated memory consolidation processes, thereby proving the compatibility of Pavlovian learning with dreaming mechanisms.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"311-343"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Significant progress has been made recently in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer NNs and has not addressed the impact of different network scaling. Here, network scaling corresponds to the normalization of the layers. In this article, we greatly extend the previous work (Lei et al., 2022; Richards & Kuzborskij, 2021) by conducting a comprehensive stability and generalization analysis of GD for two-layer and three-layer NNs. For two-layer NNs, our results are established under general network scaling, relaxing previous conditions. In the case of three-layer NNs, our technical contribution lies in demonstrating its nearly co-coercive property by utilizing a novel induction strategy that thoroughly explores the effects of overparameterization. As a direct application of our general findings, we derive the excess risk rate of O(1/√n) for GD in both two-layer and three-layer NNs. This sheds light on sufficient or necessary conditions for underparameterized and overparameterized NNs trained by GD to attain the desired risk rate of O(1/√n). Moreover, we demonstrate that as the scaling factor increases or the network complexity decreases, less overparameterization is required for GD to achieve the desired error rates. Additionally, under a low-noise condition, we obtain a fast risk rate of O(1/n) for GD in both two-layer and three-layer NNs.
{"title":"Generalization Guarantees of Gradient Descent for Shallow Neural Networks","authors":"Puyu Wang;Yunwen Lei;Di Wang;Yiming Ying;Ding-Xuan Zhou","doi":"10.1162/neco_a_01725","DOIUrl":"10.1162/neco_a_01725","url":null,"abstract":"Significant progress has been made recently in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer NNs and has not addressed the impact of different network scaling. Here, network scaling corresponds to the normalization of the layers. In this article, we greatly extend the previous work (Lei et al., 2022; Richards & Kuzborskij, 2021) by conducting a comprehensive stability and generalization analysis of GD for two-layer and three-layer NNs. For two-layer NNs, our results are established under general network scaling, relaxing previous conditions. In the case of three-layer NNs, our technical contribution lies in demonstrating its nearly co-coercive property by utilizing a novel induction strategy that thoroughly explores the effects of overparameterization. As a direct application of our general findings, we derive the excess risk rate of O(1/n) for GD in both two-layer and three-layer NNs. This sheds light on sufficient or necessary conditions for underparameterized and overparameterized NNs trained by GD to attain the desired risk rate of O(1/n). Moreover, we demonstrate that as the scaling factor increases or the network complexity decreases, less overparameterization is required for GD to achieve the desired error rates. Additionally, under a low-noise condition, we obtain a fast risk rate of O(1/n) for GD in both two-layer and three-layer NNs.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"344-402"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142666383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex information processing systems that are capable of a wide variety of tasks, such as the human brain, are composed of specialized units that collaborate and communicate with one another. An important property of such information processing networks is locality: there is no single global unit controlling the modules; instead, information is exchanged locally. Here, we take a decision-theoretic approach to study networks of bounded rational decision makers that are allowed to specialize and communicate with one another. In contrast to previous work, which has focused on feedforward communication between decision-making agents, we consider cyclical information processing paths that allow back-and-forth communication. We adapt message-passing algorithms to this purpose, essentially allowing for local information flow between units and thus enabling circular dependency structures. We provide examples showing that repeated communication can increase performance when each unit's information processing capability is limited, and that decision-making systems with too few or too many connections and feedback loops achieve suboptimal utility.
{"title":"Bounded Rational Decision Networks With Belief Propagation","authors":"Gerrit Schmid;Sebastian Gottwald;Daniel A. Braun","doi":"10.1162/neco_a_01719","DOIUrl":"10.1162/neco_a_01719","url":null,"abstract":"Complex information processing systems that are capable of a wide variety of tasks, such as the human brain, are composed of specialized units that collaborate and communicate with each other. An important property of such information processing networks is locality: there is no single global unit controlling the modules, but information is exchanged locally. Here, we consider a decision-theoretic approach to study networks of bounded rational decision makers that are allowed to specialize and communicate with each other. In contrast to previous work that has focused on feedforward communication between decision-making agents, we consider cyclical information processing paths allowing for back-and-forth communication. We adapt message-passing algorithms to suit this purpose, essentially allowing for local information flow between units and thus enabling circular dependency structures. We provide examples that show how repeated communication can increase performance given that each unit’s information processing capability is limited and that decision-making systems with too few or too many connections and feedback loops achieve suboptimal utility.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"76-127"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10810330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain's learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has subsequently been shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, and navigation, to name a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain's remarkable abilities to compute and learn, with sequences playing a vital role.
{"title":"Computation With Sequences of Assemblies in a Model of the Brain","authors":"Max Dabagia;Christos H. Papadimitriou;Santosh S. Vempala","doi":"10.1162/neco_a_01720","DOIUrl":"10.1162/neco_a_01720","url":null,"abstract":"Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain’s learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has been subsequently shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to list a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain’s remarkable abilities to compute and learn, with sequences playing a vital role.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"193-233"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
{"title":"Computing With Residue Numbers in High-Dimensional Representation","authors":"Christopher J. Kymn;Denis Kleyko;E. Paxon Frady;Connor Bybee;Pentti Kanerva;Friedrich T. Sommer;Bruno A. Olshausen","doi":"10.1162/neco_a_01723","DOIUrl":"10.1162/neco_a_01723","url":null,"abstract":"We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"1-37"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}