A General, Noise-Driven Mechanism for the 1/f-Like Behavior of Neural Field Spectra
Mark A. Kramer; Catherine J. Chu
Neural Computation 36(8): 1643-1668 (July 19, 2024). DOI: 10.1162/neco_a_01682

Consistent observations across recording modalities, experiments, and neural systems find neural field spectra with 1/f-like scaling, eliciting many alternative theories to explain this universal phenomenon. We show that a general dynamical system with stochastic drive and minimal assumptions generates 1/f-like spectra consistent with the range of values observed in vivo without requiring a specific biological mechanism or collective critical behavior.
Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks
Bastian Pietras
Neural Computation 36(8): 1476-1540 (July 19, 2024). DOI: 10.1162/neco_a_01680

Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse coupling in networks of QIF and θ-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.
Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model
Vicky Zhu; Robert Rosenbaum
Neural Computation 36(8): 1568-1600 (July 19, 2024). DOI: 10.1162/neco_a_01681

In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
A Mean Field to Capture Asynchronous Irregular Dynamics of Conductance-Based Networks of Adaptive Quadratic Integrate-and-Fire Neuron Models
Christoffer G. Alexandersen; Chloé Duprat; Aitakin Ezzati; Pierre Houzelstein; Ambre Ledoux; Yuhong Liu; Sandra Saghir; Alain Destexhe; Federico Tesler; Damien Depannemaecker
Neural Computation 36(7): 1433-1448 (June 7, 2024). DOI: 10.1162/neco_a_01670

Mean-field models are a class of models used in computational neuroscience to study the behavior of large populations of neurons. These models are based on the idea of representing the activity of a large number of neurons as the average behavior of mean-field variables. This abstraction allows the study of large-scale neural dynamics in a computationally efficient and mathematically tractable manner. One of these methods, based on a semianalytical approach, has previously been applied to different types of single-neuron models, but never to models based on a quadratic form. In this work, we adapted this method to quadratic integrate-and-fire neuron models with adaptation and conductance-based synaptic interactions. We validated the mean-field model by comparing it to the spiking network model. This mean-field model should be useful for modeling large-scale activity based on quadratic neurons interacting with conductance-based synapses.
Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes
Sören Christensen; Jan Kallsen
Neural Computation 36(7): 1424-1432 (June 7, 2024). DOI: 10.1162/neco_a_01668

In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this note, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
Desiderata for Normative Models of Synaptic Plasticity
Colin Bredenberg; Cristina Savin
Neural Computation 36(7): 1245-1285 (June 7, 2024). DOI: 10.1162/neco_a_01671

Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
Data Efficiency, Dimensionality Reduction, and the Generalized Symmetric Information Bottleneck
K. Michael Martini; Ilya Nemenman
Neural Computation 36(7): 1353-1379 (June 7, 2024). DOI: 10.1162/neco_a_01667

The symmetric information bottleneck (SIB), an extension of the more familiar information bottleneck, is a dimensionality-reduction technique that simultaneously compresses two random variables to preserve information between their compressed versions. We introduce the generalized symmetric information bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction. We then explore the data set size requirements of such simultaneous compression. We do this by deriving bounds and root-mean-squared estimates of statistical fluctuations of the involved loss functions. We show that in typical situations, the simultaneous GSIB compression requires qualitatively less data to achieve the same errors compared to compressing variables one at a time. We suggest that this is an example of a more general principle that simultaneous compression is more data efficient than independent compression of each of the input variables.
A Multimodal Fitting Approach to Construct Single-Neuron Models With Patch Clamp and High-Density Microelectrode Arrays
Alessio Paolo Buccino; Tanguy Damart; Julian Bartram; Darshan Mandge; Xiaohan Xue; Mickael Zbili; Tobias Gänswein; Aurélien Jaquier; Vishalini Emmenegger; Henry Markram; Andreas Hierlemann; Werner Van Geit
Neural Computation 36(7): 1286-1331 (June 7, 2024). DOI: 10.1162/neco_a_01672. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10661254

In computational neuroscience, multicompartment models are among the most biophysically realistic representations of single neurons. Constructing such models usually involves the use of the patch-clamp technique to record somatic voltage signals under different experimental conditions. The experimental data are then used to fit the many parameters of the model. While patching of the soma is currently the gold-standard approach to build multicompartment models, several studies have also evidenced a richness of dynamics in dendritic and axonal sections. Recording from the soma alone makes it hard to observe and correctly parameterize the activity of nonsomatic compartments. In order to provide a richer set of data as input to multicompartment models, we here investigate the combination of somatic patch-clamp recordings with recordings of high-density microelectrode arrays (HD-MEAs). HD-MEAs enable the observation of extracellular potentials and neural activity of neuronal compartments at subcellular resolution. In this work, we introduce a novel framework to combine patch-clamp and HD-MEA data to construct multicompartment models. We first validate our method on a ground-truth model with known parameters and show that the use of features extracted from extracellular signals, in addition to intracellular ones, yields models enabling better fits than using intracellular features alone. We also demonstrate our procedure using experimental data by constructing cell models from in vitro cell cultures. The proposed multimodal fitting procedure has the potential to augment the modeling efforts of the computational neuroscience community and provide the field with neuronal models that are more realistic and can be better validated.
Associative Learning of an Unnormalized Successor Representation
Niels J. Verosky
Neural Computation 36(7): 1410-1423 (June 7, 2024). DOI: 10.1162/neco_a_01675

The successor representation is known to relate to temporal associations learned in the temporal context model (Gershman et al., 2012), and subsequent work suggests a wide relevance of the successor representation across spatial, visual, and abstract relational tasks. I demonstrate that the successor representation and purely associative learning have an even deeper relationship than initially indicated: Hebbian temporal associations are an unnormalized form of the successor representation, such that the two converge on an identical representation whenever all states are equally frequent and can correlate highly in practice even when the state distribution is nonuniform.
Bioplausible Unsupervised Delay Learning for Extracting Spatiotemporal Features in Spiking Neural Networks
Alireza Nadafian; Mohammad Ganjtabesh
Neural Computation 36(7): 1332-1352 (June 7, 2024). DOI: 10.1162/neco_a_01674

The plasticity of the conduction delay between neurons plays a fundamental role in learning temporal features that are essential for processing videos, speech, and many high-level functions. However, the exact underlying mechanisms in the brain for this modulation are still under investigation. Devising a rule for precisely adjusting synaptic delays could eventually help in developing more efficient and powerful brain-inspired computational models. In this article, we propose an unsupervised, bioplausible learning rule for adjusting synaptic delays in spiking neural networks. We also provide mathematical proofs of the convergence of our rule in learning spatiotemporal patterns. Furthermore, to show the effectiveness of our learning rule, we conducted several experiments on random dot kinematogram stimuli and a subset of the DVS128 Gesture data set. The experimental results indicate the effectiveness of the proposed delay learning rule for extracting spatiotemporal features in an STDP-based spiking neural network.