Do the receptive fields in the primary visual cortex span a variability over the degree of elongation of the receptive fields?
Pub Date: 2025-09-01 | Epub Date: 2025-06-20 | DOI: 10.1007/s10827-025-00907-4
Tony Lindeberg
This paper presents the results of combining (i) theoretical analysis regarding connections between the orientation selectivity and the elongation of receptive fields for the affine Gaussian derivative model with (ii) biological measurements of orientation selectivity in the primary visual cortex, to investigate whether (iii) the receptive fields can be regarded as spanning a variability in the degree of elongation. From an in-depth theoretical analysis of idealized models for the receptive fields of simple and complex cells in the primary visual cortex, we established that the orientation selectivity becomes narrower with increasing elongation of the receptive fields. Combined with previously established biological results concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, as well as previous experimental results concerning distributions of the resultant of the orientation selectivity curves for simple and complex cells, we show that these results are consistent with the receptive fields spanning a variability over the degree of elongation. We also show that our principled theoretical model for visual receptive fields leads to qualitatively similar types of deviations from a uniform histogram of the resultant descriptor of the orientation selectivity curves for simple cells, as can be observed in the results from biological experiments. To firmly investigate the validity of the underlying working hypothesis, we finally formulate a set of testable predictions for biological experiments, to characterize the predicted systematic variability in the elongation over the orientation maps in higher mammals, and its relation to the pinwheel structure.
{"title":"Do the receptive fields in the primary visual cortex span a variability over the degree of elongation of the receptive fields?","authors":"Tony Lindeberg","doi":"10.1007/s10827-025-00907-4","DOIUrl":"10.1007/s10827-025-00907-4","url":null,"abstract":"<p><p>This paper presents the results of combining (i) theoretical analysis regarding connections between the orientation selectivity and the elongation of receptive fields for the affine Gaussian derivative model with (ii) biological measurements of orientation selectivity in the primary visual cortex to investigate if (iii) the receptive fields can be regarded as spanning a variability in the degree of elongation. From an in-depth theoretical analysis of idealized models for the receptive fields of simple and complex cells in the primary visual cortex, we established that the orientation selectivity becomes more narrow with increasing elongation of the receptive fields. Combined with previously established biological results, concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, as well as previous experimental results concerning distributions of the resultant of the orientation selectivity curves for simple and complex cells, we show that these results are consistent with the receptive fields spanning a variability over the degree of elongation of the receptive fields. We also show that our principled theoretical model for visual receptive fields leads to qualitatively similar types of deviations from a uniform histogram of the resultant descriptor of the orientation selectivity curves for simple cells, as can be observed in the results from biological experiments. To firmly investigate the validity of the underlying working hypothesis, we finally formulate a set of testable predictions for biological experiments, to characterize the predicted systematic variability in the elongation over the orientation maps in higher mammals, and its relations to the pinwheel structure.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"397-417"},"PeriodicalIF":2.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12417286/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A solvable neural circuit model revealing the dynamical principle of non-optimal temporal weighting in perceptual decision making
Pub Date: 2025-09-01 | Epub Date: 2025-07-29 | DOI: 10.1007/s10827-025-00910-9
Xuewen Shen, Fangting Li, Bin Min
Understanding the mechanism of accumulating evidence over time in deliberate decision-making is crucial for both humans and animals. While numerous models have been proposed over the past few decades to characterize the temporal weighting of evidence, the dynamical principle governing the neural circuits underlying decision making remains elusive. In this study, we propose a solvable rank-1 neural circuit model to address this problem. We first derive an analytical expression for the integration kernel, a key quantity describing how sensory evidence at different time points is weighted with respect to the final decision. Based on this expression, we illustrate how the dynamics introduced in the auxiliary space - namely, a subspace orthogonal to the decision variable - modulate the flow field of the decision variable through a gain modulation mechanism, resulting in a variety of integration kernel types, including not only monotonic ones (recency and primacy) but also non-monotonic ones (convex and concave). Furthermore, we quantitatively validate that integration kernel shapes can be understood from dynamical landscapes, and that non-monotonic temporal weighting reflects topological transitions in the landscape. Additionally, we show that training networks with non-optimal weighting leads to convergence toward optimal weighting. Finally, we demonstrate that rank-1 connectivity induces symmetric competition that generates a pitchfork bifurcation. In summary, we present a solvable neural circuit model that unifies diverse types of temporal weighting, providing an intriguing link between non-monotonic integration kernel structure and topological transitions of the dynamical landscape.
{"title":"A solvable neural circuit model revealing the dynamical principle of non-optimal temporal weighting in perceptual decision making.","authors":"Xuewen Shen, Fangting Li, Bin Min","doi":"10.1007/s10827-025-00910-9","DOIUrl":"10.1007/s10827-025-00910-9","url":null,"abstract":"<p><p>Understanding the mechanism of accumulating evidence over time in deliberate decision-making is crucial for both humans and animals. While numerous models have been proposed over the past few decades to characterize the temporal weighting of evidence, the dynamical principle governing the neural circuits in decision making remain elusive. In this study, we proposed a solvable rank-1 neural circuit model to address this problem. We first derived an analytical expression for integration kernel, a key quantity describing how sensory evidence at different time points is weighted with respect to the final decision. Based on this expression, we illustrated that how the dynamics introduced in the auxiliary space-namely, a subspace orthogonal to the decision variable-modulates the flow fields of decision variable through a gain modulation mechanism, resulting in a variety of integration kernel types, including not only monotonic ones (recency and primacy) but also non-monotonic ones (convex and concave). Furthermore, we quantitatively validated that integration kernel shapes can be understood from dynamical landscapes and non-monotonic temporal weighting reflects topological transitions in the landscape. Additionally, we showed that training on networks with non-optimal weighting leads to convergence toward optimal weighting. Finally, we demonstrate that rank-1 connectivity induces symmetric competition to generate pitchfork bifurcation. In summary, we present a solvable neural circuit model that unifies diverse types of temporal weighting, providing an intriguing link between non-monotonic integration kernel structure and topological transitions of dynamical landscape.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"441-458"},"PeriodicalIF":2.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144735461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling characteristics of neuronal firing in the thalamocortical network of connections in control and parkinsonian primates
Pub Date: 2025-09-01 | Epub Date: 2025-06-20 | DOI: 10.1007/s10827-025-00909-2
Carly Ferrell, Qile Jiang, Margaret Olivia Leu, Thomas Wichmann, Michael Caiola
According to current anatomical models, motor cortical areas, the basal ganglia, and the ventral motor thalamus form partially closed (re-entrant) loop structures. The normal patterning of neuronal activity within this network regulates aspects of movement planning and execution, while abnormal firing patterns can contribute to movement impairments, such as those seen in Parkinson's disease. Most previous research on such firing pattern abnormalities has focused on parkinsonism-associated changes in the basal ganglia, demonstrating, among other abnormalities, prominent changes in firing rates, as well as the emergence of synchronized beta-band oscillatory burst patterns. In contrast, abnormalities of neuronal activity in the thalamus and cortex are less explored. However, recent studies have shown both changes in thalamocortical connectivity and anatomical changes in corticothalamic terminals in Parkinson's disease. To explore these changes, we created a computational framework to model the effects of changes in thalamocortical connections as they may occur when an individual transitions from the healthy to the parkinsonian state. A 5-dimensional average neuronal firing rate model was fitted to replicate neuronal firing rate information recorded in healthy and parkinsonian primates. The study focused on the effects of (1) changes in synaptic weights of the reciprocal projections between cortical neurons and thalamic principal neurons, and (2) changes in synaptic weights of the cortical projection to thalamic interneurons. We found that it is possible to force the system to change from a healthy to a parkinsonian state, including the emergent oscillatory activity, by only adjusting these two sets of synaptic weights. Thus, this study demonstrates that small changes in the afferent and efferent connections of thalamic neurons could contribute to the emergence of network-wide firing patterns that are characteristic of the parkinsonian state.
{"title":"Modeling characteristics of neuronal firing in the thalamocortical network of connections in control and parkinsonian primates.","authors":"Carly Ferrell, Qile Jiang, Margaret Olivia Leu, Thomas Wichmann, Michael Caiola","doi":"10.1007/s10827-025-00909-2","DOIUrl":"10.1007/s10827-025-00909-2","url":null,"abstract":"<p><p>According to current anatomical models, motor cortical areas, the basal ganglia, and the ventral motor thalamus form partially closed (re-entrant) loop structures. The normal patterning of neuronal activity within this network regulates aspects of movement planning and execution, while abnormal firing patterns can contribute to movement impairments, such as those seen in Parkinson's disease. Most previous research on such firing pattern abnormalities has focused on parkinsonism-associated changes in the basal ganglia, demonstrating, among other abnormalities, prominent changes in firing rates, as well as the emergence of synchronized beta-band oscillatory burst patterns. In contrast, abnormalities of neuronal activity in the thalamus and cortex are less explored. However, recent studies have shown both changes in thalamocortical connectivity and anatomical changes in corticothalamic terminals in Parkinson's disease. To explore these changes, we created a computational framework to model the effects of changes in thalamocortical connections as they may occur when an individual transitions from the healthy to the parkinsonian state. A 5-dimensional average neuronal firing rate model was fitted to replicate neuronal firing rate information recorded in healthy and parkinsonian primates. The study focused on the effects of (1) changes in synaptic weights of the reciprocal projections between cortical neurons and thalamic principal neurons, and (2) changes in synaptic weights of the cortical projection to thalamic interneurons. We found that it is possible to force the system to change from a healthy to a parkinsonian state, including the emergent oscillatory activity, by only adjusting these two sets of synaptic weights. Thus, this study demonstrates that small changes in the afferent and efferent connections of thalamic neurons could contribute to the emergence of network-wide firing patterns that are characteristic for the parkinsonian state.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"419-439"},"PeriodicalIF":2.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12417298/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simplified model of NMDA-receptor-mediated dynamics in leaky integrate-and-fire neurons
Pub Date: 2025-09-01 | Epub Date: 2025-08-05 | DOI: 10.1007/s10827-025-00911-8
Jan-Eirik Welle Skaar, Nicolai Haug, Hans Ekkehard Plesser
A model for NMDA-receptor-mediated synaptic currents in leaky integrate-and-fire neurons, first proposed by Wang (J Neurosci, 1999), has been widely studied in computational neuroscience. The model features a fast rise in the NMDA conductance upon spikes in a pre-synaptic neuron followed by a slow decay. In a general implementation of this model that allows for arbitrary network connectivity and delay distributions, the summed NMDA current from all neurons in a pre-synaptic population cannot be simulated in aggregated form. Simulating each synapse separately is prohibitively slow for all but small networks, which has largely limited the use of the model to fully connected networks with identical delays, for which an efficient simulation scheme exists. We propose an approximation to the original model that can be efficiently simulated for arbitrary network connectivity and delay distributions. Our results demonstrate that the approximation incurs minimal error and preserves network dynamics. We further use the approximate model to explore binary decision making in sparsely coupled networks.
{"title":"A simplified model of NMDA-receptor-mediated dynamics in leaky integrate-and-fire neurons.","authors":"Jan-Eirik Welle Skaar, Nicolai Haug, Hans Ekkehard Plesser","doi":"10.1007/s10827-025-00911-8","DOIUrl":"10.1007/s10827-025-00911-8","url":null,"abstract":"<p><p>A model for NMDA-receptor-mediated synaptic currents in leaky integrate-and-fire neurons, first proposed by Wang (J Neurosci, 1999), has been widely studied in computational neuroscience. The model features a fast rise in the NMDA conductance upon spikes in a pre-synaptic neuron followed by a slow decay. In a general implementation of this model which allows for arbitrary network connectivity and delay distributions, the summed NMDA current from all neurons in a pre-synaptic population cannot be simulated in aggregated form. Simulating each synapse separately is prohibitively slow for all but small networks, which has largely limited the use of the model to fully connected networks with identical delays, for which an efficient simulation scheme exists. We propose an approximation to the original model that can be efficiently simulated for arbitrary network connectivity and delay distributions. Our results demonstrate that the approximation incurs minimal error and preserves network dynamics. We further use the approximate model to explore binary decision making in sparsely coupled networks.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"475-487"},"PeriodicalIF":2.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12417261/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144785976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating computational neuroscience into Africa's academic curriculum: Challenges, opportunities, and strategic implementation
Pub Date: 2025-09-01 | DOI: 10.1007/s10827-025-00906-5
Ibeachu P Chinagorom, Peter Oseghale Ohue
Computational Neuroscience (CN) is an interdisciplinary field that combines neuroscience, mathematics, artificial intelligence, theoretical models, and experimental data to understand how the brain works. It unravels the intricacies of the nervous system, contributing significantly to cognitive science, neuroengineering, and machine learning. Despite its importance to artificial intelligence and medical research, CN remains underrepresented in Africa's academic landscape. This paper explores the current state of CN in Africa, the challenges hindering its integration, the emerging opportunities, and evidence-based strategies for curriculum implementation. Emphasis is placed on capacity building, interdisciplinary collaboration, open science, theoretical neuroscience, the development of local capacity, and the leveraging of international partnerships.
{"title":"Integrating computational neuroscience into Africa's academic curriculum: Challenges, opportunities, and strategic implementation.","authors":"Ibeachu P Chinagorom, Peter Oseghale Ohue","doi":"10.1007/s10827-025-00906-5","DOIUrl":"10.1007/s10827-025-00906-5","url":null,"abstract":"<p><p>Computational Neuroscience (CN) is an interdisciplinary field that combines neuroscience, mathematics, artificial intelligence, theoretical models and experimental data to understand how the brain works. It unravels the intricacies of the nervous system contributing significantly to cognitive science, neuroengineering and machine learning. CN importance in artificial intelligence and medical research remains underrepresented in Africa's academic landscape. This paper explores the current state of CN in Africa, the challenges hindering its integration, the emerging opportunities, and the evidence-based strategies for curriculum implementation. Capacity building, interdisciplinary collaboration, open science, theoretical neuroscience, development of local capacity, and leveraging international partnerships are emphasized.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"393-395"},"PeriodicalIF":2.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling traveling calcium waves in cellular structures
Pub Date: 2025-06-01 | Epub Date: 2025-04-02 | DOI: 10.1007/s10827-025-00898-2
Patrick A Shoemaker, Bo M B Bekkouche
We report a parametric simulation study of traveling calcium waves in two classes of cellular structures: dendrite-like processes and an idealized cell body. It is motivated by the hypothesis that calcium waves may participate in spatiotemporal sensory processing; accordingly, its objective is to elucidate the dependence of traveling wave characteristics (e.g., propagation speed and amplitude) on various anatomical and physiological parameters. The models include representations of inositol trisphosphate and ryanodine receptors (which mediate transient calcium entry into the cytoplasm from the endoplasmic reticulum), as well as other entities involved in calcium transport or reactions. These support traveling cytoplasmic calcium waves, which are fully regenerative for significant ranges of model parameters. We also observe Hopf bifurcations between stable and unstable regimes, the latter being characterized by periodic calcium spikes. Traveling waves are possible in unstable processes during phases with sufficiently high calcium levels in the endoplasmic reticulum. Damped and abortive waves are observed for some parameter values. When both receptor types are present and functional, we find wave speeds on the order of 100 to several hundred micrometers per second and cytosolic calcium transients with amplitudes of tens of micromolar; when ryanodine receptors are absent, these values are on the order of tens of micrometers per second and 1-6 micromolar. Even with substantially reduced channel conductance, ryanodine receptors can significantly impact wave speeds and amplitudes. Receptor areal densities and the diffusion coefficient for cytoplasmic calcium are the parameters to which wave characteristics are most sensitive.
{"title":"Modeling traveling calcium waves in cellular structures.","authors":"Patrick A Shoemaker, Bo M B Bekkouche","doi":"10.1007/s10827-025-00898-2","DOIUrl":"10.1007/s10827-025-00898-2","url":null,"abstract":"<p><p>We report a parametric simulation study of traveling calcium waves in two classes of cellular structures: dendrite-like processes and an idealized cell body. It is motivated by the hypothesis that calcium waves may participate in spatiotemporal sensory processing; accordingly, its objective is to elucidate the dependence of traveling wave characteristics (e.g., propagation speed and amplitude) on various anatomical and physiological parameters. The models include representations of inositol trisphosphate and ryanodine receptors (which mediate transient calcium entry into the cytoplasm from the endoplasmic reticulum), as well as other entities involved in calcium transport or reactions. These support traveling cytoplasmic calcium waves, which are fully regenerative for significant ranges of model parameters. We also observe Hopf bifurcations between stable and unstable regimes, the latter being characterized by periodic calcium spikes. Traveling waves are possible in unstable processes during phases with sufficiently high calcium levels in the endoplasmic reticulum. Damped and abortive waves are observed for some parameter values. When both receptor types are present and functional, we find wave speeds on the order of 100 to several hundred micrometers per second and cytosolic calcium transients with amplitudes of tens of micromolar; when ryanodine receptors are absent, these values are on the order of tens of micrometers per second and 1-6 micromolar. Even with significantly downgraded channel conductance, ryanodine receptors can significantly impact wave speeds and amplitudes. Receptor areal densities and the diffusion coefficient for cytoplasmic calcium are the parameters to which wave characteristics are most sensitive.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"219-245"},"PeriodicalIF":1.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12181221/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143765988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cortical dynamics of neural-connectivity fields
Pub Date: 2025-06-01 | Epub Date: 2025-04-10 | DOI: 10.1007/s10827-025-00903-8
Gerald K Cooray, Vernon Cooray, Karl J Friston
Macroscopic studies of cortical tissue reveal a prevalence of oscillatory activity that reflects a fine tuning of neural interactions. This research extends neural field theories by incorporating generalized oscillatory dynamics into previous work on conservative or semi-conservative neural field dynamics. Prior studies have largely assumed isotropic connections among neural units; however, this study demonstrates that a broad range of anisotropic and fluctuating connections can still sustain oscillations. Using Lagrangian field methods, we examine different types of connectivity, their dynamics, and potential interactions with neural fields. From this theoretical foundation, we derive a framework that incorporates Hebbian and non-Hebbian learning - i.e., plasticity - into the study of neural fields via the concept of a connectivity field.
{"title":"Cortical dynamics of neural-connectivity fields.","authors":"Gerald K Cooray, Vernon Cooray, Karl J Friston","doi":"10.1007/s10827-025-00903-8","DOIUrl":"10.1007/s10827-025-00903-8","url":null,"abstract":"<p><p>Macroscopic studies of cortical tissue reveal a prevalence of oscillatory activity, that reflect a fine tuning of neural interactions. This research extends neural field theories by incorporating generalized oscillatory dynamics into previous work on conservative or semi-conservative neural field dynamics. Prior studies have largely assumed isotropic connections among neural units; however, this study demonstrates that a broad range of anisotropic and fluctuating connections can still sustain oscillations. Using Lagrangian field methods, we examine different types of connectivity, their dynamics, and potential interactions with neural fields. From this theoretical foundation, we derive a framework that incorporates Hebbian and non-Hebbian learning - i.e., plasticity - into the study of neural fields via the concept of a connectivity field.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"373-391"},"PeriodicalIF":1.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12181116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inferring collective synchrony observing spiking of one or several neurons
Pub Date: 2025-06-01 | Epub Date: 2025-03-22 | DOI: 10.1007/s10827-025-00900-x
Arkady Pikovsky, Michael Rosenblum
We address the quantification of synchrony in a large ensemble of interacting neurons from the observation of spiking events. In a simulation study, we efficiently infer the synchrony level in a neuronal population from a point process reflecting the spiking of a small number of units, and even from a single neuron. We introduce a synchrony measure (order parameter) based on the Bartlett covariance density; this quantity can be easily computed from the recorded point process. The measure is robust to missed spikes and, if computed from observations of several neurons, does not require spike sorting. We illustrate the approach by modeling populations of spiking or bursting neurons, including the case of sparse synchrony.
{"title":"Inferring collective synchrony observing spiking of one or several neurons.","authors":"Arkady Pikovsky, Michael Rosenblum","doi":"10.1007/s10827-025-00900-x","DOIUrl":"10.1007/s10827-025-00900-x","url":null,"abstract":"<p><p>We tackle a quantification of synchrony in a large ensemble of interacting neurons from the observation of spiking events. In a simulation study, we efficiently infer the synchrony level in a neuronal population from a point process reflecting spiking of a small number of units and even from a single neuron. We introduce a synchrony measure (order parameter) based on the Bartlett covariance density; this quantity can be easily computed from the recorded point process. This measure is robust concerning missed spikes and, if computed from observing several neurons, does not require spike sorting. We illustrate the approach by modeling populations of spiking or bursting neurons, including the case of sparse synchrony.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"305-320"},"PeriodicalIF":1.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localist neural plasticity identified by mutual information
Pub Date: 2025-06-01 | Epub Date: 2025-03-22 | DOI: 10.1007/s10827-025-00901-w
Gabriele Scheler, Martin L Schumann, Johann Schumann
We present a model of pattern memory and retrieval with novel, technically useful, and biologically realistic properties. Specifically, we enter n variations of k pattern classes (n*k patterns) onto a cortex-like balanced inhibitory-excitatory network with heterogeneous neurons, and let the patterns spread within the recurrent network. We show that we can identify high mutual-information (MI) neurons as the major information-bearing elements within each pattern representation. We employ a simple one-shot adaptive (learning) process focusing on high-MI neurons and inhibition. Such 'localist plasticity' is highly efficient, because it requires only a few adaptations for each pattern. Specifically, we store k=10 patterns of size s=400 in a 1000/1200-neuron network. We stimulate high-MI neurons and in this way recall patterns, such that the whole network represents the corresponding pattern. We assess the quality of the representation (a) before learning, when entering the pattern into a naive network, (b) after learning, on the adapted network, and (c) after recall by stimulation. The recalled patterns could be easily recognized by a trained classifier. The recalled pattern 'unfolds' over the recurrent network with high similarity to the original input pattern. We discuss the distribution of neuron properties in the network, and find that an initial Gaussian distribution changes into a more heavy-tailed, lognormal distribution during the adaptation process. The remarkable result is that we are able to achieve reliable pattern recall by stimulating only high-information neurons. This work provides a biologically inspired model of cortical memory and may have interesting technical applications.
{"title":"Localist neural plasticity identified by mutual information.","authors":"Gabriele Scheler, Martin L Schumann, Johann Schumann","doi":"10.1007/s10827-025-00901-w","DOIUrl":"10.1007/s10827-025-00901-w","url":null,"abstract":"<p><p>We present a model of pattern memory and retrieval with novel, technically useful and biologically realistic properties. Specifically, we enter n variations of k pattern classes (n*k patterns) onto a cortex-like balanced inhibitory-excitatory network with heterogeneous neurons, and let the pattern spread within the recurrent network. We show that we can identify high mutual-information (MI) neurons as major information-bearing elements within each pattern representation. We employ a simple one-shot adaptive (learning) process focusing on high MI neurons and inhibition. Such 'localist plasticity' has high efficiency, because it requires only few adaptations for each pattern. Specifically, we store k=10 patterns of size s=400 in a 1000/1200 neuron network. We stimulate high MI neurons and in this way recall patterns, such that the whole network represents this pattern. We assess the quality of the representation (a) before learning, when entering the pattern into a naive network, (b) after learning, on the adapted network, and (c) after recall by stimulation. The recalled patterns could be easily recognized by a trained classifier. The recalled pattern 'unfolds' over the recurrent network with high similarity to the original input pattern. We discuss the distribution of neuron properties in the network, and find that an initial Gaussian distribution changes into a more heavy-tailed, lognormal distribution during the adaptation process. The remarkable result is that we are able to achieve reliable pattern recall by stimulating only high information neurons. This work provides a biologically-inspired model of cortical memory and may have interesting technical applications.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"321-331"},"PeriodicalIF":1.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}