Pub Date: 2024-02-01 | Epub Date: 2024-02-21 | DOI: 10.1007/s10827-024-00866-2
Stephen Selesnick
The computational resources of a neuromorphic network model introduced earlier are investigated in the context of such hierarchical systems as the mammalian visual cortex. It is argued that a form of ubiquitous spontaneous local convolution, driven by spontaneously arising wave-like activity (which itself promotes local Hebbian modulation), enables logical gate-like neural motifs to form into hierarchical feed-forward structures of the Hubel-Wiesel type. Extra-synaptic effects are shown to play a significant rôle in these processes. The type of logic that emerges is not Boolean, confirming and extending earlier findings on the logic of schizophrenia.
{"title":"Neural waves and computation in a neural net model I: Convolutional hierarchies.","authors":"Stephen Selesnick","doi":"10.1007/s10827-024-00866-2","DOIUrl":"10.1007/s10827-024-00866-2","url":null,"abstract":"<p><p>The computational resources of a neuromorphic network model introduced earlier are investigated in the context of such hierarchical systems as the mammalian visual cortex. It is argued that a form of ubiquitous spontaneous local convolution, driven by spontaneously arising wave-like activity-which itself promotes local Hebbian modulation-enables logical gate-like neural motifs to form into hierarchical feed-forward structures of the Hubel-Wiesel type. Extra-synaptic effects are shown to play a significant rôle in these processes. The type of logic that emerges is not Boolean, confirming and extending earlier findings on the logic of schizophrenia.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"39-71"},"PeriodicalIF":1.5,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139914070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-01 | Epub Date: 2023-10-14 | DOI: 10.1007/s10827-023-00863-x
Farshad Shirani, Hannah Choi
Overall balance of excitation and inhibition in cortical networks is central to their functionality and normal operation. Such orchestrated co-evolution of excitation and inhibition is established through convoluted local interactions between neurons, which are organized by specific network connectivity structures and are dynamically controlled by modulating synaptic activities. Therefore, identifying how such structural and physiological factors contribute to the establishment of overall balance of excitation and inhibition is crucial in understanding the homeostatic plasticity mechanisms that regulate the balance. We use biologically plausible mathematical models to extensively study the effects of multiple key factors on the overall balance of a network. We characterize a network's baseline balanced state by certain functional properties, and demonstrate how variations in physiological and structural parameters of the network shift it away from this balance and, in particular, result in transitions in the spontaneous activity of the network to high-amplitude slow oscillatory regimes. We show that deviations from the reference balanced state can be continuously quantified by measuring the ratio of mean excitatory to mean inhibitory synaptic conductance in the network. Our results suggest that the commonly observed ratio of the number of inhibitory to the number of excitatory neurons in local cortical networks is almost optimal for their stability and excitability. Moreover, the values of inhibitory synaptic decay time constants and the density of inhibitory-to-inhibitory network connectivity are critical to the overall balance and stability of cortical networks. However, network stability in our results is sufficiently robust against modulations of synaptic quantal conductances, as required by their role in learning and memory. Our study, based on extensive bifurcation analyses, thus reveals the functional optimality and criticality of structural and physiological parameters in establishing the baseline operating state of local cortical networks.
{"title":"On the physiological and structural contributors to the overall balance of excitation and inhibition in local cortical networks.","authors":"Farshad Shirani, Hannah Choi","doi":"10.1007/s10827-023-00863-x","DOIUrl":"10.1007/s10827-023-00863-x","url":null,"abstract":"<p><p>Overall balance of excitation and inhibition in cortical networks is central to their functionality and normal operation. Such orchestrated co-evolution of excitation and inhibition is established through convoluted local interactions between neurons, which are organized by specific network connectivity structures and are dynamically controlled by modulating synaptic activities. Therefore, identifying how such structural and physiological factors contribute to establishment of overall balance of excitation and inhibition is crucial in understanding the homeostatic plasticity mechanisms that regulate the balance. We use biologically plausible mathematical models to extensively study the effects of multiple key factors on overall balance of a network. We characterize a network's baseline balanced state by certain functional properties, and demonstrate how variations in physiological and structural parameters of the network deviate this balance and, in particular, result in transitions in spontaneous activity of the network to high-amplitude slow oscillatory regimes. We show that deviations from the reference balanced state can be continuously quantified by measuring the ratio of mean excitatory to mean inhibitory synaptic conductances in the network. Our results suggest that the commonly observed ratio of the number of inhibitory to the number of excitatory neurons in local cortical networks is almost optimal for their stability and excitability. Moreover, the values of inhibitory synaptic decay time constants and density of inhibitory-to-inhibitory network connectivity are critical to overall balance and stability of cortical networks. However, network stability in our results is sufficiently robust against modulations of synaptic quantal conductances, as required by their role in learning and memory. Our study based on extensive bifurcation analyses thus reveal the functional optimality and criticality of structural and physiological parameters in establishing the baseline operating state of local cortical networks.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"73-107"},"PeriodicalIF":1.5,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582336/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41220465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-09-05 | DOI: 10.1007/s10827-023-00859-7
Matthew P Szuromi, Viktor K Jirsa, William C Stacey
Electrical stimulation is an increasingly popular method to terminate epileptic seizures, yet it is not always successful. A potential reason for inconsistent efficacy is that stimuli are applied empirically without considering the underlying dynamical properties of a given seizure. We use a computational model of seizure dynamics to show that different bursting classes have disparate responses to aborting stimulation. This model was previously validated in a large set of human seizures and led to a description of the Taxonomy of Seizure Dynamics and the dynamotype, which is the clinical analog of the bursting class. In the model, the stimulation is realized as an applied input, which successfully aborts the burst when it forces the system from a bursting state to a quiescent state. This transition requires bistability, which is not present in all bursters. We examine how topological and geometric differences in the bistable state affect the probability of termination as the burster progresses from onset to offset. We find that the most significant determining factors are the burster class (dynamotype) and whether the burster has a DC (baseline) shift. Bursters with a baseline shift are far more likely to be terminated because of the necessary structure of their state space. Furthermore, we observe that the probability of termination varies throughout the burster's duration, often depends on the phase at which the stimulation is applied, and is highly correlated with dynamotype. Our model provides a way to predict the optimal method of termination for each dynamotype. These results lead to the prediction that optimization of ictal aborting stimulation should account for seizure dynamotype, the presence of a DC shift, and the timing of the stimulation.
{"title":"Optimization of ictal aborting stimulation using the dynamotype taxonomy.","authors":"Matthew P Szuromi, Viktor K Jirsa, William C Stacey","doi":"10.1007/s10827-023-00859-7","DOIUrl":"10.1007/s10827-023-00859-7","url":null,"abstract":"<p><p>Electrical stimulation is an increasingly popular method to terminate epileptic seizures, yet it is not always successful. A potential reason for inconsistent efficacy is that stimuli are applied empirically without considering the underlying dynamical properties of a given seizure. We use a computational model of seizure dynamics to show that different bursting classes have disparate responses to aborting stimulation. This model was previously validated in a large set of human seizures and led to a description of the Taxonomy of Seizure Dynamics and the dynamotype, which is the clinical analog of the bursting class. In the model, the stimulation is realized as an applied input, which successfully aborts the burst when it forces the system from a bursting state to a quiescent state. This transition requires bistability, which is not present in all bursters. We examine how topological and geometric differences in the bistable state affect the probability of termination as the burster progresses from onset to offset. We find that the most significant determining factors are the burster class (dynamotype) and whether the burster has a DC (baseline) shift. Bursters with a baseline shift are far more likely to be terminated due to the necessary structure of their state space. Furthermore, we observe that the probability of termination varies throughout the burster's duration, is often dependent on the phase when it was applied, and is highly correlated to dynamotype. Our model provides a method to predict the optimal method of termination for each dynamotype. These results lead to the prediction that optimization of ictal aborting stimulation should account for seizure dynamotype, the presence of a DC shift, and the timing of the stimulation.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"445-462"},"PeriodicalIF":1.2,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10754472/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10210364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-08-25 | DOI: 10.1007/s10827-023-00858-8
Aleksey E Matukhno, Mikhail V Petrushan, Valery N Kiroy, Fedor V Arsenyev, Larisa V Lysenko
The comparison of odor functional maps in rodents demonstrates a high degree of inter-individual variability in glomerular activity patterns. There are substantial methodological difficulties in the interindividual assessment of local permutations in the glomerular patterns, since the position of anatomical extracranial landmarks, as well as the size, shape and angular orientation of the olfactory bulbs, can vary significantly. A new method for defining the anatomical coordinates of active glomeruli in the rat olfactory bulb has been developed. The method compares interindividual odor functional maps and calculates probabilistic maps of glomerular activity with adjustment. This adjustment involves rotation, scaling and shift of the functional map relative to its expected position in the probabilistic map, computed according to the anatomical coordinates. The calculation of the probabilistic map of the odorant-specific response compensates for potential anatomical errors due to individual variability in olfactory bulb dimensions and angular orientation. We show its efficiency on real data from a large animal sample recorded by two-photon calcium imaging of the dorsal surface of the rat olfactory bulb. The proposed method with probabilistic map calculation enables the spatial consistency of the effects of individual odorants in different rats to be assessed and allows stereotypical positions of odor-specific clusters in the glomerular layer of the olfactory bulb to be identified.
{"title":"The method for assessment of local permutations in the glomerular patterns of the rat olfactory bulb by aligning interindividual odor maps.","authors":"Aleksey E Matukhno, Mikhail V Petrushan, Valery N Kiroy, Fedor V Arsenyev, Larisa V Lysenko","doi":"10.1007/s10827-023-00858-8","DOIUrl":"10.1007/s10827-023-00858-8","url":null,"abstract":"<p><p>The comparison of odor functional maps in rodents demonstrates a high degree of inter-individual variability in glomerular activity patterns. There are substantial methodological difficulties in the interindividual assessment of local permutations in the glomerular patterns, since the position of anatomical extracranial landmarks, as well as the size, shape and angular orientation of olfactory bulbs can vary significantly. A new method for defining anatomical coordinates of active glomeruli in the rat olfactory bulb has been developed. The method compares the interindividual odor functional maps and calculates probabilistic maps of glomerular activity with adjustment. This adjustment involves rotation, scaling and shift of the functional map relative to its expected position in probabilistic map, computed according to the anatomical coordinates. The calculation of the probabilistic map of the odorant-specific response compensates for potential anatoamical errors due to individual variability in olfactory bulb dimensions and angular orientation. We show its efficiency on real data from a large animal sample recorded by two-photon calcium imaging in dorsal surface of the rat olfactory bulb. The proposed method with probabilistic map calculation enables the spatial consistency of the effects of individual odorants in different rats to be assessed and allow stereotypical positions of odor-specific clusters in the glomerular layer of the olfactory bulb to be identified.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"433-444"},"PeriodicalIF":1.2,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10058605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-09-18 | DOI: 10.1007/s10827-023-00861-z
Zahra Imani, Mehdi Ezoji, Timothée Masquelier
Spiking neural networks (SNNs), as the third generation of neural networks, are based on biological models of human brain neurons. In this work, a shallow SNN plays the role of an explicit image decoder in image classification. An LSTM-based EEG encoder is used to construct the EEG-based feature space, which is a discriminative space from the viewpoint of SVM classification accuracy. The visual feature vectors extracted by the SNN are then mapped into the EEG-based discriminative feature space by manifold transferring based on mutual k-Nearest Neighbors (Mk-NN MT). This proposed "Brain-guided system" improves the separability of the SNN-based visual feature space. In the test phase, the spike patterns extracted by the SNN from the input image are mapped into the LSTM-based EEG feature space and then classified without the need for EEG signals. The SNN-based image encoder is trained by the conversion method, and the results are evaluated and compared with other training methods on the challenging small ImageNet-EEG dataset. Experimental results show that the proposed transfer of the SNN-based feature space manifold to the LSTM-based EEG feature space improves image classification accuracy by up to 14.25%. Thus, embedding the SNN in the brain-guided system, which is trained on a small set, improves its performance in image classification.
{"title":"Brain-guided manifold transferring to improve the performance of spiking neural networks in image classification.","authors":"Zahra Imani, Mehdi Ezoji, Timothée Masquelier","doi":"10.1007/s10827-023-00861-z","DOIUrl":"10.1007/s10827-023-00861-z","url":null,"abstract":"<p><p>Spiking neural networks (SNNs), as the third generation of neural networks, are based on biological models of human brain neurons. In this work, a shallow SNN plays the role of an explicit image decoder in the image classification. An LSTM-based EEG encoder is used to construct the EEG-based feature space, which is a discriminative space in viewpoint of classification accuracy by SVM. Then, the visual feature vectors extracted from SNN is mapped to the EEG-based discriminative features space by manifold transferring based on mutual k-Nearest Neighbors (Mk-NN MT). This proposed \"Brain-guided system\" improves the separability of the SNN-based visual feature space. In the test phase, the spike patterns extracted by SNN from the input image is mapped to LSTM-based EEG feature space, and then classified without need for the EEG signals. The SNN-based image encoder is trained by the conversion method and the results are evaluated and compared with other training methods on the challenging small ImageNet-EEG dataset. Experimental results show that the proposed transferring the manifold of the SNN-based feature space to LSTM-based EEG feature space leads to 14.25% improvement at most in the accuracy of image classification. Thus, embedding SNN in the brain-guided system which is trained on a small set, improves its performance in image classification.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"475-490"},"PeriodicalIF":1.2,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10340607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-08-26 | DOI: 10.1007/s10827-023-00860-0
Brian L Frost, Stanislav M Mintchev
Recent investigations of traumatic brain injuries have shown that these injuries can result in conformational changes at the level of individual neurons in the cerebral cortex. Focal axonal swelling is one consequence of such injuries and leads to a variable width along the cell axon. Simulations of the electrical properties of axons impacted in such a way show that this damage may have a nonlinear deleterious effect on spike-encoded signal transmission. The computational cost of these simulations complicates the investigation of the effects of such damage at the network level. We have developed an efficient algorithm that faithfully reproduces the spike train filtering properties seen in physical simulations. We use this algorithm to explore the impact of focal axonal swelling on small networks of integrate-and-fire neurons. We also explore the effects of architecture modifications to networks impacted in this manner. In all tested networks, our results indicate that the addition of presynaptic inhibitory neurons either increases or leaves unchanged the fidelity, in terms of bandwidth, of the network's processing properties with respect to this damage.
{"title":"A high-efficiency model indicating the role of inhibition in the resilience of neuronal networks to damage resulting from traumatic injury.","authors":"Brian L Frost, Stanislav M Mintchev","doi":"10.1007/s10827-023-00860-0","DOIUrl":"10.1007/s10827-023-00860-0","url":null,"abstract":"<p><p>Recent investigations of traumatic brain injuries have shown that these injuries can result in conformational changes at the level of individual neurons in the cerebral cortex. Focal axonal swelling is one consequence of such injuries and leads to a variable width along the cell axon. Simulations of the electrical properties of axons impacted in such a way show that this damage may have a nonlinear deleterious effect on spike-encoded signal transmission. The computational cost of these simulations complicates the investigation of the effects of such damage at a network level. We have developed an efficient algorithm that faithfully reproduces the spike train filtering properties seen in physical simulations. We use this algorithm to explore the impact of focal axonal swelling on small networks of integrate and fire neurons. We explore also the effects of architecture modifications to networks impacted in this manner. In all tested networks, our results indicate that the addition of presynaptic inhibitory neurons either increases or leaves unchanged the fidelity, in terms of bandwidth, of the network's processing properties with respect to this damage.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"463-474"},"PeriodicalIF":1.2,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10450908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-08-10 | DOI: 10.1007/s10827-023-00857-9
Cecilia Jarne, Rodrigo Laje
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and also how performance gracefully degrades as network size is decreased, interval duration is increased, or connectivity damage is induced. For the considered tasks, we explored how the robustness of the trained networks depends on task parameterization. In the process, we developed a framework that can be useful for parameterizing other tasks of interest in computational neuroscience. Our results are useful for quantifying different aspects of these models, which are normally used as black boxes and need to be understood in order to model the biological responses of cerebral cortex areas.
{"title":"Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks.","authors":"Cecilia Jarne, Rodrigo Laje","doi":"10.1007/s10827-023-00857-9","DOIUrl":"10.1007/s10827-023-00857-9","url":null,"abstract":"<p><p>Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics and also how the performance gracefully degrades as either network size is decreased, interval duration is increased, or connectivity damage is induced. For the considered tasks, we explored how robust the network obtained after training can be according to task parameterization. In the process, we developed a framework that can be useful to parameterize other tasks of interest in computational neuroscience. Our results are useful to quantify different aspects of the models, which are normally used as black boxes and need to be understood in order to model the biological response of cerebral cortex areas.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":" ","pages":"407-431"},"PeriodicalIF":1.2,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10320984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | DOI: 10.1007/s10827-023-00849-9
Kine Ødegård Hanssen, Sverre Grødem, Marianne Fyhn, Torkel Hafting, Gaute T Einevoll, Torbjørn Vefferstad Ness, Geir Halnes
The perineuronal nets (PNNs) are sugar-coated protein structures that encapsulate certain neurons in the brain, such as parvalbumin-positive (PV) inhibitory neurons. As PNNs are theorized to act as a barrier to ion transport, they may effectively increase the membrane charge-separation distance, thereby affecting the membrane capacitance. Tewari et al. (2018) found that degradation of PNNs induced a 25%-50% increase in membrane capacitance C_m and a reduction in the firing rates of PV-cells. In the current work, we explore how changes in C_m affect the firing rate in a selection of computational neuron models, ranging in complexity from a single-compartment Hodgkin-Huxley model to morphologically detailed PV-neuron models. In all models, an increased C_m led to reduced firing, but the experimentally reported increase in C_m was not by itself sufficient to explain the experimentally reported reduction in firing rate. We therefore hypothesized that PNN degradation in the experiments affected not only C_m, but also ionic reversal potentials and ion channel conductances. In simulations, we explored how various model parameters affected the firing rate of the model neurons, and identified which parameter variations, in addition to changes in C_m, are the most likely candidates for explaining the experimentally reported reduction in firing rate.
{"title":"Responses in fast-spiking interneuron firing rates to parameter variations associated with degradation of perineuronal nets.","authors":"Kine Ødegård Hanssen, Sverre Grødem, Marianne Fyhn, Torkel Hafting, Gaute T Einevoll, Torbjørn Vefferstad Ness, Geir Halnes","doi":"10.1007/s10827-023-00849-9","DOIUrl":"https://doi.org/10.1007/s10827-023-00849-9","url":null,"abstract":"<p><p>The perineuronal nets (PNNs) are sugar coated protein structures that encapsulate certain neurons in the brain, such as parvalbumin positive (PV) inhibitory neurons. As PNNs are theorized to act as a barrier to ion transport, they may effectively increase the membrane charge-separation distance, thereby affecting the membrane capacitance. Tewari et al. (2018) found that degradation of PNNs induced a 25%-50% increase in membrane capacitance [Formula: see text] and a reduction in the firing rates of PV-cells. In the current work, we explore how changes in [Formula: see text] affects the firing rate in a selection of computational neuron models, ranging in complexity from a single compartment Hodgkin-Huxley model to morphologically detailed PV-neuron models. In all models, an increased [Formula: see text] lead to reduced firing, but the experimentally reported increase in [Formula: see text] was not alone sufficient to explain the experimentally reported reduction in firing rate. We therefore hypothesized that PNN degradation in the experiments affected not only [Formula: see text], but also ionic reversal potentials and ion channel conductances. In simulations, we explored how various model parameters affected the firing rate of the model neurons, and identified which parameter variations in addition to [Formula: see text] that are most likely candidates for explaining the experimentally reported reduction in firing rate.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":"51 2","pages":"283-298"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10182141/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9997089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | DOI: 10.1007/s10827-023-00845-z
Loïc J Azzalini, David Crompton, Gabriele M T D'Eleuterio, Frances Skinner, Milad Lankarany
Data assimilation techniques for state and parameter estimation are frequently applied in the context of computational neuroscience. In this work, we show how an adaptive variant of the unscented Kalman filter (UKF) performs on the tracking of a conductance-based neuron model. Unlike standard recursive filter implementations, the robust adaptive unscented Kalman filter (RAUKF) jointly estimates the states and parameters of the neuronal model while adjusting the noise covariance matrices online based on innovation and residual information. We benchmark the adaptive filter's performance against existing nonlinear Kalman filters and explore the sensitivity of the filter parameters to the system being modelled. To evaluate the robustness of the proposed solution, we simulate practical settings that challenge tracking performance, such as model mismatch and measurement faults. Compared to standard variants of the Kalman filter, the adaptive variant implemented here is more accurate and more robust to faults.
{"title":"Adaptive unscented Kalman filter for neuronal state and parameter estimation.","authors":"Loïc J Azzalini, David Crompton, Gabriele M T D'Eleuterio, Frances Skinner, Milad Lankarany","doi":"10.1007/s10827-023-00845-z","DOIUrl":"https://doi.org/10.1007/s10827-023-00845-z","url":null,"abstract":"<p><p>Data assimilation techniques for state and parameter estimation are frequently applied in the context of computational neuroscience. In this work, we show how an adaptive variant of the unscented Kalman filter (UKF) performs on the tracking of a conductance-based neuron model. Unlike standard recursive filter implementations, the robust adaptive unscented Kalman filter (RAUKF) jointly estimates the states and parameters of the neuronal model while adjusting noise covariance matrices online based on innovation and residual information. We benchmark the adaptive filter's performance against existing nonlinear Kalman filters and explore the sensitivity of the filter parameters to the system being modelled. To evaluate the robustness of the proposed solution, we simulate practical settings that challenge tracking performance, such as a model mismatch and measurement faults. Compared to standard variants of the Kalman filter the adaptive variant implemented here is more accurate and robust to faults.</p>","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":"51 2","pages":"223-237"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | DOI: 10.1007/s10827-023-00848-w
Loïs Naudin
{"title":"Different parameter solutions of a conductance-based model that behave identically are not necessarily degenerate.","authors":"Loïs Naudin","doi":"10.1007/s10827-023-00848-w","DOIUrl":"https://doi.org/10.1007/s10827-023-00848-w","url":null,"abstract":"","PeriodicalId":54857,"journal":{"name":"Journal of Computational Neuroscience","volume":"51 2","pages":"201-206"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9627283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}