Dynamic branching in a neural network model for probabilistic prediction of sequences.
Pub Date: 2022-11-01 | DOI: 10.1007/s10827-022-00830-y
Elif Köksal Ersöz, Pascal Chossat, Martin Krupa, Frédéric Lavigne
An important function of the brain is to predict which stimulus is likely to occur based on the perceived cues. The present research studied the branching behavior of a computational network model of populations of excitatory and inhibitory neurons, both analytically and through simulations. Results show how synaptic efficacy, retroactive inhibition and short-term synaptic depression determine the dynamics of selection between different branches predicting sequences of stimuli of different probabilities. Further results show that changes in the probability of the different predictions depend on variations of neuronal gain. Such variations allow the network to optimize the probability of its predictions to changing probabilities of the sequences without changing synaptic efficacy.
Journal of Computational Neuroscience, 50(4), 537-557.
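The selection dynamics summarized in this abstract (branch competition shaped by synaptic efficacy, inhibition, short-term depression and neuronal gain) can be illustrated with a minimal firing-rate sketch. This is not the authors' published model; the network layout, transfer function and every parameter value below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's equations): two excitatory "branch"
# populations compete through a shared inhibitory population; each branch
# has a short-term depression variable s, and the gain g of the transfer
# function controls how sharply one branch is selected after a cue.

def f(x, g):
    """Sigmoidal rate transfer function with gain g."""
    return 1.0 / (1.0 + np.exp(-g * (x - 0.5)))

def select_branch(g, w_cue=(0.5, 0.3), n_steps=2000, dt=0.001):
    tau_r, tau_i, tau_s, U = 0.02, 0.01, 0.5, 0.3
    r = np.zeros(2)            # branch firing rates
    r_inh = 0.0                # shared inhibitory rate
    s = np.ones(2)             # depression variables (available resources)
    for _ in range(n_steps):
        inp = np.array(w_cue) * s - 1.2 * r_inh     # cue drive, depressed and inhibited
        r += dt / tau_r * (-r + f(inp, g))
        r_inh += dt / tau_i * (-r_inh + 0.8 * r.sum())
        s += dt * ((1.0 - s) / tau_s - U * s * r)
    return np.round(r, 3)

print(select_branch(g=4.0))    # low gain: graded activation of both branches
print(select_branch(g=12.0))   # high gain: much sharper selection of the stronger branch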
Temporal filters in response to presynaptic spike trains: interplay of cellular, synaptic and short-term plasticity time scales.
Pub Date: 2022-11-01 | DOI: 10.1007/s10827-022-00822-y
Yugarshi Mondal, Rodrigo F O Pena, Horacio G Rotstein
Temporal filters, the ability of postsynaptic neurons to preferentially select certain presynaptic input patterns over others, have been shown to be associated with the notion of information filtering and coding of sensory inputs. Short-term plasticity (depression and facilitation; STP) has been proposed to be an important player in the generation of temporal filters. We carry out a systematic modeling, analysis and computational study to understand how characteristic postsynaptic (low-, high- and band-pass) temporal filters are generated in response to periodic presynaptic spike trains in the presence of STP. We investigate how the dynamic properties of these filters depend on the interplay of a hierarchy of processes, including the arrival of the presynaptic spikes, the activation of STP, its effect on the excitatory synaptic connection efficacy, and the response of the postsynaptic cell. These mechanisms involve the interplay of a collection of time scales that operate at the single-event level (roughly, during each presynaptic interspike interval) and control the long-term development of the temporal filters over multiple presynaptic events. These time scales are generated at the levels of the presynaptic cell (captured by the presynaptic interspike intervals), short-term depression and facilitation, synaptic dynamics, and the postsynaptic cellular currents. We develop mathematical tools to link the single-event time scales with the time scales governing the long-term dynamics of the resulting temporal filters for a relatively simple model where depression and facilitation interact at the level of the synaptic efficacy change. We then extend our results and tools to more complex models, including multiple STP time scales and non-periodic presynaptic inputs. The results and ideas we develop have implications for understanding the generation of temporal filters in complex networks, for which the simple feedforward network we investigate here is a building block.
Journal of Computational Neuroscience, 50(4), 395-429.
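A generic, event-based depression-facilitation synapse gives a feel for how STP turns a periodic presynaptic train into a temporal filter. This Tsodyks-Markram-style sketch is not the paper's formulation; the time constants and release parameter U are illustrative assumptions chosen so that the steady-state efficacy is band-pass in the presynaptic rate.

```python
import numpy as np

# Minimal event-based sketch of a depressing-facilitating synapse driven by a
# periodic presynaptic spike train (generic Tsodyks-Markram-style update).

def efficacy_train(isi, n_spikes=50, tau_dep=0.15, tau_fac=1.0, U=0.05):
    x, u = 1.0, U                  # x: available resources, u: release probability
    eff = []
    for _ in range(n_spikes):
        u = u + U * (1.0 - u)      # facilitation kicks in at the spike
        eff.append(u * x)          # single-event synaptic efficacy
        x = x * (1.0 - u)          # depression: resources consumed by the spike
        # recovery during the interspike interval
        x = 1.0 - (1.0 - x) * np.exp(-isi / tau_dep)
        u = U + (u - U) * np.exp(-isi / tau_fac)
    return np.array(eff)

# For these parameters the steady-state efficacy is band-pass in the
# presynaptic rate: it peaks at intermediate rates and falls off at both ends.
for rate in (2, 10, 40, 100):      # Hz
    print(rate, "Hz ->", round(float(efficacy_train(1.0 / rate)[-1]), 4))
```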
Weight dependence in BCM leads to adjustable synaptic competition.
Pub Date: 2022-11-01 | DOI: 10.1007/s10827-022-00824-w
Albert Albesa-González, Maxime Froc, Oliver Williamson, Mark C W van Rossum
Models of synaptic plasticity have been used to better understand neural development as well as learning and memory. One prominent classic model is the Bienenstock-Cooper-Munro (BCM) model, which has been particularly successful in explaining plasticity of the visual cortex. Here, in an effort to include more biophysical detail in the BCM model, we incorporate 1) feedforward inhibition, and 2) the experimental observation that large synapses are relatively harder to potentiate than weak ones, while synaptic depression is proportional to synaptic strength. These modifications change the outcome of unsupervised plasticity under the BCM model. The amount of feedforward inhibition adds a parameter to BCM that turns out to determine the strength of competition. In the limit of strong inhibition the learning outcome is identical to standard BCM and the neuron becomes selective to one stimulus only (winner-take-all). For smaller values of inhibition, competition is weaker and the receptive fields are less selective. However, both BCM variants can yield realistic receptive fields.
Journal of Computational Neuroscience, 50(4), 431-444. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9666303/pdf/
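A hedged sketch of the two modifications named in the abstract: weight-dependent potentiation/depression and feedforward (here subtractive) inhibition added to a BCM rule with a sliding threshold. The functional forms, the inhibition term and all parameter values are assumptions for illustration, not the paper's equations.

```python
import numpy as np

# Weight-dependent BCM with subtractive feedforward inhibition (illustrative):
# potentiation is scaled by (1 - w) so strong synapses are harder to
# potentiate, depression scales with w, and alpha sets the inhibition
# (competition) strength.

rng = np.random.default_rng(0)
n_in = 8
patterns = np.kron(np.eye(4), np.ones(2))     # four non-overlapping two-channel stimuli
w = rng.uniform(0.05, 0.15, n_in)
theta, alpha = 0.1, 0.8
eta, tau_theta = 0.02, 200.0

for step in range(40000):
    x = patterns[rng.integers(4)] + 0.05 * rng.random(n_in)
    y = max(w @ (x - alpha * x.mean()), 0.0)  # output with subtractive ff. inhibition
    phi = y * (y - theta)                     # BCM learning factor
    dw = eta * phi * x * ((1.0 - w) if phi >= 0 else w)
    w = np.clip(w + dw, 0.0, 1.0)
    theta += (y * y - theta) / tau_theta      # sliding modification threshold

print(np.round(w, 2))   # weights typically end up concentrated on one stimulus pair
```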
Thalamocortical bistable switch as a theoretical model of fibromyalgia pathogenesis inferred from a literature survey.
Pub Date: 2022-11-01 | DOI: 10.1007/s10827-022-00826-8
Ilaria Demori, Giulia Giordano, Viviana Mucci, Serena Losacco, Lucio Marinelli, Paolo Massobrio, Franco Blanchini, Bruno Burlando
Fibromyalgia (FM) is an unsolved central pain processing disturbance. We aim to provide a unifying model for FM pathogenesis based on a loop network involving thalamocortical regions, i.e., the ventroposterior lateral thalamus (VPL), the somatosensory cortex (SC), and the thalamic reticular nucleus (TRN). The dynamics of the loop are described by three differential equations with neuronal mean firing rates as variables and Hill functions modeling the mutual interactions among the loop elements. A computational analysis conducted with MATLAB showed a transition from monostability to bistability of the loop behavior when GABAergic transmission between TRN and VPL is weakened. This involves the appearance of a high-firing-rate steady state, which becomes dominant and is assumed to represent pathogenic pain processing giving rise to chronic pain. Our model is consistent with a large body of literature evidence, such as neuroimaging and pharmacological data collected on FM patients, and with correlations between FM and immunoendocrine conditions such as stress, perimenopause, chronic inflammation, obesity, and chronic dizziness. The model suggests that critical targets for FM treatment are to be found among immunoendocrine pathways leading to a GABA/glutamate imbalance that impacts the thalamocortical system.
Journal of Computational Neuroscience, 50(4), 471-484. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9666334/pdf/
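The abstract's core ingredient, a three-population loop described by rate ODEs with Hill-function coupling that switches from monostable to bistable as TRN-to-VPL inhibition weakens, can be sketched as follows. The Hill exponents, half-activation constants, time constants and the inhibition parameter g_inh are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-population loop sketch: VPL excited by SC, SC excited by VPL,
# TRN excited by SC, VPL inhibited by TRN (all via Hill functions).

def hill_up(x, k=1.0, n=2):        # increasing Hill function (excitation)
    return x**n / (k**n + x**n)

def hill_down(x, k=1.0, n=2):      # decreasing Hill function (inhibition)
    return k**n / (k**n + x**n)

def loop(t, y, g_inh):
    vpl, sc, trn = y
    dvpl = -vpl + (0.2 + 2.0 * hill_up(sc)) * hill_down(g_inh * trn)
    dsc = -sc + 2.0 * hill_up(vpl)
    dtrn = -trn + 2.0 * hill_up(sc)
    return [dvpl, dsc, dtrn]

# Strong TRN->VPL inhibition (large g_inh): only the low-firing state remains.
# Weak inhibition (small g_inh): a high-firing state coexists with it (bistability).
for g_inh in (2.0, 0.3):
    for y0 in ([0.05, 0.05, 0.05], [1.5, 1.5, 1.5]):
        sol = solve_ivp(loop, (0.0, 200.0), y0, args=(g_inh,), rtol=1e-8)
        print(f"g_inh={g_inh}, start={'low' if y0[0] < 1 else 'high'} ->",
              np.round(sol.y[:, -1], 3))
```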
Probabilistic solvers enable a straight-forward exploration of numerical uncertainty in neuroscience models.
Pub Date: 2022-11-01 | DOI: 10.1007/s10827-022-00827-7
Jonathan Oesterle, Nicholas Krämer, Philipp Hennig, Philipp Berens
Understanding neural computation on the mechanistic level requires models of neurons and neuronal networks. To analyze such models one typically has to solve coupled ordinary differential equations (ODEs), which describe the dynamics of the underlying neural system. These ODEs are solved numerically with deterministic ODE solvers that yield single solutions with either no precision indicator or only a global scalar one. It can therefore be challenging to estimate the effect of numerical uncertainty on quantities of interest, such as spike times and the number of spikes. To overcome this problem, we propose to use recently developed sampling-based probabilistic solvers, which are able to quantify such numerical uncertainties. They neither require detailed insights into the kinetics of the models, nor are they difficult to implement. We show that numerical uncertainty can affect the outcome of typical neuroscience simulations, e.g. by jittering spikes by milliseconds or even adding or removing individual spikes from simulations altogether, and demonstrate that probabilistic solvers reveal these numerical uncertainties with only moderate computational overhead.
Journal of Computational Neuroscience, 50(4), 485-503. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9666333/pdf/
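A toy example of a sampling-based, state-perturbing probabilistic solver (in the spirit of perturbed-step methods) applied to a quadratic integrate-and-fire neuron shows how repeated noisy solves turn numerical error into a spread of spike times. The neuron model, noise scaling and all parameters are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np

# Each forward-Euler step is perturbed by Gaussian noise whose scale shrinks
# with the step size; repeated noisy solves give a distribution over the
# first-spike time instead of a single deterministic value.

rng = np.random.default_rng(1)

def spike_time(dt, sigma_scale=0.1, v0=-0.5, I=0.2, t_max=50.0):
    """First spike time of dv/dt = v**2 + I under a perturbed Euler solver."""
    v, t = v0, 0.0
    while t < t_max:
        drift = v * v + I
        v += dt * drift + sigma_scale * dt**1.5 * rng.standard_normal()
        t += dt
        if v >= 10.0:                     # spike threshold
            return t
    return np.nan

for dt in (0.1, 0.01):
    samples = np.array([spike_time(dt) for _ in range(200)])
    print(f"dt={dt}: first spike at {samples.mean():.3f} +/- {samples.std():.3f}")
```

The coarser step shows both a larger spike-time spread and a visible bias relative to the finer step, which is the kind of numerical uncertainty the abstract refers to.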
Homogeneous inhibition is optimal for the phase precession of place cells in the CA1 field.
Pub Date: 2022-08-01 | Epub Date: 2023-07-05 | DOI: 10.1007/s10827-023-00855-x
Georgy Vandyshev, Ivan Mysin
Place cells are hippocampal neurons encoding the position of an animal in space. Studies of place cells are essential to understanding how neural networks of the brain process information. An important characteristic of place cell spike trains is phase precession: as an animal runs through the place field, the discharges of the place cells shift from the ascending phase of the theta rhythm through the minimum to the descending phase. The role of excitatory inputs to pyramidal neurons along the Schaffer collaterals and the perforant pathway in phase precession has been described, but the role of local interneurons is poorly understood. Our goal is to estimate the contribution of CA1 interneurons to the phase precession of place cells using mathematical methods. The CA1 field is chosen because it provides the largest set of experimental data required to build and verify the model. Our simulations identify optimal parameters of the excitatory and inhibitory inputs to the pyramidal neuron such that it generates a spike train exhibiting phase precession. Uniform inhibition of pyramidal neurons best explains the effect of phase precession. Among interneurons, axo-axonal neurons make the greatest contribution to the inhibition of pyramidal cells.
Journal of Computational Neuroscience, 51(3), 389-403.
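One way to see how spatially uniform, theta-locked inhibition can produce phase precession is to combine it with a ramping place-field depolarization and read off the first suprathreshold moment of each theta cycle. This is only a caricature of the mechanism; the drive shapes, threshold and parameters are illustrative assumptions rather than the paper's fitted inputs.

```python
import numpy as np

# Ramping excitation plus uniform theta-locked inhibition: the first
# suprathreshold moment of each theta cycle moves to earlier phases as the
# ramp grows, a phase-precession-like pattern.

f_theta, dt, T = 8.0, 0.0005, 1.0             # Hz, s, s (one place-field crossing)
t = np.arange(0.0, T, dt)
ramp = np.minimum(1.5 * t, 1.2)                             # excitatory ramp
inhibition = 0.5 * (1.0 + np.cos(2 * np.pi * f_theta * t))  # uniform theta inhibition
drive = ramp - inhibition
threshold = 0.3

period = 1.0 / f_theta
for c in range(int(T / period)):
    in_cycle = (t >= c * period) & (t < (c + 1) * period)
    above = np.where(drive[in_cycle] > threshold)[0]
    if above.size:
        t_first = t[in_cycle][above[0]]
        phase = (360.0 * f_theta * t_first) % 360.0
        print(f"theta cycle {c}: first spike at phase {phase:5.1f} deg")
```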
Hierarchical processing underpins competition in tactile perceptual bistability.
Pub Date: 2022-08-01 | Epub Date: 2023-05-19 | DOI: 10.1007/s10827-023-00852-0
Farzaneh Darki, Andrea Ferrario, James Rankin
Ambiguous sensory information can lead to spontaneous alternations between perceptual states, a phenomenon recently shown to extend to tactile perception. The authors recently proposed a simplified form of tactile rivalry that evokes two competing percepts for a fixed difference in input amplitudes across antiphase, pulsatile stimulation of the left and right fingers. This study addresses the need for a tactile rivalry model that captures the dynamics of perceptual alternations and incorporates the structure of the somatosensory system. The model features hierarchical processing with two stages. The first and second stages of the model could be located in the secondary somatosensory cortex (area S2), or in higher areas driven by S2. The model captures dynamical features specific to the tactile rivalry percepts and produces general characteristics of perceptual rivalry: dependence of dominance times on input strength (Levelt's proposition II), short-tailed skewness of dominance time distributions, and the ratio of distribution moments. The presented modelling work leads to experimentally testable predictions. The same hierarchical model could generalise to account for percept formation, competition and alternations for bistable stimuli that involve pulsatile inputs from the visual and auditory domains.
Journal of Computational Neuroscience, 51(3), 343-360. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10404575/pdf/
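A single-stage mutual-inhibition-plus-adaptation model (a standard rivalry mechanism; the paper's hierarchical second stage is omitted here) reproduces the basic alternation and input-strength dependence described above. Input values, gains and time constants are illustrative assumptions.

```python
import numpy as np

# Two percept populations inhibit each other and fatigue through a slow
# adaptation variable, producing spontaneous alternations in dominance.

def mean_dominance(I1, I2, T=80.0, dt=0.001):
    tau, tau_a, beta, g = 0.01, 1.0, 2.0, 2.5
    f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))
    r, a = np.array([0.2, 0.0]), np.zeros(2)
    durations = {0: [], 1: []}
    winner, t_switch = 0, 0.0
    for k in range(1, int(T / dt)):
        inp = np.array([I1, I2]) - beta * r[::-1] - g * a
        r = r + dt / tau * (-r + f(inp))
        a = a + dt / tau_a * (-a + r)
        w = int(r[1] > r[0])
        if w != winner:
            durations[winner].append(k * dt - t_switch)
            winner, t_switch = w, k * dt
    return [round(float(np.mean(d[1:])), 2) for d in (durations[0], durations[1])]

print(mean_dominance(0.9, 0.9))   # equal inputs: comparable mean dominance times
print(mean_dominance(1.0, 0.9))   # unequal inputs bias relative dominance (Levelt-like)
```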
Comparing performance between a deep neural network and monkeys with bilateral removals of visual area TE in categorizing feature-ambiguous stimuli.
Pub Date: 2022-08-01 | Epub Date: 2023-05-17 | DOI: 10.1007/s10827-023-00854-y
Narihisa Matsumoto, Mark A G Eldridge, J Megan Fredericks, Kaleb A Lowe, Barry J Richmond
In the canonical view of visual processing, the neural representation of complex objects emerges as visual information is integrated through a set of convergent, hierarchically organized processing stages, ending in the primate inferior temporal lobe. It therefore seems reasonable to infer that visual perceptual categorization requires the integrity of anterior inferior temporal cortex (area TE). Many deep neural networks (DNNs) are structured to simulate this canonical view of hierarchical processing within the visual system. However, there are discrepancies between DNNs and the primate brain. Here we evaluated the performance of a simulated hierarchical model of vision on the same categorization problems presented to monkeys with TE removals. The model was able to simulate the performance of monkeys with TE removals in the categorization task but performed poorly when challenged with visually degraded stimuli. We conclude that further development of the model is required to match the level of visual flexibility present in the monkey visual system.
Journal of Computational Neuroscience, 51(3), 381-387.
Selective neural stimulation by leveraging electrophysiological differentiation and using pre-pulsing and non-rectangular waveforms.
Pub Date: 2022-08-01 | Epub Date: 2022-04-13 | DOI: 10.1007/s10827-022-00818-8
Bemin Ghobreal, Farzan Nadim, Mesut Sahin
Efforts on selective neural stimulation have concentrated on segregating axons based on their size and geometry. Nonetheless, axons of the white matter or peripheral nerves may also differ in their electrophysiological properties. The primary objective of this study was to investigate the possibility of selective activation of axons by leveraging an assumed level of diversity in passive (C_m & G_leak) and active membrane properties (K_temp & G_Namax). First, stimulus waveforms with hyperpolarizing (HPP) and depolarizing pre-pulsing (DPP) were tested for selectivity in a local membrane model. The default value of membrane capacitance (C_m) was found to play a critical role in the sensitivity of the chronaxie time (Chr) and rheobase (Rhe) to variations of all four membrane parameters. Decreasing the default value of C_m, and thus the passive time constant of the membrane, amplified the sensitivity of Chr to the active parameters, K_temp and G_Namax. The HPP waveform could selectively activate neurons even if they were diversified by membrane leakage (G_leak) only, and produced higher selectivity than DPP when parameters were varied in pairs. Selectivity measures were larger when the passive parameters (C_m & G_leak) were varied together, compared to the active parameters. Second, this novel mechanism of selectivity was investigated with non-rectangular waveforms for the stimulating phase (and HPP) in the same local membrane model. Simulation results suggest that Kt^2 is the most selective waveform, followed by the Linear and Gaussian waveforms. The traditional rectangular pulse was among the least selective of all. Finally, a compartmental axon model confirmed the main finding of the local model that Kt^2 is the most selective, but rank-ordered the other waveforms differently. These results suggest a potentially novel mechanism of stimulation selectivity, leveraging electrophysiological variations in membrane properties, that can lead to various neural prosthetic applications.
Journal of Computational Neuroscience, 50(1), 313-330.
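The passive part of the idea, that diversity in C_m and G_leak alone can make a fixed stimulus recruit one axon population but not another, can be illustrated with a threshold-crossing local-membrane sketch and a numerically bracketed strength-duration curve. The paper's local model is active (Na dynamics, pre-pulse accommodation), which this passive sketch does not capture; all values are illustrative assumptions.

```python
import numpy as np

# Passive local-membrane sketch with a fixed firing threshold: axons that
# differ only in C_m (and hence membrane time constant) have different
# strength-duration curves, so intermediate amplitudes recruit one group only.

def activated(i_amp, pw, c_m, g_leak=0.3, v_th=15.0, dt=1e-3):
    """True if a pulse of amplitude i_amp (uA/cm2) and width pw (ms) reaches v_th (mV)."""
    v = 0.0
    for _ in range(int(pw / dt)):
        v += dt * (i_amp - g_leak * v) / c_m    # C_m dV/dt = I - G_leak * V
        if v >= v_th:
            return True
    return False

def threshold_current(pw, c_m):
    """Bisect for the strength-duration (threshold) current."""
    lo, hi = 0.0, 2000.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if activated(mid, pw, c_m) else (mid, hi)
    return hi

for pw in (0.1, 0.5, 2.0):                      # pulse widths, ms
    i_fast = threshold_current(pw, c_m=0.5)     # short time constant "axon"
    i_slow = threshold_current(pw, c_m=2.0)     # long time constant "axon"
    print(f"PW={pw} ms: threshold {i_fast:.1f} (C_m=0.5) vs {i_slow:.1f} (C_m=2.0) uA/cm2")
```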
Transmission of delta band (0.5-4 Hz) oscillations from the globus pallidus to the substantia nigra pars reticulata in dopamine depletion.
Pub Date: 2022-08-01 | Epub Date: 2023-06-02 | DOI: 10.1007/s10827-023-00853-z
Timothy C Whalen, John E Parker, Aryn H Gittis, Jonathan E Rubin
Parkinson's disease (PD) and animal models of PD feature enhanced oscillations in several frequency bands in the basal ganglia (BG). Past research has emphasized the enhancement of 13-30 Hz beta oscillations. Recently, however, oscillations in the delta band (0.5-4 Hz) have been identified as a robust predictor of dopamine loss and motor dysfunction in several BG regions in mouse models of PD. In particular, delta oscillations in the substantia nigra pars reticulata (SNr) were shown to lead oscillations in motor cortex (M1) and persist under M1 lesion, but it is not clear where these oscillations are initially generated. In this paper, we use a computational model to study how delta oscillations may arise in the SNr due to projections from the globus pallidus externa (GPe). We propose a network architecture that incorporates inhibition in SNr from oscillating GPe neurons and other SNr neurons. In our simulations, this configuration yields firing patterns in model SNr neurons that match those measured in vivo. In particular, we see the spontaneous emergence of near-antiphase active-predicting and inactive-predicting neural populations in the SNr, which persist under the inclusion of STN inputs based on experimental recordings. These results demonstrate how delta oscillations can propagate through BG nuclei despite imperfect oscillatory synchrony in the source site, narrowing down potential targets for the source of delta oscillations in PD models and giving new insight into the dynamics of SNr oscillations.
Journal of Computational Neuroscience, 51(3), 361-380. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10527635/pdf/nihms-1908672.pdf
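The proposed architecture (delta-oscillating GPe inhibition onto some SNr units, plus local SNr-to-SNr collaterals) can be caricatured with a two-unit rate sketch in which the collaterally inhibited unit ends up in near antiphase with the GPe-driven one. This is a rate-level illustration with assumed parameters, not the paper's spiking model.

```python
import numpy as np

# SNr unit A is inhibited by a ~1 Hz (delta) GPe rate; SNr unit B is inhibited
# mainly by unit A through an SNr->SNr collateral, so it oscillates in near
# antiphase with A.

dt, T = 0.001, 10.0                                  # seconds
t = np.arange(0.0, T, dt)
gpe = 20.0 + 15.0 * np.sin(2 * np.pi * 1.0 * t)      # delta-band GPe rate

tau = 0.02
drive_a, drive_b = 40.0, 40.0                        # intrinsic SNr drives
w_gpe, w_snr = 1.0, 1.2                              # inhibitory weights GPe->A, A->B
r_a, r_b = np.zeros_like(t), np.zeros_like(t)

for k in range(1, t.size):
    ra, rb = r_a[k - 1], r_b[k - 1]
    r_a[k] = ra + dt / tau * (-ra + max(drive_a - w_gpe * gpe[k], 0.0))
    r_b[k] = rb + dt / tau * (-rb + max(drive_b - w_snr * ra, 0.0))

# Correlation between the two SNr units after the transient: strongly negative,
# i.e. near-antiphase "active-predicting" vs "inactive-predicting" behavior.
mask = t > 2.0
print(round(float(np.corrcoef(r_a[mask], r_b[mask])[0, 1]), 3))
```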