Canard solutions in neural mass models: consequences on critical regimes
Elif Köksal Ersöz, Fabrice Wendling
Pub Date: 2021-09-16 | DOI: 10.1186/s13408-021-00109-z

Mathematical models at multiple temporal and spatial scales can unveil the fundamental mechanisms of critical transitions in brain activities. Neural mass models (NMMs) consider the average temporal dynamics of interconnected neuronal subpopulations without explicitly representing the underlying cellular activity. The mesoscopic level offered by the neural mass formulation has been used to model electroencephalographic (EEG) recordings and to investigate various cerebral mechanisms, such as the generation of physiological and pathological brain activities. In this work, we consider an NMM widely accepted in the context of epilepsy, which includes four interacting neuronal subpopulations with different synaptic kinetics. Due to the resulting three-time-scale structure, the model yields complex oscillations of relaxation and bursting types. By applying the principles of geometric singular perturbation theory, we unveil the existence of canard solutions and detail how they organize the complex oscillations and excitability properties of the model. In particular, we show that boundaries between pathological epileptic discharges and physiological background activity are determined by the canard solutions. Finally, we report the existence of canard-mediated small-amplitude frequency-specific oscillations in simulated local field potentials for decreased inhibition conditions. Interestingly, such oscillations are actually observed in intracerebral EEG signals recorded in epileptic patients during pre-ictal periods, close to seizure onsets.

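The canard phenomenon this abstract invokes can be illustrated on a far smaller system than the four-population NMM studied in the paper. The sketch below is not the authors' model: it is the standard two-variable slow-fast van der Pol system with illustrative parameter values, chosen to show the hallmark of a canard explosion — the limit-cycle amplitude jumps from small Hopf-type cycles to full relaxation oscillations within a tiny interval of the parameter a near the fold value 1.

```python
import numpy as np

def vdp_amplitude(a, eps=0.05, dt=1e-3, t_end=100.0):
    """Integrate eps*x' = y + x - x^3/3, y' = a - x with RK4 and return
    the peak-to-peak amplitude of x after discarding transients."""
    def f(u):
        x, y = u
        return np.array([(y + x - x**3 / 3.0) / eps, a - x])
    u = np.array([0.1, 0.0])
    n = int(t_end / dt)
    xs = np.empty(n)
    for i in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        xs[i] = u[0]
    tail = xs[n // 2:]          # keep only the post-transient half
    return tail.max() - tail.min()

# Hopf bifurcation at a = 1; canard explosion near a = 1 - eps/8 ~ 0.994.
amp_small = vdp_amplitude(0.999)  # above the explosion: small canard cycle
amp_large = vdp_amplitude(0.98)   # below it: full relaxation oscillation
```

The sharp amplitude transition between these two nearby parameter values is the one-dimensional analogue of the canard-organized boundaries between background activity and large-amplitude discharges described in the abstract.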
Rendering neuronal state equations compatible with the principle of stationary action
Erik D Fagerholm, W M C Foulkes, Karl J Friston, Rosalyn J Moran, Robert Leech
Pub Date: 2021-08-12 | DOI: 10.1186/s13408-021-00108-0

The principle of stationary action is a cornerstone of modern physics, providing a powerful framework for investigating dynamical systems found in classical mechanics through to quantum field theory. However, computational neuroscience, despite its heavy reliance on concepts in physics, is anomalous in this regard, as its main equations of motion are not compatible with a Lagrangian formulation and hence with the principle of stationary action. Taking the Dynamic Causal Modelling (DCM) neuronal state equation as an instructive archetype of the first-order linear differential equations commonly found in computational neuroscience, we show that it is possible to make certain modifications to this equation to render it compatible with the principle of stationary action. Specifically, we show that a Lagrangian formulation of the DCM neuronal state equation is facilitated using a complex dependent variable, an oscillatory solution, and a Hermitian intrinsic connectivity matrix. We first demonstrate proof of principle by using Bayesian model inversion to show that both the original and modified models can be correctly identified via in silico data generated directly from their respective equations of motion. We then provide motivation for adopting the modified models in neuroscience by using three different types of publicly available in vivo neuroimaging datasets, together with open source MATLAB code, to show that the modified (oscillatory) model provides a more parsimonious explanation for some of these empirical time series. It is our hope that this work will, in combination with existing techniques, allow people to explore the symmetries and associated conservation laws within neural systems, and to exploit the computational expediency facilitated by direct variational techniques.

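As a hedged sketch of the kind of construction the abstract describes (my notation, not necessarily the paper's exact form): a first-order equation in a complex state vector z with a Hermitian connectivity matrix A does admit a Lagrangian, in direct analogy with the Schrödinger equation.

```latex
% Illustrative construction: complex state z(t) in C^n, Hermitian A = A^\dagger.
\mathcal{L}(z,\dot z) \;=\; i\, z^{\dagger} \dot z \;-\; z^{\dagger} A z ,
\qquad
\delta \!\int \mathcal{L}\, \mathrm{d}t = 0
\;\Longrightarrow\;
i\, \dot z = A z .
% Hermiticity of A makes the action real up to a total time derivative and
% guarantees purely oscillatory solutions z(t) = e^{-iAt} z(0).
```

Varying z and its conjugate independently, the Euler-Lagrange equation for z† is ∂L/∂z† = iż − Az = 0, which recovers the oscillatory first-order dynamics; this is why the three ingredients named in the abstract (complex variable, oscillatory solution, Hermitian matrix) appear together.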
Pattern formation in a 2-population homogenized neuronal network model
Karina Kolodina, John Wyller, Anna Oleynik, Mads Peter Sørensen
Pub Date: 2021-06-26 | DOI: 10.1186/s13408-021-00107-1

We study pattern formation in a 2-population homogenized neural field model of the Hopfield type in one spatial dimension with periodic microstructure. The connectivity functions are periodically modulated in both the synaptic footprint and in the spatial scale. It is shown that the nonlocal synaptic interactions promote a finite-bandwidth instability. The stability method relies on a sequence of wave-number-dependent invariants of [Formula: see text]-stability matrices representing the sequence of Fourier-transformed linearized evolution equations for the perturbation imposed on the homogeneous background. The generic picture of the instability structure consists of a finite set of well-separated gain bands. In the shallow firing rate regime the nonlinear development of the instability is determined by means of the translational invariant model with connectivity kernels replaced with the corresponding period-averaged connectivity functions. In the steep firing rate regime the pattern formation process depends sensitively on the spatial localization of the connectivity kernels: for strongly localized kernels this process is determined by the translational invariant model with period-averaged connectivity kernels, whereas the complementary regime of weak and moderate localization requires the homogenized model as a starting point for the analysis. We follow the development of the instability numerically into the nonlinear regime for both steep and shallow firing rate functions when the connectivity kernels are modeled by means of an exponentially decaying function. We also study the pattern-forming process numerically as a function of the heterogeneity parameters in four different regimes ranging from the weakly modulated case to the strongly heterogeneous case. For the weakly modulated regime, we observe that stable spatial oscillations are formed in the steep firing rate regime, whereas we get spatiotemporal oscillations in the shallow regime of the firing rate functions.

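The "finite set of well-separated gain bands" comes from a linear stability computation that is easy to reproduce in miniature. The sketch below uses an illustrative single difference-of-exponentials kernel rather than the paper's homogenized two-population model, and invented parameter values: it computes the growth rate λ(k) = −1 + β ŵ(k) of a perturbation e^{ikx} about the homogeneous state and extracts the band of unstable wave numbers.

```python
import numpy as np

# Difference-of-exponentials (lateral inhibition) kernel -- illustrative values
A_e, s_e = 2.0, 1.0   # excitatory amplitude and footprint
A_i, s_i = 1.2, 2.0   # inhibitory amplitude and (broader) footprint
beta = 1.15           # slope of the firing rate function at the homogeneous state

# Fourier transform of w(x) = A_e e^{-|x|/s_e}/(2 s_e) - A_i e^{-|x|/s_i}/(2 s_i)
k = np.linspace(0.0, 5.0, 2001)
w_hat = A_e / (1 + (s_e * k) ** 2) - A_i / (1 + (s_i * k) ** 2)
lam = -1.0 + beta * w_hat          # growth rate of the mode e^{ikx}

unstable = k[lam > 0]              # the gain band: unstable wave numbers
band = (unstable.min(), unstable.max()) if unstable.size else None
```

With these numbers λ(0) and λ(k→∞) are negative while λ peaks above zero at an intermediate k, so instability is confined to a finite gain band — the same mechanism, in its simplest form, as in the homogenized model.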
Auditory streaming emerges from fast excitation and slow delayed inhibition
Andrea Ferrario, James Rankin
Pub Date: 2021-05-03 | DOI: 10.1186/s13408-021-00106-2

In the auditory streaming paradigm, alternating sequences of pure tones can be perceived as a single galloping rhythm (integration) or as two sequences with separated low and high tones (segregation). Although studied for decades, the neural mechanisms underlying this perceptual grouping of sound remain a mystery. With the aim of identifying a plausible minimal neural circuit that captures this phenomenon, we propose a firing rate model with two periodically forced neural populations coupled by fast direct excitation and slow delayed inhibition. By analyzing the model in a non-smooth, slow-fast regime we analytically prove the existence of a rich repertoire of dynamical states and of their parameter-dependent transitions. We impose plausible parameter restrictions and link all states with perceptual interpretations. Regions of stimulus parameters occupied by states linked with each percept match those found in behavioural experiments. Our model suggests that slow inhibition masks the perception of subsequent tones during segregation (forward masking), whereas fast excitation enables integration for large pitch differences between the two tones.

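A minimal sketch of the circuit architecture described here — not the authors' equations: two rate units receiving alternating tone inputs, coupled by fast cross-excitation and, standing in for the paper's explicitly delayed inhibition, a slow first-order inhibitory variable. All parameter values are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

dt, T = 1e-3, 8.0
n = int(T / dt)
tau_r, tau_s = 0.01, 0.5     # fast rate dynamics vs. slow inhibition
c_exc, c_inh = 0.4, 0.8      # fast cross-excitation, slow cross-inhibition

t = np.arange(n) * dt
# Alternating ABAB tone sequence: 125 ms tones driving opposite units
tone_A = ((t % 0.5) < 0.125).astype(float)
tone_B = (((t - 0.25) % 0.5) < 0.125).astype(float)

r = np.zeros((n, 2))   # firing rates of units A, B
s = np.zeros((n, 2))   # slow inhibitory variables (proxy for delayed inhibition)
for i in range(n - 1):
    inp = np.array([tone_A[i] + c_exc * r[i, 1] - c_inh * s[i, 1],
                    tone_B[i] + c_exc * r[i, 0] - c_inh * s[i, 0]])
    r[i + 1] = r[i] + dt * (-r[i] + sigmoid(inp)) / tau_r
    s[i + 1] = s[i] + dt * (r[i] - s[i]) / tau_s
```

The timescale separation tau_s >> tau_r is what allows the inhibition evoked by one tone to persist into the following tone's window (forward masking), while the fast excitatory coupling acts within a single tone presentation.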
A model of on/off transitions in neurons of the deep cerebellar nuclei: deciphering the underlying ionic mechanisms
Hugues Berry, Stéphane Genet
Pub Date: 2021-04-01 | DOI: 10.1186/s13408-021-00105-3

The neurons of the deep cerebellar nuclei (DCNn) represent the main functional link between the cerebellar cortex and the rest of the central nervous system. Therefore, understanding the electrophysiological properties of DCNn is of fundamental importance to understand the overall functioning of the cerebellum. Experimental data suggest that DCNn can reversibly switch between two states: the firing of spikes (F state) and a stable depolarized state (SD state). We introduce a new biophysical model of the DCNn membrane electro-responsiveness to investigate how the interplay between the documented conductances identified in DCNn gives rise to these states. In the model, the F state emerges as an isola of limit cycles, i.e., a closed loop of periodic solutions disconnected from the branch of SD fixed points. This bifurcation structure endows the model with the ability to reproduce the [Formula: see text] transition triggered by hyperpolarizing current pulses. The model also reproduces the [Formula: see text] transition induced by blocking Ca currents and ascribes this transition to the blocking of the high-threshold Ca current. The model suggests that intracellular current injections can trigger fully reversible [Formula: see text] transitions. Investigation of low-dimensional reduced models suggests that the voltage-dependent Na current is prominent for these dynamical features. Finally, simulations of the model suggest that physiological synaptic inputs may trigger [Formula: see text] transitions. These transitions could explain the puzzling observation of positively correlated activities of connected Purkinje cells and DCNn even though the former inhibit the latter.

Estimating Fisher discriminant error in a linear integrator model of neural population activity
Matias Calderini, Jean-Philippe Thivierge
Pub Date: 2021-02-19 | DOI: 10.1186/s13408-021-00104-4

Decoding approaches provide a useful means of estimating the information contained in neuronal circuits. In this work, we analyze the expected classification error of a decoder based on Fisher linear discriminant analysis. We provide expressions that relate decoding error to the specific parameters of a population model that performs linear integration of sensory input. Results show conditions that lead to beneficial and detrimental effects of noise correlation on decoding. Further, the proposed framework sheds light on the contribution of neuronal noise, highlighting cases where, counter-intuitively, increased noise may lead to improved decoding performance. Finally, we examine the impact of dynamical parameters, including neuronal leak and integration time constant, on decoding. Overall, this work presents a fruitful approach to the study of decoding using a comprehensive theoretical framework that merges dynamical parameters with estimates of readout error.

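The textbook starting point for this kind of analysis: for two Gaussian classes with shared covariance Σ (e.g., correlated neuronal noise) and means μ0, μ1, the expected error of the Fisher/LDA readout has the closed form Φ(−d/2), where d² = (μ1 − μ0)ᵀ Σ⁻¹ (μ1 − μ0) is the squared Mahalanobis separation. The sketch below uses illustrative numbers (not the paper's population model) and checks the formula against a Monte-Carlo simulation of the optimal linear readout.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

mu0 = np.array([0.0, 0.0])
mu1 = np.array([1.5, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])   # shared covariance: correlated noise

# Analytic expected error of the Fisher/LDA readout
dmu = mu1 - mu0
d2 = dmu @ np.linalg.solve(Sigma, dmu)          # squared Mahalanobis distance
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
p_err_analytic = Phi(-sqrt(d2) / 2.0)

# Monte-Carlo check with the optimal readout w = Sigma^{-1} (mu1 - mu0)
w = np.linalg.solve(Sigma, dmu)
b = -0.5 * w @ (mu0 + mu1)
n = 200_000
x0 = rng.multivariate_normal(mu0, Sigma, n)
x1 = rng.multivariate_normal(mu1, Sigma, n)
p_err_mc = 0.5 * (np.mean(x0 @ w + b > 0) + np.mean(x1 @ w + b < 0))
```

Because Σ enters through its inverse, changing the noise correlation can either shrink or grow d² for a fixed mean separation, which is the geometric root of the beneficial/detrimental correlation effects the abstract describes.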
M-current induced Bogdanov-Takens bifurcation and switching of neuron excitability class
Isam Al-Darabsah, Sue Ann Campbell
Pub Date: 2021-02-15 | DOI: 10.1186/s13408-021-00103-5

In this work, we consider a general conductance-based neuron model with the inclusion of the acetylcholine-sensitive M-current. We study bifurcations in the parameter space consisting of the applied current [Formula: see text], the maximal conductance of the M-current [Formula: see text] and the conductance of the leak current [Formula: see text]. We give precise conditions for the model that ensure the existence of a Bogdanov-Takens (BT) point and show that such a point can occur by varying [Formula: see text] and [Formula: see text]. We discuss the case when the BT point becomes a Bogdanov-Takens-cusp (BTC) point and show that such a point can occur in the three-dimensional parameter space. The results of the bifurcation analysis are applied to different neuronal models and are verified and supplemented by numerical bifurcation diagrams generated using the package MATCONT. We conclude that there is a transition in the neuronal excitability type organised by the BT point and that the neuron switches from Class-I to Class-II as the conductance of the M-current increases.

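For orientation, the defining conditions of the object at the centre of this abstract (standard bifurcation theory, not the paper's model-specific conditions): in a planar reduction a Bogdanov-Takens point is an equilibrium whose Jacobian has a double-zero eigenvalue. That imposes two scalar conditions on top of the equilibrium equations, which is why the point has codimension two and is located by varying a pair of parameters such as the applied current and a maximal conductance.

```latex
% Defining system for a Bogdanov--Takens point of \dot u = f(u;\mu), u \in \mathbb{R}^2:
f(u^{*};\mu^{*}) = 0, \qquad
\operatorname{tr} Df(u^{*};\mu^{*}) = 0, \qquad
\det Df(u^{*};\mu^{*}) = 0,
% together with nondegeneracy of the quadratic normal-form coefficients.
% Two extra conditions => codimension two; such points are traced in practice
% by continuation packages such as MATCONT.
```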
Retroactive interference model of forgetting
Antonios Georgiou, Mikhail Katkov, Misha Tsodyks
Pub Date: 2021-01-23 | DOI: 10.1186/s13408-021-00102-6

Memory and forgetting constitute two sides of the same coin, and although the former has been extensively investigated, the latter is often overlooked. A possible approach to better understand forgetting is to develop phenomenological models that implement its putative mechanisms in the most elementary way possible, and then experimentally test the theoretical predictions of these models. One such mechanism proposed in previous studies is retrograde interference, stating that a memory can be erased due to subsequently acquired memories. In the current contribution, we hypothesize that retrograde erasure is controlled by the relevant "importance" measures such that more important memories eliminate less important ones acquired earlier. We show that some versions of the resulting mathematical model are broadly compatible with the previously reported power-law forgetting time course and match well the results of our recognition experiments with long, randomly assembled streams of words.

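One elementary variant of importance-gated retrograde interference (my simplification for illustration, not the authors' fitted model): each new memory erases every earlier memory of lower importance, so an item survives exactly when it is more important than everything that follows it. With i.i.d. importances, the probability of surviving k subsequent items is 1/(k+1), i.e., a power-law forgetting curve.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n = 20_000, 50
imp = rng.random((trials, n))      # i.i.d. importance of each acquired memory

# Maximum importance among the items strictly AFTER position i
rev_cummax = np.maximum.accumulate(imp[:, ::-1], axis=1)[:, ::-1]
suffix_max = np.concatenate([rev_cummax[:, 1:],
                             np.full((trials, 1), -np.inf)], axis=1)

# A memory is retained iff no later, more important memory erased it
retained = imp > suffix_max
p_by_lag = retained[:, ::-1].mean(axis=0)   # index k = number of later items
```

Here `p_by_lag[k]` estimates the retention probability of an item followed by k others; the 1/(k+1) decay is the discrete power law that this class of models produces in its simplest form.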
On the potential role of lateral connectivity in retinal anticipation
Selma Souihel, Bruno Cessac
Pub Date: 2021-01-09 | DOI: 10.1186/s13408-020-00101-z

We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control (Berry et al. in Nature 398(6725):334-338, 1999; Chen et al. in J. Neurosci. 33(1):120-132, 2013). We illustrate these predictions by two examples studied in the experimental literature: differential motion sensitive cells (Baccus and Meister in Neuron 36(5):909-919, 2002) and direction sensitive cells where direction sensitivity is inherited from asymmetry in gap junction connectivity (Trenholm et al. in Nat. Neurosci. 16:154-156, 2013). We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.

Pub Date : 2021-01-04DOI: 10.1186/s13408-020-00100-0
Jennifer Creaser, Peter Ashwin, Claire Postlethwaite, Juliane Britz
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition, and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Electroencephalogram (EEG) microstates are brief periods of stable scalp topography that have been identified as the electrophysiological correlate of functional magnetic resonance imaging (fMRI)-defined resting-state networks. Spatiotemporal microstate sequences maintain high temporal resolution and have been shown to be scale-free with long-range temporal correlations. Previous attempts to model EEG microstate sequences have failed to capture this crucial property and so cannot fully capture the dynamics; this paper answers the call for more sophisticated modeling approaches. We present a dynamical model that exhibits a noisy network attractor between nodes that represent the microstates. Using an excitable network between four nodes, we can reproduce the transition probabilities between microstates but not the heavy-tailed residence time distributions. We present two extensions to this model: first, an additional hidden node at each state; second, an additional layer that controls the switching frequency in the original network. Introducing either extension to the network gives the flexibility to capture these heavy tails. We compare the model-generated sequences to microstate sequences from EEG data collected from healthy subjects at rest. For the first extension, we show that the hidden nodes 'trap' the trajectories, allowing the control of residence times at each node. For the second extension, we show that two nodes in the controlling layer are sufficient to model the long residence times.
Finally, we show that, in addition to capturing the residence time distributions and transition probabilities of the sequences, these two models capture further properties of the sequences, including interspersed long and short residence times and long-range temporal correlations in line with the data, as measured by the Hurst exponent.
{"title":"Noisy network attractor models for transitions between EEG microstates.","authors":"Jennifer Creaser, Peter Ashwin, Claire Postlethwaite, Juliane Britz","doi":"10.1186/s13408-020-00100-0","DOIUrl":"https://doi.org/10.1186/s13408-020-00100-0","url":null,"abstract":"<p><p>The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition, and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Electroencephalogram (EEG) microstates are brief periods of stable scalp topography that have been identified as the electrophysiological correlate of functional magnetic resonance imaging defined resting-state networks. Spatiotemporal microstate sequences maintain high temporal resolution and have been shown to be scale-free with long-range temporal correlations. Previous attempts to model EEG microstate sequences have failed to capture this crucial property and so cannot fully capture the dynamics; this paper answers the call for more sophisticated modeling approaches. We present a dynamical model that exhibits a noisy network attractor between nodes that represent the microstates. Using an excitable network between four nodes, we can reproduce the transition probabilities between microstates but not the heavy tailed residence time distributions. We present two extensions to this model: first, an additional hidden node at each state; second, an additional layer that controls the switching frequency in the original network. Introducing either extension to the network gives the flexibility to capture these heavy tails. We compare the model generated sequences to microstate sequences from EEG data collected from healthy subjects at rest. 
For the first extension, we show that the hidden nodes 'trap' the trajectories allowing the control of residence times at each node. For the second extension, we show that two nodes in the controlling layer are sufficient to model the long residence times. Finally, we show that in addition to capturing the residence time distributions and transition probabilities of the sequences, these two models capture additional properties of the sequences including having interspersed long and short residence times and long range temporal correlations in line with the data as measured by the Hurst exponent.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-020-00100-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38777266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
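The baseline problem the paper's extensions solve can be seen in a toy simulation: a memoryless four-state switching process reproduces any desired transition probabilities, but its residence times are geometric (light-tailed), unlike the heavy-tailed distributions seen in EEG microstate data. The transition matrix below is hypothetical, and the paper's model realises such statistics through a noisy network attractor, not a Markov chain; this sketch only shows why the unextended network's residence times fall short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-microstate transition matrix (rows sum to 1);
# diagonal entries near 0.9 give mean residence times near 10 samples.
P = np.array([[0.90, 0.04, 0.03, 0.03],
              [0.05, 0.88, 0.04, 0.03],
              [0.03, 0.04, 0.90, 0.03],
              [0.04, 0.03, 0.03, 0.90]])

n = 50_000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(4, p=P[states[t - 1]])

# Residence times = lengths of runs of the same state.
change = np.flatnonzero(np.diff(states)) + 1
runs = np.diff(np.concatenate(([0], change, [n])))

# A memoryless process yields geometric residence times, whose tail
# decays exponentially; heavy tails require the hidden-node or
# two-layer extensions described in the abstract.
print(runs.mean())
```

With self-transition probability 0.9, the mean run length is close to 1/(1-0.9) = 10 samples, and long residences are exponentially rare; matching the data's heavy tails is precisely what the extended models add.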