Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.
Pub Date: 2015-12-01; Epub Date: 2015-01-12; DOI: 10.1186/2190-8567-5-2
Sung Joon Moon, Katherine A Cook, Karthikeyan Rajendran, Ioannis G Kevrekidis, Jaime Cisternas, Carlo R Laing
The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is studied computationally at both the detailed and the coarse-grained levels, and a numerical approach that enables studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among the cluster states formed, double clusters composed of sub-networks of nearly equal size are seen to be stable; interestingly, the distribution of the heterogeneity parameter within each double-cluster component tends to be consistent with its distribution over the entire network. Given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar, "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between a neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses a polynomial chaos expansion to succinctly describe the dynamics through these quickly established "identity-state" correlations. This coarse-graining approach is used, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.
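As a rough illustration of the coarse-graining idea (not the authors' implementation), the sketch below fits a low-order Legendre polynomial chaos expansion of a synthetic per-neuron state variable against each neuron's heterogeneity parameter, assuming that parameter is uniformly distributed on [-1, 1]; the handful of expansion coefficients then stand in for the "identity-state" correlation that the abstract describes.

```python
# Minimal polynomial-chaos sketch (assumes a uniform heterogeneity parameter on [-1, 1];
# the state variable y(eta) below is synthetic, not taken from the Hodgkin-Huxley network).
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

N = 1000                                  # number of neurons
eta = rng.uniform(-1.0, 1.0, size=N)      # heterogeneity parameter ("identity") of each neuron
y = 0.3 + 0.8 * eta - 0.5 * eta**2 + 0.05 * rng.normal(size=N)  # synthetic identity-state relation

order = 4                                 # truncation order of the expansion
coeffs = legendre.legfit(eta, y, deg=order)   # least-squares Legendre coefficients

# The few coefficients are the coarse variables: reconstruct any neuron's state from its identity.
y_hat = legendre.legval(eta, coeffs)
print("coefficients:", np.round(coeffs, 3))
print("rms reconstruction error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```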
{"title":"Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.","authors":"Sung Joon Moon, Katherine A Cook, Karthikeyan Rajendran, Ioannis G Kevrekidis, Jaime Cisternas, Carlo R Laing","doi":"10.1186/2190-8567-5-2","DOIUrl":"https://doi.org/10.1186/2190-8567-5-2","url":null,"abstract":"<p><p>The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition-through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations-to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that can enable studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among a number of cluster states formed, double clusters, composed of nearly equal sub-network sizes are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: Given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar \"fine\" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between the neuron's \"identity\" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established \"identity-state\" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/2190-8567-5-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34079061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extensive Four-Dimensional Chaos in a Mesoscopic Model of the Electroencephalogram.
Pub Date: 2015-12-01; Epub Date: 2015-08-12; DOI: 10.1186/s13408-015-0028-3
Mathew P Dafilis, Federico Frascoli, Peter J Cadusch, David T J Liley
Background: In a previous work (Dafilis et al. in Chaos 23(2):023111, 2013), evidence was presented for four-dimensional chaos in Liley's mesoscopic model of the electroencephalogram. The study was limited to one parameter set of the model equations.
Findings: In this report we extend that result by presenting evidence that four-dimensional chaotic behavior occurs over a large region of the biologically admissible parameter space. A two-parameter bifurcation analysis highlights the complexity of the dynamical landscape involved in the creation of such chaos.
Conclusions: The extensive presence of high-order chaos in a well-established physiological model of electrorhythmogenesis further emphasizes the applicability and relevance of mean field mesoscopic models in the description of brain activity at theoretical, experimental, and clinical levels.
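Evidence of this kind typically rests on Lyapunov exponents. The sketch below estimates only the largest exponent, by Benettin-style renormalization of a companion trajectory, and uses the Lorenz system purely as a stand-in chaotic flow; it is not Liley's EEG model, and diagnosing higher-dimensional (hyper)chaos would require the full exponent spectrum.

```python
# Benettin-style estimate of the largest Lyapunov exponent for a generic chaotic flow.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def largest_lyapunov(f, x0, d0=1e-8, dt=0.5, n_steps=400):
    x = np.asarray(x0, float)
    xp = x + d0 * np.array([1.0, 0.0, 0.0])   # perturbed companion trajectory
    total = 0.0
    for _ in range(n_steps):
        x  = solve_ivp(f, (0, dt), x,  rtol=1e-9, atol=1e-12).y[:, -1]
        xp = solve_ivp(f, (0, dt), xp, rtol=1e-9, atol=1e-12).y[:, -1]
        d = np.linalg.norm(xp - x)
        total += np.log(d / d0)
        xp = x + (xp - x) * (d0 / d)          # renormalize the separation
    return total / (n_steps * dt)

# For the classic Lorenz parameters the estimate should come out near 0.9.
print("largest Lyapunov exponent ~", largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))
```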
{"title":"Extensive Four-Dimensional Chaos in a Mesoscopic Model of the Electroencephalogram.","authors":"Mathew P Dafilis, Federico Frascoli, Peter J Cadusch, David T J Liley","doi":"10.1186/s13408-015-0028-3","DOIUrl":"https://doi.org/10.1186/s13408-015-0028-3","url":null,"abstract":"<p><strong>Background: </strong>In a previous work (Dafilis et al. in Chaos 23(2):023111, 2013), evidence was presented for four-dimensional chaos in Liley's mesoscopic model of the electroencephalogram. The study was limited to one parameter set of the model equations.</p><p><strong>Findings: </strong>In this report we expand that result by presenting evidence for the extension of four-dimensional chaotic behavior to a large area of the biologically admissible parameter space. A two-parameter bifurcation analysis highlights the complexity of the dynamical landscape involved in the creation of such chaos.</p><p><strong>Conclusions: </strong>The extensive presence of high-order chaos in a well-established physiological model of electrorhythmogenesis further emphasizes the applicability and relevance of mean field mesoscopic models in the description of brain activity at theoretical, experimental, and clinical levels.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0028-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33914696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orientation Maps in V1 and Non-Euclidean Geometry.
Pub Date: 2015-12-01; Epub Date: 2015-06-17; DOI: 10.1186/s13408-015-0024-7
Alexandre Afgoustidis
In the primary visual cortex, the processing of information uses the distribution of orientations in the visual input: neurons react to some orientations in the stimulus more than to others. In many species, orientation preference is mapped in a remarkable way on the cortical surface, and this organization of the neural population seems to be important for visual processing. Existing models for the geometry and development of orientation preference maps in higher mammals make crucial use of symmetry considerations. In this paper, we consider probabilistic models for V1 maps from the point of view of group theory; we focus on Gaussian random fields with symmetry properties and review the probabilistic arguments that allow one to estimate pinwheel densities and predict the observed value of π. Then, in order to test the relevance of general symmetry arguments and to introduce methods that could be of use in modeling curved regions, we reconsider this model in the light of group representation theory, the canonical mathematics of symmetry. We show that, through the Plancherel decomposition of the space of complex-valued maps on the Euclidean plane, each infinite-dimensional irreducible unitary representation of the special Euclidean group yields a unique V1-like map, and we use representation theory as a symmetry-based toolbox to build orientation maps adapted to the most famous non-Euclidean geometries, viz. spherical and hyperbolic geometry. We find that most of the dominant traits of V1 maps are preserved in these geometries; we also study the link between symmetry and the statistics of singularities in orientation maps, and show what becomes, in our curved models, of the striking quantitative characteristics observed in animals.
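A minimal Euclidean sketch of the construction discussed above, under assumptions of my choosing: the orientation map is modeled as a complex Gaussian random field built from plane waves with wavevectors on a ring (an invariant, band-limited field), orientation is half the field's argument, and pinwheels are counted as plaquettes where the phase winds by a full turn. The grid size, wavelength, and number of waves are illustrative; the abstract's prediction is a density of about π pinwheels per squared wavelength.

```python
# Invariant Gaussian random field sketch of an orientation preference map.
import numpy as np

rng = np.random.default_rng(1)
L, n, k0, n_waves = 40.0, 256, 2.0 * np.pi / 4.0, 200   # domain, grid, wavenumber (Lambda = 4), waves

xs = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")

z = np.zeros((n, n), dtype=complex)
for _ in range(n_waves):
    theta = rng.uniform(0.0, 2.0 * np.pi)                # direction of the wavevector
    phi = rng.uniform(0.0, 2.0 * np.pi)                  # independent random phase
    z += np.exp(1j * (k0 * (np.cos(theta) * X + np.sin(theta) * Y) + phi))
z /= np.sqrt(n_waves)

phase = np.angle(z)
orientation = 0.5 * phase          # preferred orientation map, values in [-pi/2, pi/2)

# Count pinwheels as plaquettes where the phase of z winds by +/- 2*pi.
def wrap(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi
winding = (wrap(phase[1:, :-1] - phase[:-1, :-1]) + wrap(phase[1:, 1:] - phase[1:, :-1])
           + wrap(phase[:-1, 1:] - phase[1:, 1:]) + wrap(phase[:-1, :-1] - phase[:-1, 1:]))
n_pinwheels = int(np.sum(np.abs(winding) > np.pi))

Lambda = 2.0 * np.pi / k0
print("pinwheels:", n_pinwheels)
print("density per Lambda^2:", n_pinwheels / (L / Lambda) ** 2, "(predicted value: pi)")
```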
{"title":"Orientation Maps in V1 and Non-Euclidean Geometry.","authors":"Alexandre Afgoustidis","doi":"10.1186/s13408-015-0024-7","DOIUrl":"10.1186/s13408-015-0024-7","url":null,"abstract":"<p><p>In the primary visual cortex, the processing of information uses the distribution of orientations in the visual input: neurons react to some orientations in the stimulus more than to others. In many species, orientation preference is mapped in a remarkable way on the cortical surface, and this organization of the neural population seems to be important for visual processing. Now, existing models for the geometry and development of orientation preference maps in higher mammals make a crucial use of symmetry considerations. In this paper, we consider probabilistic models for V1 maps from the point of view of group theory; we focus on Gaussian random fields with symmetry properties and review the probabilistic arguments that allow one to estimate pinwheel densities and predict the observed value of π. Then, in order to test the relevance of general symmetry arguments and to introduce methods which could be of use in modeling curved regions, we reconsider this model in the light of group representation theory, the canonical mathematics of symmetry. We show that through the Plancherel decomposition of the space of complex-valued maps on the Euclidean plane, each infinite-dimensional irreducible unitary representation of the special Euclidean group yields a unique V1-like map, and we use representation theory as a symmetry-based toolbox to build orientation maps adapted to the most famous non-Euclidean geometries, viz. spherical and hyperbolic geometry. We find that most of the dominant traits of V1 maps are preserved in these; we also study the link between symmetry and the statistics of singularities in orientation maps, and show what the striking quantitative characteristics observed in animals become in our curved models. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4469697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33269844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Effects on Cortical Spontaneous Activity of the Symmetries of the Network of Pinwheels in Visual Area V1.
Pub Date: 2015-12-01; Epub Date: 2015-05-30; DOI: 10.1186/s13408-015-0023-8
Romain Veltz, Pascal Chossat, Olivier Faugeras
This paper challenges and extends earlier seminal work. We consider the problem of describing mathematically the spontaneous activity of V1 by combining several important experimental observations, including (1) the organization of the visual cortex into a spatially periodic network of hypercolumns structured around pinwheels, (2) the difference between short-range and long-range intracortical connections, the former being rather isotropic and naturally producing doubly periodic patterns by Turing mechanisms, the latter being patchy, and (3) the fact that the Turing patterns spontaneously produced by the short-range connections and the network of pinwheels have similar periods. By analyzing the preferred orientation (PO) maps, we are able to classify all possible singular points (the pinwheels) as having symmetries described by a small subset of the wallpaper groups. We then propose a description of the spontaneous activity of V1 using a classical voltage-based neural field model that features isotropic short-range connectivities modulated by non-isotropic long-range connectivities. A key observation is that, with only short-range connections and because the problem then has full translational invariance, a spontaneous doubly periodic pattern generates a 2-torus in a suitable functional space, which persists as a flow-invariant manifold under small perturbations, for example when turning on the long-range connections. Through a complete analysis of the symmetries of the resulting neural field equation, and motivated by a numerical investigation of the bifurcations of their solutions, we conclude that the branches of solutions which are stable over an extended range of parameters are those that correspond to patterns with a hexagonal (or nearly hexagonal) symmetry. The question of which patterns persist when the long-range connections are turned on is answered by (1) analyzing the remaining symmetries on the perturbed torus and (2) combining this information with the Poincaré-Hopf theorem. We have developed a numerical implementation of the theory that has allowed us to produce the predicted patterns of activity, the planforms. In particular, we generalize the contoured and non-contoured planforms predicted by previous authors.
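The sketch below is a generic illustration of the Turing mechanism invoked for the short-range, isotropic connections: a planar rate-based neural field with a difference-of-Gaussians kernel (convolved via FFT) destabilizes a uniform state into a doubly periodic pattern. The parameters, sigmoid, and kernel are made-up illustrative choices, not the voltage-based model or parameter set of the paper, and may need tuning.

```python
# Planar neural field sketch: isotropic short-range coupling producing a Turing-type pattern.
import numpy as np

n, dx, dt, T = 128, 0.5, 0.1, 60.0
sig_e, sig_i, A, g = 1.0, 2.0, 0.9, 3.0      # excitatory/inhibitory widths, inhibition weight, gain

k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
w_hat = np.exp(-sig_e**2 * K2 / 2.0) - A * np.exp(-sig_i**2 * K2 / 2.0)   # kernel in Fourier space

rng = np.random.default_rng(2)
u = 1e-3 * rng.standard_normal((n, n))        # small random perturbation of the uniform state

for _ in range(int(T / dt)):
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(np.tanh(g * u))))   # w * S(u) by FFT
    u = u + dt * (-u + conv)                                            # explicit Euler step

power = np.abs(np.fft.fft2(u))**2
power[0, 0] = 0.0                             # ignore the spatially uniform mode
k_dom = np.sqrt(K2.flat[np.argmax(power)])
print("dominant wavenumber:", round(float(k_dom), 3), "(linear prediction here is about 0.92)")
```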
{"title":"On the Effects on Cortical Spontaneous Activity of the Symmetries of the Network of Pinwheels in Visual Area V1.","authors":"Romain Veltz, Pascal Chossat, Olivier Faugeras","doi":"10.1186/s13408-015-0023-8","DOIUrl":"10.1186/s13408-015-0023-8","url":null,"abstract":"<p><p>This paper challenges and extends earlier seminal work. We consider the problem of describing mathematically the spontaneous activity of V1 by combining several important experimental observations including (1) the organization of the visual cortex into a spatially periodic network of hypercolumns structured around pinwheels, (2) the difference between short-range and long-range intracortical connections, the first ones being rather isotropic and producing naturally doubly periodic patterns by Turing mechanisms, the second one being patchy, and (3) the fact that the Turing patterns spontaneously produced by the short-range connections and the network of pinwheels have similar periods. By analyzing the PO maps, we are able to classify all possible singular points (the pinwheels) as having symmetries described by a small subset of the wallpaper groups. We then propose a description of the spontaneous activity of V1 using a classical voltage-based neural field model that features isotropic short-range connectivities modulated by non-isotropic long-range connectivities. A key observation is that, with only short-range connections and because the problem has full translational invariance in this case, a spontaneous doubly periodic pattern generates a 2-torus in a suitable functional space which persists as a flow-invariant manifold under small perturbations, for example when turning on the long-range connections. Through a complete analysis of the symmetries of the resulting neural field equation and motivated by a numerical investigation of the bifurcations of their solutions, we conclude that the branches of solutions which are stable over an extended range of parameters are those that correspond to patterns with an hexagonal (or nearly hexagonal) symmetry. The question of which patterns persist when turning on the long-range connections is answered by (1) analyzing the remaining symmetries on the perturbed torus and (2) combining this information with the Poincaré-Hopf theorem. We have developed a numerical implementation of the theory that has allowed us to produce the predicted patterns of activities, the planforms. In particular we generalize the contoured and non-contoured planforms predicted by previous authors. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4449351/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33370406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditions for Multi-functionality in a Rhythm Generating Network Inspired by Turtle Scratching.
Pub Date: 2015-12-01; Epub Date: 2015-07-17; DOI: 10.1186/s13408-015-0026-5
Abigail C Snyder, Jonathan E Rubin
Rhythmic behaviors such as breathing, walking, and scratching are vital to many species. Such behaviors can emerge from groups of neurons, called central pattern generators, in the absence of rhythmic inputs. In vertebrates, the identification of the cells that constitute the central pattern generator for particular rhythmic behaviors is difficult, and often, its existence has only been inferred. For example, under experimental conditions, intact turtles generate several rhythmic scratch motor patterns corresponding to non-rhythmic stimulation of different body regions. These patterns feature alternating phases of motoneuron activation that occur repeatedly, with different patterns distinguished by the relative timing and duration of activity of hip extensor, hip flexor, and knee extensor motoneurons. While the central pattern generator network responsible for these outputs has not been located, there is hope to use motoneuron recordings to deduce its properties. To this end, this work presents a model of a previously proposed central pattern generator network and analyzes its capability to produce two distinct scratch rhythms from a single neuron pool, selected by different combinations of tonic drive parameters but with fixed strengths of connections within the network. We show through simulation that the proposed network can achieve the desired multi-functionality, even though it relies on hip unit generators to recruit appropriately timed knee extensor motoneuron activity, including a delay relative to hip activation in rostral scratch. Furthermore, we develop a phase space representation, focusing on the inputs to and the intrinsic slow variable of the knee extensor motoneuron, which we use to derive sufficient conditions for the network to realize each rhythm and which illustrates the role of a saddle-node bifurcation in achieving the knee extensor delay. This framework is harnessed to consider bistability and to make predictions about the responses of the scratch rhythms to input changes for future experimental testing.
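As a generic illustration of the delay mechanism the abstract attributes to a saddle-node bifurcation (not the paper's turtle-scratch network), the sketch below measures how long a one-dimensional system lingers in the "ghost" of a saddle-node: for dx/dt = mu + x^2 just past the bifurcation, the passage time grows like pi/sqrt(mu), which is one standard way a slow variable can delay recruitment of a unit.

```python
# Passage time through a saddle-node bottleneck: dx/dt = mu + x^2 for small mu > 0.
import numpy as np
from scipy.integrate import solve_ivp

def passage_time(mu, x_start=-2.0, x_end=2.0):
    hit = lambda t, x: x[0] - x_end          # event: trajectory reaches x_end
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda t, x: [mu + x[0]**2], (0.0, 1e4), [x_start],
                    events=hit, rtol=1e-9, atol=1e-12)
    return sol.t_events[0][0]

for mu in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"mu = {mu:.0e}   passage time = {passage_time(mu):8.1f}   "
          f"pi/sqrt(mu) = {np.pi/np.sqrt(mu):8.1f}")
```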
{"title":"Conditions for Multi-functionality in a Rhythm Generating Network Inspired by Turtle Scratching.","authors":"Abigail C Snyder, Jonathan E Rubin","doi":"10.1186/s13408-015-0026-5","DOIUrl":"https://doi.org/10.1186/s13408-015-0026-5","url":null,"abstract":"<p><p>Rhythmic behaviors such as breathing, walking, and scratching are vital to many species. Such behaviors can emerge from groups of neurons, called central pattern generators, in the absence of rhythmic inputs. In vertebrates, the identification of the cells that constitute the central pattern generator for particular rhythmic behaviors is difficult, and often, its existence has only been inferred. For example, under experimental conditions, intact turtles generate several rhythmic scratch motor patterns corresponding to non-rhythmic stimulation of different body regions. These patterns feature alternating phases of motoneuron activation that occur repeatedly, with different patterns distinguished by the relative timing and duration of activity of hip extensor, hip flexor, and knee extensor motoneurons. While the central pattern generator network responsible for these outputs has not been located, there is hope to use motoneuron recordings to deduce its properties. To this end, this work presents a model of a previously proposed central pattern generator network and analyzes its capability to produce two distinct scratch rhythms from a single neuron pool, selected by different combinations of tonic drive parameters but with fixed strengths of connections within the network. We show through simulation that the proposed network can achieve the desired multi-functionality, even though it relies on hip unit generators to recruit appropriately timed knee extensor motoneuron activity, including a delay relative to hip activation in rostral scratch. Furthermore, we develop a phase space representation, focusing on the inputs to and the intrinsic slow variable of the knee extensor motoneuron, which we use to derive sufficient conditions for the network to realize each rhythm and which illustrates the role of a saddle-node bifurcation in achieving the knee extensor delay. This framework is harnessed to consider bistability and to make predictions about the responses of the scratch rhythms to input changes for future experimental testing. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0026-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33913161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Mechanistic Neural Field Theory of How Anesthesia Suppresses Consciousness: Synaptic Drive Dynamics, Bifurcations, Attractors, and Partial State Equipartitioning.
Pub Date: 2015-12-01; Epub Date: 2015-10-05; DOI: 10.1186/s13408-015-0032-7
Saing Paul Hou, Wassim M Haddad, Nader Meskin, James M Bailey
With the advances in biochemistry, molecular biology, and neurochemistry, there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical systems theory to develop a mechanistic mean field model of neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory.
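A minimal numerical illustration of the Poincaré-Andronov-Hopf scenario mentioned above, using the supercritical Hopf normal form rather than the paper's synaptic drive model: as the parameter mu (here only a loose stand-in for the net effect of changing anesthetic concentration) crosses zero, a stable equilibrium gives way to a stable limit cycle whose amplitude grows like sqrt(mu).

```python
# Supercritical Hopf normal form: dz/dt = (mu + i*omega) z - |z|^2 z.
import numpy as np
from scipy.integrate import solve_ivp

def hopf(t, y, mu, omega=2.0 * np.pi):
    z = y[0] + 1j * y[1]
    dz = (mu + 1j * omega) * z - (abs(z) ** 2) * z
    return [dz.real, dz.imag]

for mu in [-0.2, -0.05, 0.05, 0.2, 0.5]:
    sol = solve_ivp(hopf, (0.0, 200.0), [0.1, 0.0], args=(mu,), rtol=1e-8, atol=1e-10)
    amp = np.abs(sol.y[0][-200:] + 1j * sol.y[1][-200:]).mean()   # late-time amplitude
    ref = np.sqrt(mu) if mu > 0 else 0.0                          # normal-form prediction
    print(f"mu = {mu:+.2f}   amplitude ~ {amp:.3f}   (theory: {ref:.3f})")
```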
{"title":"A Mechanistic Neural Field Theory of How Anesthesia Suppresses Consciousness: Synaptic Drive Dynamics, Bifurcations, Attractors, and Partial State Equipartitioning.","authors":"Saing Paul Hou, Wassim M Haddad, Nader Meskin, James M Bailey","doi":"10.1186/s13408-015-0032-7","DOIUrl":"https://doi.org/10.1186/s13408-015-0032-7","url":null,"abstract":"<p><p>With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0032-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34233336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clarification and Complement to "Mean-Field Description and Propagation of Chaos in Networks of Hodgkin-Huxley and FitzHugh-Nagumo Neurons".
Pub Date: 2015-12-01; Epub Date: 2015-09-01; DOI: 10.1186/s13408-015-0031-8
Mireille Bossy, Olivier Faugeras, Denis Talay
In this note, we clarify the well-posedness of the limit equations of the mean-field N-neuron models proposed in (Baladron et al. in J. Math. Neurosci. 2:10, 2012) and we prove the associated propagation of chaos property. We also complete the treatment of the modeling issue in (Baladron et al. in J. Math. Neurosci. 2:10, 2012) by discussing the well-posedness of the stochastic differential equations that govern the behavior of the ion channels and the amount of available neurotransmitters.
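For concreteness, the sketch below simulates, by Euler-Maruyama, one generic example of the kind of N-neuron system whose large-N limit such notes concern: stochastic FitzHugh-Nagumo neurons coupled through the empirical mean of their voltages. The coupling form, noise, and parameter values are illustrative assumptions of mine, not the model of Baladron et al.; in the N to infinity limit each neuron follows a McKean-Vlasov-type equation and neurons become asymptotically independent (propagation of chaos).

```python
# Euler-Maruyama simulation of N mean-field-coupled stochastic FitzHugh-Nagumo neurons.
import numpy as np

rng = np.random.default_rng(3)
N, dt, T = 500, 0.01, 50.0
a, b, eps, I, J, sigma = 0.7, 0.8, 0.08, 0.5, 0.5, 0.3   # illustrative parameters

v = rng.normal(0.0, 0.5, size=N)
w = rng.normal(0.0, 0.5, size=N)

means = []
for _ in range(int(T / dt)):
    vbar = v.mean()                                   # empirical mean field
    dv = v - v**3 / 3.0 - w + I + J * (vbar - v)      # voltage drift with mean-field coupling
    dw = eps * (v + a - b * w)                        # slow recovery variable
    v = v + dt * dv + sigma * np.sqrt(dt) * rng.standard_normal(N)
    w = w + dt * dw
    means.append(vbar)

print("time-averaged mean voltage over the second half of the run:",
      round(float(np.mean(means[len(means) // 2:])), 4))
```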
{"title":"Clarification and Complement to \"Mean-Field Description and Propagation of Chaos in Networks of Hodgkin-Huxley and FitzHugh-Nagumo Neurons\".","authors":"Mireille Bossy, Olivier Faugeras, Denis Talay","doi":"10.1186/s13408-015-0031-8","DOIUrl":"https://doi.org/10.1186/s13408-015-0031-8","url":null,"abstract":"<p><p>In this note, we clarify the well-posedness of the limit equations to the mean-field N-neuron models proposed in (Baladron et al. in J. Math. Neurosci. 2:10, 2012) and we prove the associated propagation of chaos property. We also complete the modeling issue in (Baladron et al. in J. Math. Neurosci. 2:10, 2012) by discussing the well-posedness of the stochastic differential equations which govern the behavior of the ion channels and the amount of available neurotransmitters. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0031-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33969703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Excitability and Singular Bifurcations.
Pub Date: 2015-12-01; Epub Date: 2015-08-06; DOI: 10.1186/s13408-015-0029-2
Peter De Maesschalck, Martin Wechselberger
We discuss the notion of excitability in 2D slow/fast neural models from a geometric singular perturbation theory point of view. We focus on the inherent singular nature of slow/fast neural models and define excitability via singular bifurcations. In particular, we show that type I excitability is associated with a novel singular Bogdanov-Takens/SNIC bifurcation while type II excitability is associated with a singular Andronov-Hopf bifurcation. In both cases, canards play an important role in the understanding of the unfolding of these singular bifurcation structures. We also explain the transition between the two excitability types and highlight all bifurcations involved, thus providing a complete analysis of excitability based on geometric singular perturbation theory.
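A concrete, minimal example of type I excitability (not the slow/fast analysis of the paper): the theta neuron, the canonical model of a SNIC bifurcation. Past threshold its firing rate rises continuously from zero as sqrt(I)/pi, whereas type II (Andronov-Hopf) models begin oscillating at a nonzero frequency. The integration step and observation window below are arbitrary choices.

```python
# Type I excitability in the theta neuron: d(theta)/dt = 1 - cos(theta) + (1 + cos(theta)) * I.
import numpy as np

def firing_rate(I, T=200.0, dt=1e-3):
    theta, spikes = -np.pi, 0
    for _ in range(int(T / dt)):
        theta_new = theta + dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I)
        if theta < np.pi <= theta_new:   # crossing theta = pi counts as a spike
            spikes += 1
            theta_new -= 2.0 * np.pi     # wrap back onto the circle
        theta = theta_new
    return spikes / T

for I in [0.001, 0.01, 0.05, 0.1, 0.5]:
    print(f"I = {I:5.3f}   simulated rate = {firing_rate(I):.4f}   sqrt(I)/pi = {np.sqrt(I)/np.pi:.4f}")
```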
{"title":"Neural Excitability and Singular Bifurcations.","authors":"Peter De Maesschalck, Martin Wechselberger","doi":"10.1186/s13408-015-0029-2","DOIUrl":"https://doi.org/10.1186/s13408-015-0029-2","url":null,"abstract":"<p><p>We discuss the notion of excitability in 2D slow/fast neural models from a geometric singular perturbation theory point of view. We focus on the inherent singular nature of slow/fast neural models and define excitability via singular bifurcations. In particular, we show that type I excitability is associated with a novel singular Bogdanov-Takens/SNIC bifurcation while type II excitability is associated with a singular Andronov-Hopf bifurcation. In both cases, canards play an important role in the understanding of the unfolding of these singular bifurcation structures. We also explain the transition between the two excitability types and highlight all bifurcations involved, thus providing a complete analysis of excitability based on geometric singular perturbation theory. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0029-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33897587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Minimal k-Core Problem for Modeling k-Assemblies.
Pub Date: 2015-12-01; Epub Date: 2015-07-14; DOI: 10.1186/s13408-015-0027-4
Cynthia I Wood, Illya V Hicks
The concept of cell assembly was introduced by Hebb and formalized mathematically by Palm in the framework of graph theory. In the study of associative memory, a cell assembly is a group of strongly connected neurons that represents a "concept" of our knowledge. This group is wired in a specific manner such that only a fraction of its neurons suffices to excite the entire assembly. We link the concept of cell assembly to the closure of a minimal k-core and study a particular type of cell assembly called a k-assembly. The goal of this paper is to find all substructures within a network that must be excited in order to activate a k-assembly. Through numerical experiments, we confirm that fractions of these important subgroups overlap. To explore the problem, we present a backtracking algorithm that finds all minimal k-cores of a given undirected graph, a problem that belongs to the class of NP-hard problems. The proposed method is a modification of the Bron-Kerbosch algorithm for finding all cliques of an undirected graph. The results on the tested graphs offer insight into analyzing graph structure and help better understand how concepts are stored.
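For background, the sketch below implements only the classical polynomial-time "peeling" computation of the (maximum) k-core: repeatedly delete vertices of degree less than k. The paper's harder task, enumerating all minimal k-cores, is NP-hard and requires a backtracking search (a Bron-Kerbosch-style modification); the routine here is just the natural building block and sanity check, written over a plain adjacency dictionary.

```python
# Standard peeling computation of the maximum k-core of an undirected graph.
def k_core(adj, k):
    """adj: dict mapping vertex -> set of neighbours.  Returns the vertex set of the k-core."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj[v]:                      # detach v from its remaining neighbours
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)

# Small example: a triangle with a pendant path attached.  The 2-core is the triangle.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(k_core(graph, 2))   # -> {1, 2, 3}
```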
{"title":"The Minimal k-Core Problem for Modeling k-Assemblies.","authors":"Cynthia I Wood, Illya V Hicks","doi":"10.1186/s13408-015-0027-4","DOIUrl":"https://doi.org/10.1186/s13408-015-0027-4","url":null,"abstract":"<p><p>The concept of cell assembly was introduced by Hebb and formalized mathematically by Palm in the framework of graph theory. In the study of associative memory, a cell assembly is a group of neurons that are strongly connected and represent a \"concept\" of our knowledge. This group is wired in a specific manner such that only a fraction of its neurons will excite the entire assembly. We link the concept of cell assembly to the closure of a minimal k-core and study a particular type of cell assembly called k-assembly. The goal of this paper is to find all substructures within a network that must be excited in order to activate a k-assembly. Through numerical experiments, we confirm that fractions of these important subgroups overlap. To explore the problem, we present a backtracking algorithm to find all minimal k-cores of a given undirected graph, which belongs to the class of NP-hard problems. The proposed method is a modification of the Bron and Kerbosch algorithm for finding all cliques of an undirected graph. The results in the tested graphs offer insight in analyzing graph structure and help better understand how concepts are stored. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0027-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33900856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Simple Mechanism for Beyond-Pairwise Correlations in Integrate-and-Fire Neurons.
Pub Date: 2015-12-01; Epub Date: 2015-09-01; DOI: 10.1186/s13408-015-0030-9
David A Leen, Eric Shea-Brown
The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. Our understanding of the circuit mechanisms that lead to correlations among cell pairs has grown, but little is known about what determines the population firing statistics among larger groups of cells. Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving at spiking neurons of integrate-and-fire type. We show that this leads to strong beyond-pairwise correlations, that is, correlations that cannot be captured by maximum entropy models that extrapolate from pairwise statistics, as found in earlier work with discrete threshold-crossing (dichotomous Gaussian) models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We demonstrate the strong connection between the collective dynamics produced by integrate-and-fire and dichotomous Gaussian models, and show that the latter is a surprisingly accurate model of the former. Our conclusion is that beyond-pairwise correlations can be both broadly expected and possible to describe by simplified (and tractable) statistical models.
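A minimal sketch of the dichotomous Gaussian construction referred to above, with a single shared Gaussian input and illustrative values for the correlation strength and threshold: each cell spikes in a bin when its Gaussian variable exceeds a threshold, and the resulting population spike-count distribution is far broader than the binomial obtained from independent cells with the same rate. The paper's stronger claim, that pairwise maximum-entropy models also miss this distribution, would require fitting such models, which this sketch does not do.

```python
# Dichotomous Gaussian population spike counts versus the independent (binomial) prediction.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
n, lam, gamma, trials = 50, 0.4, 1.0, 100_000    # cells, common-input fraction, threshold, bins

common = rng.standard_normal((trials, 1))        # shared fluctuating input
private = rng.standard_normal((trials, n))       # independent inputs
x = np.sqrt(lam) * common + np.sqrt(1.0 - lam) * private
spikes = x > gamma
counts = spikes.sum(axis=1)                      # population spike count per bin

p = spikes.mean()                                # single-cell spike probability
print("mean spike probability:", round(float(p), 4))
print("P(count = 0):    DG model", round(float(np.mean(counts == 0)), 4),
      " vs independent", round(float(binom.pmf(0, n, p)), 6))
print("P(count >= 25):  DG model", round(float(np.mean(counts >= 25)), 4),
      " vs independent", round(float(binom.sf(24, n, p)), 10))
```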
{"title":"A Simple Mechanism for Beyond-Pairwise Correlations in Integrate-and-Fire Neurons.","authors":"David A Leen, Eric Shea-Brown","doi":"10.1186/s13408-015-0030-9","DOIUrl":"10.1186/s13408-015-0030-9","url":null,"abstract":"<p><p>The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. We have developed an understanding of the circuit mechanisms that lead to correlations among cell pairs, but little is known about what determines the population firing statistics among larger groups of cells. Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving to spiking neurons of integrate-and-fire type. We show that this leads to strong beyond-pairwise correlations-that is, correlations that cannot be captured by maximum entropy models that extrapolate from pairwise statistics-as for earlier work with discrete threshold crossing (dichotomous Gaussian) models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We demonstrate the strong connection between the collective dynamics produced by integrate-and-fire and dichotomous Gaussian models, and show that the latter is a surprisingly accurate model of the former. Our conclusion is that beyond-pairwise correlations can be both broadly expected and possible to describe by simplified (and tractable) statistical models. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4554967/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33914697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}