Monochromaticity of orientation maps in V1 implies minimum variance for hypercolumn size
Pub Date: 2015-04-08 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0022-9
Alexandre Afgoustidis
In the primary visual cortex of many mammals, the processing of sensory information involves recognizing stimulus orientations. The spatial distribution of preferred orientations of neurons in some areas is remarkable: a repetitive but non-periodic layout. This repetitive pattern is understood to be fundamental for basic non-local aspects of vision, like the perception of contours, but important questions remain about its development and function. We focus here on Gaussian Random Fields, which provide a good description of the initial stage of orientation map development and, in spite of shortcomings we will recall, a computable framework for discussing general principles underlying the geometry of mature maps. We discuss the relationship between the notion of column spacing and the structure of correlation spectra; we prove formulas for the mean value and variance of column spacing, and we use numerical analysis of exact analytic formulas to study the variance. Referring to studies by Wolf, Geisel, Kaschube, Schnabel, and coworkers, we also show that spectral thinness is not an essential ingredient for obtaining a pinwheel density of π, whereas it appears as a signature of Euclidean symmetry. The minimum variance property associated with thin spectra could be useful for information processing, provide optimal modularity for V1 hypercolumns, and be a first step toward a mathematical definition of hypercolumns. A measurement of this property in real maps is in principle possible, and comparison with the results in our paper could help establish the role of our minimum variance hypothesis in the development process.
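The construction underlying this analysis is straightforward to simulate. Below is a minimal, illustrative Python sketch (not the author's code; all numerical values are arbitrary choices) of an invariant Gaussian random field with a perfectly thin, i.e. monochromatic, spectrum: a superposition of fixed-wavenumber plane waves with independent complex Gaussian amplitudes. The orientation map is half the argument of the field, and a crude zero-crossing count estimates the pinwheel density per square column spacing:

```python
import numpy as np

rng = np.random.default_rng(0)

# A complex Gaussian random field with a "thin" (monochromatic) spectrum:
# a sum of many plane waves of fixed wavenumber k0, with independent
# random directions and complex Gaussian amplitudes.
N_waves, k0, L, n = 200, 2 * np.pi, 4.0, 256          # illustrative values
theta = rng.uniform(0, 2 * np.pi, N_waves)            # wave directions
amps = (rng.normal(size=N_waves) + 1j * rng.normal(size=N_waves)) / np.sqrt(N_waves)

x = np.linspace(0, L, n)
X, Y = np.meshgrid(x, x)
z = np.zeros((n, n), dtype=complex)
for a, t in zip(amps, theta):
    z += a * np.exp(1j * k0 * (X * np.cos(t) + Y * np.sin(t)))

orientation = 0.5 * np.angle(z)   # preferred-orientation map in [-pi/2, pi/2)

# Pinwheels are the zeros of z. Crude estimate: count grid cells where the
# real part changes sign along one axis and the imaginary part along the
# other, then normalize by the column spacing Lambda = 2*pi/k0 squared
# (the expected density for monochromatic invariant fields is pi).
sign_flips = (np.diff(np.sign(z.real), axis=0)[:, :-1] != 0) & \
             (np.diff(np.sign(z.imag), axis=1)[:-1, :] != 0)
density = sign_flips.sum() * (2 * np.pi / k0) ** 2 / L ** 2
print(f"estimated pinwheel density per Lambda^2: {density:.2f}")
```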
{"title":"Monochromaticity of orientation maps in v1 implies minimum variance for hypercolumn size.","authors":"Alexandre Afgoustidis","doi":"10.1186/s13408-015-0022-9","DOIUrl":"https://doi.org/10.1186/s13408-015-0022-9","url":null,"abstract":"<p><p>In the primary visual cortex of many mammals, the processing of sensory information involves recognizing stimuli orientations. The repartition of preferred orientations of neurons in some areas is remarkable: a repetitive, non-periodic, layout. This repetitive pattern is understood to be fundamental for basic non-local aspects of vision, like the perception of contours, but important questions remain about its development and function. We focus here on Gaussian Random Fields, which provide a good description of the initial stage of orientation map development and, in spite of shortcomings we will recall, a computable framework for discussing general principles underlying the geometry of mature maps. We discuss the relationship between the notion of column spacing and the structure of correlation spectra; we prove formulas for the mean value and variance of column spacing, and we use numerical analysis of exact analytic formulae to study the variance. Referring to studies by Wolf, Geisel, Kaschube, Schnabel, and coworkers, we also show that spectral thinness is not an essential ingredient to obtain a pinwheel density of π, whereas it appears as a signature of Euclidean symmetry. The minimum variance property associated to thin spectra could be useful for information processing, provide optimal modularity for V1 hypercolumns, and be a first step toward a mathematical definition of hypercolumns. A measurement of this property in real maps is in principle possible, and comparison with the results in our paper could help establish the role of our minimum variance hypothesis in the development process. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0022-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33203638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noise-induced precursors of state transitions in the stochastic Wilson-Cowan model
Pub Date: 2015-04-08 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0021-x
Ehsan Negahbani, D Alistair Steyn-Ross, Moira L Steyn-Ross, Marcus T Wilson, Jamie W Sleigh
The Wilson-Cowan neural field equations describe the dynamical behavior of a 1-D continuum of excitatory and inhibitory cortical neural aggregates, using a pair of coupled integro-differential equations. Here we use bifurcation theory and small-noise linear stochastics to study the range of phase transitions (sudden qualitative changes in the state of a dynamical system emerging from a bifurcation) accessible to the Wilson-Cowan network. Specifically, we examine saddle-node, Hopf, Turing, and Turing-Hopf instabilities. We introduce stochasticity by adding small-amplitude spatio-temporal white noise, and analyze the resulting subthreshold fluctuations using an Ornstein-Uhlenbeck linearization. This analysis predicts divergent changes in the correlation and spectral characteristics of neural activity during close approach to a bifurcation from below. We validate these theoretical predictions using numerical simulations. The results demonstrate the role of noise in the emergence of critically slowed precursors in both space and time, and suggest that these early-warning signals are a universal feature of a neural system close to bifurcation. In particular, these precursor signals are likely to have neurobiological significance as early warnings of impending state change in the cortex. We support this claim with an analysis of in vitro local field potentials recorded from slices of mouse-brain tissue. We show that in the period leading up to the emergence of spontaneous seizure-like events, the mouse field potentials show a characteristic spectral focusing toward lower frequencies concomitant with a growth in fluctuation variance, consistent with critical slowing near a bifurcation point. This observation of biological criticality has clear implications regarding the feasibility of seizure prediction.
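The Ornstein-Uhlenbeck linearization step can be illustrated in a few lines. The sketch below uses a toy two-variable Jacobian as a stand-in for the linearized Wilson-Cowan equations (all numbers are illustrative, not from the paper): it solves the stationary Lyapunov equation for the fluctuation covariance and shows the variance and correlation time diverging as the control parameter mu approaches the bifurcation at mu = 1:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Near a fixed point, subthreshold fluctuations obey an Ornstein-Uhlenbeck
# process d(dx) = J dx dt + dW with noise covariance Q; the stationary
# covariance P solves the Lyapunov equation  J P + P J^T + Q = 0.
Q = 0.01 * np.eye(2)
for mu in [0.5, 0.9, 0.99, 0.999]:
    # toy Jacobian whose leading eigenvalue -(1 - mu) approaches 0 at mu = 1
    J = np.array([[-(1.0 - mu), 1.0],
                  [0.0,        -1.0]])
    P = solve_continuous_lyapunov(J, -Q)
    tau = -1.0 / np.max(np.linalg.eigvals(J).real)   # slowest decay time
    print(f"mu={mu:6.3f}  variance={P[0, 0]:8.3f}  correlation time={tau:8.1f}")
```

Both printed quantities grow without bound as mu approaches 1, which is the noise-induced precursor phenomenon the abstract describes.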
{"title":"Noise-induced precursors of state transitions in the stochastic Wilson-cowan model.","authors":"Ehsan Negahbani, D Alistair Steyn-Ross, Moira L Steyn-Ross, Marcus T Wilson, Jamie W Sleigh","doi":"10.1186/s13408-015-0021-x","DOIUrl":"https://doi.org/10.1186/s13408-015-0021-x","url":null,"abstract":"<p><p>The Wilson-Cowan neural field equations describe the dynamical behavior of a 1-D continuum of excitatory and inhibitory cortical neural aggregates, using a pair of coupled integro-differential equations. Here we use bifurcation theory and small-noise linear stochastics to study the range of a phase transitions-sudden qualitative changes in the state of a dynamical system emerging from a bifurcation-accessible to the Wilson-Cowan network. Specifically, we examine saddle-node, Hopf, Turing, and Turing-Hopf instabilities. We introduce stochasticity by adding small-amplitude spatio-temporal white noise, and analyze the resulting subthreshold fluctuations using an Ornstein-Uhlenbeck linearization. This analysis predicts divergent changes in correlation and spectral characteristics of neural activity during close approach to bifurcation from below. We validate these theoretical predictions using numerical simulations. The results demonstrate the role of noise in the emergence of critically slowed precursors in both space and time, and suggest that these early-warning signals are a universal feature of a neural system close to bifurcation. In particular, these precursor signals are likely to have neurobiological significance as early warnings of impending state change in the cortex. We support this claim with an analysis of the in vitro local field potentials recorded from slices of mouse-brain tissue. We show that in the period leading up to emergence of spontaneous seizure-like events, the mouse field potentials show a characteristic spectral focusing toward lower frequencies concomitant with a growth in fluctuation variance, consistent with critical slowing near a bifurcation point. This observation of biological criticality has clear implications regarding the feasibility of seizure prediction. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0021-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33203637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling focal epileptic activity in the Wilson-Cowan model with depolarization block
Pub Date: 2015-03-27 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0019-4
Hil G E Meijer, Tahra L Eissa, Bert Kiewiet, Jeremy F Neuman, Catherine A Schevon, Ronald G Emerson, Robert R Goodman, Guy M McKhann, Charles J Marcuccilli, Andrew K Tryba, Jack D Cowan, Stephan A van Gils, Wim van Drongelen
Measurements of neuronal signals during human seizure activity and evoked epileptic activity in experimental models suggest that, in these pathological states, individual nerve cells experience an activity-driven depolarization block, i.e. they saturate. We examined the effect of such a saturation in the Wilson-Cowan formalism by adapting the nonlinear activation function: we replaced the commonly applied sigmoid with a Gaussian function. We discuss experimental recordings during a seizure that support this substitution. Next we perform a bifurcation analysis of the Wilson-Cowan model with a Gaussian activation function. The main effect is an additional stable equilibrium with high excitatory and low inhibitory activity. Analysis of coupled local networks then shows that such high activity can either stay localized or spread. Specifically, in a spatial continuum we show a wavefront with inhibition leading, followed by excitatory activity. We relate our model simulations to observations of spreading activity during seizures.
Electronic supplementary material: The online version of this article (doi:10.1186/s13408-015-0019-4) contains supplementary material 1.
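A hedged sketch of the central substitution: the code below (illustrative parameters only, not those fitted in the paper) replaces the sigmoid of a two-population Wilson-Cowan model with a Gaussian activation and probes for coexisting equilibria with a root finder seeded on a grid. With suitable weights an extra high-E/low-I state of the kind analyzed in the paper can appear alongside the usual equilibria; stability would then be checked from the Jacobian at each root:

```python
import numpy as np
from scipy.optimize import fsolve

# Gaussian activation in place of the usual sigmoid: firing first grows
# with drive, then falls off again at strong drive (depolarization block).
def f(u, mu=3.0, sigma=1.5):
    return np.exp(-((u - mu) / sigma) ** 2)

def rhs(x, P=1.0, w=(2.8, 4.0, 15.0, 3.0)):
    E, I = x
    wEE, wEI, wIE, wII = w          # illustrative weights, not fitted values
    return [-E + f(wEE * E - wEI * I + P),
            -I + f(wIE * E - wII * I)]

# Probe for coexisting equilibria from a grid of seeds.
roots = set()
for E0 in np.linspace(0, 1, 6):
    for I0 in np.linspace(0, 1, 6):
        sol, info, ok, _ = fsolve(rhs, [E0, I0], full_output=True)
        if ok == 1:
            roots.add(tuple(np.round(sol, 3)))
print("equilibria found:", sorted(roots))
```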
{"title":"Modeling focal epileptic activity in the Wilson-cowan model with depolarization block.","authors":"Hil G E Meijer, Tahra L Eissa, Bert Kiewiet, Jeremy F Neuman, Catherine A Schevon, Ronald G Emerson, Robert R Goodman, Guy M McKhann, Charles J Marcuccilli, Andrew K Tryba, Jack D Cowan, Stephan A van Gils, Wim van Drongelen","doi":"10.1186/s13408-015-0019-4","DOIUrl":"https://doi.org/10.1186/s13408-015-0019-4","url":null,"abstract":"<p><strong>Unlabelled: </strong>Measurements of neuronal signals during human seizure activity and evoked epileptic activity in experimental models suggest that, in these pathological states, the individual nerve cells experience an activity driven depolarization block, i.e. they saturate. We examined the effect of such a saturation in the Wilson-Cowan formalism by adapting the nonlinear activation function; we substituted the commonly applied sigmoid for a Gaussian function. We discuss experimental recordings during a seizure that support this substitution. Next we perform a bifurcation analysis on the Wilson-Cowan model with a Gaussian activation function. The main effect is an additional stable equilibrium with high excitatory and low inhibitory activity. Analysis of coupled local networks then shows that such high activity can stay localized or spread. Specifically, in a spatial continuum we show a wavefront with inhibition leading followed by excitatory activity. We relate our model simulations to observations of spreading activity during seizures.</p><p><strong>Electronic supplementary material: </strong>The online version of this article (doi:10.1186/s13408-015-0019-4) contains supplementary material 1.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0019-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33073732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Path integral methods for stochastic differential equations
Pub Date: 2015-03-24 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0018-5
Carson C Chow, Michael A Buice
Stochastic differential equations (SDEs) have multiple applications in mathematical neuroscience and are notoriously difficult to solve analytically. Here, we give a self-contained pedagogical review of perturbative field-theoretic and path integral methods for calculating moments of the probability density function of SDEs. The methods can be extended to high-dimensional systems such as networks of coupled neurons, and even to deterministic systems with quenched disorder.
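To fix notation, the central object of such reviews can be written compactly. The display below is a sketch in one common sign convention (conventions differ between sources): for the Itô SDE dx = f(x) dt + g(x) dW, moments follow from functional derivatives of a generating functional written as a path integral over the state x and an auxiliary response field:

```latex
% Sketch, one common sign convention: for the Ito SDE
%   dx = f(x)\,dt + g(x)\,dW ,
% the moment generating functional is a path integral over the state x
% and a "response field" \tilde{x}:
\[
  Z[J] = \int \mathcal{D}x \,\mathcal{D}\tilde{x}\;
  \exp\!\Big( -\!\int dt \,\big[ \tilde{x}\,(\dot{x} - f(x))
        - \tfrac{1}{2}\,\tilde{x}^{2} g(x)^{2} \big]
        + \int dt\, J\,x \Big),
\]
\[
  \langle x(t_1)\cdots x(t_n)\rangle
  = \left.\frac{\delta^{n} Z[J]}{\delta J(t_1)\cdots \delta J(t_n)}\right|_{J=0},
\]
% with moments computed perturbatively by expanding the non-Gaussian
% parts of the action, i.e. as a sum of Feynman diagrams.
```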
{"title":"Path integral methods for stochastic differential equations.","authors":"Carson C Chow, Michael A Buice","doi":"10.1186/s13408-015-0018-5","DOIUrl":"https://doi.org/10.1186/s13408-015-0018-5","url":null,"abstract":"<p><p>Stochastic differential equations (SDEs) have multiple applications in mathematical neuroscience and are notoriously difficult. Here, we give a self-contained pedagogical review of perturbative field theoretic and path integral methods to calculate moments of the probability density function of SDEs. The methods can be extended to high dimensional systems such as networks of coupled neurons and even deterministic systems with quenched disorder. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0018-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33079377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A formalism for evaluating analytically the cross-correlation structure of a firing-rate network model
Pub Date: 2015-03-15 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0020-y
Diego Fasoli, Olivier Faugeras, Stefano Panzeri
We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of neural activity equations that include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies at any order among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations so obtained reveal previously unknown behaviors of recurrent firing-rate networks, especially on how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections, and by correlations in the sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity, but also on the external inputs. Moreover, we prove that in general it is not possible to find a mean-field description à la Sznitman of the network if the anatomical connections are too sparse or if our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent. Due to its ability to quantify how the activity of individual neurons and the correlations among them depend upon external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks.
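One of the abstract's claims, that strong external input can render neurons almost independent, is easy to probe numerically. The sketch below (a tanh rate function and a ring network as stand-ins; all parameters illustrative) simulates a small recurrent rate network with independent noise sources and compares the mean pairwise correlation at weak and strong input; saturation of the rate function suppresses the effective recurrent gain and, with it, the correlations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of a firing-rate network on a ring (a regular
# graph): each neuron excites its two neighbours, noise is independent.
N, dt, T = 20, 0.01, 20000
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.4

def mean_pairwise_correlation(I_ext, sigma=0.2):
    x = np.zeros(N)
    xs = np.empty((T, N))
    for t in range(T):
        drive = W @ np.tanh(x) + I_ext
        x = x + dt * (-x + drive) + sigma * np.sqrt(dt) * rng.normal(size=N)
        xs[t] = x
    c = np.corrcoef(xs[T // 2:].T)       # discard the transient
    return c[~np.eye(N, dtype=bool)].mean()

for I in [0.0, 5.0]:
    print(f"input {I:4.1f}: mean pairwise correlation {mean_pairwise_correlation(I):+.3f}")
```

At strong input the neurons sit on the saturated branch of tanh, so fluctuations barely propagate through the recurrent weights and the off-diagonal correlations collapse toward zero.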
{"title":"A formalism for evaluating analytically the cross-correlation structure of a firing-rate network model.","authors":"Diego Fasoli, Olivier Faugeras, Stefano Panzeri","doi":"10.1186/s13408-015-0020-y","DOIUrl":"https://doi.org/10.1186/s13408-015-0020-y","url":null,"abstract":"<p><p>We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of neural activity equations that include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies at any order among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations so obtained reveal previously unknown behaviors of recurrent firing-rate networks, especially on how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections and by correlation in sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity, but also on the external inputs. Moreover we prove that in general it is not possible to find a mean-field description à la Sznitman of the network, if the anatomical connections are too sparse or our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent. Due to its ability to quantify how activity of individual neurons and the correlation among them depends upon external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0020-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33073730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks
Pub Date: 2015-02-27 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-014-0016-z
Paul C Bressloff
We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant τ_a and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ε = τ_a/τ, which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small-noise limit ε → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ε. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a loop expansion of the path integral in powers of ε, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.
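The hybrid construction itself can be sketched for a single unit. In the code below the functional forms and numbers are invented for illustration, and the small-time-step Bernoulli switching is only an approximation to the exact jump process: a continuous synaptic variable u is driven by a discrete activity state n whose u-dependent transition rates are scaled by 1/ε:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal piecewise-deterministic sketch: u relaxes with time constant tau
# and is driven by n in {0, 1}; n switches at fast rates ~ 1/eps that
# depend on u. As eps -> 0, u approaches its deterministic mean-field limit.
eps, tau, dt, T = 0.05, 1.0, 1e-3, 200000

def on_rate(u):    # rate for n: 0 -> 1, increasing with synaptic drive
    return (1.0 + 2.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))) / eps

def off_rate(u):   # rate for n: 1 -> 0
    return 1.5 / eps

u, n, us = 0.0, 0, np.empty(T)
for t in range(T):
    rate = on_rate(u) if n == 0 else off_rate(u)
    if rng.random() < rate * dt:          # jump Markov switching (approximate)
        n = 1 - n
    u += dt * (-u + n) / tau              # piecewise-deterministic flow
    us[t] = u
print(f"mean u = {us.mean():.3f}, var u = {us.var():.2e} (variance is O(eps))")
```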
{"title":"Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.","authors":"Paul C Bressloff","doi":"10.1186/s13408-014-0016-z","DOIUrl":"10.1186/s13408-014-0016-z","url":null,"abstract":"<p><p>We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4385107/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33073727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Mathematical Model of a Midbrain Dopamine Neuron Identifies Two Slow Variables Likely Responsible for Bursts Evoked by SK Channel Antagonists and Terminated by Depolarization Block
Pub Date: 2015-02-27 | eCollection Date: 2015-01-01 | DOI: 10.1186/s13408-015-0017-6
Na Yu, Carmen C Canavier
Midbrain dopamine neurons exhibit a novel type of bursting that we call "inverted square wave bursting" when exposed to Ca²⁺-activated small-conductance (SK) K⁺ channel blockers in vitro. This type of bursting has three phases: hyperpolarized silence, spiking, and depolarization block. We find that two slow variables are required for this type of bursting, and we show that the three-dimensional bifurcation diagram for inverted square wave bursting is a folded surface with upper (depolarized) and lower (hyperpolarized) branches. The activation of the L-type Ca²⁺ channel largely supports the separation between these branches. Spiking is initiated at a saddle-node on an invariant circle bifurcation at the folded edge of the lower branch, and the trajectory spirals around the unstable fixed points on the upper branch. Spiking is terminated at a supercritical Hopf bifurcation, but the trajectory remains on the upper branch until it hits a saddle node on the upper folded edge and drops to the lower branch. The two slow variables contribute as follows. A second, slow component of sodium channel inactivation is largely responsible for the initiation and termination of spiking. The slow activation of the ether-à-go-go-related (ERG) K⁺ current is largely responsible for termination of the depolarized plateau. The mechanisms and slow processes identified herein may contribute to bursting, as well as to entry into and recovery from depolarization block, to different degrees in different subpopulations of dopamine neurons in vivo.
{"title":"A Mathematical Model of a Midbrain Dopamine Neuron Identifies Two Slow Variables Likely Responsible for Bursts Evoked by SK Channel Antagonists and Terminated by Depolarization Block.","authors":"Na Yu, Carmen C Canavier","doi":"10.1186/s13408-015-0017-6","DOIUrl":"https://doi.org/10.1186/s13408-015-0017-6","url":null,"abstract":"<p><p>Midbrain dopamine neurons exhibit a novel type of bursting that we call \"inverted square wave bursting\" when exposed to Ca(2+)-activated small conductance (SK) K(+) channel blockers in vitro. This type of bursting has three phases: hyperpolarized silence, spiking, and depolarization block. We find that two slow variables are required for this type of bursting, and we show that the three-dimensional bifurcation diagram for inverted square wave bursting is a folded surface with upper (depolarized) and lower (hyperpolarized) branches. The activation of the L-type Ca(2+) channel largely supports the separation between these branches. Spiking is initiated at a saddle node on an invariant circle bifurcation at the folded edge of the lower branch and the trajectory spirals around the unstable fixed points on the upper branch. Spiking is terminated at a supercritical Hopf bifurcation, but the trajectory remains on the upper branch until it hits a saddle node on the upper folded edge and drops to the lower branch. The two slow variables contribute as follows. A second, slow component of sodium channel inactivation is largely responsible for the initiation and termination of spiking. The slow activation of the ether-a-go-go-related (ERG) K(+) current is largely responsible for termination of the depolarized plateau. The mechanisms and slow processes identified herein may contribute to bursting as well as entry into and recovery from the depolarization block to different degrees in different subpopulations of dopamine neurons in vivo. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2015-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-015-0017-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33073728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate, not Perfect Synchrony Maximizes the Downstream Effectiveness of Excitatory Neuronal Ensembles
Pub Date: 2014-12-01 | Epub Date: 2014-04-25 | DOI: 10.1186/2190-8567-4-10
Christoph Börgers, Jie Li, Nancy Kopell
The most basic functional role commonly ascribed to synchrony in the brain is that of amplifying excitatory neuronal signals. The reasoning is straightforward: When positive charge is injected into a leaky target neuron over a time window of positive duration, some of it will have time to leak back out before an action potential is triggered in the target, and it will in that sense be wasted. If the goal is to elicit a firing response in the target using as little charge as possible, it seems best to deliver the charge all at once, i.e., in perfect synchrony. In this article, we show that this reasoning is correct only if one assumes that the input ceases when the target crosses the firing threshold, but before it actually fires. If the input ceases later (for instance, in response to a feedback signal triggered by the firing of the target), the "most economical" way of delivering input (the way that requires the least total amount of input) is no longer precisely synchronous, but merely approximately so. If the target is a heterogeneous network, as it always is in the brain, then ceasing the input "when the target crosses the firing threshold" is not an option, because there is no single moment when the firing threshold is crossed. In this sense, precise synchrony is never optimal in the brain.
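The charge bookkeeping behind this reasoning takes one line of algebra. For a leaky integrator with membrane time constant τ = RC and threshold θ, a constant current lasting exactly T that just reaches threshold delivers total charge Q(T) = θT / (R(1 − exp(−T/τ))), which grows monotonically with T. The sketch below (arbitrary units, not from the paper) tabulates this:

```python
import numpy as np

# Leaky integrator V' = -V/tau + I*R/tau, V(0) = 0, threshold theta.
# A constant current over [0, T] gives V(T) = I*R*(1 - exp(-T/tau)), so
# the minimal current reaching threshold exactly at T carries total charge
# Q(T) = I*T = theta*T / (R * (1 - exp(-T/tau))).
tau, R, theta = 10.0, 1.0, 1.0         # illustrative units
for T in [0.1, 1.0, 5.0, 10.0, 20.0]:
    Q = theta * T / (R * (1.0 - np.exp(-T / tau)))
    print(f"input window T={T:5.1f}: charge needed Q={Q:6.2f}")
# Q(T) increases with T: if input stops at the threshold crossing, the
# perfectly synchronous limit T -> 0 is cheapest. The paper's point is that
# this premise fails once the input outlasts the threshold crossing.
```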
{"title":"Approximate, not Perfect Synchrony Maximizes the Downstream Effectiveness of Excitatory Neuronal Ensembles.","authors":"Christoph Börgers, Jie Li, Nancy Kopell","doi":"10.1186/2190-8567-4-10","DOIUrl":"https://doi.org/10.1186/2190-8567-4-10","url":null,"abstract":"<p><p>The most basic functional role commonly ascribed to synchrony in the brain is that of amplifying excitatory neuronal signals. The reasoning is straightforward: When positive charge is injected into a leaky target neuron over a time window of positive duration, some of it will have time to leak back out before an action potential is triggered in the target, and it will in that sense be wasted. If the goal is to elicit a firing response in the target using as little charge as possible, it seems best to deliver the charge all at once, i.e., in perfect synchrony. In this article, we show that this reasoning is correct only if one assumes that the input ceases when the target crosses the firing threshold, but before it actually fires. If the input ceases later-for instance, in response to a feedback signal triggered by the firing of the target-the \"most economical\" way of delivering input (the way that requires the least total amount of input) is no longer precisely synchronous, but merely approximately so. If the target is a heterogeneous network, as it always is in the brain, then ceasing the input \"when the target crosses the firing threshold\" is not an option, because there is no single moment when the firing threshold is crossed. In this sense, precise synchrony is never optimal in the brain. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/2190-8567-4-10","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34492411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Numerical Bifurcation Theory for High-Dimensional Neural Models
Pub Date: 2014-12-01 | Epub Date: 2014-07-25 | DOI: 10.1186/2190-8567-4-13
Carlo R Laing
Numerical bifurcation theory involves finding and then following certain types of solutions of differential equations as parameters are varied, and determining whether they undergo any bifurcations (qualitative changes in behaviour). The primary technique for doing this is numerical continuation, where the solution of interest satisfies a parametrised set of algebraic equations, and branches of solutions are followed as the parameter is varied. An effective way to do this is with pseudo-arclength continuation. We give an introduction to pseudo-arclength continuation and then demonstrate its use in investigating the behaviour of a number of models from the field of computational neuroscience. The models we consider are high dimensional, as they result from the discretisation of neural field models (nonlocal differential equations used to model macroscopic pattern formation in the cortex). We consider both stationary and moving patterns in one spatial dimension, and then translating patterns in two spatial dimensions. A variety of results from the literature are discussed, and a number of extensions of the technique are given.
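Pseudo-arclength continuation is simple enough to show in full on a scalar toy problem. The sketch below is not from the paper; the test function f(x, lam) = lam − x², with its fold at lam = 0, is a standard example. It uses a normalized tangent predictor and a Newton corrector on the system augmented with the arclength constraint, and follows the branch straight around the fold where naive continuation in lam would fail:

```python
import numpy as np

# Pseudo-arclength continuation of f(x, lam) = 0 for f = lam - x**2.
def f(x, lam):    return lam - x ** 2
def fx(x, lam):   return -2.0 * x      # df/dx
def flam(x, lam): return 1.0           # df/dlam

ds = 0.1
x, lam = 2.0, 4.0                      # start on the branch x = sqrt(lam)
t = np.array([1.0, 2 * x])             # tangent: fx*dx + flam*dlam = 0
t = -t / np.linalg.norm(t)             # normalized, heading toward the fold
for step in range(80):
    xp, lamp = x + ds * t[0], lam + ds * t[1]      # predictor
    for _ in range(20):                             # Newton corrector on the
        F = np.array([f(xp, lamp),                  # augmented system:
                      t[0] * (xp - (x + ds * t[0])) # f = 0 and the arclength
                      + t[1] * (lamp - (lam + ds * t[1]))])  # constraint
        J = np.array([[fx(xp, lamp), flam(xp, lamp)],
                      [t[0],         t[1]]])
        dxl = np.linalg.solve(J, -F)
        xp, lamp = xp + dxl[0], lamp + dxl[1]
        if np.linalg.norm(F) < 1e-12:
            break
    # new unit tangent, oriented so the march continues forward
    tnew = np.linalg.solve(J, np.array([0.0, 1.0]))
    t = tnew / np.linalg.norm(tnew)
    x, lam = xp, lamp
print(f"after continuation: x={x:+.3f}, lam={lam:.3f} (branch followed around the fold)")
```

The augmented Jacobian stays nonsingular at the fold (where fx vanishes) because of the tangent row, which is exactly why the method passes through points where continuation in the parameter alone breaks down.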
{"title":"Numerical Bifurcation Theory for High-Dimensional Neural Models.","authors":"Carlo R Laing","doi":"10.1186/2190-8567-4-13","DOIUrl":"https://doi.org/10.1186/2190-8567-4-13","url":null,"abstract":"<p><p>Numerical bifurcation theory involves finding and then following certain types of solutions of differential equations as parameters are varied, and determining whether they undergo any bifurcations (qualitative changes in behaviour). The primary technique for doing this is numerical continuation, where the solution of interest satisfies a parametrised set of algebraic equations, and branches of solutions are followed as the parameter is varied. An effective way to do this is with pseudo-arclength continuation. We give an introduction to pseudo-arclength continuation and then demonstrate its use in investigating the behaviour of a number of models from the field of computational neuroscience. The models we consider are high dimensional, as they result from the discretisation of neural field models-nonlocal differential equations used to model macroscopic pattern formation in the cortex. We consider both stationary and moving patterns in one spatial dimension, and then translating patterns in two spatial dimensions. A variety of results from the literature are discussed, and a number of extensions of the technique are given. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/2190-8567-4-13","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34505122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptation and fatigue model for neuron networks and large time asymptotics in a nonlinear fragmentation equation
Pub Date: 2014-07-24 | DOI: 10.1186/2190-8567-4-14
Khashayar Pakdaman, Benoît Perthame, Delphine Salort
Motivated by a model for neural networks with adaptation and fatigue, we study a conservative fragmentation equation that describes the probability density of neurons with an elapsed time s since their last discharge. In the linear setting, we extend an argument by Laurençot and Perthame to prove exponential decay to the steady state. This extension allows us to handle coefficients that have a large variation rather than constant coefficients. In another extension of the argument, we treat a weakly nonlinear case and prove total desynchronization in the network. For stronger nonlinearities, we present a numerical study of the impact of the fragmentation term on the appearance of synchronization of neurons in the network using two "extreme" cases. Mathematics Subject Classification (2010): 35B40, 35F20, 35R09, 92B20.
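A hedged numerical illustration of the linear setting (this is not the paper's exact equation; the hazard rate and the initial profile are invented for illustration): an upwind discretization of an elapsed-time transport equation in which neurons fire with rate p(s) and re-enter at s = 0, starting from a sharply synchronized profile that relaxes toward the steady state:

```python
import numpy as np

# Elapsed-time population model: density n(t, s) is transported to larger
# s at unit speed, neurons fire with hazard p(s), and the discharge flux
# re-enters at s = 0, conserving total mass (up to outflow at s = S).
ds, dt, S, steps = 0.01, 0.005, 5.0, 4001
s = np.arange(0.0, S, ds)
p = 2.0 / (1.0 + np.exp(-8.0 * (s - 1.0)))      # refractory-like hazard rate

n = np.exp(-0.5 * ((s - 0.5) / 0.05) ** 2)      # sharply synchronized start
n /= n.sum() * ds
for k in range(steps):
    if k % 1000 == 0:
        print(f"t={k * dt:5.1f}  peak density={n.max():.3f}")
    N = (p * n).sum() * ds                       # instantaneous discharge flux
    n = n - (dt / ds) * (n - np.roll(n, 1)) - dt * p * n   # upwind transport
    n[0] = N                                     # boundary: re-entry at s = 0
# The initial peak spreads and the profile relaxes toward the steady state,
# the linear analogue of the exponential decay proven in the paper.
```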
{"title":"Adaptation and fatigue model for neuron networks and large time asymptotics in a nonlinear fragmentation equation.","authors":"Khashayar Pakdaman, Benoît Perthame, Delphine Salort","doi":"10.1186/2190-8567-4-14","DOIUrl":"https://doi.org/10.1186/2190-8567-4-14","url":null,"abstract":"<p><p>Motivated by a model for neural networks with adaptation and fatigue, we study a conservative fragmentation equation that describes the density probability of neurons with an elapsed time s after its last discharge. In the linear setting, we extend an argument by Laurençot and Perthame to prove exponential decay to the steady state. This extension allows us to handle coefficients that have a large variation rather than constant coefficients. In another extension of the argument, we treat a weakly nonlinear case and prove total desynchronization in the network. For greater nonlinearities, we present a numerical study of the impact of the fragmentation term on the appearance of synchronization of neurons in the network using two \"extreme\" cases. Mathematics Subject Classification (2000)2010: 35B40, 35F20, 35R09, 92B20. </p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2014-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/2190-8567-4-14","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32578327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}