The Dynamics of Networks of Identical Theta Neurons
Pub Date: 2018-02-05. DOI: 10.1186/s13408-018-0059-7
Carlo R Laing
We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe-Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally coupled oscillators. In addition to the degeneracy associated with the constants of motion of the WS ansatz, we find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
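The theta neuron obeys dθ/dt = 1 − cos θ + (1 + cos θ)(η + I), spiking as θ passes through π. For readers who want to experiment with the finite, instantaneously coupled case, here is a minimal simulation sketch; the pulse shape P(θ) ∝ (1 − cos θ)², the parameter values, and the Euler scheme are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def simulate_theta_network(N=100, eta=-0.2, k=1.5, T=100.0, dt=0.01, seed=0):
    """Euler integration of N identical, all-to-all, instantaneously
    pulse-coupled theta neurons:
        dtheta_i/dt = 1 - cos(theta_i) + (1 + cos(theta_i)) * (eta + k * I),
    with I = mean_j P(theta_j) and pulse P(theta) = (2/3) * (1 - cos theta)**2,
    normalized so its average over the circle equals 1 (illustrative choice)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)            # random initial phases
    drive = np.empty(int(T / dt))
    for step in range(drive.size):
        I = (2.0 / 3.0) * np.mean((1.0 - np.cos(theta)) ** 2)
        dtheta = 1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * (eta + k * I)
        theta = np.mod(theta + dt * dtheta + np.pi, 2.0 * np.pi) - np.pi
        drive[step] = I

    return theta, drive

phases, drive = simulate_theta_network()
print("synaptic drive at end of run:", drive[-1])
```

With η < 0 the uncoupled neurons are excitable rather than oscillatory, so varying the coupling strength k shows how network feedback can switch the population between quiescent and firing states.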
{"title":"The Dynamics of Networks of Identical Theta Neurons.","authors":"Carlo R Laing","doi":"10.1186/s13408-018-0059-7","DOIUrl":"https://doi.org/10.1186/s13408-018-0059-7","url":null,"abstract":"<p><p>We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2018-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-018-0059-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35797895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kernel Reconstruction for Delayed Neural Field Equations
Pub Date: 2018-02-05. DOI: 10.1186/s13408-018-0058-8
Jehan Alswaihli, Roland Potthast, Ingo Bojak, Douglas Saddy, Axel Hutt
Understanding the neural field activity for realistic living systems is a challenging task in contemporary neuroscience. Neural fields have been studied and developed theoretically and numerically with considerable success over the past four decades. However, to make effective use of such models, we need to identify their constituents in practical systems. This includes the determination of model parameters and in particular the reconstruction of the underlying effective connectivity in biological tissues. In this work, we provide an integral equation approach to the reconstruction of the neural connectivity in the case where the neural activity is governed by a delay neural field equation. As preparation, we study the solution of the direct problem based on the Banach fixed-point theorem. Then we reformulate the inverse problem into a family of integral equations of the first kind. This equation becomes vector valued when several neural activity trajectories are taken as input for the inverse problem. We employ spectral regularization techniques for its stable solution. A sensitivity analysis of the regularized kernel reconstruction with respect to the input signal u is carried out, investigating the Fréchet differentiability of the kernel with respect to the signal. Finally, we use numerical examples to show the feasibility of the approach for kernel reconstruction, including numerical sensitivity tests, which show that the integral equation approach is stable and promising for practical computational neuroscience.
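To make the first-kind integral equation and the spectral regularization step concrete, the sketch below discretizes a toy problem A w = u on a grid and solves it with a Tikhonov (SVD) filter; the Gaussian kernel, grid, noise level, and regularization parameter are illustrative assumptions, not the paper's delayed neural field setting.

```python
import numpy as np

def tikhonov_solve(A, u, alpha):
    """Solve the first-kind system A @ w = u with the spectral (Tikhonov)
    filter: each singular value s is damped by the factor s / (s**2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + alpha)) * (U.T @ u))

# Toy first-kind problem: smooth Gaussian kernel on [0, 1], 200 grid nodes.
n = 200
x = np.linspace(0.0, 1.0, n)
A = (x[1] - x[0]) * np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01)  # quadrature
w_true = np.sin(2.0 * np.pi * x)                      # "connectivity" to recover
u = A @ w_true + 1e-4 * np.random.default_rng(1).standard_normal(n)  # noisy data

w_rec = tikhonov_solve(A, u, alpha=1e-6)
print("relative error:", np.linalg.norm(w_rec - w_true) / np.linalg.norm(w_true))
```

Because the singular values of a smooth integral operator decay rapidly, an unregularized solve amplifies the measurement noise; the filter s/(s² + α) damps exactly those unstable components.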
{"title":"Kernel Reconstruction for Delayed Neural Field Equations.","authors":"Jehan Alswaihli, Roland Potthast, Ingo Bojak, Douglas Saddy, Axel Hutt","doi":"10.1186/s13408-018-0058-8","DOIUrl":"https://doi.org/10.1186/s13408-018-0058-8","url":null,"abstract":"<p><p>Understanding the neural field activity for realistic living systems is a challenging task in contemporary neuroscience. Neural fields have been studied and developed theoretically and numerically with considerable success over the past four decades. However, to make effective use of such models, we need to identify their constituents in practical systems. This includes the determination of model parameters and in particular the reconstruction of the underlying effective connectivity in biological tissues.In this work, we provide an integral equation approach to the reconstruction of the neural connectivity in the case where the neural activity is governed by a delay neural field equation. As preparation, we study the solution of the direct problem based on the Banach fixed-point theorem. Then we reformulate the inverse problem into a family of integral equations of the first kind. This equation will be vector valued when several neural activity trajectories are taken as input for the inverse problem. We employ spectral regularization techniques for its stable solution. A sensitivity analysis of the regularized kernel reconstruction with respect to the input signal u is carried out, investigating the Fréchet differentiability of the kernel with respect to the signal. Finally, we use numerical examples to show the feasibility of the approach for kernel reconstruction, including numerical sensitivity tests, which show that the integral equation approach is a very stable and promising approach for practical computational neuroscience.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2018-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-018-0058-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35794081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse Functional Identification of Complex Cells from Spike Times and the Decoding of Visual Stimuli
Pub Date: 2018-01-18. DOI: 10.1186/s13408-017-0057-1
Aurel A Lazar, Nikul H Ukani, Yiyin Zhou
We investigate the sparse functional identification of complex cells and the decoding of spatio-temporal visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the nonlinear input model, and the widely used spike-triggered covariance algorithm.
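The decoding step is posed as rank minimization. A standard convex surrogate is nuclear-norm minimization via singular value thresholding (SVT); the proximal gradient sketch below recovers a low-rank matrix from random linear measurements standing in for spike measurements. The matrix sizes, step size, and penalty are illustrative assumptions, and this is a generic SVT scheme, not the authors' algorithm.

```python
import numpy as np

def svt(Y, tau):
    """Singular-value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def recover_low_rank(M, y, shape, tau=0.1, step=0.5, iters=500):
    """Proximal gradient descent on 0.5 * ||M @ vec(X) - y||**2 + tau * ||X||_*,
    where M is a matrix of linear measurements acting on the vectorized X."""
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (M.T @ (M @ X.ravel() - y)).reshape(shape)   # data-fit gradient
        X = svt(X - step * grad, step * tau)                # nuclear-norm prox
    return X

rng = np.random.default_rng(0)
X_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
M = rng.standard_normal((150, 400)) / 25.0   # 150 random linear measurements
y = M @ X_true.ravel()
X_hat = recover_low_rank(M, y, X_true.shape)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

The point of the convex relaxation is sample efficiency: 150 measurements suffice here because a rank-2 20 x 20 matrix has far fewer degrees of freedom than its 400 entries.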
{"title":"Sparse Functional Identification of Complex Cells from Spike Times and the Decoding of Visual Stimuli.","authors":"Aurel A Lazar, Nikul H Ukani, Yiyin Zhou","doi":"10.1186/s13408-017-0057-1","DOIUrl":"https://doi.org/10.1186/s13408-017-0057-1","url":null,"abstract":"<p><p>We investigate the sparse functional identification of complex cells and the decoding of spatio-temporal visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the nonlinear input model, and the widely used spike-triggered covariance algorithm.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2018-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0057-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35750846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Exponential Memory in Hopfield Networks
Pub Date: 2018-01-16. DOI: 10.1186/s13408-017-0056-2
Christopher J Hillar, Ngoc M Tran
The Hopfield recurrent neural network is a classical auto-associative model of memory, in which collections of symmetrically coupled McCulloch-Pitts binary neurons interact to perform emergent computation. Although previous researchers have explored the potential of this network to solve combinatorial optimization problems or store recurring activity patterns as attractors of its deterministic dynamics, a basic open problem is to design a family of Hopfield networks with a number of noise-tolerant memories that grows exponentially with neural population size. Here, we discover such networks by minimizing probability flow, a recently proposed objective for estimating parameters in discrete maximum entropy models. By descending the gradient of the convex probability flow, our networks adapt synaptic weights to achieve robust exponential storage, even when presented with vanishingly small numbers of training patterns. In addition to providing a new set of low-density error-correcting codes that achieve Shannon's noisy channel bound, these networks also efficiently solve a variant of the hidden clique problem in computer science, opening new avenues for real-world applications of computational models originating from biology.
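For a Hopfield network of ±1 units with energy E(x) = −½ xᵀWx, the single-bit-flip minimum-probability-flow objective is K(W) = Σ_x Σ_i exp(−x_i (Wx)_i), and minimizing it pushes every training pattern toward being a strict local minimum of the energy. The sketch below trains weights by plain gradient descent on K; the pattern count, learning rate, and iteration budget are illustrative, and this is a simplified rendering of the probability-flow idea rather than the authors' exact procedure.

```python
import numpy as np

def mpf_gradient(W, X):
    """Gradient of K(W) = mean_x sum_i exp(-x_i * (W x)_i) over the +/-1
    patterns in the rows of X, for symmetric, zero-diagonal W."""
    A = np.exp(-X * (X @ W))        # flow terms a_i = exp(-x_i * s_i)
    G = -(A * X).T @ X              # dK/dW before symmetrization
    G = (G + G.T) / X.shape[0]
    np.fill_diagonal(G, 0.0)
    return G

def train_hopfield_mpf(X, lr=0.1, iters=300):
    """Gradient descent on the minimum-probability-flow objective, driving
    each training pattern toward a strict local minimum of the energy."""
    W = np.zeros((X.shape[1], X.shape[1]))
    for _ in range(iters):
        W -= lr * mpf_gradient(W, X)
    return W

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(5, 64))    # five random 64-bit patterns
W = train_hopfield_mpf(X)
# Every stored pattern should be a fixed point of the sign dynamics:
print(all(np.array_equal(np.sign(x @ W), x) for x in X))
```

The final check confirms that each stored pattern is a fixed point of the sign dynamics x_i ← sign((Wx)_i), i.e., a noise-tolerant memory in the sense used above.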
{"title":"Robust Exponential Memory in Hopfield Networks.","authors":"Christopher J Hillar, Ngoc M Tran","doi":"10.1186/s13408-017-0056-2","DOIUrl":"https://doi.org/10.1186/s13408-017-0056-2","url":null,"abstract":"<p><p>The Hopfield recurrent neural network is a classical auto-associative model of memory, in which collections of symmetrically coupled McCulloch-Pitts binary neurons interact to perform emergent computation. Although previous researchers have explored the potential of this network to solve combinatorial optimization problems or store reoccurring activity patterns as attractors of its deterministic dynamics, a basic open problem is to design a family of Hopfield networks with a number of noise-tolerant memories that grows exponentially with neural population size. Here, we discover such networks by minimizing probability flow, a recently proposed objective for estimating parameters in discrete maximum entropy models. By descending the gradient of the convex probability flow, our networks adapt synaptic weights to achieve robust exponential storage, even when presented with vanishingly small numbers of training patterns. In addition to providing a new set of low-density error-correcting codes that achieve Shannon's noisy channel bound, these networks also efficiently solve a variant of the hidden clique problem in computer science, opening new avenues for real-world applications of computational models originating from biology.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2018-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0056-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35743400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Rate-Reduced Neuron Model for Complex Spiking Behavior
Pub Date: 2017-12-11. DOI: 10.1186/s13408-017-0055-3
Koen Dijkstra, Yuri A Kuznetsov, Michel J A M van Putten, Stephan A van Gils
We present a simple rate-reduced neuron model that captures a wide range of complex, biologically plausible, and physiologically relevant spiking behavior. This includes spike-frequency adaptation, postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking. Furthermore, the model can mimic different neuronal filter properties. It can be used to extend existing neural field models, adding more biological realism and yielding a richer dynamical structure. The model is based on a slight variation of the Rulkov map.
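The standard chaotic Rulkov map is the natural reference point here: a fast spiking variable driven through its bifurcations by a slow, adaptation-like variable. The sketch below iterates that standard map with illustrative parameter values in a bursting regime; the paper's model is a variation of this map, so this is background rather than the authors' exact equations.

```python
import numpy as np

def rulkov(alpha=4.5, mu=0.001, sigma=-1.0, n_steps=20000):
    """Iterate the (chaotic) Rulkov map, a two-variable map-based neuron:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]   (fast, spike-generating)
        y[n+1] = y[n] - mu * (x[n] - sigma)     (slow, adaptation-like)
    Parameter values are illustrative choices in a bursting regime."""
    x = np.empty(n_steps); y = np.empty(n_steps)
    x[0], y[0] = -1.0, -3.0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - mu * (x[n] - sigma)
    return x, y

x, y = rulkov()
print("x range:", x.min(), x.max())   # bursts of spikes riding on a slow cycle
```

The separation of timescales (μ ≪ 1) is what lets a two-variable map mimic behaviors such as bursting and adaptation that ODE neuron models need several currents to produce.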
{"title":"A Rate-Reduced Neuron Model for Complex Spiking Behavior.","authors":"Koen Dijkstra, Yuri A Kuznetsov, Michel J A M van Putten, Stephan A van Gils","doi":"10.1186/s13408-017-0055-3","DOIUrl":"https://doi.org/10.1186/s13408-017-0055-3","url":null,"abstract":"<p><p>We present a simple rate-reduced neuron model that captures a wide range of complex, biologically plausible, and physiologically relevant spiking behavior. This includes spike-frequency adaptation, postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking. Furthermore, the model can mimic different neuronal filter properties. It can be used to extend existing neural field models, adding more biological realism and yielding a richer dynamical structure. The model is based on a slight variation of the Rulkov map.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0055-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35638349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite-Size Effects on Traveling Wave Solutions to Neural Field Equations
Pub Date: 2017-12-01 (Epub 2017-07-06). DOI: 10.1186/s13408-017-0048-2
Eva Lang, Wilhelm Stannat
Neural field equations are used to describe the spatio-temporal evolution of the activity in a network of synaptically coupled populations of neurons in the continuum limit. Their heuristic derivation involves two approximation steps. Under the assumption that each population in the network is large, the activity is described in terms of a population average. The discrete network is then approximated by a continuum. In this article we make the two approximation steps explicit. Extending a model by Bressloff and Newby, we describe the evolution of the activity in a discrete network of finite populations by a Markov chain. In order to determine finite-size effects (deviations from the mean-field limit due to the finite size of the populations in the network), we analyze the fluctuations of this Markov chain and set up an approximating system of diffusion processes. We show that a well-posed stochastic neural field equation with a noise term accounting for finite-size effects on traveling wave solutions is obtained as the strong continuum limit.
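A minimal way to see the Markov-chain picture and the scaling of finite-size fluctuations is a Gillespie simulation of a single population of N binary neurons; the sigmoidal gain and rates below are illustrative assumptions, not the Bressloff-Newby-type network analyzed in the paper.

```python
import numpy as np

def simulate_population(N=100, T=50.0, tau=1.0, seed=0):
    """Gillespie simulation of one finite population: k of N neurons active,
    activation of an inactive neuron at rate (N - k) * f(k / N), deactivation
    at rate k / tau. The sigmoidal gain f is an illustrative choice."""
    f = lambda a: 1.0 / (1.0 + np.exp(-8.0 * (a - 0.3)))
    rng = np.random.default_rng(seed)
    t, k = 0.0, 0
    samples = []
    while t < T:
        up, down = (N - k) * f(k / N), k / tau
        t += rng.exponential(1.0 / (up + down))     # time to the next event
        k += 1 if rng.random() < up / (up + down) else -1
        samples.append(k / N)
    return np.array(samples)

for N in (50, 500):
    a = simulate_population(N=N)
    print(f"N={N}: mean activity {a.mean():.3f}, std {a.std():.3f}")
```

The standard deviation of the activity shrinks roughly like 1/√N, which is precisely the finite-size fluctuation that the approximating diffusion processes are built to capture.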
{"title":"Finite-Size Effects on Traveling Wave Solutions to Neural Field Equations.","authors":"Eva Lang, Wilhelm Stannat","doi":"10.1186/s13408-017-0048-2","DOIUrl":"https://doi.org/10.1186/s13408-017-0048-2","url":null,"abstract":"<p><p>Neural field equations are used to describe the spatio-temporal evolution of the activity in a network of synaptically coupled populations of neurons in the continuum limit. Their heuristic derivation involves two approximation steps. Under the assumption that each population in the network is large, the activity is described in terms of a population average. The discrete network is then approximated by a continuum. In this article we make the two approximation steps explicit. Extending a model by Bressloff and Newby, we describe the evolution of the activity in a discrete network of finite populations by a Markov chain. In order to determine finite-size effects-deviations from the mean-field limit due to the finite size of the populations in the network-we analyze the fluctuations of this Markov chain and set up an approximating system of diffusion processes. We show that a well-posed stochastic neural field equation with a noise term accounting for finite-size effects on traveling wave solutions is obtained as the strong continuum limit.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0048-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35151582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast-Slow Bursters in the Unfolding of a High Codimension Singularity and the Ultra-slow Transitions of Classes
Pub Date: 2017-12-01 (Epub 2017-07-25). DOI: 10.1186/s13408-017-0050-8
Maria Luisa Saggio, Andreas Spiegler, Christophe Bernard, Viktor K Jirsa
Bursting is a phenomenon found in a variety of physical and biological systems. For example, in neuroscience, bursting is believed to play a key role in the way information is transferred in the nervous system. In this work, we propose a model that, appropriately tuned, can display several types of bursting behavior. The model contains two subsystems acting at different time scales. For the fast subsystem we use the planar unfolding of a high-codimension singularity. In its bifurcation diagram, we locate paths that underlie the sequence of bifurcations necessary for bursting. The slow subsystem steers the fast one back and forth along these paths, leading to bursting behavior. The model is able to produce almost all the classes of bursting predicted for systems with a planar fast subsystem. Transitions between classes can be obtained through an ultra-slow modulation of the model's parameters. A detailed exploration of the parameter space makes it possible to predict these transitions. This provides a single framework for understanding the coexistence of diverse bursting patterns in physical and biological systems and in models.
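The classic Hindmarsh-Rose model is a familiar concrete instance of the mechanism described here: a planar fast subsystem swept back and forth through its bifurcations by one slow variable. The sketch below simulates it with standard illustrative parameters; it is background for the fast-slow mechanism, not the authors' unfolding-based model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, I=2.5, r=0.005, s=4.0, x_r=-1.6):
    """Classic Hindmarsh-Rose burster: a planar fast subsystem (x, y) swept
    through its bifurcations by the slow variable z."""
    x, y, z = u
    dx = y - x**3 + 3.0 * x**2 - z + I     # fast voltage-like variable
    dy = 1.0 - 5.0 * x**2 - y              # fast recovery variable
    dz = r * (s * (x - x_r) - z)           # slow adaptation variable
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, -10.0, 2.0], max_step=0.5)
x = sol.y[0]
print("spikes above threshold:", int(np.sum((x[1:] > 1.0) & (x[:-1] <= 1.0))))
```

Tracking where the frozen-z fast subsystem gains and loses its limit cycle in the (z, x) bifurcation diagram is exactly the kind of path analysis the paper generalizes to the unfolding of a high-codimension singularity.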
{"title":"Fast-Slow Bursters in the Unfolding of a High Codimension Singularity and the Ultra-slow Transitions of Classes.","authors":"Maria Luisa Saggio, Andreas Spiegler, Christophe Bernard, Viktor K Jirsa","doi":"10.1186/s13408-017-0050-8","DOIUrl":"https://doi.org/10.1186/s13408-017-0050-8","url":null,"abstract":"<p><p>Bursting is a phenomenon found in a variety of physical and biological systems. For example, in neuroscience, bursting is believed to play a key role in the way information is transferred in the nervous system. In this work, we propose a model that, appropriately tuned, can display several types of bursting behaviors. The model contains two subsystems acting at different time scales. For the fast subsystem we use the planar unfolding of a high codimension singularity. In its bifurcation diagram, we locate paths that underlie the right sequence of bifurcations necessary for bursting. The slow subsystem steers the fast one back and forth along these paths leading to bursting behavior. The model is able to produce almost all the classes of bursting predicted for systems with a planar fast subsystem. Transitions between classes can be obtained through an ultra-slow modulation of the model's parameters. A detailed exploration of the parameter space allows predicting possible transitions. This provides a single framework to understand the coexistence of diverse bursting patterns in physical and biological systems or in models.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0050-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35200488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timescales and Mechanisms of Sigh-Like Bursting and Spiking in Models of Rhythmic Respiratory Neurons
Pub Date: 2017-12-01 (Epub 2017-06-06). DOI: 10.1186/s13408-017-0045-5
Yangyang Wang, Jonathan E Rubin
Neural networks generate a variety of rhythmic activity patterns, often involving different timescales. One example arises in the respiratory network in the pre-Bötzinger complex of the mammalian brainstem, which can generate the eupneic rhythm associated with normal respiration as well as recurrent low-frequency, large-amplitude bursts associated with sighing. Two competing hypotheses have been proposed to explain sigh generation: the recruitment of a neuronal population distinct from the eupneic rhythm-generating subpopulation or the reconfiguration of activity within a single population. Here, we consider two recent computational models, one representing each hypothesis. We use methods of dynamical systems theory, such as fast-slow decomposition, averaging, and bifurcation analysis, to understand the multiple-timescale mechanisms underlying sigh generation in each model. In the course of our analysis, we discover that a third timescale is required to generate sighs in both models. Furthermore, we identify the similarities of the underlying mechanisms in the two models and the aspects in which they differ.
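The averaging step mentioned above can be illustrated on the Hindmarsh-Rose burster from the previous sketch: freeze the slow variable, let the fast subsystem relax onto its attractor, and average the slow right-hand side over that attractor to obtain the slow drift. The parameters are the same illustrative ones as before, not those of the respiratory models analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def averaged_slow_rhs(z, I=2.5, r=0.005, s=4.0, x_r=-1.6):
    """Averaging step of fast-slow decomposition: with the slow variable z
    frozen, relax the fast (x, y) Hindmarsh-Rose subsystem onto its attractor
    and average the slow right-hand side r*(s*(x - x_r) - z) over it."""
    fast = lambda t, u: [u[1] - u[0] ** 3 + 3.0 * u[0] ** 2 - z + I,
                         1.0 - 5.0 * u[0] ** 2 - u[1]]
    t_eval = np.linspace(250.0, 500.0, 5000)      # uniform samples, post-transient
    sol = solve_ivp(fast, (0.0, 500.0), [-1.0, -5.0], t_eval=t_eval, max_step=0.1)
    return r * (s * (sol.y[0].mean() - x_r) - z)

for z in (1.0, 2.0, 3.0):
    print(f"z={z}: averaged dz/dt = {averaged_slow_rhs(z):+.5f}")
```

The sign of the averaged right-hand side tells which way the slow variable drifts during spiking, which is how averaging predicts burst initiation and termination.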
{"title":"Timescales and Mechanisms of Sigh-Like Bursting and Spiking in Models of Rhythmic Respiratory Neurons.","authors":"Yangyang Wang, Jonathan E Rubin","doi":"10.1186/s13408-017-0045-5","DOIUrl":"https://doi.org/10.1186/s13408-017-0045-5","url":null,"abstract":"<p><p>Neural networks generate a variety of rhythmic activity patterns, often involving different timescales. One example arises in the respiratory network in the pre-Bötzinger complex of the mammalian brainstem, which can generate the eupneic rhythm associated with normal respiration as well as recurrent low-frequency, large-amplitude bursts associated with sighing. Two competing hypotheses have been proposed to explain sigh generation: the recruitment of a neuronal population distinct from the eupneic rhythm-generating subpopulation or the reconfiguration of activity within a single population. Here, we consider two recent computational models, one of which represents each of the hypotheses. We use methods of dynamical systems theory, such as fast-slow decomposition, averaging, and bifurcation analysis, to understand the multiple-timescale mechanisms underlying sigh generation in each model. In the course of our analysis, we discover that a third timescale is required to generate sighs in both models. Furthermore, we identify the similarities of the underlying mechanisms in the two models and the aspects in which they differ.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0045-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35068732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Adaptation Makes Low Firing Rates Robust
Pub Date: 2017-12-01 (Epub 2017-06-24). DOI: 10.1186/s13408-017-0047-3
Arthur S Sherman, Joon Ha
Low-frequency firing is modeled by type 1 neurons with a saddle-node on invariant circle (SNIC) bifurcation, but, because of the vertical slope of the square-root-like f-I curve, low f occurs only over a narrow range of I. When an adaptive current is added, however, the f-I curve is linearized, and low f occurs robustly over a large range of I. Ermentrout (Neural Comput. 10(7):1721-1729, 1998) showed that this feature of adaptation paradoxically arises from the SNIC that is responsible for the vertical slope. We show, using a simplified Hindmarsh-Rose neuron with negative feedback acting directly on the adaptation current, that whereas a SNIC contributes to linearization, in practice linearization over a large interval may require strong adaptation. We also find that a type 2 neuron, whose threshold is generated by a Hopf bifurcation, can show linearization if adaptation is strong. Thus, a SNIC is not necessary. More fundamental than a SNIC is the stretching of the steep region near threshold, which stems from sufficiently strong adaptation, though a SNIC contributes if present. In a more realistic conductance-based model, Morris-Lecar, with negative feedback acting on the adaptation conductance, an additional assumption that the driving force of the adaptation current is independent of I is needed. If this holds, a strong adaptive conductance is both necessary and sufficient for linearization of type 2 f-I curves.
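The linearization effect is easy to reproduce numerically. The sketch below estimates the f-I curve of an adaptive quadratic integrate-and-fire neuron, a type 1 (SNIC) caricature related to the models discussed here; with spike-triggered adaptation the square-root-like curve flattens toward a linear one. All parameter values are illustrative assumptions.

```python
def firing_rate(I, b=0.0, tau_w=100.0, T=2000.0, dt=0.01):
    """Steady-state rate of an adaptive quadratic integrate-and-fire neuron:
        dv/dt = v**2 + I - w,   dw/dt = -w / tau_w,   w -> w + b at spikes.
    b = 0 gives a type 1 (SNIC) neuron with a square-root-like f-I curve;
    b > 0 adds spike-triggered adaptation. All values are illustrative."""
    v, w, spikes = -1.0, 0.0, 0
    t_settle = T / 2.0                       # discard transient before counting
    for step in range(int(T / dt)):
        v += dt * (v * v + I - w)
        w -= dt * w / tau_w
        if v >= 30.0:                        # spike: reset v, increment adaptation
            v, w = -30.0, w + b
            if step * dt >= t_settle:
                spikes += 1
    return spikes / (T - t_settle)

for I in (0.05, 0.2, 0.8):
    print(f"I={I}: f(b=0) = {firing_rate(I):.4f}   f(b=0.5) = {firing_rate(I, b=0.5):.4f}")
```

Comparing the two output columns shows the b = 0 rates rising steeply near threshold (roughly like √I) while the adapted rates grow nearly linearly in I, which is the robustness of low firing rates described above.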
{"title":"How Adaptation Makes Low Firing Rates Robust.","authors":"Arthur S Sherman, Joon Ha","doi":"10.1186/s13408-017-0047-3","DOIUrl":"https://doi.org/10.1186/s13408-017-0047-3","url":null,"abstract":"<p><p>Low frequency firing is modeled by Type 1 neurons with a SNIC, but, because of the vertical slope of the square-root-like f-I curve, low f only occurs over a narrow range of I. When an adaptive current is added, however, the f-I curve is linearized, and low f occurs robustly over a large I range. Ermentrout (Neural Comput. 10(7):1721-1729, 1998) showed that this feature of adaptation paradoxically arises from the SNIC that is responsible for the vertical slope. We show, using a simplified Hindmarsh-Rose neuron with negative feedback acting directly on the adaptation current, that whereas a SNIC contributes to linearization, in practice linearization over a large interval may require strong adaptation strength. We also find that a type 2 neuron with threshold generated by a Hopf bifurcation can also show linearization if adaptation strength is strong. Thus, a SNIC is not necessary. More fundamental than a SNIC is stretching the steep region near threshold, which stems from sufficiently strong adaptation, though a SNIC contributes if present. In a more realistic conductance-based model, Morris-Lecar, with negative feedback acting on the adaptation conductance, an additional assumption that the driving force of the adaptation current is independent of I is needed. If this holds, strong adaptive conductance is both necessary and sufficient for linearization of f-I curves of type 2 f-I curves.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0047-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35118764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regularization of Ill-Posed Point Neuron Models
Pub Date: 2017-12-01 (Epub 2017-07-14). DOI: 10.1186/s13408-017-0049-1
Bjørn Fredrik Nielsen
Point neuron models with a Heaviside firing rate function can be ill-posed. That is, the initial-condition-to-solution map might become discontinuous in finite time. If a Lipschitz continuous but steep firing rate function is employed, then standard ODE theory implies that such models are well-posed and can thus, approximately, be solved with finite precision arithmetic. We investigate whether the solution of this well-posed model converges to a solution of the ill-posed limit problem as the steepness parameter of the firing rate function tends to infinity. Our argument employs the Arzelà-Ascoli theorem and also yields the existence of a solution of the limit problem. However, we only obtain convergence of a subsequence of the regularized solutions. This is consistent with the fact that models with a Heaviside firing rate function can have several solutions, as we show. Our analysis assumes that the vector-valued limit function v, provided by the Arzelà-Ascoli theorem, is threshold simple: that is, the set of times at which one or more of the component functions of v equal the firing threshold has zero Lebesgue measure. If this assumption does not hold, we argue that the regularized solutions may not converge to a solution of the limit problem with a Heaviside firing rate function.
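The role of the steepness parameter can be seen in a scalar toy version of the model: replace the Heaviside firing rate by the steep sigmoid S_β(x) = 1/(1 + e^(−βx)) and integrate u' = −u + w S_β(u − θ) + I from an initial state near the threshold. This is a one-dimensional illustration under assumed parameters, not the paper's vector-valued system.

```python
from scipy.integrate import solve_ivp
from scipy.special import expit

def make_rhs(beta, w=1.0, theta=0.5, I=0.3):
    """Scalar point neuron u' = -u + w * S(u - theta) + I with steep sigmoid
    S(x) = expit(beta * x); beta -> infinity approaches the Heaviside case.
    A toy scalar illustration with assumed parameters."""
    return lambda t, u: [-u[0] + w * expit(beta * (u[0] - theta)) + I]

u0 = 0.45                                    # just below the firing threshold
for beta in (10.0, 100.0, 1000.0):
    sol = solve_ivp(make_rhs(beta), (0.0, 10.0), [u0], max_step=0.01)
    print(f"beta={beta}: u(10) = {sol.y[0, -1]:.4f}")
```

Starting just below threshold, moderate steepness lets the solution climb to the high state while very steep gains keep it subthreshold: the same initial condition leads to different attractors, which is the near-threshold sensitivity that makes the Heaviside limit delicate.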
{"title":"Regularization of Ill-Posed Point Neuron Models.","authors":"Bjørn Fredrik Nielsen","doi":"10.1186/s13408-017-0049-1","DOIUrl":"https://doi.org/10.1186/s13408-017-0049-1","url":null,"abstract":"<p><p>Point neuron models with a Heaviside firing rate function can be ill-posed. That is, the initial-condition-to-solution map might become discontinuous in finite time. If a Lipschitz continuous but steep firing rate function is employed, then standard ODE theory implies that such models are well-posed and can thus, approximately, be solved with finite precision arithmetic. We investigate whether the solution of this well-posed model converges to a solution of the ill-posed limit problem as the steepness parameter of the firing rate function tends to infinity. Our argument employs the Arzelà-Ascoli theorem and also yields the existence of a solution of the limit problem. However, we only obtain convergence of a subsequence of the regularized solutions. This is consistent with the fact that models with a Heaviside firing rate function can have several solutions, as we show. Our analysis assumes that the vector-valued limit function v, provided by the Arzelà-Ascoli theorem, is threshold simple: That is, the set containing the times when one or more of the component functions of v equal the threshold value for firing, has zero Lebesgue measure. If this assumption does not hold, we argue that the regularized solutions may not converge to a solution of the limit problem with a Heaviside firing function.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-017-0049-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35168391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}