Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706960
"Shaping synaptic learning by the duration of the postsynaptic action potential"
Youwei Zheng, L. Schwabe
Most previous studies have treated spikes as all-or-none events and considered their duration and magnitude negligible. Action potential (AP) duration varies across neuron types, but its consequences for synaptic plasticity remain largely unexplored. Here we study the effects of AP duration on spike-timing dependent synaptic plasticity (STDP) by negatively shifting the temporal window, potentiating synapses with presynaptic EPSPs occurring both before and during a postsynaptic AP. With this interpretation, we demonstrate that AP duration controls the shape of the weight distribution and the temporal fluctuations of the weights after the distribution reaches a steady state.
{"title":"Shaping synaptic learning by the duration of the postsynaptic action potential","authors":"Youwei Zheng, L. Schwabe","doi":"10.1109/IJCNN.2013.6706960","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706960","url":null,"abstract":"Most previous studies treated spikes as all-or-none events, and considered their duration and magnitude as negligible. Action potential (AP) duration varies across neuron types, but its consequences on synaptic plasticity remain largely unexplored. Here we study the effects of AP-duration on spike-timing dependent synaptic plasticity (STDP) by negatively shifting the temporal window, potentiating synapses with presy-naptic EPSPs occurring both before and during a postsynaptic AP. With this interpretation, we demonstrate that AP-duration controls the shape of weight distribution and the temporal fluctuations of the weights after the distribution reaches a steady-state.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129754629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706954
"A neural network based algorithm for gene expression prediction from chromatin structure"
M. Frasca, G. Pavesi
Gene expression is a complex process that is finely regulated and modulated at different levels. The first step of gene expression, the transcription of DNA into mRNA, is in turn regulated at both the genetic and the epigenetic level. In particular, the latter, which involves the structure formed by DNA wrapped around histones (chromatin), has recently been shown to be a key factor, with post-translational modifications of histones acting combinatorially to activate or block transcription. In this work we address the problem of predicting the expression level of genes starting from genome-wide maps of chromatin structure, that is, of the localization of several different histone modifications, which have recently become available through the introduction of technologies such as ChIP-Seq. We formalize the problem as a multi-class bipartite ranking problem, in which for each class a gene can be under- or over-expressed with respect to a given reference expression value. To deal with this problem, we exploit and extend a semi-supervised method (COSNet) based on a family of Hopfield neural networks. Benchmark genome-wide tests performed on six different human cell lines yielded satisfactory results, with clear improvements over the alternative approach most commonly adopted in the literature.
{"title":"A neural network based algorithm for gene expression prediction from chromatin structure","authors":"M. Frasca, G. Pavesi","doi":"10.1109/IJCNN.2013.6706954","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706954","url":null,"abstract":"Gene expression is a very complex process, which is finely regulated and modulated at different levels. The first step of gene expression, the transcription of DNA into mRNA, is in turn regulated both at the genetic and epigenetic level. In particular, the latter, which involves the structure formed by DNA wrapped around histones (chromatin), has been recently shown to be a key factor, with post-translational modifications of histones acting combinatorially to activate or block transcription. In this work we addressed the problem of predicting the level of expression of genes starting from genome-wide maps of chromatin structure, that is, of the localization of several different histone modifications, which have been recently made available through the introduction of technologies like ChIP-Seq. We formalized the problem as a multi-class bipartite ranking problem, in which for each class a gene can be under-or over-expressed with respect to a given reference expression value. In order to deal with this problem, we exploit and extend a semi-supervised method (COSNet) based on a family of Hopfield neural networks. Benchmark genome-wide tests performed on six different human cell lines yielded satisfactory results, with clear improvements over the alternative approach most commonly adopted in the literature.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128330378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706808
"A traffic sign detection pipeline based on interest region extraction"
Samuele Salti, A. Petrelli, Federico Tombari, Nicola Fioraio, L. D. Stefano
In this paper we present a pipeline for the automatic detection of traffic signs in images. The proposed system can deal with the high appearance variation that typically occurs in traffic sign recognition applications, especially strong illumination changes and dramatic scale changes. Unlike most existing systems, our pipeline is based on interest region extraction rather than a sliding-window detection scheme. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of Mandatory, Prohibitory and Danger traffic signs. Our proposal has been evaluated experimentally within the German Traffic Sign Detection Benchmark competition.
{"title":"A traffic sign detection pipeline based on interest region extraction","authors":"Samuele Salti, A. Petrelli, Federico Tombari, Nicola Fioraio, L. D. Stefano","doi":"10.1109/IJCNN.2013.6706808","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706808","url":null,"abstract":"In this paper we present a pipeline for automatic detection of traffic signs in images. The proposed system can deal with high appearance variations, which typically occur in traffic sign recognition applications, especially with strong illumination changes and dramatic scale changes. Unlike most existing systems, our pipeline is based on interest regions extraction rather than a sliding window detection scheme. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of Mandatory, Prohibitory and Danger traffic signs. Our proposal has been evaluated experimentally within the German Traffic Sign Detection Benchmark competition.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128371352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706784
"Robot coverage control by evolved neuromodulation"
K. Harrington, E. Awa, Sylvain Cussat-Blanc, J. Pollack
An important connection between evolution and learning was made over a century ago and is now termed the Baldwin effect: learning acts as a guide for the evolutionary search process. In this study, reinforcement learning agents are trained to solve the robot coverage control problem. These agents are improved by evolving neuromodulatory gene regulatory networks (GRNs) that influence the learning and memory of the agents. Agents trained by these neuromodulatory GRNs consistently generalize better than agents trained with fixed parameter settings. This work introduces evolutionary GRN models into the context of neuromodulation and illustrates some of the benefits that stem from neuromodulatory GRNs.
{"title":"Robot coverage control by evolved neuromodulation","authors":"K. Harrington, E. Awa, Sylvain Cussat-Blanc, J. Pollack","doi":"10.1109/IJCNN.2013.6706784","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706784","url":null,"abstract":"An important connection between evolution and learning was made over a century ago and is now termed as the Baldwin effect. Learning acts as a guide for an evolutionary search process. In this study reinforcement learning agents are trained to solve the robot coverage control problem. These agents are improved by evolving neuromodulatory gene regulatory networks (GRN) that influence the learning and memory of agents. Agents trained by these neuromodulatory GRNs can consistently generalize better than agents trained with fixed parameter settings. This work introduces evolutionary GRN models into the context of neuromodulation and illustrates some of the benefits that stem from neuromodulatory GRNs.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706900
"Dynamics of Hodgkin and Huxley model with conductance based synaptic input"
Priyanka Bajaj, A. Garg
The original Hodgkin and Huxley equations are landmark equations explaining the generation of the action potential in a biological neuron. However, most studies of the Hodgkin and Huxley model have used a constant injected current. Here we present an extended Hodgkin and Huxley model with conductance-based excitatory and inhibitory synaptic inputs. We find that the Hodgkin and Huxley model remains robust under all kinds of synaptic input. Moreover, the extended model is a closer approximation of a biological neuron.
{"title":"Dynamics of Hodgkin and Huxley model with conductance based synaptic input","authors":"Priyanka Bajaj, A. Garg","doi":"10.1109/IJCNN.2013.6706900","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706900","url":null,"abstract":"The original Hodgkin and Huxley equations are landmark equations explaining the generation of action potential in a biological neuron. Moreover, many studies have been done on the Hodgkin and Huxley model with constant injected current. Here we present an Extended Hodgkin and Huxley model with conductance based excitatory and inhibitory synaptic inputs. It is asserted that the Hodgkin and Huxley model remains robust with the all kinds of synaptic inputs. Moreover, this model is more tractable to a biological neuron.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129369663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6707132
"Multi-valued neuron with new learning schemes"
Shin-Fu Wu, Shie-Jue Lee
The multi-valued neuron (MVN) is an efficient technique for classification and regression. It is a neuron with complex-valued weights and inputs/output, and the output of its activation function moves along the unit circle in the complex plane. An MVN may therefore offer more functionality than sigmoidal or radial basis function neurons. In some cases, however, a pair of weighted sums oscillates between two sectors and the learning process can hardly converge. In addition, many weighted sums may lie near the borders of their sectors, which degrades classification accuracy. In this paper, we propose two modifications of the multi-valued neuron: one moves the sector boundaries, and the other places the learning targets at the centers of the sectors. Experimental results show that the proposed modifications improve the performance of MVN and help it converge more efficiently.
{"title":"Multi-valued neuron with new learning schemes","authors":"Shin-Fu Wu, Shie-Jue Lee","doi":"10.1109/IJCNN.2013.6707132","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707132","url":null,"abstract":"Multi-valued neuron (MVN) is an efficient technique for classification and regression. It is a neuron with complex-valued weights and inputs/output, and the output of the activation function is moving along the unit circle on the complex plane. Therefore, MVN may have more functionalities than sigmoidal or radial basis function neurons. In some cases, a pair of weighted sums would oscillate between two sectors and the learning process can hardly converge. Besides, many weighted sums may be located around the borders of each sector, which may cause bad performance in classification accuracy. In this paper, we propose two modifications of multivalued neuron. One is involved with moving boundaries and the other one with targets at the center of sectors. Experimental results show that the proposed modifications can improve the performance of MVN and help it to converge more efficiently.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124679497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6707083
"Towards predicting persistent activity of neurons by statistical and fractal dimension-based features"
P. Petrantonakis, Athanasia Papoutsi, Panayiota Poirazi
Persistent activity is the prolongation of neuronal firing that outlasts the presentation of a stimulus; it has been recorded during the execution of working memory tasks in several cortical regions. The emergence of persistent activity is stimulus-specific: not all inputs lead to persistent firing, only 'preferred' ones. However, the features of a stimulus, or of the stimulus-induced response, that determine whether it will ignite persistent activity remain unknown. In this paper, we propose various statistical and fractal dimension-based features, derived from the activity of a detailed biophysical prefrontal cortex microcircuit model, for the efficient classification of the upcoming persistent or non-persistent activity state. By introducing a novel majority voting classification framework, we achieve classification rates of up to 92.5%, suggesting that the selected features carry important predictive information that may be read out by the brain to identify 'preferred' vs. 'non-preferred' stimuli.
{"title":"Towards predicting persistent activity of neurons by statistical and fractal dimension-based features","authors":"P. Petrantonakis, Athanasia Papoutsi, Panayiota Poirazi","doi":"10.1109/IJCNN.2013.6707083","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707083","url":null,"abstract":"Persistent activity is the prolongation of neuronal firing that outlasts the presentation of a stimulus and has been recorded during the execution of working memory tasks in several cortical regions. The emergence of persistent activity is stimulus-specific: not all inputs lead to persistent firing, only `preferred' ones. However, the features of a stimulus or the stimulus-induced response that determine whether it will ignite persistent activity remain unknown. In this paper, we propose various statistical and fractal dimension-based features derived from the activity of a detailed biophysical Prefrontal Cortex microcircuit model, for the efficient classification of the upcoming Persistent or Non-Persistent-activity state. Moreover, by introducing a novel majority voting classification framework we manage to achieve classification rates up to 92.5%, suggesting that selected features carry important predictive information that may be read out by the brain in order to identify `preferred' vs. `no-preferred' stimuli.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114345205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6707079
"Development of an efficient parameter estimation method for the inference of Vohradský's neural network models of genetic networks"
Shuhei Kimura, Masanao Sato, Mariko Okada
Vohradský proposed a neural network model for describing biochemical networks, and several researchers have built genetic network inference methods on it. When analyzing large-scale genetic networks, however, these methods must solve high-dimensional function optimization problems. To resolve the high dimensionality in the estimation of the parameters of Vohradský's neural network model, this study proposes a new method that estimates the parameters by solving two-dimensional function optimization problems. Although these two-dimensional problems are non-linear, their low dimensionality makes the estimation of the model parameters easier. We confirm the effectiveness of the proposed method through numerical experiments.
{"title":"Development of an efficient parameter estimation method for the inference of Vohradský's neural network models of genetic networks","authors":"Shuhei Kimura, Masanao Sato, Mariko Okada","doi":"10.1109/IJCNN.2013.6707079","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707079","url":null,"abstract":"Vohradský has proposed a neural network model to describe biochemical networks. Based on this model, several researchers have proposed genetic network inference methods. When trying to analyze large-scale genetic networks, however, these methods must solve high-dimensional function optimization problems. In order to resolve the high-dimensionality in the estimation of the parameters of the Vohradský's neural network model, this study proposes a new method. The proposed method estimates the parameters of the neural network model by solving two-dimensional function optimization problems. Although these two-dimensional problems are non-linear, their low-dimensionality would make the estimation of the model parameters easier. Finally, we confirm the effectiveness of the proposed method through numerical experiments.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114732354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6706922
"Fast parking control of mobile robot based on multi-layer neural network on homogeneous architecture"
Hanen Chenini, J. Derutin, T. Tixier
Today, the problem of designing a multiprocessor architecture tailored to target neural network applications raises the need for a fast and efficient MP-SoC (multiprocessor system-on-chip) design environment. Moreover, implementations of such applications on multiprocessor designs need to exploit the parallelism and pipelining in the algorithms in order to deliver significant reductions in execution time. To take advantage of parallelization on a homogeneous multiprocessor architecture and to reduce the programming effort, we provide a new MP-SoC design methodology that offers more opportunities for accelerating the parallelization of neural network algorithms. The efficiency of this approach is tested on several example applications. This work is devoted to the design and implementation of a complete intelligent parking controller for an autonomous mobile robot, based on multi-layer feed-forward neural networks. To address the specific requirements of implementing such an algorithm, we propose a new parallel pipelined architecture composed of several computational stages, together with a parallel software skeleton, “SCComCM”, intended to be employed by the developed multistage architecture. The experimental results show that the proposed parallel architecture achieves better speed-up, lower communication time, and a better space reduction factor than a hand-tuned hardware design.
{"title":"Fast parking control of mobile robot based on multi-layer neural network on homogeneous architecture","authors":"Hanen Chenini, J. Derutin, T. Tixier","doi":"10.1109/IJCNN.2013.6706922","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706922","url":null,"abstract":"Today, the problem of designing suitable multiprocessor architecture tailored for a target Neural Networks applications raises the need for a fast and efficient MP-SOC (MultiProcessor System-on-Chip) design environment. Additionally, the implementation of such applications on multiprocessor designs will need to exploit the parallelism and pipelining in algorithms with the hope of delivering significant reduction in execution times. To take advantage of parallelization on homogeneous multiprocessor architecture and to reduce the programming effort, we provide new MP-SOC design methodology which offers more opportunities for accelerating the parallelization of Neural Networks algorithms. The efficiency of this approach is tested on many examples of applications. This work is devoted to the design and implementation of a complete intelligent controller parking system of autonomous mobile robot based on Multi-Layer Feed-Forward Neural Networks. To emphasize some specific requirements to be considered when implementing such algorithm, we propose new parallel pipelined architecture composed of several computational stages. Additionally, we especially suggest a parallel software skeleton “SCComCM” aimed at being employed by the developed multistage architecture. The experimental results show that the proposed parallel architecture has better speed-up, less communication time, and better space reduction factor than the hand tuned hardware design.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127682355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01, DOI: 10.1109/IJCNN.2013.6707077
"Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores"
A. Cassidy, P. Merolla, J. Arthur, Steven K. Esser, Bryan L. Jackson, Rodrigo Alvarez-Icaza, Pallab Datta, J. Sawada, T. Wong, V. Feldman, A. Amir, D. B. Rubin, Filipp Akopyan, E. McQuinn, W. Risk, D. Modha
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.
{"title":"Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores","authors":"A. Cassidy, P. Merolla, J. Arthur, Steven K. Esser, Bryan L. Jackson, Rodrigo Alvarez-Icaza, Pallab Datta, J. Sawada, T. Wong, V. Feldman, A. Amir, D. B. Rubin, Filipp Akopyan, E. McQuinn, W. Risk, D. Modha","doi":"10.1109/IJCNN.2013.6707077","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707077","url":null,"abstract":"Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121771735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}