The supervised learning rules of the pulsed neuron model-learning of the connection weights and the delay times
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843953
S. Kuroyanagi, A. Iwata
We propose supervised learning rules for the pulsed neuron model that configure the parameters of the neuron model automatically. We show that the pulsed neuron model with these learning rules can learn two different features: pulse frequencies and time differences. Simulation results show that the learning rules can extract both features by adjusting the time constant τ of the local membrane potential's decay.
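As an illustration of how τ trades off the two features, here is a minimal leaky integrate-and-fire sketch in Python; the function and parameter names are our own assumptions, not the authors' model.

```python
import numpy as np

def simulate_pulsed_neuron(spike_trains, weights, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky local membrane potential: each input pulse adds its weight,
    and the potential decays exponentially with time constant tau."""
    n_steps = spike_trains.shape[1]
    p = 0.0
    out = np.zeros(n_steps)
    for t in range(n_steps):
        p *= np.exp(-dt / tau)               # decay of the local membrane potential
        p += weights @ spike_trains[:, t]    # weighted pulse inputs at this step
        if p >= threshold:                   # emit an output pulse and reset
            out[t] = 1.0
            p = 0.0
    return out

# Small tau: the potential decays quickly, so only near-coincident pulses
# (time differences) can reach threshold. Large tau: the potential integrates
# over a long window, so the pulse frequency dominates.
```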
{"title":"The supervised learning rules of the pulsed neuron model-learning of the connection weights and the delay times","authors":"S. Kuroyanagi, A. Iwata","doi":"10.1109/ICONIP.1999.843953","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.843953","url":null,"abstract":"We propose supervised learning rules for the pulsed neuron model to configure the parameters of the neuron models automatically. We show that the pulsed neuron model with the learning rules can learn two different features which are the pulse frequencies and the time differences. As the results of the simulation, the learning rules can extract both features by the adjustment of the time constant of the local membrane potential's decay /spl tau/.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124109187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning and recall of temporal sequences in the network of CA3 pyramidal cells and a basket cell
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843954
S. Inawashiro, S. Miyake
A new recurrent network model of pyramidal cells and a basket cell in field CA3 of the hippocampus is proposed. We assume that temporal sequences are processed in the CA3 network and that bursts are used as elements of the temporal sequences, in synchronization with the theta rhythm. Besides ordinary synaptic connections between the pyramidal cells, delayed connections are assumed to link the consecutive elements of temporal sequences. In learning mode, LTP of these connections is induced by burst inputs at the theta rhythm. In recall mode, the cooperation of a cue input, excitatory feedback, inhibitory feedback via the basket cell, and delayed excitatory feedback leads to successful recall of the learned temporal sequence. The memory capacity of the network depends strongly on the number of firing sites in the spatial patterns.
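A toy sketch of the delayed-connection idea (our simplification; the burst and theta-rhythm dynamics, and the basket-cell inhibition, are omitted): consecutive binary patterns are linked by an LTP-like outer-product rule, and delayed feedback drives recall of the next element.

```python
import numpy as np

def store_sequence(patterns):
    """Store a sequence of binary spatial patterns (rows of `patterns`)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))     # ordinary recurrent connections (auto-association)
    W_d = np.zeros((n, n))   # delayed connections between consecutive elements
    for t in range(len(patterns)):
        W += np.outer(patterns[t], patterns[t])
        if t + 1 < len(patterns):
            W_d += np.outer(patterns[t + 1], patterns[t])  # LTP-like link x(t) -> x(t+1)
    return W, W_d

def recall(cue, W_d, steps, theta=0.5):
    """Drive recall with the delayed connections alone."""
    x = cue.astype(float)
    seq = [x]
    for _ in range(steps):
        h = W_d @ x
        if h.max() > 0:
            x = (h >= theta * h.max()).astype(float)  # delayed feedback selects the next element
        seq.append(x)
    return seq
```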
{"title":"Learning and recall of temporal sequences in the network of CA3 pyramidal cells and a basket cell","authors":"S. Inawashiro, S. Miyake","doi":"10.1109/ICONIP.1999.843954","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.843954","url":null,"abstract":"A new recurrent network model of pyramidal cells and a basket cell in Field CA3 of the hippocampus is proposed. We assume that temporal sequences are processed in the CA3 network, and bursts are used as elements of the temporal sequences in synchronization with the theta rhythm. Besides ordinary synaptic connections between the pyramidal cells, delayed connections are assumed to connect the consecutive elements of temporal sequences. In learning mode, LTP of these connections are caused by burst inputs of theta rhythm. In recalling mode, the cooperative of a cue input, excitatory feedback, inhibitory feedback via the basket cell, and delayed excitatory feedback leads to the successful recall of the learned temporal sequence. The memory capacity of the network strongly depends on the number of firing sites in the spatial patterns.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114307358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A neural network model of pair-association memory in the inferotemporal cortex
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.845696
A. Suemitsu, M. Morita
Neurons related to pair-association memory have been found in the inferotemporal cortex of monkeys, but their activities do not accord with existing neural network models. The article describes a neural network model consisting of excitatory-inhibitory cell pairs, which recalls paired patterns based on a gradual shift of the network state. It is demonstrated by computer simulations that this model agrees well with the observed neuronal activities.
{"title":"A neural network model of pair-association memory in the inferotemporal cortex","authors":"A. Suemitsu, M. Morita","doi":"10.1109/ICONIP.1999.845696","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.845696","url":null,"abstract":"Neurons related to pair-association memory have been found in the inferotemporal cortex of monkeys, but their activities do not accord with existing neural network models. The article describes a neural network model consisting of excitatory-inhibitory cell pairs, which recalls paired patterns based on a gradual shift of the network state. It is demonstrated by computer simulations that this model agrees well with the observed neuronal activities.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116504073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximating discrete mapping of chaotic dynamical system based on on-line EM algorithm
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844674
W. Yoshida, S. Ishii, M. Sato
We discuss the reconstruction of chaotic dynamics using a normalized Gaussian network (NGnet). The NGnet is trained by an on-line expectation-maximization (EM) algorithm to learn the discrete mapping of the chaotic dynamics. We also investigate the robustness of the approach to two kinds of noise process: system noise and observation noise. A trained NGnet is able to reproduce a chaotic attractor even under various noise conditions, and it also shows good prediction performance. When only some of the dynamical variables are observed, the NGnet is trained to learn the discrete mapping in the delay-coordinate space. We show that the chaotic dynamics can be learned with this method under both kinds of noise.
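The forward pass of an NGnet with local linear models, plus the delay-coordinate embedding used when only some variables are observed, might look like the following sketch. Array shapes and names are our assumptions, and the on-line EM updates are omitted.

```python
import numpy as np

def delay_embed(series, dim, lag):
    """Delay-coordinate vectors [s(t), s(t+lag), ..., s(t+(dim-1)*lag)] as rows."""
    n = len(series) - (dim - 1) * lag
    return np.stack([series[i * lag : i * lag + n] for i in range(dim)], axis=1)

def ngnet_predict(x, centers, widths, W, b):
    """NGnet output: normalized Gaussian gating of per-unit local linear models.
    centers, widths: (units, dim_in); W: (units, dim_out, dim_in); b: (units, dim_out)."""
    d2 = (((x - centers) / widths) ** 2).sum(axis=1)
    g = np.exp(-0.5 * d2)
    p = g / g.sum()                            # normalization across units
    local = np.einsum('uoi,i->uo', W, x) + b   # each unit's local linear prediction
    return (p[:, None] * local).sum(axis=0)    # gated mixture of the local models
```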
{"title":"Approximating discrete mapping of chaotic dynamical system based on on-line EM algorithm","authors":"W. Yoshida, S. Ishii, M. Sato","doi":"10.1109/ICONIP.1999.844674","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.844674","url":null,"abstract":"Discusses the reconstruction of chaotic dynamics by using a normalized Gaussian network (NGnet). The NGnet is trained by an online expectation maximization (EM) algorithm in order to learn the discrete mapping of the chaotic dynamics. We also investigate the robustness of our approach to two kinds of noise processes: system noise and observation noise. It is shown that a trained NGnet is able to reproduce a chaotic attractor, even under various noise conditions. The trained NGnet also shows good prediction performance. When only part of the dynamical variables are observed, the NGnet is trained to learn the discrete mapping in the delay coordinate space. It is shown that the chaotic dynamics is able to be learned with this method under the two kinds of noise.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126482119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of FE-based neural networks to dynamic problems
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844679
Guohe, Guy Littlejair, R. Penson, Callan
Firstly, finite-element-based neural networks (FE-based NNs) are introduced with a computational energy function and a variational formulation. Then, in order to apply FE-based NNs to dynamic problems, the value ranges of three main parameters in the parallel algorithm of an FE-based NN are discussed: the descent rate, the size of the time step, and the weights of the derivative of the unknown variable. The Taguchi method is adopted in the numerical simulation, and the main results of the simulation are presented.
{"title":"Application of FE-based neural networks to dynamic problems","authors":"Guohe, Guy Littlejair, R. Penson, Callan","doi":"10.1109/ICONIP.1999.844679","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.844679","url":null,"abstract":"Firstly, finite element based neural networks (FE-based NN) are introduced with a computational energy function and a variational formulation. Then, in order to apply FE based NN to dynamic problems, the ranges of value of three main parameters in the parallel algorithm of a FE-based NN are discussed. The parameters are: descent rate, size of the time step, and weights of the derivative of the unknown variable respectively. The Taguchi method is adopted in the numeric simulation. The main results of the simulation are presented.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125681882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-organization of complex-like cells
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843997
K. Fukushima, K. Yoshimoto
We propose a new learning rule by which cells with shift-invariant receptive fields are self-organized. With this learning rule, cells similar to the simple and complex cells of the primary visual cortex are generated in a network. To demonstrate the rule, we simulate a three-layered network that consists of an input layer (the retina), a layer of S-cells (simple cells) and a layer of C-cells (complex cells). During learning, straight lines of various orientations sweep across the input layer. Both S- and C-cells are created through competition, but whereas S-cells compete on their instantaneous outputs, C-cells compete on the traces (temporal averages) of their outputs. For the self-organization of S-cells, only winner S-cells increase their input connections, in a similar way to the neocognitron; in other words, LTP (long-term potentiation) is induced in the input connections of the winner cells. For the self-organization of C-cells, however, loser C-cells decrease their input connections (LTD, long-term depression), while winners increase theirs (LTP). Both S- and C-cells are accompanied by inhibitory cells, and modification of the inhibitory connections together with the excitatory ones is important for the creation of C-cells as well as S-cells.
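A minimal sketch of the trace-based competition for C-cells; the parameter names, the single weight matrix, and the clipping step are our assumptions rather than the paper's equations.

```python
import numpy as np

def c_cell_update(x, W, trace, decay=0.9, lr=0.05, ltd=0.005):
    """One competitive step for C-cells: the winner is chosen by the trace
    (temporal average) of the outputs, not the instantaneous output."""
    y = W @ x
    trace[:] = decay * trace + (1.0 - decay) * y   # running temporal average of outputs
    winner = int(np.argmax(trace))
    W[winner] += lr * x                            # LTP for the winner's input connections
    losers = np.arange(W.shape[0]) != winner
    W[losers] -= ltd * x                           # LTD for the losers
    np.clip(W, 0.0, None, out=W)                   # keep excitatory weights non-negative
    return winner
```

Replacing `np.argmax(trace)` with `np.argmax(y)` gives the instantaneous competition used for S-cells; the trace is what lets C-cells pool over a line sweeping across the retina.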
{"title":"Self-organization of complex-like cells","authors":"K. Fukushima, K. Yoshimoto","doi":"10.1109/ICONIP.1999.843997","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.843997","url":null,"abstract":"Proposes a new learning rule by which cells with shift-invariant receptive fields are self-organized. With this learning rule, cells similar to simple and complex cells in the primary visual cortex are generated in a network. To demonstrate the new learning rule, we simulate a three-layered network that consists of an input layer (the retina), a layer of S-cells (simple cells) and a layer of C-cells (complex cells). During the learning, straight lines of various orientations sweep across the input layer. Both S- and C-cells are created through competition. Although S-cells compete depending on their instantaneous outputs, C-cells compete depending on the traces (or temporal averages) of their outputs. For the self-organization of S-cells, only winner S-cells increase their input connections in a similar way to that for the neocognitron. In other words, LTP (long-term potentiation) is induced in the input connections of the winner cells. For the self-organization of C-cells, however, loser C-cells decrease their input connections (LTD=long-term depression), while winners increase their input connections (LTP). Both S- and C-cells are accompanied by inhibitory cells. Modification of inhibitory connections together with excitatory connections is important for the creation of C-cells as well as S-cells.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129973747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diversifying exploration of feature spaces in evolutionary searches
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.845632
T. Hendtlass, H. Copland
Evolutionary algorithms require excellent search capabilities in order to find global minima, particularly in complex feature spaces. A means of enhancing search capability based on a distributed, genetic-style encoding of solutions has been shown to be advantageous. Such a representation requires the use of varying gene lengths, and the effects of variable gene lengths are explored in detail.
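For illustration, a mutation operator on a variable-length binary genome might look like this sketch; the probabilities and operator names are hypothetical, not taken from the paper.

```python
import random

def mutate_variable_length(genome, p_point=0.05, p_grow=0.02, p_shrink=0.02):
    """Point mutations plus length-changing insertions and deletions on a
    binary gene string of varying length."""
    g = [bit ^ 1 if random.random() < p_point else bit for bit in genome]
    if random.random() < p_grow:                   # insertion grows the gene
        g.insert(random.randrange(len(g) + 1), random.randint(0, 1))
    if random.random() < p_shrink and len(g) > 1:  # deletion shrinks it
        del g[random.randrange(len(g))]
    return g
```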
{"title":"Diversifying exploration of feature spaces in evolutionary searches","authors":"T. Hendtlass, H. Copland","doi":"10.1109/ICONIP.1999.845632","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.845632","url":null,"abstract":"Evolutionary algorithms require excellent search capabilities in order to find global minima, particularly in complex feature spaces. A means of enhancing search capabilities based upon a distributed genetic-style encoding of solution has been shown to be advantageous. Such a representation requires the use of varying gene lengths. The effects of variable gene lengths are explored in detail.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"84 16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130735166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel network method designing multirate filter banks and wavelets
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844669
Ying Tan
A new unified method for designing both para-unitary cosine-modulated FIR filter banks and cosine-modulated wavelets is proposed. The problem is formulated as a quadratic-constrained least-squares (QCLS) minimization problem in which all constraint matrices are symmetric and positive definite. Furthermore, a specific analog neural network, whose energy function is chosen as the combined cost of the QCLS minimization problem, is built to solve the design problem in real time. With this method it is easy and efficient to obtain analysis and synthesis filters with high stop-band attenuation, as well as cosine-modulated wavelets with compact support. A number of simulations show the effectiveness of the method and the correctness of the theoretical analysis.
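The analog-network idea can be sketched as gradient flow on a combined energy. The penalty form below is our illustration, not the paper's exact energy function, and the step sizes are arbitrary.

```python
import numpy as np

def qcls_gradient_flow(P, q, constraints, x0, mu=10.0, eta=1e-4, steps=50000):
    """Euler-discretized gradient flow on a penalized QCLS energy
    E(x) = 0.5*x'Px + q'x + mu * sum_k (x'A_k x - c_k)^2,
    where each A_k in `constraints` is symmetric positive definite."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        grad = P @ x + q
        for A, c in constraints:
            grad += 4.0 * mu * (x @ A @ x - c) * (A @ x)  # gradient of the quadratic penalty
        x -= eta * grad                                   # dx/dt = -dE/dx, one Euler step
    return x
```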
{"title":"A novel network method designing multirate filter banks and wavelets","authors":"Ying Tan","doi":"10.1109/ICONIP.1999.844669","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.844669","url":null,"abstract":"A new unified method for designing both para-unitary cosine-modulated FIR filter banks and cosine-modulated wavelets is proposed in this paper. This problem has been formulated as a quadratic-constrained least-squares (QCLS) minimization problem in which all constraint matrices are symmetric and positive-definite. Furthermore, a specific analog neural network whose energy function is chosen as the combined cost of the QCLS minimization problem is built for our design problem in real time. It is quite easy and efficient to obtain analysis and synthesis filters with high stop-band attenuation and cosine-modulated wavelets with compact support by this method. A number of simulations show the effectiveness of this method and the correctness of the theoretical analysis given in this paper.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130920922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of outlier detection methods: exemplified with an environmental geochemical dataset
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.843983
C. Zhang, P. M. Wong, O. Selinus
Three outlier detection methods, based on range, principal component analysis (PCA), and autoassociative neural network (AutoNN) approaches, are introduced and applied to an environmental geochemical dataset from Sweden. Each method uses a different criterion to define an outlier. In the range method, the number of outlying values in a sample is taken as the outlier measure. For the PCA method, the distance of a sample's principal-component scores from the coordinate origin is suggested as the measure. For the AutoNN approach, the total sum of squared errors between the measured and predicted values is proposed. The results of the three methods are comparable, but differences exist. A combination of all three methods is recommended for developing a better outlier identifier, and further analyses of the detected outliers should be carried out by integrating geological and environmental information.
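Minimal sketches of the PCA and AutoNN outlier measures described above; the `reconstruct` callable stands in for a trained autoassociative network and is hypothetical.

```python
import numpy as np

def pca_outlier_scores(X, n_components=2):
    """Distance of each sample's principal-component scores from the origin."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes of the centered data
    scores = Xc @ Vt[:n_components].T
    return np.linalg.norm(scores, axis=1)              # larger distance -> more outlying

def autonn_outlier_scores(X, reconstruct):
    """Total sum of squared errors between measured values and the
    autoassociative network's reconstruction of them."""
    return ((X - reconstruct(X)) ** 2).sum(axis=1)
```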
{"title":"A comparison of outlier detection methods: exemplified with an environmental geochemical dataset","authors":"C. Zhang, P. M. Wong, O. Selinus","doi":"10.1109/ICONIP.1999.843983","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.843983","url":null,"abstract":"Three outlier detection methods of range, principle component analysis (PCA), and autoassociation neural network (AutoNN) approaches are introduced and applied to an environmental geochemical dataset in Sweden. Each method uses a different criterion for the definition of outlier. In the range method, the number of outlying values of one sample is determined as the outlying sample measurement parameter. The distance of sample scores in the principal components from the coordinate origin is suggested as the parameter for the PCA method. The total sum of error squares between the measured and predicted values is proposed as the parameter for the AutoNN approach. The results of the three methods are comparable, but differences exist. A combination of all the methods is recommended for the development of a better outlier identifier, and further analyses on the detected outliers should be carried out by integrating geological and environmental information.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131289746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Connectionist incremental learning by analogy
Pub Date: 1999-11-16 | DOI: 10.1109/ICONIP.1999.844663
T. Watanabe, H. Fujimura, S. Yasui
The Connectionist Analogy Processor (CAP) is a neural network whose paradigm assumes relational isomorphism for analogical inference. An internal abstraction model is formed by backpropagation training with the aid of a pruning mechanism, and CAP automatically develops abstraction and de-abstraction mappings to link general and specific entities. CAP is applied to incremental analogical learning involving multiple sets of analogies. It is shown that a new set of target data is selectively bound to the appropriate internal abstraction model acquired from previous analogical learning, i.e., the abstraction model acts as an attractor in the weight parameter space.
{"title":"Connectionist incremental learning by analogy","authors":"T. Watanabe, H. Fujimura, S. Yasui","doi":"10.1109/ICONIP.1999.844663","DOIUrl":"https://doi.org/10.1109/ICONIP.1999.844663","url":null,"abstract":"The Connectionist Analogy Processor (CAP) is a neural network. The paradigm of CAP assumes relational isomorphism for analogical inference. An internal abstraction model is formed by backpropagation training with the aid of a pruning mechanism. CAP also automatically develops abstraction and de-abstraction mappings to link the general and specific entities. CAP is applied to incremental analogical learning that involves multiple sets of analogy. It is shown that a new set of target data are selectively bound to the right one of internal abstraction models acquired from the previous analogical learning, i.e., the abstraction model acts as the attractor in the weight parameter space.","PeriodicalId":237855,"journal":{"name":"ICONIP'99. ANZIIS'99 & ANNES'99 & ACNN'99. 6th International Conference on Neural Information Processing. Proceedings (Cat. No.99EX378)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127244049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}