Title: A parallel Kalman algorithm for fast learning of multilayer neural networks
Authors: C.-M. Cho, H.-S. Don
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170644
Venue: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
A fast learning algorithm is proposed for training multilayer feedforward neural networks, based on a combination of optimal linear Kalman filtering theory and error propagation. In this algorithm, all the information available from the start of the training process up to the current training sample is exploited in the update procedure for the weight vector of each neuron, via an efficient parallel recursive method. The algorithm admits a massively parallel implementation and has better convergence properties than the conventional backpropagation learning technique. Its performance is illustrated on examples such as an XOR logical operation and a nonlinear mapping of two continuous signals.
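The per-neuron recursive update described above is closely related to recursive least squares, the linear special case of Kalman filtering; the sketch below shows such an update for a single linear neuron with a scalar target. It illustrates the idea only, not the authors' exact formulation.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One recursive least-squares step for a single linear neuron.

    w   : current weight vector, shape (n,)
    P   : inverse input-correlation matrix estimate, shape (n, n)
    x   : current input sample, shape (n,)
    d   : desired output (scalar)
    lam : forgetting factor (1.0 = weight all past samples equally)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a priori error
    w_new = w + k * e                  # weight update
    P_new = (P - np.outer(k, Px)) / lam
    return w_new, P_new

# Fit a neuron to d = 2*x0 - x1 from streaming samples.
rng = np.random.default_rng(0)
w, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(200):
    x = rng.standard_normal(2)
    w, P = rls_update(w, P, x, 2.0 * x[0] - x[1])
```

Because P summarizes all past inputs, each step effectively uses the full training history at a constant cost per sample, which is the property the abstract highlights.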
Title: Dynamic competitive learning for centroid estimation
Authors: S. Kia, G. Coghill
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170507
Presents an analog version of an artificial neural network, termed a differentiator, based on a variation of the competitive learning method. The network is trained in an unsupervised fashion and can be used for estimating the centroids of clusters of patterns. A dynamic competition is held among the competing neurons as they adapt to the input patterns, with the aid of a novel type of neuron called a control neuron. The outputs of the control neurons provide feedback reinforcement signals that modify the weight vectors during training. The training algorithm differs from conventional competitive learning methods in that all the weight vectors are modified at each training step. Computer simulation results demonstrate the behavior of the differentiator in estimating class centroids. The results indicate the power of dynamic competitive learning as well as the fast convergence of the weight vectors.
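Since the paper's rule modifies all weight vectors at every step, a minimal sketch in that spirit can weight each unit's update by a soft competition signal; the softmax over negative distances below is a stand-in for the control-neuron feedback, not the authors' actual rule.

```python
import numpy as np

def soft_competitive_step(W, x, lr=0.05, beta=5.0):
    """Move every weight vector toward x, weighted by a softmax over
    negative squared distances, so closer units move more but all move.

    W : (k, n) weight vectors, one per competing neuron
    x : (n,) input pattern
    """
    d2 = np.sum((W - x) ** 2, axis=1)
    g = np.exp(-beta * d2)
    g /= g.sum()                       # soft competition signal
    return W + lr * g[:, None] * (x - W)

# Demo: two well-separated clusters; weights drift toward the centroids.
rng = np.random.default_rng(1)
c0, c1 = np.array([0.0, 0.0]), np.array([5.0, 5.0])
data = np.vstack([c0 + 0.1 * rng.standard_normal((500, 2)),
                  c1 + 0.1 * rng.standard_normal((500, 2))])
rng.shuffle(data)
W = rng.standard_normal((2, 2))
for x in data:
    W = soft_competitive_step(W, x)
```

With a large `beta` the update approaches winner-take-all; smaller values spread the adaptation over all units, loosely mirroring the "all weight vectors modified at each step" property.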
Title: Speaker-independent syllable recognition by a pyramidical neural net
Authors: Shulin Yang, Youan Ke, Zhong Wang
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170712
The application of a pyramidical multilayered neural net to speaker-independent recognition of isolated Chinese syllables was investigated. The feature extraction algorithm is described. Experiments involving 90 speakers from 25 provinces of China show that accuracies of 82.7% and 87.1% can be achieved for ten isolated digits and seven typical syllables, respectively, and that an over-75% cross-sex recognition rate can be obtained. The results indicate that this neural net technique can be applied to speaker-independent syllable recognition and that its performance is comparable to that of the hidden Markov model method.
Title: An enhancement to MLP model to enforce closed decision regions
Authors: R. Gemello, F. Mana
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170486
Describes a modification of the basic MLP (multilayer perceptron) model implemented to improve its capability to enforce closed decision regions. The authors' proposal is to use hyperspheres instead of hyperplanes in the first hidden layer, and then combine them through the subsequent layers. After training, the decision regions are naturally closed because they are built on simple computational elements that fire only if the pattern falls within their hypersphere receptive fields. Training is achieved by applying a modification of basic error backpropagation, without resort to ad hoc algorithms. A two-dimensional example is reported.
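A first-hidden-layer unit with a hyperspherical receptive field can be sketched as a smooth threshold on radius² − ‖x − c‖²: it fires only when the pattern falls inside the sphere, yet stays differentiable so backpropagation applies. This is one illustrative parameterization, not necessarily the authors' exact one.

```python
import numpy as np

def hypersphere_unit(x, center, radius, steepness=10.0):
    """Sigmoid activation over (radius^2 - ||x - center||^2):
    ~1 inside the sphere, ~0 outside, smooth at the boundary.
    Written via tanh for numerical stability far outside the sphere."""
    d2 = np.sum((np.asarray(x) - np.asarray(center)) ** 2)
    z = steepness * (radius ** 2 - d2)
    return 0.5 * (1.0 + np.tanh(0.5 * z))   # stable sigmoid(z)

# Inside the unit sphere -> fires; well outside -> silent.
inside = hypersphere_unit([0.1, 0.2], center=[0.0, 0.0], radius=1.0)
outside = hypersphere_unit([3.0, 0.0], center=[0.0, 0.0], radius=1.0)
```

Because such a unit's support is bounded, any region built by combining these activations in later layers is naturally closed, unlike half-spaces from hyperplane units.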
Title: Bidirectional optical learnable neural networks for OEIC
Authors: W. Kawakami, K. Kitayama
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170667
A novel configuration of an optical bidirectional learnable neural network is proposed, in which the recall and learning processes are carried out by transmitting synaptic-weight and error-signal light, respectively, in opposite directions between two facing OEICs (optoelectronic integrated circuits). Thus, both the vector-matrix operation for recall and the outer product for modifying synaptic weights are performed optically and bidirectionally. This compact configuration is especially suitable for neurochips. The feasibility of a three-dimensional neurochip is experimentally investigated in a learning experiment using a 2*2 optical neuro-breadboard.
Title: A cognitively-based neural network for determining paragraph coherence
Authors: P. Carlson, A. The
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170577
The authors report on an effort in artificial neural network (ANN) technology to use content-independent elements of prose as predictors of paragraph logic structures. They intend to embed the trained network in an intelligent tutor that teaches writing skills. An attempt is made to find patterns in the nonambiguous lexical and syntactic features of discourse that predict the semantic/cognitive level of interpretation. A neural network implementation of the modified Christensen method is considered. It is noted that ANN technology's ability to deal with fuzzy logic, feature extraction, classification, and predictive modeling makes a neural network the best choice for the present application.
Title: Discovering production rules with higher order neural networks: a case study. II
Authors: A. Kowalczyk, H. Ferrá, K. Gardiner
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170457
It is demonstrated by example that neural networks can be used successfully for the automatic extraction of production rules from empirical data. The case considered is a popular public-domain database of 8124 mushrooms. With the use of a term selection algorithm, a number of very accurate mask perceptrons (a kind of high-order network or polynomial classifier) were developed. Rounding of the synaptic weights was then applied, in many cases yielding networks with integer weights, which were subsequently converted to production rules. It is also shown that focusing the network's attention on a smaller subset of useful attributes, ordered by decreasing discriminating ability, helps significantly in accurate rule generation.
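The weight-rounding step can be illustrated as follows: integer-rounded weights whose zero terms drop out read off directly as a production rule. The attribute names and weight values here are hypothetical, not taken from the mushroom study.

```python
import numpy as np

def weights_to_rule(w, b, names):
    """Round a trained linear unit's weights and bias to integers and
    render the surviving (nonzero-weight) terms as a readable rule."""
    wi = np.rint(w).astype(int)
    bi = int(np.rint(b))
    terms = [f"{c:+d}*{n}" for c, n in zip(wi, names) if c != 0]
    return f"IF {' '.join(terms)} >= {-bi} THEN poisonous", wi, bi

# Hypothetical trained weights over three binary mushroom attributes:
# near-zero weights vanish on rounding, pruning the attribute entirely.
w = np.array([2.1, -1.9, 0.05])
b = -0.9
rule, wi, bi = weights_to_rule(w, b, ["odor_foul", "ring_present", "cap_red"])
```

Rounding thus serves double duty in the paper's pipeline: it discretizes the surviving weights and discards attributes whose influence is negligible, which is what makes the converted rules compact.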
Title: Feature selection for neural network recognition
Authors: T. Adachi, R. Furuya, Stephan Greene, K. Mikuriya
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170481
Presents a system designed to support the development of image recognition applications, using a general neural-network classifier and an algorithm for selecting effective image features from a small number of samples. Input to the system consists of a number of primitive image features computed directly from pixel values. The feature selection subsystem generates an image recognition feature vector by operations on the primitive features, using a combination of rule-based techniques and statistical heuristics to select the best features. The authors propose a quality statistic function based on sample values for each primitive feature. The parameters of this function were determined, and the authors experimented on several different target image groups using it. Recognition rates were perfect in each case.
Title: Image transformation by spatial inhibition and local association
Authors: T. Omori
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170472
The author proposes a model of image transformation that can apply a general transformation to any unlearned object; that is, the transformation is independent of the object's shape. The local associative neural network model can transform a figure represented by a set of local features. The model transforms a figure while satisfying constraints, given as external inhibition, and completion conditions that any figure should satisfy to form a reasonable shape. The basic methods are figure representation with local features, feature transformation with spatial inhibition, and figure restoration through their interactions. With this model, one can realize an elemental function that will lead to a general figure transformation model without learning or experience.
Title: FastProp: a selective training algorithm for fast error propagation
Authors: F. Wong
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170635
An improved backpropagation algorithm, called FastProp, for training a feedforward neural network is described. The unique feature of the algorithm is selective training based on the instantaneous causal relationship between the input and output signals during training. This causal relationship is calculated from the error backpropagated to the input layer. The accumulated errors, referred to as accumulated error indices (AEIs), are used to rank the input signals according to their correlation with the output signals. An entire set of time-series data can be clustered into several situations based on which current input signal has the highest AEI, and neurons can be activated according to the current situation. Experimental results showed that a significant reduction in training time can be achieved with the selective training algorithm compared to the traditional backpropagation algorithm.
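The AEI idea can be illustrated by accumulating, per input, the magnitude of the error signal backpropagated to the input layer of a small network; inputs that actually drive the target accumulate the largest indices. This is a sketch of the bookkeeping only, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends only on input 0; inputs 1 and 2 are noise.
X = rng.standard_normal((500, 3))
y = 3.0 * X[:, 0]

# One-hidden-layer network with tanh units, trained by plain SGD.
W1 = 0.1 * rng.standard_normal((3, 8))
W2 = 0.1 * rng.standard_normal(8)
aei = np.zeros(3)                      # accumulated error indices
lr = 0.01
for epoch in range(20):
    for x, t in zip(X, y):
        h = np.tanh(x @ W1)
        out = h @ W2
        e = out - t                    # output error
        dh = e * W2 * (1.0 - h ** 2)   # error at the hidden layer
        dx = W1 @ dh                   # error backpropagated to inputs
        aei += np.abs(dx)              # accumulate per-input magnitude
        W2 -= lr * e * h               # gradient steps
        W1 -= lr * np.outer(x, dh)

ranking = np.argsort(-aei)             # most influential input first
```

In FastProp terms, the top-ranked inputs would then define the "situations" used to cluster the time series and selectively activate neurons during training.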