Title: A recursive neural system for memorizing systems of values arranged in a tree like structure
Authors: H. Yamakawa, Y. Okabe
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170350
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: It is pointed out that adaptive automata generally rely on a cost function to organize the relations between input signals and output signals, but most such automata have been studied with an a priori fixed cost function. The authors therefore introduce a self-constructing value system with profits and losses, which helps the automaton adapt to its environment. In the proposed method, the value system is founded on a priori fixed values (cost functions); suitable elements are then added to the value system by using the correlation between new elements and existing values. In the proposed model, the values of concepts are modified through experience, and these values control the learning process.
Title: Fuzzy neuro-computational technique and its application to modelling and control
Authors: M. Gupta, M. Gorzałczany
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170604
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: The authors present a model-building technique that combines the strengths of fuzzy set theory and neural-network-based structures. The technique can simultaneously deal with the two types of knowledge, nonfuzzy and fuzzy, that usually describe the behavior of complex processes. The proposed method can also be applied directly to the construction of a new type of intelligent fuzzy controller. Some aspects of the adequacy of this fuzzy neuro-computational model are also discussed, and a numerical example is provided.
Title: A new architecture of neural network
Authors: Yongjun Zhang, Zongzhi Chen
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170504
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: Describes a novel neural-network architecture, the neural network loop (NNL), and its learning rules. The NNL can operate as a Hopfield network, a bidirectional associative memory (BAM), and other kinds of neural networks. In particular, it can perform multiple-category associative memory, a capability very similar to that of the human brain. It can be applied to pattern recognition and associative memory. Computer simulations were carried out, and the results show that the NNL is an effective network.
Title: Temporally sensitive neural networks
Authors: I.L. Davis, P. A. Sandon
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170698
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: The problem of recognizing rhythmic patterns characterized by a periodically repeating sequence of events is addressed. An approach to representing temporal information in neural networks, and an application that makes use of this representation, are described. The Tempnet rhythm system is a particular instantiation of these ideas and is used to demonstrate the use of temporal representation in the processing of temporal signals. Decaying node activations are used to represent the timing of specific temporal events. The approach was demonstrated in a system for categorizing periodically repeating patterns, independent of time scale. The network simulator is described, along with the results of some sample training and performance runs.
Title: A new approach for dynamic node creation in multilayer neural networks
Authors: M. Azimi-Sadjadi, S. Sheedvash, F. O. Trujillo
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170318
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: An approach to simultaneous recursive weight adaptation and node creation in multilayer perceptron neural networks is presented. The method uses time- and order-update formulations in the orthogonal projection method to arrive at a recursive weight-updating procedure for training and a recursive node-creation algorithm for adjusting the weights of a layer to which nodes are added during training. The approach allows optimal dynamic node creation in the sense that the mean-squared error is minimized for each new topology. The effectiveness of the algorithm was demonstrated on a real-world application: detecting and classifying underground dielectric anomalies.
Title: Fast detection and classification of defects on treated metal surfaces using a backpropagation neural network
Authors: C. Neubauer
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170551
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: A fast neural-network classifier is described which is the central part of an optical inspection system. Defects on treated metal surfaces are detected and classified by textural segmentation. The main purpose of this work is the development of an optical inspection system for a wide range of real-time applications. The preprocessing of the image data is therefore reduced to the calculation of gray-value histograms on a 10×10-pixel window; using only eight gray-value classes in the histograms yields an efficient reduction of the data. The histograms calculated on each window are presented to a three-layer perceptron for defect detection and classification. The method is applied to 100% surface inspection of rolling-bearing metal rings. Depending on the defect class investigated, the misclassification rate of the window classifier ranged from 1.5% to 11.5%.
Title: Mapping multi-layer attributed graphs onto recognition network
Authors: Hing-Yip Chan, D. Yeung, K.F. Cheung
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170607
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: A methodology for synthesizing a neocognitron is presented. The goal is that the system parameters of a neocognitron can be 'programmed' rather than learned through laborious training. The tool used is attributed graph theory. Using a set of attributed graphs describing structural and contextual information of different classes of patterns, one can synthesize a neocognitron through a mapping algorithm. The deformation-invariant attribute of the neocognitron is preserved through the blurring of S-cells. The performance of the synthesized neocognitron is contrasted with that of an identical neocognitron obtained through supervised training.
Title: Self-improving associative neural network models
Authors: Tao Wang, X. Zhuang, X. Xing
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170384
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: A self-improving associative neural network (SIANN) model is presented. The implementation of this neural network consists of two phases: a learning procedure and a retrieval procedure. The learning procedure, which determines the connection weights among the neurons, provides the ability to embody certain regularities implicit in a noisy pattern; it can be realized by a multilayer logic neural network in one pass. The self-improvement of the noisy pattern is achieved by the retrieval procedure. The salient points of the model are that it does not require a set of training patterns, needs only one pass for the learning procedure, and converges very quickly. Computer experimental results illustrate the self-improvement of the neural network.
Title: Speaker-dependent 1000 word recognition using a large scale neural network 'CombNET-II' and dynamic spectral features
Authors: T. Kitamura, W. Hui, A. Iwata, N. Suzumura
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170560
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: The authors describe speaker-dependent large-vocabulary word recognition using a large-scale neural network, CombNET-II, which consists of a four-layer neural network with a comb structure, and dynamic spectral features of speech based on a two-dimensional mel-cepstrum. CombNET-II consists of two types of neural networks. The first part is a stem network, which learns by a self-growing algorithm and roughly classifies an input pattern. The second part consists of many branch networks, which learn by a backpropagation algorithm and precisely classify the input pattern. The stem network is a vector-quantizing network that reduces the number of category candidates passed to the branch networks, so each branch network has only a small number of connections and is easy to tune. Experiments on speaker-dependent large-vocabulary recognition of 1000 Chinese spoken words are described. Experimental results show that a high recognition accuracy of 99.1% is obtained and that CombNET-II is very effective for large-vocabulary spoken word recognition.
Title: Prediction of free word associations based on Hebbian learning
Authors: R. Rapp, M. Wettler
Pub Date: 1991-11-18; DOI: 10.1109/IJCNN.1991.170376
Published in: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
Abstract: An associative lexical net has been built whose weights are computed from word co-occurrences using Hebb's rule. The co-occurrences of word pairs are determined by shifting a window over a large body of text. To estimate the associative response to a given stimulus word, the corresponding node is activated and its activity is propagated through the net. The model assumes that the words with the highest activities after propagation correspond to the associative responses of human subjects. These predictions have been tested and confirmed using the association norms collected by Russell and Jenkins.