A multilayered neural net controller for servo systems
E. Khan, T. Ogunfunmi
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170633
In: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks
The authors investigate the possibility of adding a multilayered feedforward neural network controller to an existing servomotor controller to make it an intelligent adaptive controller. The use of the existing controller guarantees coarse learning and thus provides better generalization and correction capabilities. Several learning algorithms are proposed to properly correct the motor inputs under various system nonlinearities, parameter variations over time, and uncertainties. Simulations show very encouraging results. The performance of the proposed controller is compared with that of a proportional-integral-derivative (PID) controller and a model reference adaptive control (MRAC) controller.
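The additive arrangement the abstract describes — keeping the existing controller and letting a network learn a corrective term on top of it — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the PID gains, the network's input features, and the zero-initialized linear stand-in for the MLP are all assumptions.

```python
def pid_step(err, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    """One step of a textbook PID controller (the pre-existing servo controller)."""
    state["integral"] += err * dt
    deriv = (err - state["prev_err"]) / dt
    state["prev_err"] = err
    return kp * err + ki * state["integral"] + kd * deriv

def corrected_command(err, state, net):
    """Total motor command: conventional PID output plus a learned neural correction."""
    u_pid = pid_step(err, state)
    u_nn = net([err, state["integral"], state["prev_err"]])  # network sees error history
    return u_pid + u_nn

# Stand-in for the trained MLP: an untrained (all-zero) linear map, so the
# corrected command initially coincides with the plain PID command.
weights = [0.0, 0.0, 0.0]
net = lambda x: sum(w * v for w, v in zip(weights, x))

state = {"integral": 0.0, "prev_err": 0.0}
u = corrected_command(0.5, state, net)
```

Because learning starts from a working PID loop, the network only has to learn the residual correction, which is the "coarse learning" advantage the abstract claims.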
A face graph method using a fuzzy neural network for expressing conditions of complex systems
T. Hashiyama, T. Furuhashi, Y. Uchikawa, H. Kato
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170356
The face graph method, with such varying elements as eyes, eyebrows, mouth, etc., is used for expressing multidimensional data. Since human beings are very sensitive to human faces, one can easily evaluate the multidimensional data expressed by the face graph. The authors present a novel approach of the face graph method using a fuzzy neural network for expressing conditions of complex systems. Experiments are carried out to make the face graphs correspond to the conditions of an electric circuit.
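The core idea of a face graph display — mapping each data dimension to one facial feature, in the spirit of Chernoff faces — can be sketched as below. The specific features, their ranges, and the linear mapping are illustrative assumptions, not the paper's fuzzy-neural mapping.

```python
def data_to_face(x, lo, hi):
    """Map each data dimension to one facial feature parameter.
    x: raw data vector; lo/hi: per-dimension ranges used for normalization."""
    n = [(v - a) / (b - a) for v, a, b in zip(x, lo, hi)]  # scale to [0, 1]
    return {
        "eye_size":     0.2 + 0.8 * n[0],   # never shrinks to zero
        "brow_slant":  -1.0 + 2.0 * n[1],   # -1 (relaxed) .. +1 (furrowed)
        "mouth_curve": -1.0 + 2.0 * n[2],   # -1 (frown) .. +1 (smile)
    }

# Three circuit measurements rendered as one face.
face = data_to_face([5.0, 0.2, 0.9], lo=[0, 0, 0], hi=[10, 1, 1])
```

A viewer then judges the system's condition at a glance from the resulting expression, which is precisely the sensitivity to faces the abstract appeals to.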
Fault tolerance of lateral interaction networks
G. Bolt
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170654
An examination of the fault tolerance properties of lateral interaction networks is presented. The general concept of a soft problem is discussed along with the resulting implications for reliability. Fault injection experiments were performed using several input datasets with differing characteristics in conjunction with various combinations of network parameters. It was found that a high degree of tolerance to faults existed and that the reliability of operation degraded smoothly. This result was independent of the nature of the input dataset and, to a lesser extent, of the choice of network parameters.
Inherent structure detection by neural sequential associator
I. Matsuba
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170704
A sequential associator based on a feedback multilayer neural network is proposed to analyze inherent structures in a sequence generated by a nonlinear dynamical system and to predict a future sequence based on these structures. The network represents time correlations in the connection weights during learning. It is capable of detecting the inherent structure and explaining the behavior of systems. The structure of the neural sequential associator, inherent structure detection, and the optimal network size based on the use of an information criterion are discussed.
Adjustment of the basin size in autoassociative memories by use of the BPTT technique
T. Hatanaka, Y. Nishikawa
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170674
An auto-associative memory is constructed in a recurrent network whose connection matrix is determined by use of backpropagation through time (BPTT). Through several computer simulations, basins of the memory generated by this method are compared with those generated by the conventional methods. In particular, the ability of the BPTT to adjust the basin size is investigated in detail.
Synaptic and somatic learning and adaptation in fuzzy neural systems
M. Gupta, J. Qi
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170510
An attempt is made to establish some basic models for fuzzy neurons. Three types of fuzzy neural models are proposed. Neuron I is described by logical equations or if-then rules; its inputs are either fuzzy sets or crisp values. Neuron II, with numerical inputs, and neuron III, with fuzzy inputs, are considered to be a simple extension of nonfuzzy neurons. A few methods of how these neurons change themselves during learning to improve their performance are also given. The notion of synaptic and somatic learning and adaptation is also introduced, which seems to be a powerful approach for developing a new class of fuzzy neural networks. Such an approach may have application in the processing of fuzzy information and the design of expert systems with learning and adaptation abilities.
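A common way to realize a logic-style fuzzy neuron of the kind the abstract's neuron I suggests is with min/max operators: a synaptic AND (min) of each input with its weight, followed by a somatic OR (max) across synapses. This is a generic sketch of that standard construction, not necessarily the exact model Gupta and Qi propose.

```python
def fuzzy_or_neuron(inputs, weights):
    """Logic-style fuzzy neuron (OR type).
    Synaptic operation: AND (min) of each input membership grade with its weight.
    Somatic operation:  OR (max) across all synapses.
    All values are membership grades in [0, 1]."""
    return max(min(x, w) for x, w in zip(inputs, weights))

# A strong input on a weak synapse is capped by the weight (0.5 here).
y = fuzzy_or_neuron([0.7, 0.3], [0.5, 0.9])
```

Under this reading, "synaptic learning" would adapt the weights and "somatic learning" the aggregation, which matches the division the abstract draws.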
Efficient question answering in a hybrid system
J. Diederich, D. Long
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170447
A connectionist model for answering open-class questions in the context of text processing is presented. The system answers questions from different question categories, such as how, why, and consequence questions. The system responds to a question by generating a set of possible answers that are weighted according to their plausibility. Search is performed by means of a massively parallel directed spreading activation process. The search process operates on several knowledge sources (i.e., connectionist networks) that are learned or explicitly built in. Spreading activation involves the use of signature messages, which are numeric values that are propagated throughout the networks and identify a particular question category (this makes the system hybrid). Binder units that gate the flow of activation between textual units receive these signatures and change their states. That is, the binder units either block the spread of activation or allow the flow of activation in a certain direction. The process results in a pattern of activation that represents a set of candidate answers based on available knowledge sources.
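The gating mechanism the abstract describes — binder units that pass or block activation depending on the question-category signature — can be sketched as a small graph traversal. The edge structure, gain, and signature encoding here are illustrative assumptions, not the paper's architecture.

```python
def spread(edges, binders, activation, signature, steps=2, gain=0.5):
    """Directed spreading activation with signature-gated edges.
    edges:      list of (src, dst) links between textual units
    binders:    maps an edge to the question-category signature it accepts
    activation: node -> current activation level
    An edge passes activation only when its binder accepts the signature."""
    for _ in range(steps):
        nxt = dict(activation)
        for src, dst in edges:
            if binders.get((src, dst)) == signature:   # binder opens the gate
                nxt[dst] = nxt.get(dst, 0.0) + gain * activation.get(src, 0.0)
        activation = nxt
    return activation

edges = [("question", "cause-node"), ("question", "method-node")]
binders = {("question", "cause-node"): "WHY",
           ("question", "method-node"): "HOW"}
act = spread(edges, binders, {"question": 1.0}, signature="WHY")
```

With a WHY signature only the cause node accumulates activation; the resulting activation pattern is the weighted candidate-answer set the abstract describes.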
A cognitive framework for hybrid systems
J. Wallace, K. Bluff
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170449
The authors explore the potential of a specific cognitive architecture to provide the relational mechanism needed to capitalize on the respective strengths of symbolic and nonsymbolic modes of representation, and on the benefits of their interaction in achieving machine intelligence. This architecture is strongly influenced by the BAIRN system of I. Wallace et al. (1987), which provides a general theory of human cognition with a particular emphasis on the function of learning. This cognitive architecture is being used in a generic approach to the aspects of human performance designated by the term situation awareness.
The handling of don't care attributes
Hahn-Ming Lee, Ching-Chi Hsu
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170539
A critical factor that affects the performance of neural network training algorithms and the generalization of trained networks is the choice of training instances. The authors consider the handling of don't care attributes in training instances. Several approaches are discussed and their experimental results are presented. The following approaches are considered: (1) replace don't care attributes with a fixed value; (2) replace don't care attributes with their maximum or minimum encoded values; (3) replace don't care attributes with both their maximum and minimum encoded values; and (4) replace don't care attributes with all their possible encoded values.
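The four replacement strategies enumerated in the abstract can be sketched as instance-expansion rules. Representing don't-cares as `None` and supplying an explicit domain of encoded values per attribute are assumptions for illustration; strategy (2) is shown with the minimum (the maximum variant is symmetric).

```python
import itertools

DONT_CARE = None

def expand_dont_cares(instance, domains, strategy="fixed", fixed=0.0):
    """Return the list of training instances produced by filling don't-care
    attributes. domains[i] lists the possible encoded values of attribute i."""
    if strategy == "fixed":    # (1) one fixed value
        return [[fixed if v is DONT_CARE else v for v in instance]]
    if strategy == "min":      # (2) minimum encoded value (max is symmetric)
        return [[min(d) if v is DONT_CARE else v
                 for v, d in zip(instance, domains)]]
    if strategy == "minmax":   # (3) both min and max -> two instances
        return [[min(d) if v is DONT_CARE else v for v, d in zip(instance, domains)],
                [max(d) if v is DONT_CARE else v for v, d in zip(instance, domains)]]
    if strategy == "all":      # (4) every possible encoded value
        opts = [d if v is DONT_CARE else [v] for v, d in zip(instance, domains)]
        return [list(combo) for combo in itertools.product(*opts)]
    raise ValueError(f"unknown strategy: {strategy}")

inst = [0.3, DONT_CARE]
domains = [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]]
expanded = expand_dont_cares(inst, domains, strategy="all")
```

Note the trade-off the strategies imply: (1) and (2) keep the training set size fixed, while (3) doubles and (4) multiplies the number of instances per don't-care attribute.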
Applications of the pRAM
T. Clarkson, D. Gorse, Y. Guan, J.G. Taylor
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170348
The probabilistic RAM (pRAM) neuron is highly nonlinear and stochastic, and it is hardware-realizable. The following applications of the pRAM are discussed: the processing of half-tone images, the generation of topological maps, the storage of temporal sequences, and the recognition of regular grammars.