Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170573
S. Farrugia, H. Yee, P. Nickolls
An artificial neural network has been tested for the classification of cardiac rhythms from intracardiac electrocardiograms (ECGs). It uses as inputs a small number of waveform samples and extracted parameters. The network has been found to perform better than a rate-based scheme similar to those used in commercially available implantable cardioverter-defibrillators in its ability to distinguish normal rhythms from arrhythmias. It shows, in addition, a certain ability to discriminate between a larger number of rhythms: in particular, between sinus tachycardia and slow ventricular tachycardia and between slow and fast ventricular tachycardias.
Title: Neural network classification of intracardiac ECG's
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170479
T. Quah, C. L. Tan, H. H. Teh
A window-based platform, known as the Graphical Environment for Neuronet Expert Systems (GENES), is proposed. The platform provides the user with an easy-to-learn, easy-to-use operating environment for creating, training, editing, and enhancing neural-network-based expert systems. The underlying neural logic network (NELONET) has been shown to be capable of logical inferencing and is used in two large-scale operational expert systems. Built on top of the X Window System and the OPENLOOK user interface, GENES inherits the select-and-perform operation strategy for neural network objects. The system's knowledge base contains simple network elements that correspond to rules in a conventional system. During the inference process, these network elements are linked up dynamically to form a large neural network that operates according to the NELONET activation rules.
Title: A graphical operating environment for neural network expert systems
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170347
B. Zhang, L. Zhang, H. Zhang
The complexity of the learning algorithm in the PLN (probabilistic logic neuron) network is investigated using Markov chain theory. A computer simulation of a parity-checking problem has been implemented on a SUN-3 workstation in the C language. Results are presented that confirm the theoretical analysis.
Title: The complexity of learning algorithm in PLN network
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170641
R. Devanathan, T.H. Ngee
An analog VLSI implementation of neural networks has been modeled in terms of active cell impedance connected to a resistive grid. The resistive grid can be characterized in terms of the nominal linear component and the parasitic component with uncertain parametric values. Necessary and sufficient conditions for the nominal and robust stability of these systems can then be derived.
Title: Conditions for robust stability of analog VLSI implementation of neural networks with uncertain circuit parasitics
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170716
M.F. Peschl
The aim of the project described is to achieve a deeper understanding of cognitive processes. It is based on the assumption that cognition is the result of neural activities taking place in a natural or artificial neural network (ANN). In the model presented the network is not embedded into a linguistic environment but rather is physically coupled to the environment via sensors and effectors. From an epistemological as well as computer science perspective this is a radical step which has many very important implications. In computational neuroepistemology this kind of connectionism is called radical connectionism or radical neural computing. The ANN has to be physically embedded into its environment. This means that the communication between the system and its environment takes place via effectors and sensors. No symbols are involved in this process of interaction. A recurrent topology is required which ensures a nonlinear and nontrivial behavior. Technical details are given on the simulation of the environment, of the interactions between the artificial cognitive system(s) and the environment, and on the implementation of the simulation.
Title: A simulation system for the investigation of cognitive processes in artificial cognitive systems-Radical connectionism and computational neuroepistemology
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170445
Xiaoping Li
A two-layered, laterally connected neural network is proposed for modeling a nonorthogonal visual coding system. If the code primitives are given in advance (as in biological systems), it can be shown that the connection weights between the input and output layers are just these primitives, while the lateral connection weights are formed by their inner products. To gain insight into the detailed behavior of the network, Hebbian and anti-Hebbian rules are chosen to govern the modification of the feedforward and lateral connection weights, respectively. When the network is fed with random noise, it self-organizes under these learning rules to develop masks resembling the nonorthogonal receptive fields of simple cortical cells, in contrast to models based on principal component analysis, which seek to yield orthogonal feature detectors. At the same time it can perform optimal nonorthogonal image coding with respect to the code primitives being formed.
Title: Nonorthogonal visual image coding by a laterally inhibitory neural network
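The combination of Hebbian feedforward learning and anti-Hebbian lateral decorrelation described in this abstract can be sketched roughly as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation: the layer sizes, learning rate, single-pass lateral dynamics, and weight normalization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4

W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights (Hebbian)
V = np.zeros((n_out, n_out))                   # lateral weights (anti-Hebbian)
eta = 0.01

for _ in range(1000):
    x = rng.normal(size=n_in)          # random-noise input, as in the abstract
    f = W @ x                          # feedforward drive
    y = f + V @ f                      # one lateral pass (simplified dynamics)

    W += eta * np.outer(y, x)                      # Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep rows unit-norm
    V -= eta * np.outer(y, y)                      # anti-Hebbian decorrelation
    np.fill_diagonal(V, 0.0)                       # no self-connections
```

The anti-Hebbian term pushes the lateral weights to cancel correlations between output units, which is what allows the feedforward weights to converge to nonorthogonal primitives rather than the orthogonal components a PCA-style rule would produce.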
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170423
T. Feng, Z. Houkes, M. Korsten, L. Spreeuwers
A large number of experiments have been performed as basic research on parameter estimation from images with neural networks. To improve the estimation accuracy of the parameters and to reduce the required storage space and computation time, the network architecture, the effective learning rate and momentum, and the selection of the training set are investigated. Network performance is compared with that of the least-squares estimator. The internal representations in trained networks are presented: the input-to-hidden weight maps, or measuring models, which capture statistical features of the training images and have a clear physical and geometrical meaning, together with the internal components of the output parameters given by the outputs of the hidden neurons.
Title: A study on backpropagation networks for parameter estimation from grey-scale images
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170711
A. Hiroike, T. Omori
The authors study temporal association in a stochastic neural network model with symmetric full connections. A symmetric system is amenable to analysis because a free energy exists. The properties of the model are described analytically in terms of the critical temperature of transitions between states. The results of the analysis are consistent with Monte Carlo simulations.
Title: Temporal association in symmetric neural networks
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170585
K. F. Fong, A. P. Loh
The authors show how neural networks can be incorporated in optimal control strategies by providing a mathematical formulation and numerical algorithms in terms of general gradient descent and backpropagation. They present techniques that use neural nets in nonlinear optimal control. It is shown that D.H. Nguyen and B. Widrow's (1990) self-learning control is a special case of this technique. Control of an inverted pendulum using a neural net in nonlinear feedback is simulated, demonstrating the usefulness of the approach.
Title: Discrete-time optimal control using neural nets
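The general idea of tuning a neural-net controller by gradient descent on a discrete-time rollout cost can be illustrated with a toy sketch. Everything here is an assumption for illustration: the plant matrices, the tiny tanh controller, and the finite-difference gradients with a backtracking step (the paper itself uses backpropagation through the rollout, not finite differences).

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.1], [0.1, 1.0]])  # toy discrete-time plant (assumed)
B = np.array([0.0, 0.1])
H = 30                                   # control horizon

def rollout_cost(theta):
    """Simulate the closed loop and accumulate a quadratic cost."""
    W1 = theta[:4].reshape(2, 2)
    w2 = theta[4:6]
    x = np.array([1.0, 0.0])
    J = 0.0
    for _ in range(H):
        u = np.tanh(W1 @ x) @ w2         # tiny neural-net state feedback
        x = A @ x + B * u
        J += x @ x + 0.1 * u * u
    return J

theta = rng.normal(scale=0.5, size=6)    # controller parameters
J0 = rollout_cost(theta)
eps = 1e-5
for _ in range(100):
    g = np.zeros_like(theta)             # finite-difference gradient estimate
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (rollout_cost(theta + d) - rollout_cost(theta - d)) / (2 * eps)
    step = 0.1
    while step > 1e-8 and rollout_cost(theta - step * g) > rollout_cost(theta):
        step *= 0.5                      # backtrack until the cost decreases
    theta -= step * g
```

Replacing the finite-difference loop with backpropagation through the unrolled dynamics recovers the kind of formulation the abstract describes, of which Nguyen and Widrow's self-learning control is cited as a special case.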
Pub Date: 1991-11-18 · DOI: 10.1109/IJCNN.1991.170524
J.Y. Kim, H.S. Yang
The authors investigate a method of efficiently labeling images using the Markov random field (MRF). The MRF model is defined on the region adjacency graph and the labeling is then optimally determined using simulated annealing. The MRF model parameters are automatically estimated using an error backpropagation network. The proposed method is analyzed through experiments using real natural scene images.
Title: Markov random field based image labeling with parameter estimation by error backpropagation
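The core optimization step, simulated annealing of an MRF labeling defined on a region adjacency graph, can be sketched as follows. This is a hypothetical miniature instance: the graph, the unary data costs, the Potts smoothness prior, and the cooling schedule are assumptions standing in for the model whose parameters the paper estimates by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_labels = 12, 3
# hypothetical region adjacency graph as an edge list
edges = [(i, j) for i in range(n_regions) for j in range(i + 1, n_regions)
         if rng.random() < 0.3]
unary = rng.normal(size=(n_regions, n_labels))  # per-region data costs
beta = 1.0                                      # smoothness weight (assumed)

def energy(labels):
    data = sum(unary[i, labels[i]] for i in range(n_regions))
    smooth = beta * sum(labels[i] != labels[j] for i, j in edges)  # Potts prior
    return data + smooth

labels = rng.integers(0, n_labels, size=n_regions)  # random initial labeling
E0 = energy(labels)
T = 2.0
for _ in range(200):                       # annealing sweeps
    for i in range(n_regions):
        proposal = labels.copy()
        proposal[i] = rng.integers(0, n_labels)
        dE = energy(proposal) - energy(labels)
        if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            labels = proposal
    T *= 0.97                              # geometric cooling
```

As the temperature falls, uphill moves become rare and the labeling settles toward a low-energy configuration; the paper's contribution is estimating parameters such as the smoothness weight automatically rather than hand-tuning them as done here.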