Fuzzy-algebra uncertainty analysis for abnormal-environment safety assessment
J. A. Cooper
Pub Date: 1994-07-01, DOI: 10.1109/ICNN.1994.374551
Many safety (risk) analyses depend on uncertain inputs and on mathematical models chosen from various alternatives, yet give fixed results (implying no uncertainty). Conventional uncertainty analyses help, but are also based on assumptions and models whose accuracy may be difficult to assure. Some models and assumptions that seem reasonable on cursory examination can be misleading. As a result, quantitative assessments, even those accompanied by uncertainty measures, can give unwarranted impressions of accuracy. Since analysis results can be a major contributor to a safety-measure decision process, risk management depends on relating uncertainty to only the information available. The uncertainties due to abnormal environments are even more challenging than those in normal-environment safety assessments, and therefore require an even more cautious approach. A fuzzy-algebra analysis is proposed in this paper that has the potential to appropriately reflect the information available and portray uncertainties well, especially for abnormal environments.
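The abstract does not spell out the fuzzy-algebra operations themselves. As a rough illustration of how such an analysis can propagate uncertainty, the sketch below combines two triangular fuzzy numbers with interval (alpha-cut) arithmetic; the specific likelihood values and the product combination are hypothetical, not taken from the paper.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_combine(tri_x, tri_y, op, alphas=np.linspace(0.0, 1.0, 11)):
    """Propagate two triangular fuzzy numbers through a binary operation
    using interval arithmetic on each alpha cut."""
    cuts = []
    for alpha in alphas:
        x_lo, x_hi = alpha_cut(tri_x, alpha)
        y_lo, y_hi = alpha_cut(tri_y, alpha)
        candidates = [op(x_lo, y_lo), op(x_lo, y_hi), op(x_hi, y_lo), op(x_hi, y_hi)]
        cuts.append((alpha, min(candidates), max(candidates)))
    return cuts

# Hypothetical inputs: two uncertain event likelihoods expressed as triangular fuzzy numbers
# (low, most likely, high); these values are assumptions for illustration only.
p_initiator = (1e-4, 5e-4, 2e-3)
p_barrier_fail = (0.01, 0.05, 0.20)

# Combined likelihood (product), reported as nested intervals rather than a single point value.
for alpha, lo, hi in fuzzy_combine(p_initiator, p_barrier_fail, lambda x, y: x * y):
    print(f"alpha={alpha:.1f}: [{lo:.2e}, {hi:.2e}]")
```

The output is a family of intervals indexed by membership level, which conveys how little is actually known about the combined likelihood instead of implying a single precise number.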
{"title":"Fuzzy-algebra uncertainty analysis for abnormal-environment safety assessment","authors":"J. A. Cooper","doi":"10.1109/ICNN.1994.374551","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374551","url":null,"abstract":"Many safety (risk) analyses depend on uncertain inputs and on mathematical models chosen from various alternatives, but give fixed results (implying no uncertainty). Conventional uncertainty analyses help, but are also based on assumptions and models, the accuracy of which may be difficult to assure. Some of the models and assumptions that on cursory examination seem reasonable can be misleading. As a result, quantitative assessments, even those accompanied by uncertainty measures, can give unwarranted impressions of accuracy. Since analysis results can be a major contributor to a safety-measure decision process, risk management depends on relating uncertainty to only the information available. The uncertainties due to abnormal environments are even more challenging than those in normal-environment safety assessments; and therefore require an even more cautious approach. A fuzzy algebra analysis is proposed in this paper that has the potential to appropriately reflect the information available and portray uncertainties well, especially for abnormal environments.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124661918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A neural network approach to broadcast scheduling in multi-hop radio networks
Gangsheng Wang, N. Ansari
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.375035
The problem of scheduling interference-free transmissions with maximum throughput in a multi-hop radio network is NP-complete. The computational complexity becomes intractable as the network size increases. In this paper, the scheduling is formulated as a combinatorial optimization problem. An efficient neural network approach, namely, mean field annealing, is applied to obtain optimal transmission schedules. Numerical examples show that this method is capable of finding an interference-free schedule with (almost) optimal throughput.
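The paper's energy function and annealing schedule are not reproduced in the abstract; the sketch below is a generic mean-field-annealing formulation of slot assignment, in which soft assignment probabilities are relaxed under a gradually lowered temperature while interfering nodes are discouraged from sharing a slot. The conflict matrix, reward/penalty weights, and cooling schedule are illustrative assumptions.

```python
import numpy as np

def mean_field_annealing(conflict, n_slots, t_init=5.0, t_min=0.05, cool=0.9,
                         sweeps=50, reward=1.0, penalty=4.0, seed=0):
    """Soft-assign each node to one of n_slots time slots.

    conflict[i, j] = 1 if nodes i and j interfere (cannot share a slot).
    v[i, t] is the mean-field probability that node i transmits in slot t.
    The (hypothetical) energy trades off throughput (every node should get a slot)
    against interference (conflicting nodes sharing a slot).
    """
    rng = np.random.default_rng(seed)
    n = conflict.shape[0]
    v = rng.uniform(0.45, 0.55, size=(n, n_slots))
    v /= v.sum(axis=1, keepdims=True)

    temp = t_init
    while temp > t_min:
        for _ in range(sweeps):
            i = rng.integers(n)
            # Local field: reward for occupying a slot, minus a penalty proportional
            # to how strongly interfering neighbours occupy the same slot.
            field = reward - penalty * conflict[i] @ v
            v[i] = np.exp(field / temp)
            v[i] /= v[i].sum()
        temp *= cool
    return v.argmax(axis=1)   # hard schedule read off after annealing

# Tiny hypothetical 5-node network with a ring-like interference pattern.
C = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])
print(mean_field_annealing(C, n_slots=3))
```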
{"title":"A neural network approach to broadcast scheduling in multi-hop radio networks","authors":"Gangsheng Wang, N. Ansari","doi":"10.1109/ICNN.1994.375035","DOIUrl":"https://doi.org/10.1109/ICNN.1994.375035","url":null,"abstract":"The problem of scheduling interference-free transmissions with maximum throughput in a multi-hop radio network is NP-complete. The computational complexity becomes intractable as the network size increases. In this paper, the scheduling is formulated as a combinatorial optimization problem. An efficient neural network approach, namely, mean field annealing, is applied to obtain optimal transmission schedules. Numerical examples show that this method is capable of finding an interference-free schedule with (almost) optimal throughput.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114980802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A neural network architecture for generalized category perception
B. B. Miller, F. Merat
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374715
The recognition of objects given a complete or partial set of features is inherent in human intelligence. The fields of pattern recognition and artificial intelligence, among others, have addressed this topic with a variety of models which lack consistency and generality. Thus, it is the goal of this paper to set forth a generalized model for object recognition (classification). System models utilizing neural networks have been suggested for category perception. The proposed system is based on the principles of probability. We refer to this architecture as the generalized category perception model.
{"title":"A neural network architecture for generalized category perception","authors":"B. B. Miller, F. Merat","doi":"10.1109/ICNN.1994.374715","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374715","url":null,"abstract":"The recognition of objects given a complete or partial set of features is inherent in human intelligence. The fields of pattern recognition and artificial intelligence, among others, have addressed this topic with a variety of models which lack consistency and generality. Thus, it is the goal of this paper to set forth a generalized model for object recognition (classification). System models utilizing neural networks have been suggested for category perception. The proposed system is based on the principles of probability. We refer to this architecture as the generalized category perception model.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125244408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Acoustical condition monitoring of a mechanical gearbox using artificial neural networks
W. Lucking, G. Darnell, E. D. Chesmore
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374766
The work presented here forms part of a study into the application of self-learning networks to the complex field of machine condition monitoring. There are already several methods by which machines can be automatically monitored, but the development of a simplified, nonintrusive "intelligent" system would be advantageous. Some work has been undertaken on the application of time encoded speech (TES) to automatic speech recognition using neural networks. It seemed feasible to try a similar technique to classify the acoustic emissions of a mechanical object. Initial experimentation was carried out using the speech system on a diesel engine. However, the implementation described here uses a simpler form of data presentation than that employed previously. It consists of a simple conversion of microphone TES acoustic data into a matrix of frequency of code occurrence, which can be applied directly to an artificial neural network (ANN).
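As a rough idea of the data path described above, the sketch below converts a waveform into crude TES-like codes (epoch duration between zero crossings plus a count of local extrema) and accumulates them into a frequency-of-occurrence matrix that can be fed to an ANN. The code alphabet and the test signal are simplified assumptions, not the coder used in the paper.

```python
import numpy as np

def tes_codes(signal, max_duration=20):
    """Very simplified time-encoded-signal coder: split the waveform into epochs
    between zero crossings and describe each epoch by (duration, n_extrema).
    The real TES alphabet is more refined; this coder is only illustrative."""
    signs = np.signbit(signal).astype(np.int8)
    crossings = np.where(np.diff(signs) != 0)[0]
    codes = []
    for start, end in zip(crossings[:-1], crossings[1:]):
        epoch = signal[start:end + 1]
        duration = min(len(epoch), max_duration)
        slope_signs = np.signbit(np.diff(epoch)).astype(np.int8)
        n_extrema = int(np.count_nonzero(np.diff(slope_signs)))  # local minima/maxima
        codes.append((duration, n_extrema))
    return codes

def code_histogram(codes, max_duration=20, max_extrema=5):
    """Frequency-of-occurrence matrix over (duration, extrema) codes, flattened so it
    can be applied directly to a neural network input layer."""
    hist = np.zeros((max_duration + 1, max_extrema + 1))
    for duration, n_extrema in codes:
        hist[duration, min(n_extrema, max_extrema)] += 1
    total = hist.sum()
    return (hist / total if total else hist).ravel()

# Hypothetical acoustic frame: a noisy two-tone signal standing in for gearbox noise.
t = np.linspace(0.0, 0.05, 2000)
frame = (np.sin(2 * np.pi * 180 * t) + 0.4 * np.sin(2 * np.pi * 730 * t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))
features = code_histogram(tes_codes(frame))
print(features.shape)   # fixed-length vector for the ANN input layer
```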
{"title":"Acoustical condition monitoring of a mechanical gearbox using artificial neural networks","authors":"W. Lucking, G. Darnell, E. D. Chesmore","doi":"10.1109/ICNN.1994.374766","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374766","url":null,"abstract":"The work presented here forms part of a study into the application of self-learning networks to the complex field of machine condition monitoring. There are already several methods by which machines can be automatically monitored, but the development of a simplified nonintrusive \"intelligent\" system would be advantageous. Some work has been undertaken on the application of time encoded speech (TES) to automatic speech recognition using neural networks. It seemed feasible to try a similar technique to classify the acoustic emissions of a mechanical object. Initial experimentation was carried out using the speech system on a diesel engine. However the implementation described here involves a simplified form of data application to that employed previously. It consists of a simple conversion of microphone TES acoustic data into a matrix of frequency of code occurrence which can be directly applied to an artificial neural network (ANN).<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117112693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural network pattern classifications of transient stability and loss of excitation for synchronous generators
A. Sharaf, T. Lie
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374695
The paper presents a novel artificial neural network (ANN) based global online fault detection, pattern classification, and relaying scheme for synchronous generators in interconnected electric utility networks. The input discriminant vector comprises the dominant FFT frequency spectra of eighteen input variables forming the discriminant diagnostic hyperplane. The online ANN-based relaying scheme classifies fault existence, fault type as either transient stability or loss of excitation, the allowable critical clearing time, and loss-of-excitation type as either an open-circuit or short-circuit field condition. The proposed FFT dominant-frequency-based hyperplane diagnostic technique can be easily extended to multimachine interconnected AC systems.
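The abstract does not list the eighteen variables or the spectral bands used; the sketch below shows one plausible way to build such a discriminant vector, taking the strongest FFT components of each monitored signal and concatenating their frequencies and normalized magnitudes. The variable count, sampling rate, and number of peaks are assumptions.

```python
import numpy as np

def dominant_fft_features(signals, fs, n_peaks=3):
    """Build a discriminant vector from the dominant FFT components of each signal.

    signals: array of shape (n_vars, n_samples), one row per monitored quantity
             (e.g. stator currents, field voltage, rotor angle); which eighteen
             variables the paper actually uses is not specified here.
    Returns the (frequency, normalized magnitude) pairs of the n_peaks strongest
    bins of every signal, concatenated into one feature vector.
    """
    features = []
    for x in signals:
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        top = np.argsort(spectrum)[-n_peaks:][::-1]        # strongest bins first
        mag = spectrum[top] / (spectrum.max() + 1e-12)      # normalize per signal
        features.extend(np.column_stack((freqs[top], mag)).ravel())
    return np.asarray(features)

# Hypothetical example: 18 simulated variables sampled at 1 kHz over one post-fault window.
rng = np.random.default_rng(1)
window = rng.standard_normal((18, 1024))
vec = dominant_fft_features(window, fs=1000.0)
print(vec.shape)   # fixed-length input vector for the relaying ANN
```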
{"title":"Neural network pattern classifications of transient stability and loss of excitation for synchronous generators","authors":"A. Sharaf, T. Lie","doi":"10.1109/ICNN.1994.374695","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374695","url":null,"abstract":"The paper presents a novel AI-ANN neural network global online fault detection, pattern classification, and relaying detection scheme for synchronous generators in interconnected electric utility networks. The input discriminant vector comprises the dominant FFT frequency spectra of eighteen input variables forming the discriminant diagnostic hyperplane. The online ANN based relaying scheme classifies fault existence, fault type as either transient stability or loss of excitation, the allowable critical clearing time, and loss of excitation type as either open circuit or short circuit filed condition. The proposed FFT dominant frequency-based hyperplane diagnostic technique can be easily extended to multimachine electric interconnected AC systems.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117144468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An incremental network construction algorithm for approximating discontinuous functions
Hyukjoon Lee, K. Mehrotra, C. Mohan, S. Ranka
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374556
Traditional neural network training techniques do not work well on problems with many discontinuities, such as those that arise in multicomputer communication cost modeling. We develop a new algorithm to solve this problem. This algorithm incrementally adds modules to the network, successively expanding the 'window' in the data space where the current module works well. The need for a new module is automatically recognized by the system. This algorithm performs very well on problems with many discontinuities, and requires fewer computations than traditional backpropagation.
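The abstract gives the windowing idea but not the exact module or the criterion for adding one; the sketch below mimics only that idea on a one-dimensional target, growing a window while a simple local fit (standing in for a small trained module) remains accurate and opening a new module when the error jumps, as it does at a discontinuity. The error tolerance, minimum window size, and use of a linear fit are assumptions, not the paper's algorithm.

```python
import numpy as np

def incremental_modules(x, y, err_tol=0.05, min_pts=5):
    """Greedy sketch of window-growing module construction for a 1-D target.

    Starting from the leftmost points, the current module (a local linear fit here)
    keeps absorbing points while its fit error stays below err_tol; when adding the
    next point would break the fit (typically at a discontinuity), a new module is
    opened.  The real algorithm trains neural modules and recognizes the need for a
    new one automatically; this only mimics the windowing behaviour.
    """
    order = np.argsort(x)
    x, y = x[order], y[order]
    modules = []
    start = 0
    while start < len(x) - 1:
        end = min(start + min_pts, len(x))
        while end < len(x):
            coeffs = np.polyfit(x[start:end + 1], y[start:end + 1], deg=1)
            err = np.max(np.abs(np.polyval(coeffs, x[start:end + 1]) - y[start:end + 1]))
            if err > err_tol:
                break               # next point breaks the fit: close this window
            end += 1
        coeffs = np.polyfit(x[start:end], y[start:end], deg=1)
        modules.append((x[start], x[end - 1], coeffs))
        start = end
    return modules

# Hypothetical discontinuous target: a ramp with a step at x = 1.
x = np.linspace(0.0, 2.0, 200)
y = np.where(x < 1.0, 0.2 * x, 1.5 + 0.2 * x)
for lo, hi, c in incremental_modules(x, y):
    print(f"module on [{lo:.2f}, {hi:.2f}]  slope={c[0]:.2f}")
```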
{"title":"An incremental network construction algorithm for approximating discontinuous functions","authors":"Hyukjoon Lee, K. Mehrotra, C. Mohan, S. Ranka","doi":"10.1109/ICNN.1994.374556","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374556","url":null,"abstract":"Traditional neural network training techniques do not work well on problems with many discontinuities, such as those that arise in multicomputer communication cost modeling. We develop a new algorithm to solve this problem. This algorithm incrementally adds modules to the network, successively expanding the 'window' in the data space where the current module works well. The need for a new module is automatically recognized by the system. This algorithm performs very well on problems with many discontinuities, and requires fewer computations than traditional backpropagation.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121036710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network connectivity of neurons-feature detectors
Boris A. Galitsky
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374571
Studies the logical modelling of neural networks. The principles of feature representation, and the mechanisms by which features interact in subsequent layers as the feature space is formed, have not previously been elucidated. Approaches connected with the syntactic theory of pattern recognition are suggested, in the sense that symbolic manipulations are realized in our model of the network's actions. The layer of neuron-detectors is the first layer in the information-processing pathway, where the transformation from quantitative to qualitative form, from the field of stimulus intensity to the layer-wide distribution of neuron responses, is accomplished. Each response encodes the presence of a detected stimulus feature. In other words, if the receptive fields of the primary feature detectors correspond to the physical field of the perceived quantity, encoded by a membrane potential or spike, then the receptive fields of subsequent layers represent the mutual locations of the features that emerged at the previous layers. This paper addresses the question of how more complex features could be formed by the neurons of subsequent layers from the primary features of the cell-detectors. The paper is based on ultraproduct theory and the formalism of algebra and mathematical logic. The network investigated accomplishes transformations according to an analogue-symbolic scheme, realizing a specific grammar that operates on such symbols in accordance with the physical laws of the system described. In general, the symbolic representation of a signal cannot be reduced to its quantization.
{"title":"Network connectivity of neurons-feature detectors","authors":"Boris A. Galitsky","doi":"10.1109/ICNN.1994.374571","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374571","url":null,"abstract":"Studies the logical modelling of neural networks. The principles of feature representation and the mechanisms of the features' interaction in the following layers under the feature space formation have not previously been elucidated. Approaches connected with the syntactic theory of pattern recognition are suggested, in the sense that the symbolic manipulations are realized in our model of the network's actions. The layer of neuron-detectors is the first layer in the information processing pathway, where the transformation from quantitative to qualitative form, from the field of stimulus intensity to the layer distribution of neuron responses is accomplished. Each response encodes the presence of a revealed stimulus feature. In other words, if the receptive field of the primary feature detectors correspond to the physical field of the percepting value, encoded by a membrane potential or spike, then the receptive fields of the following layers represent the mutual location emerged at the previous layers. This paper addresses the question of how more complex features could be formed by the neurons of the following layers, coming from the primary features of the cell-detectors. The paper is based on the ultraproduct theory, the formalism of algebra and mathematical logic. The neuron network investigated accomplishes transformations according to the analogue-symbolic scheme, realizing a specific syntax of grammar, operating with such symbols, by the physical laws of the system described. The symbol representation of a signal cannot be reduced to its quantization in the general situation.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127120215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural networks modelling of biochemical reactions
B. Solaiman, D. Picart
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374773
In this study, the use of neural networks (NN) in modelling biochemical reactions is shown. The metabolic chain describing the synthesis of purine bases is simulated. The results obtained are identical to those already known. The use of neural networks permits the development of more accurate models of enzymatic reactions. Thus, simulation tests concerning the use of new drugs can be performed rapidly and with good accuracy.
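The abstract does not describe the network or the kinetic equations of the purine-synthesis chain. Purely as an illustration of the approach, the sketch below trains a small feedforward network on simulated Michaelis-Menten rate data for a single hypothetical enzymatic step; the rate law, parameter values, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulated data for one hypothetical enzymatic step following Michaelis-Menten
# kinetics: v = Vmax * S / (Km + S).  The paper's purine-synthesis chain couples
# many such steps; this single-reaction example only illustrates the approach.
rng = np.random.default_rng(0)
Vmax, Km = 1.0, 0.5
S = rng.uniform(0.0, 5.0, size=(500, 1))             # substrate concentration
v = Vmax * S[:, 0] / (Km + S[:, 0]) + 0.01 * rng.standard_normal(500)

# A small feedforward network learns the rate law from the simulated measurements.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(S, v)

# Once trained, the network can be queried like the kinetic model itself,
# e.g. to see how a change in substrate level shifts the predicted rate.
print(net.predict([[0.25], [1.0], [4.0]]))
```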
{"title":"Neural networks modelling of biochemical reactions","authors":"B. Solaiman, D. Picart","doi":"10.1109/ICNN.1994.374773","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374773","url":null,"abstract":"In this study, the use of neural networks (NN) in modelling biochemical reactions is shown. The metabolic chain describing the synthesis of puric bases is simulated. Results obtained are identical to those already known. The use of neural networks permits the development of more accurate models of enzymatic reactions. Thus, simulation tests concerning the use of new drugs can be performed rapidly and with good accuracy.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127162320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A computational model for the associative long-term potentiation
F. Alicata, M. Migliore, G. Ayala
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.374570
Long-term potentiation (LTP) of the excitatory postsynaptic potentials (EPSPs) is the modification of synaptic strength produced by a train of conditioning stimuli. The associative nature of LTP has been observed experimentally by delivering conditioning stimuli to two different pathways, a strong one and a weak one, converging on the same dendritic area of a given neuron. However, there is not yet sufficient information to have a clear model of the biophysical processes involved. We present a computational model, consistent with experimental data, that uses the retrograde-messenger hypothesis. Using this model, it is possible to propose a reasonable interpretation of the experiments and of the possible roles of retrograde messengers in associative LTP.
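The abstract does not give the model's equations; the toy sketch below only illustrates the associative idea under the retrograde-messenger hypothesis: a weak-pathway synapse potentiates when its presynaptic activity coincides with strong postsynaptic depolarization, gated by a retrograde-messenger variable. All dynamics, thresholds, and constants are invented for illustration and are not the paper's biophysical model.

```python
import numpy as np

def simulate_associative_ltp(steps=200, dt=1.0, theta_post=0.8,
                             tau_m=20.0, alpha=0.02):
    """Toy associative-LTP simulation (illustrative only, not the paper's model).

    w_weak grows only when presynaptic activity on the weak pathway coincides with
    a retrograde messenger m, which is released when postsynaptic depolarization
    (dominated here by the strong pathway) exceeds theta_post.
    """
    rng = np.random.default_rng(0)
    w_weak, m = 0.1, 0.0
    history = []
    for t in range(steps):
        paired = 50 <= t < 150                            # conditioning (pairing) window
        pre_strong = 1.0 if paired else 0.0               # strong-pathway tetanus
        pre_weak = 1.0 if (paired and t % 2 == 0) else float(rng.random() < 0.05)
        post = pre_strong + 0.3 * w_weak * pre_weak       # postsynaptic depolarization
        release = 1.0 if post > theta_post else 0.0       # retrograde messenger release
        m += dt * (release * (1.0 - m) - m / tau_m)       # messenger dynamics
        w_weak += alpha * pre_weak * m                    # potentiate only on coincidence
        history.append(w_weak)
    return np.array(history)

w = simulate_associative_ltp()
print(f"weak-pathway weight before pairing: {w[40]:.2f}, after pairing: {w[-1]:.2f}")
```

The weak pathway alone never triggers messenger release, so its weight stays flat until the paired conditioning window, which is the associative property the experiments describe.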
{"title":"A computational model for the associative long-term potentiation","authors":"F. Alicata, M. Migliore, G. Ayala","doi":"10.1109/ICNN.1994.374570","DOIUrl":"https://doi.org/10.1109/ICNN.1994.374570","url":null,"abstract":"Long-term potentiation (LTP) of the excitatory postsynaptic potentials (EPSPs) is the modification of synaptic strength produced by a train of conditioning stimuli. The associative nature of LTP has been observed experimentally delivering conditioning stimuli to two different pathways, a strong one and a weak one, converging on the same dendritic area of a given neuron. However, there is not yet sufficient information to have a clear model of the biophysical processes involved. We present a computational model, consistent with experimental data, that uses the retrograde messengers hypothesis. Using this model, it is possible to propose a reasonable interpretation of experiments and the possible roles of retrograde messengers in associative LTP.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127451558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low power analog chips for the computation of the maximal principal component
F. Salam, S. Vedula, G. Erten
Pub Date: 1994-06-27, DOI: 10.1109/ICNN.1994.375042
Test results of two prototype circuit implementations that compute the maximal principal component are described. The implementations are designed to be compact and to operate in the subthreshold regime for low power consumption. The prototypes are direct realizations of nonlinear self-learning circuit models which we have developed.
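The abstract does not state the learning rule the circuits realize; a standard self-learning rule for extracting the maximal principal component is Oja's rule, and the discrete-time sketch below (assuming that rule) shows the computation such a chip would perform on zero-mean data.

```python
import numpy as np

def oja_principal_component(X, lr=0.01, epochs=50, seed=0):
    """Extract the maximal principal component of zero-mean data X (n_samples x n_dims)
    with Oja's self-normalizing Hebbian rule: dw = lr * y * (x - y * w)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                     # neuron output
            w += lr * y * (x - y * w)     # Hebbian growth with built-in normalization
    return w / np.linalg.norm(w)

# Hypothetical correlated 2-D data; the leading eigenvector of its covariance is the target.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2)) @ np.array([[2.0, 0.0], [0.8, 0.4]])
X -= X.mean(axis=0)

w_oja = oja_principal_component(X)
w_true = np.linalg.eigh(np.cov(X.T))[1][:, -1]    # eigenvector with the largest eigenvalue
print(np.abs(w_oja @ w_true))                      # close to 1.0 up to sign
```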
{"title":"Low power analog chips for the computation of the maximal principal component","authors":"F. Salam, S. Vedula, G. Erten","doi":"10.1109/ICNN.1994.375042","DOIUrl":"https://doi.org/10.1109/ICNN.1994.375042","url":null,"abstract":"Test results of two prototype circuit implementations that compute the maximal principal component are described. The implementations are designed to be compact and operate in the subthreshold regime for low power consumption. The prototypes use direct realization of a nonlinear self-learning circuit models which we have developed.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127504731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}