Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.375049
S. Kartalopoulos
Communication systems are real-time, deterministic, well-defined systems that transport voice/data signals from point A to point B reliably. However, the transmitted signal is subject to significant distortion by the very harsh environment, the medium, and the system itself. Despite this, data reaches its destination crisply, or error-free. To achieve this high quality of error-free data, mechanisms that affect signal quality are addressed a priori, and countermeasures are developed so that the potential "fuzzifiers" are removed or "de-fuzzified". Here, the fuzzification-defuzzification process of the signal in real-time communication systems is addressed in the context of temporal fuzziness, or fuzziness in the time domain. Temporal fuzzy factors that affect the operation of communication systems and their signal transmission are illustrated and analyzed, and the de-fuzzification process is discussed.
Title: Temporal fuzziness in communications systems
Journal: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374763
Wu-Yuan Tsai, H. Tai, A. Reynolds
Backpropagation feedforward neural networks have been applied to pattern recognition and classification problems. However, under certain conditions the backpropagation net classifier can produce nonintuitive, nonrobust, and unreliable classification results. The backpropagation net is also slow to train and does not easily accommodate new data. To address these difficulties, an unsupervised/supervised hybrid neural net, the ART2-BP net, is proposed. The idea is to use a low vigilance parameter in an ART2 net to categorize input patterns into classes, and then employ a backpropagation net to recognize the patterns within each class. Advantages of the ART2-BP neural net include (1) improved recognition capability, (2) faster training convergence, and (3) easy addition of new data. A theoretical analysis and a well-testing model recognition example are given to illustrate these advantages.
Title: An ART2-BP neural net and its application to reservoir engineering
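The two-stage scheme the abstract describes — unsupervised categorization followed by per-class supervised recognition — can be sketched as below. This is a minimal illustration, not the authors' implementation: a cosine-similarity vigilance test stands in for the full ART2 dynamics, and a gradient-trained logistic unit stands in for each backpropagation net; all function names, the toy data, and the parameter values are assumptions.

```python
import numpy as np

def art_like_clusters(X, vigilance=0.9):
    """Vigilance-style incremental clustering (a crude stand-in for ART2):
    a sample joins the first prototype whose cosine similarity exceeds the
    vigilance parameter; otherwise it founds a new cluster."""
    protos, assign = [], []
    for x in X:
        xn = x / np.linalg.norm(x)
        for j, p in enumerate(protos):
            if xn @ p >= vigilance:
                protos[j] = (p + xn) / np.linalg.norm(p + xn)  # refine prototype
                assign.append(j)
                break
        else:
            protos.append(xn)
            assign.append(len(protos) - 1)
    return np.array(assign), np.array(protos)

def train_logistic(X, y, lr=0.5, epochs=500):
    """Gradient-descent logistic unit: the simplest possible stand-in for
    the per-cluster backpropagation nets."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(X)
    return w

def fit_art_bp(X, y, vigilance=0.9):
    assign, protos = art_like_clusters(X, vigilance)
    nets = {j: train_logistic(X[assign == j], y[assign == j])
            for j in np.unique(assign)}
    return protos, nets

def predict(protos, nets, X):
    out = []
    for x in X:
        j = int(np.argmax(protos @ (x / np.linalg.norm(x))))  # route to cluster
        out.append(int(np.r_[x, 1.0] @ nets[j] > 0))          # per-cluster net
    return np.array(out)

# two well-separated pattern groups, each linearly separable internally
A = np.array([[10., 1.], [10., -1.], [9., 2.], [9., -2.]])
B = np.array([[1., 10.], [-1., 10.], [2., 9.], [-2., 9.]])
X = np.vstack([A, B])
y = np.array([1., 0., 1., 0., 1., 0., 1., 0.])
protos, nets = fit_art_bp(X, y)
print(len(protos), (predict(protos, nets, X) == y).all())
```

Because the clusterer separates the two groups first, each logistic unit only has to solve a linearly separable subproblem — the division of labor that the abstract credits for the improved recognition and faster convergence.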
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374748
R. Kozma, M. Kitamura, M. Sakuma, Y. Yokoyama
The problem of detecting weak anomalies in temporal signals is addressed. The performance of statistical methods that evaluate the intensity of time-dependent fluctuations is compared with the results obtained by a layered artificial neural network model. The accuracy of the neural network's approximation at the end of the learning phase is estimated by analyzing the statistics of the learning data. Applying these results to actual anomaly data from a nuclear reactor showed that neural networks can identify the onset of anomalies with reasonable success, while the usual statistical methods were unable to distinguish between normal and abnormal patterns.
Title: Anomaly detection by neural network models and statistical time series analysis
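The statistical baseline the abstract mentions — evaluating the intensity of time-dependent fluctuations — might be sketched as a rolling-standard-deviation detector. The window size, the 3-sigma threshold, and the variance-shift test signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fluctuation_intensity(signal, window):
    """Rolling standard deviation as a simple measure of the intensity
    of time-dependent fluctuations."""
    out = np.empty(len(signal) - window + 1)
    for i in range(len(out)):
        out[i] = np.std(signal[i:i + window])
    return out

def detect_anomaly(signal, window=50, k=3.0):
    """Flag windows whose fluctuation intensity exceeds the baseline
    mean by k baseline standard deviations."""
    fi = fluctuation_intensity(signal, window)
    baseline = fi[: len(fi) // 2]      # assume the first half is normal
    thresh = baseline.mean() + k * baseline.std()
    return fi > thresh

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 500)
anomalous = rng.normal(0.0, 3.0, 100)  # "weak" anomaly: variance shift only
flags = detect_anomaly(np.concatenate([normal, anomalous]))
print(flags[-1])
```

A detector like this keys on second-order statistics only; the abstract's point is that a trained network can pick up anomaly onsets that such fixed statistical thresholds miss.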
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374543
J.E. Ngolediage, R.N.G. Naguib, S. Dlay
This paper describes a real-time implementable algorithm that takes advantage of a Lyapunov function, which guarantees asymptotic behaviour of the solutions to the governing differential equations. The algorithm is designed for feedforward neural networks. Unlike conventional backpropagation, it does not require a suite of derivatives to be propagated from the top layer to the bottom one; consequently, the amount of circuitry required for an analogue CMOS implementation is minimal. In addition, each unit in the network has its output fed back to itself across a delay element. Results from an HSPICE simulation of the 2.4 micron CMOS architecture are presented.
Title: A real-time implementable neural network
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374274
E. Maillard, B. Solaiman
The HLVQ network achieves a synthesis of supervised and unsupervised learning, and promising results have been reported elsewhere. Here, a dynamic map-building technique for HLVQ is introduced. During learning, neurons are created following a loose KD-tree algorithm. A criterion for detecting the network's inability to match the topology of the training set is presented; this information is localized in the input space. When the weakness criterion is met, a neuron is added to the existing map in a way that preserves the topology of the network. This new algorithm frees the network of a crucial external parameter: the size of the neuron map. Furthermore, it is shown that the network achieves its highest classification score when a constant learning rate and neighborhood size are employed.
Title: A neural network based on LVQ2 with dynamic building of the map
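The general mechanism — grow the prototype map when a locally detected weakness criterion fires — can be illustrated with a much simpler growing-LVQ sketch. This is not the paper's HLVQ/KD-tree algorithm: the LVQ1-style updates, the running-error-rate weakness criterion, and the insert-at-the-offending-sample rule are all stand-in assumptions.

```python
import numpy as np

def grow_lvq(X, y, lr=0.1, err_thresh=0.3, epochs=20, seed=0):
    """Growing LVQ sketch: LVQ1 attract/repel updates, plus a crude
    'weakness' criterion -- when a prototype's running error rate exceeds
    err_thresh, a new prototype labeled with the offending sample's class
    is inserted at that sample, so map size need not be fixed in advance."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    P = np.array([X[y == c].mean(axis=0) for c in classes], dtype=float)
    L = classes.copy()
    errs = np.zeros(len(P))
    seen = np.ones(len(P))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(P - X[i], axis=1)))  # winner
            seen[j] += 1
            if L[j] == y[i]:
                P[j] += lr * (X[i] - P[j])                        # attract
            else:
                P[j] -= lr * (X[i] - P[j])                        # repel
                errs[j] += 1
                if errs[j] / seen[j] > err_thresh:                # weakness
                    P = np.vstack([P, X[i]])
                    L = np.append(L, y[i])
                    errs = np.append(errs, 0.0)
                    seen = np.append(seen, 1.0)
                    errs[j] = 0.0
                    seen[j] = 1.0
    return P, L

def lvq_predict(P, L, X):
    return L[np.argmin(np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2),
                       axis=0)]

# XOR-style layout: one prototype per class cannot work, growth is required
rng = np.random.default_rng(1)
corners = [((0, 0), 0), ((10, 10), 0), ((10, 0), 1), ((0, 10), 1)]
Xs, ys = [], []
for (cx, cy), lab in corners:
    Xs.append(rng.normal((cx, cy), 0.5, size=(25, 2)))
    ys += [lab] * 25
X = np.vstack(Xs)
y = np.array(ys)
P, L = grow_lvq(X, y)
print(len(P), (lvq_predict(P, L, X) == y).mean())
```

On this data the two initial class-mean prototypes coincide near the center, the error rate spikes, and inserted prototypes settle onto the four clusters — the same effect the abstract attributes to its topology-preserving neuron insertion.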
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374485
G. Xing
The concept of entropy has had considerable influence on the progress of science and should have important consequences for neural network development as well. Information processing is closely related to entropy, and the analysis of entropy in the pyramid neural network is an information-theoretic approach to neural network research.
Title: Entropy calculations on pyramid neural network
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374649
R. Luo, H. Potlapalli
Mobile robots rely on traffic signs for navigation in outdoor environments, and recognizing these signs using vision is a unique problem. Its important aspects are that object parameters such as scale and orientation change constantly with the motion of the camera, and that new signs may appear at any time; feature-extraction algorithms cannot meet these flexibility constraints, whereas neural networks can easily be programmed for the task. A new learning strategy for self-organizing neural networks is presented: by iteratively subtracting the projection of the winning neuron onto the null space of the input vector, the neuron is progressively made more representative of the input. The convergence properties of the new neural network model are studied, comparison results with standard Kohonen learning are presented, and the performance of the network in training on and recognizing traffic signs is studied.
Title: Landmark recognition using projection learning for mobile robot navigation
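One reading of the update rule in the abstract: the component of the winning weight vector lying in the null space of the input (i.e., orthogonal to it) is what makes the neuron unrepresentative, so a fraction of it is subtracted each step. The learning rate and iteration count below are assumptions for illustration.

```python
import numpy as np

def projection_update(w, x, eta=0.5):
    """Subtract a fraction eta of the winning neuron's component in the
    null space of the input x.  What remains converges to the part of w
    collinear with x, making the neuron more representative of the input."""
    w_parallel = (w @ x) / (x @ x) * x   # component of w along x
    w_null = w - w_parallel              # projection of w onto null space of x
    return w - eta * w_null

rng = np.random.default_rng(0)
x = rng.normal(size=5)
w = rng.normal(size=5)
for _ in range(50):
    w = projection_update(w, x)

# after repeated updates, w is (nearly) collinear with x
cos = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
print(abs(cos))
```

Each update shrinks the orthogonal component by a factor (1 − eta) while leaving the collinear component untouched, so the cosine between w and x tends to ±1 geometrically.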
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374487
S. Vassiliadis, K. Bertels, G. Pechanek
In this paper we investigate reducing the size of depth-2 feedforward neural networks that perform binary addition and related functions. We suggest that 2-1 binary n-bit addition and some related functions can be computed by a depth-2 network of size O(n) with maximum fan-in of 2n+1. Furthermore, we show that, if both input polarities are available, comparison can be computed by a depth-1 network of size O(1), also with maximum fan-in of 2n+1.
Title: O(n) depth-2 binary addition with feedforward neural nets
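The depth-1 comparison claim can be made concrete with a single threshold unit: weight a's bits by +2^i and b's bits by −2^i (fan-in 2n) and threshold the sum at zero. This is a sketch of the underlying idea, not the paper's exact construction.

```python
def to_bits(x, n):
    """n-bit binary expansion, least significant bit first."""
    return [(x >> i) & 1 for i in range(n)]

def threshold_compare(a_bits, b_bits):
    """One threshold gate computing [a >= b]: the weighted sum equals
    a - b as integers, so its sign decides the comparison."""
    s = sum(2 ** i * (a - b) for i, (a, b) in enumerate(zip(a_bits, b_bits)))
    return int(s >= 0)

print(threshold_compare(to_bits(9, 4), to_bits(5, 4)))  # 9 >= 5, prints 1
```

Because the weighted sum is exactly the integer a − b, a single unit of fan-in 2n suffices, matching the O(1)-size depth-1 bound stated in the abstract.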
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374987
Kunsan Wang
Neurons in biological systems usually exhibit distinctive response selectivity to certain features of the stimulus. As these neurons are functionally and spatially segregated, one may interpret the computational principles of neural systems as a mechanism of feature mapping, which represents information in a topographic fashion. In this article, the author summarizes physiological findings on neural selectivities in the primary auditory cortex and, based on these, proposes a mathematical framework for mapping the acoustic features conveyed in the power spectrum. The author further demonstrates how this model may be employed to explain a series of psychoacoustic experiments designed to measure the sensitivity of the human auditory system to spectral shape, and hypothesizes how the measured thresholds may be related to the model parameters.
Title: Neural computations as multidimensional feature mapping for acoustic information representation
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374368
J.L. Johnson
Group linking effects in a pulse-coupled neural network are shown to produce multiple time scales in the image time signature.
Title: Time signatures of images