Intracellular mechanisms in neuronal learning: adaptive models
J. Dayhoff, S. Hameroff, R. Lahoz-Beltra, C. Swenberg
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287218
The cytoskeletal intraneuronal structure and some candidate mechanisms for signaling within nerve cells are described. Models were developed for the interaction of the cytoskeleton with cell membranes and synapses, including an internal signaling model that renders back-error propagation biologically plausible. Orientation-selective units observed in the primate motor cortex may be organized by such internal signaling mechanisms. The impact on sensorimotor systems and learning is discussed. It is concluded that the cytoskeleton's anatomical presence suggests that it plays a potentially key role in neuronal learning. The cytoskeleton could participate in synaptic processes by supporting the synapse and possibly by sending intracellular signals as well. Paradigms for adaptational mechanisms and information processing can be modeled utilizing the cytoskeleton and cytoskeletal signals.
Helicopter fault detection and classification with neural networks
R.M. Kuczewski, D.R. Eames
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.226865
The application of neural networks to helicopter drive train fault detection and classification is discussed. A practical approach to the problem is outlined, including preprocessing and network design issues. Two different neural networks are designed, constructed, and demonstrated. The results indicate that a low-resolution fast Fourier transform (FFT) may provide a sufficiently rich feature set for fault detection and classification if combined with a properly structured and controlled neural network. Future directions for this work are discussed, including more data, a longer time window, channel synchronization to the pulse, and additional layers of cross-checking class neurons.
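The low-resolution FFT feature set described above can be sketched as follows; the bin count, the simulated vibration signals, and the bin-averaging scheme are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fft_features(signal, n_bins=32):
    """Reduce a vibration trace to a low-resolution magnitude spectrum
    by averaging adjacent FFT bins down to n_bins coarse features."""
    spectrum = np.abs(np.fft.rfft(signal))
    edges = np.linspace(0, len(spectrum), n_bins + 1, dtype=int)
    return np.array([spectrum[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

# Simulated drive-train vibration: a base tone plus a fault harmonic.
t = np.linspace(0, 1, 1024, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 150 * t)
print(fft_features(healthy).shape)  # (32,)
```

The coarse feature vector, rather than the full spectrum, would then be the input to the classifying network.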
Joint optimization of classifier and feature space in speech recognition
G. Kuhn
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227235
The author presents a feedforward network which classifies the spoken letter names 'b', 'd', 'e', and 'v' with 88.5% accuracy. For many poorly discriminated training examples, the outputs of this network are unstable or sensitive to perturbations of the values of the input features. This residual sensitivity is exploited by inserting into the network a new first hidden layer with localized receptive fields. The new layer gives the network a few additional degrees of freedom with which to optimize the input feature space for the desired classification. The benefit of further, joint optimization of the classifier and the input features was suggested in an experiment in which recognition accuracy was raised to 89.6%.
Automatic extraction of strokes by quadratic neural nets
M. Alder, Y. Attikiouzel
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287153
The authors present a preliminary exploration of some ideas from syntactic pattern recognition theory and some insights of D.A. Marr (1970). The use of quadratic neural nets for the automatic extraction of strokes is examined. The concrete problem of optical character recognition (OCR) of handwritten characters is considered. That human OCR of cursive script entails both upwriting and downwriting into strokes and presumably other structures is eminently plausible, as an examination of the differences between human and machine OCR makes clear. That this is accomplished by arrays of neurons in the central nervous system is indisputable.
Speech recognition using dynamic neural networks
N. M. Botros, S. Premnath
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227230
The authors present an algorithm for isolated-word recognition that takes into consideration the duration variability of the different utterances of the same word. The algorithm is based on extracting acoustical features from the speech signal and using them as the input to a sequence of multilayer perceptron neural networks. The networks were implemented as predictors for the speech samples for a certain duration of time. The networks were trained by a combination of the back-propagation and the dynamic time warping (DTW) techniques. The DTW technique was implemented to normalize the duration variability. The networks were trained to recognize the correct words and to reject the wrong words. The training set consisted of ten words, each uttered seven times by three different speakers. The test set consisted of three utterances of each of the ten words. The results show that all these words could be recognized.
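The duration-normalization component can be illustrated in isolation. The standard DTW distance recursion below is a sketch; it does not reproduce the paper's coupling of DTW with back-propagation training, and the toy sequences are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences:
    the minimum cumulative alignment cost over monotone warping paths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A slow and a fast utterance of the same contour align at zero cost,
# while a different contour does not: duration variability is factored out.
slow = np.repeat([1.0, 2.0, 3.0], 4)
fast = np.array([1.0, 2.0, 3.0])
print(dtw_distance(slow, fast))        # 0.0
print(dtw_distance(slow, fast[::-1]))  # > 0
```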
Method of deciding ANNs parameters for pattern recognition
S. Watanabe, N. Iijima, M. Sone, H. Mitsui, Y. Yoshida
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227295
A method for tuning artificial neural network (ANN) parameters for pattern recognition is described. A pattern recognition experiment carried out for phoneme recognition of English pure vowels in ANNs is presented. The significant parameters, those that seriously affect the recognition rate, are identified. A tuning method is given to determine the influence of these parameters on the recognition rate. The tuning method is independent of the recognition rate.
Plastic network for predicting the Mackey-Glass time series
W. Hsu, M. F. Tenorio
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.226866
A novel plastic network is introduced as a tool for predicting chaotic time series. When the goal is prediction accuracy for chaotic time series, local-in-time and local-in-state-space plastic networks can outperform the traditional global methods. The key ingredient of a plastic network is a model selection criterion that allows it to self-organize by choosing among a collection of candidate models. Among the advantages of the plastic network for the prediction of (chaotic) time series are the simplicity of the models used, accuracy, relatively small data requirement, online usage, and ease of understanding of the algorithms. When reporting prediction results on chaotic time series, a careful analysis of the data is recommended. Specifically for the Mackey-Glass time series, the authors find that different forward lead sizes can result in different prediction accuracies.
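The benchmark series itself is easy to reproduce. The recursion below is the commonly used discrete Mackey-Glass update with the conventional benchmark parameters (tau=17 for the mildly chaotic regime); these are standard choices, not necessarily the paper's, and the lead size k merely illustrates the forward-lead sensitivity the authors mention.

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, x0=1.2):
    """Discrete Mackey-Glass series:
    x[t+1] = x[t] + beta*x[t-tau]/(1 + x[t-tau]**10) - gamma*x[t],
    started from a constant history of x0."""
    x = np.full(n + tau, x0)
    for i in range(tau, n + tau - 1):
        x_tau = x[i - tau]
        x[i + 1] = x[i] + beta * x_tau / (1 + x_tau ** 10) - gamma * x[i]
    return x[tau:]

series = mackey_glass(1000)
# Prediction pairs for a forward lead of k steps: input x[t], target x[t+k].
k = 6
X, y = series[:-k], series[k:]
print(X.shape, y.shape)
```

Changing k changes the difficulty of the prediction task, which is one way the lead-size effect reported above can be explored.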
A neural computational scheme for extracting optical flow from the Gabor phase differences of successive images
Tien-Ren Tsao, V. C. Chen
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227303
The authors propose a neurobiologically plausible representation of the Gabor phase information, and present a neural computation scheme for extracting visual motion information from it. The scheme can compute visual motion accurately from a scene with illumination changes, while other neural schemes for optical flow must assume stable brightness. Computational tests on synthetic and natural image data showed that the scheme was robust on natural scenes. An architecture is presented for a neural network system based on the Gabor phase representation of visual motion.
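A one-dimensional analogue conveys the core mechanism: the phase difference of Gabor responses between successive frames recovers displacement, and is insensitive to brightness scaling because phase ignores contrast. The filter parameters and test signal below are invented for illustration and are not the paper's.

```python
import numpy as np

def gabor_response(signal, freq, sigma=8.0):
    """Complex Gabor filter response; its angle is the local phase."""
    x = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.convolve(signal, kernel, mode="same")

# A pattern shifts by d pixels between frames and is also dimmed by half;
# the phase difference still recovers d.
freq, d = 1 / 16, 2.0
x = np.arange(256)
frame1 = np.sin(2 * np.pi * freq * x)
frame2 = 0.5 * np.sin(2 * np.pi * freq * (x - d))   # shifted and dimmed
r1 = gabor_response(frame1, freq)
r2 = gabor_response(frame2, freq)
dphi = np.angle(r1[128] * np.conj(r2[128]))          # phase advance = 2*pi*freq*d
est = dphi / (2 * np.pi * freq)
print(est)   # close to the true displacement d = 2.0
```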
Wavelets as basis functions for localized learning in a multi-resolution hierarchy
B. R. Bakshi, G. Stephanopoulos
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227017
An artificial neural network with one hidden layer of nodes, whose basis functions are drawn from a family of orthonormal wavelets, is developed. Wavelet networks or wave-nets are based on firm theoretical foundations of functional analysis. The good localization characteristics of the basis functions, both in the input and frequency domains, allow hierarchical, multi-resolution learning of input-output maps from experimental data. Wave-nets allow explicit estimation of global and local prediction error-bounds, and thus lend themselves to a rigorous and transparent design of the network. Computational complexity arguments prove that the training and adaptation efficiency of wave-nets is at least an order of magnitude better than other networks. The mathematical framework for the development of wave-nets is presented and various aspects of their practical implementation are discussed. The problem of predicting a chaotic time-series is solved as an illustrative example.
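The multi-resolution idea can be sketched with the simplest orthonormal wavelet family. The Haar basis and the coarse-to-fine fitting loop below are illustrative stand-ins for the paper's wave-net construction, not its actual algorithm.

```python
import numpy as np

def haar(j, k, t):
    """Orthonormal Haar wavelet 2**(j/2) * psi(2**j * t - k) on [0, 1)."""
    u = (2.0 ** j) * t - k
    return (2.0 ** (j / 2)) * np.where((u >= 0) & (u < 0.5), 1.0,
                                       np.where((u >= 0.5) & (u < 1.0), -1.0, 0.0))

t = np.linspace(0, 1, 1024, endpoint=False)
f = np.sin(2 * np.pi * t)          # target input-output map

# Coarse-to-fine fit: orthonormality makes each "hidden unit" coefficient
# an independent inner product, so adding a finer resolution level never
# disturbs the coefficients already learned at coarser levels.
approx = np.full_like(f, f.mean())            # level-0 scaling term
for j in range(6):                            # add detail levels j = 0..5
    for k in range(2 ** j):
        psi = haar(j, k, t)
        approx += (f * psi).mean() * psi      # discrete inner product
print(np.abs(f - approx).mean())              # error shrinks as levels are added
```

Because each unit's support is localized, a prediction error at one input location implicates only the few units whose supports cover it, which is the intuition behind the local error bounds mentioned above.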
Fuzzy ARTMAP: an adaptive resonance architecture for incremental learning of analog maps
G. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, D. B. Rosen
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227156
Fuzzy ARTMAP achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks. Fuzzy ARTMAP realizes a new minimax learning rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or hidden units, to meet accuracy criteria. Improved prediction is achieved by training the system several times using different orderings of the input set, and then voting. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Simulations illustrate fuzzy ARTMAP performance compared with benchmark back-propagation and genetic algorithm systems.
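The match tracking search can be sketched in a few lines. The category weights, prediction map, and vigilance values below are invented for illustration, and this sketch omits learning, complement-coding details, and new-category recruitment.

```python
import numpy as np

def art_search(I, weights, predicts, target, rho=0.5, alpha=0.01):
    """One fuzzy-ARTMAP-style search pass with match tracking.
    Match |I ^ w| / |I| (fuzzy AND = min) is tested against vigilance rho;
    a wrong prediction raises rho just past the failed match."""
    match = [np.minimum(I, w).sum() / I.sum() for w in weights]
    choice = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(choice)[::-1]:      # most active category first
        if match[j] < rho:
            continue                        # vigilance test fails: keep searching
        if predicts[j] == target:
            return j, rho                   # resonance: accept category j
        rho = match[j] + 1e-4               # match tracking: minimal vigilance raise
    return None, rho                        # all fail: recruit a new category

I = np.array([0.7, 0.3, 0.3, 0.7])          # complement-coded input (illustrative)
weights = [np.array([0.4, 0.2, 0.2, 0.4]),  # category 0 -> predicts "A"
           np.array([0.6, 0.4, 0.3, 0.6])]  # category 1 -> predicts "B"
j, rho = art_search(I, weights, ["A", "B"], target="B")
print(j, rho)   # category 1 wins after vigilance is raised past category 0's match
```

Category 0 is chosen first by the choice function but predicts the wrong class, so vigilance is raised by the minimum amount needed to disqualify it; category 1 still passes the stricter test and makes the correct prediction.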