On the relations between radial basis function networks and fuzzy systems
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287132
P. A. Jokinen
Numerical estimators of nonlinear functions can be constructed using systems based on fuzzy logic, artificial neural networks, and nonparametric regression methods. Some interesting similarities between fuzzy systems and some types of neural network models that use radial basis functions are discussed. Both these methods can be regarded as structural numerical estimators, because a rough interpretation can be given in terms of pointwise (local) rules. This explanation capability is important if the models are used as building blocks of expert systems. Most of the neural network models currently lack this capability, which the structural numerical estimators have intrinsically.
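
The local-rule reading described here can be made concrete with a small sketch. In the Python fragment below, each Gaussian unit of an RBF network acts like the membership function of one pointwise rule, and normalizing the activations makes the analogy to fuzzy weighted-average inference explicit; the centers, widths, and weights are invented for illustration and are not from the paper.

```python
# Illustrative sketch (not from the paper): an RBF network read as a set of
# local "IF x is near c_i THEN y is about w_i" rules.
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """Evaluate an RBF network; each Gaussian unit acts like the
    membership function of one pointwise (local) rule."""
    # Activation (rule strength) of each local rule for input x
    act = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    # Normalizing makes the analogy to fuzzy weighted-average inference explicit
    act /= act.sum() + 1e-12
    return act @ weights, act

# Three hypothetical rules covering different regions of a 1-D input space
centers = np.array([[0.0], [1.0], [2.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([0.0, 1.0, 0.5])   # rule consequents

y, rule_strengths = rbf_predict(np.array([0.9]), centers, widths, weights)
print(y, rule_strengths)  # the strongest rule is the one centered at 1.0
```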
{"title":"On the relations between radial basis function networks and fuzzy systems","authors":"P. A. Jokinen","doi":"10.1109/IJCNN.1992.287132","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287132","url":null,"abstract":"Numerical estimators of nonlinear functions can be constructed using systems based on fuzzy logic, artificial neural networks, and nonparametric regression methods. Some interesting similarities between fuzzy systems and some types of neural network models that use radial basis functions are discussed. Both these methods can be regarded as structural numerical estimators, because a rough interpretation can be given in terms of pointwise (local) rules. This explanation capability is important if the models are used as building blocks of expert systems. Most of the neural network models currently lack this capability, which the structural numerical estimators have intrinsically.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124604763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Why tanh: choosing a sigmoidal function
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227257
B. Kalman, S. Kwasny
As hardware implementations of backpropagation and related training algorithms are anticipated, the choice of a sigmoidal function should be carefully justified. Attention should focus on choosing an activation function with the best properties for training. The authors argue for the use of the hyperbolic tangent. While the exact shape of the sigmoid makes little difference once the network is trained, it is shown that the hyperbolic tangent possesses particular properties that make it appealing for use while training. By paying attention to scaling, it is shown that tanh(1.5x) has the additional advantage of equalizing training over layers. This result generalizes readily to several standard sigmoidal functions in common use.
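
As a worked illustration of the scaling point: the slope of tanh(a*x) at the origin is a, so the scale factor trades gradient magnitude against how quickly the unit saturates. The sketch below assumes only the 1.5 factor quoted above, not the paper's derivation.

```python
# Minimal sketch of the scaling argument for tanh(a*x).
import numpy as np

def scaled_tanh(x, a=1.5):
    return np.tanh(a * x)

def scaled_tanh_grad(x, a=1.5):
    # d/dx tanh(a*x) = a * (1 - tanh(a*x)^2), so the slope at 0 is exactly a
    return a * (1.0 - np.tanh(a * x) ** 2)

for a in (1.0, 1.5):
    print(f"a={a}: slope at 0 = {scaled_tanh_grad(0.0, a):.2f}, "
          f"value at 1 = {scaled_tanh(1.0, a):.3f}")
```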
{"title":"Why tanh: choosing a sigmoidal function","authors":"B. Kalman, S. Kwasny","doi":"10.1109/IJCNN.1992.227257","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227257","url":null,"abstract":"As hardware implementations of backpropagation and related training algorithms are anticipated, the choice of a sigmoidal function should be carefully justified. Attention should focus on choosing an activation function in a neural unit that exhibits the best properties for training. The author argues for the use of the hyperbolic tangent. While the exact shape of the sigmoidal makes little difference once the network is trained, it is shown that it possesses particular properties that make it appealing for use while training. By paying attention to scaling it is illustrated that tanh (1.5*) has the additional advantage of equalizing training over layers. This result can easily generalize to several standard sigmoidal functions commonly in use.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129640956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A net for automatic detection of minimal correlation order in contextual pattern recognition
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227213
P. Castiglione, G. Basti, Stefano Fusi, G. Morgavi, A. Perrone
The authors propose a neural net able to recognize input pattern sequences by memorizing only one of the transformed patterns forming the sequence (the prototype). This capacity depends on automatic control of the minimal correlation order needed to perform recognition tasks and, in ambiguous cases, on a type of context-dependent memory recall. The neural net model can use noise constructively to continuously modify the learned prototype pattern in view of a contextual recognition of input pattern sequences. In this way, the net is able to deduce by itself, from the prototype pattern, the hypotheses by which it can recognize highly corrupted static patterns or sequences of transformed patterns.
{"title":"A net for automatic detection of minimal correlation order in contextual pattern recognition","authors":"P. Castiglione, G. Basti, Stefano Fusi, G. Morgavi, A. Perrone","doi":"10.1109/IJCNN.1992.227213","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227213","url":null,"abstract":"The authors propose a neural net able to recognize input pattern sequences by memorizing only one of the transformed patterns, the prototype forming the sequence. This capacity depends on an automatic control of the minimal correlation order to perform recognition tasks and, in ambiguous cases, on a type of context-dependent memory recalling. The neural net model can use the noise constructively to modify continuously the learned prototype pattern in view of a contextual recognition of input pattern sequences. In such a way, the net is able to deduce, by itself, from the prototype pattern, the hypotheses by which it can recognize highly corrupted static patterns, or sequences of transformed patterns.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126808588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Adaptive feedforward control of cyclic movements using artificial neural networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.226884
J. Abbas, H. Chizeck
An adaptive neural network control system has been designed to control cyclic movements of nonlinear dynamic systems with input time delays (as found in functional neuromuscular stimulation). The adaptive feedforward (FF) controller is implemented as a two-stage neural network. The first stage, the pattern generator (PG), generates a cyclic pattern of activity. The signals from the PG are adaptively filtered by the second stage, the pattern shaper (PS), which uses modifications of standard artificial neural network learning algorithms to adapt its filter properties. The control system is evaluated in computer simulation on a musculoskeletal model consisting of two muscles acting on a swinging pendulum. The control system provides automated customization of the FF controller parameters for a given musculoskeletal system, as well as online adaptation of those parameters to account for changes in the musculoskeletal system.
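
A much-simplified sketch of the two-stage idea may help. Here the PG is a fixed bank of cyclic Gaussian activity bumps and the PS is a single adaptive linear layer trained with an LMS-style rule; the paper's actual networks and learning-rule modifications are more elaborate, and the target trajectory is invented for illustration.

```python
# Simplified PG + PS sketch; the paper's networks are more elaborate.
import numpy as np

def pattern_generator(phase, n_units=8):
    """PG stage: a fixed cyclic pattern of activity, one Gaussian bump of
    activation per unit, tiled around the movement cycle (phase in [0, 1))."""
    unit_phases = np.linspace(0.0, 1.0, n_units, endpoint=False)
    d = np.abs(phase - unit_phases)
    d = np.minimum(d, 1.0 - d)              # wrap-around (circular) distance
    return np.exp(-d ** 2 / (2 * 0.05 ** 2))

def desired_trajectory(phase):
    return np.sin(2 * np.pi * phase)        # hypothetical target movement

w = np.zeros(8)     # PS stage: adaptive weights that shape the PG signals
lr = 0.2
for step in range(5000):                    # online adaptation over many cycles
    ph = (step * 0.013) % 1.0
    g = pattern_generator(ph)
    u = w @ g                               # shaped controller output
    err = desired_trajectory(ph) - u        # tracking error
    w += lr * err * g                       # LMS-style weight update
```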
{"title":"Adaptive feedforward control of cyclic movements using artificial neural networks","authors":"J. Abbas, H. Chizeck","doi":"10.1109/IJCNN.1992.226884","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.226884","url":null,"abstract":"An adaptive neural network control system has been designed for the purpose of controlling cyclic movements of nonlinear dynamic systems with input time delays (as found in functional neuromuscular stimulation). The adaptive feedforward (FF) controller is implemented as a two-stage neural network. The first stage, the pattern generator (PG), generates a cyclic pattern of activity. The signals from the PG are adaptively filtered by the second stage, the pattern shaper (PS). This stage uses modifications to standard artificial neural network learning algorithms to adapt its filter properties. The control system is evaluated in computer simulation on a musculoskeletal model which consists of two muscles acting on a swinging pendulum. The control system provides automated customization of the FF controller parameters for a given musculoskeletal system as well as online adaptation of the FF controller parameters to account for changes in the musculoskeletal system.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130577572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A feature selection method for multi-class-set classification
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227114
Bin Yu, Baozong Yuan
A versatile technique for set-feature selection from class features, without any prior knowledge, for multi-class-set classification is presented. A class set is a group of classes within which patterns represented by class features can be classified with an existing classifier. The features used to classify patterns between classes within a class set are referred to as class features, and the ones used to classify patterns between class sets as set features. A set-feature set is produced from class-feature sets under the criterion of minimizing the encounter zones between class sets in set-feature space. The performance of this technique is illustrated with an experiment on the understanding of circuit diagrams.
{"title":"A feature selection method for multi-class-set classification","authors":"Bin Yu, Baozong Yuan","doi":"10.1109/IJCNN.1992.227114","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227114","url":null,"abstract":"A versatile technique for set-feature selection from class features without any prior knowledge for multi-class-set classification is presented. A class set is a group of classes in which the patterns represented with class features can be classified with a existing classifier. The features used to classify patterns between classes within a class set are referred to as class features and the ones used to classify patterns between class sets as set features. A set-feature set is produced from class-feature sets under the criterion of minimizing the encounter zones between class sets in set-feature space. The performance of this technique was illustrated with an experiment on the understanding of circuit diagrams.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123340337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

On the application of feed forward neural networks to channel equalization
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.226870
W. R. Kirkland, D. Taylor
The application of feedforward neural networks to adaptive channel equalization is examined. The Rummler channel model is used to model the digital microwave radio channel. In applying neural networks to the channel equalization problem, complex-valued neurons are used, which allows a frequency interpretation of the weights of the neurons in the first hidden layer. The channel model allows examination of binary signaling in two dimensions (4-ary quadrature amplitude modulation, or 4-QAM) as well as higher-level signaling (16-QAM). Results show that while neural nets provide a significant performance increase in the case of binary signaling in two dimensions (4-QAM), this performance is not reflected in the results for the higher-level signaling schemes, where the neural net equalizer's performance tends to parallel that of the linear transversal equalizer.
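
The frequency interpretation rests on first-layer neurons with complex weights and inputs. The sketch below assumes a split activation (tanh applied to the real and imaginary parts separately), a common choice for complex-valued networks; the paper's exact nonlinearity may differ, and the tap values are invented.

```python
# Illustrative complex-valued neuron; split tanh activation is an assumption.
import numpy as np

def complex_neuron(x, w, b):
    """Forward pass for one complex neuron: complex weights over a complex
    tapped-delay-line input, so the weights admit a frequency-response reading."""
    z = np.vdot(w, x) + b                      # conjugated complex inner product
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Hypothetical received 4-QAM samples in a tapped delay line
x = np.array([1 + 1j, -1 + 1j, 1 - 1j]) / np.sqrt(2)
w = np.array([0.5 + 0.1j, 0.2 - 0.3j, 0.1 + 0.0j])
print(complex_neuron(x, w, 0.05 + 0.0j))
```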
{"title":"On the application of feed forward neural networks to channel equalization","authors":"W. R. Kirkland, D. Taylor","doi":"10.1109/IJCNN.1992.226870","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.226870","url":null,"abstract":"The application of feedforward neural networks to adaptive channel equalization is examined. The Rummler channel model is used for modeling the digital microwave radio channel. In applying neural networks to the channel equalization problem, complex neurons in the neural network are used. This allows for a frequency interpretation of the weights of the neurons in the first hidden layer. This channel model allows examination of binary signaling in two dimensions, (4-quadrature amplitude modulation, or QAM), and higher-level signaling as well, (16-QAM). Results show that while neural nets provide a significant performance increase in the case of binary signaling in two dimensions (4-QAM), this performance is not reflected in the results for the higher-level signaling schemes. In this case the neural net equalizer performance tends to parallel that of the linear transversal equalizer.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123473160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Enhancements to probabilistic neural networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287095
D. Specht
Probabilistic neural networks (PNNs) learn quickly from examples in one pass and asymptotically achieve the Bayes-optimal decision boundaries. The major disadvantage of a PNN stems from the fact that it requires one node or neuron for each training pattern. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center. The correct choice of clustering technique will depend on the data distribution, data rate, and hardware implementation. Adaptation of kernel shape provides a tradeoff of increased accuracy for increased complexity and training time. The technique described also provides a basis for automatic feature selection and dimensionality reduction.
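
For reference, the baseline PNN that these enhancements build on can be written in a few lines: one Gaussian kernel per stored training pattern, and a class decision by the larger Parzen density estimate. The clustering and kernel-shape adaptations described above are not shown; the data below are invented.

```python
# Baseline PNN sketch (standard formulation, enhancements not shown).
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.3):
    """One-pass 'training' is just storing the patterns; classification sums
    a Gaussian kernel over each class's stored patterns (Parzen estimate)."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]            # one kernel node per pattern
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)        # Bayes rule with equal priors

train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.2, 0.1]), train_X, train_y))  # -> 0
```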
{"title":"Enhancements to probabilistic neural networks","authors":"D. Specht","doi":"10.1109/IJCNN.1992.287095","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287095","url":null,"abstract":"Probabilistic neural networks (PNNs) learn quickly from examples in one pass and asymptotically achieve the Bayes-optimal decision boundaries. The major disadvantage of a PNN stems from the fact that it requires one node or neuron for each training pattern. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center. The correct choice of clustering technique will depend on the data distribution, data rate, and hardware implementation. Adaptation of kernel shape provides a tradeoff of increased accuracy for increased complexity and training time. The technique described also provides a basis for automatic feature selection and dimensionality reduction.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123656737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improving the performance of probabilistic neural networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287147
M. Musavi, K. Kalantri, W. Ahmed
A methodology for selecting appropriate widths or covariance matrices for the Gaussian functions in implementations of PNN (probabilistic neural network) classifiers is presented. The Gram-Schmidt orthogonalization process is employed to find these matrices. It is shown that the proposed technique improves the generalization ability of PNN classifiers over the standard approach. The result can be applied to other Gaussian-based classifiers, such as radial basis function networks.
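
The step from a single width to a covariance matrix amounts to replacing the Euclidean distance in each kernel with a Mahalanobis distance. The sketch below shows only that general idea with an invented covariance matrix; the paper's Gram-Schmidt construction of the matrices is not reproduced.

```python
# Full-covariance Gaussian kernel; the covariance here is invented.
import numpy as np

def gaussian_kernel(x, center, cov):
    """Gaussian kernel value at x using a Mahalanobis distance to the center."""
    d = x - center
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2 * np.pi) ** len(x)) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ inv @ d) / norm

# An elongated kernel: wide along the first axis, narrow along the second,
# so generalization differs by direction instead of being spherical.
cov = np.array([[1.0, 0.0], [0.0, 0.05]])
print(gaussian_kernel(np.array([0.5, 0.05]), np.zeros(2), cov))
```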
{"title":"Improving the performance of probabilistic neural networks","authors":"M. Musavi, K. Kalantri, W. Ahmed","doi":"10.1109/IJCNN.1992.287147","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287147","url":null,"abstract":"A methodology for selection of appropriate widths or covariance matrices of the Gaussian functions in implementations of PNN (probabilistic neural network) classifiers is presented. The Gram-Schmidt orthogonalization process is employed to find these matrices. It has been shown that the proposed technique improves the generalization ability of the PNN classifiers over the standard approach. The result can be applied to other Gaussian-based classifiers such as the radial basis functions.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"181 27","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120885663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Two dimensional curve shape primitives for detecting line defects in silicon wafers
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227110
D. Sikka
A new set of two-dimensional curve shape primitives for detecting line defects on wafers in semiconductor manufacturing is presented. A supervised-learning-based neural network incorporating these shape primitives has been built and tested on more than six months of real data from an Intel fabrication laboratory. Results demonstrate that the new set of shape primitives is very accurate in capturing line defects.
{"title":"Two dimensional curve shape primitives for detecting line defects in silicon wafers","authors":"D. Sikka","doi":"10.1109/IJCNN.1992.227110","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227110","url":null,"abstract":"A new set of two-dimensional curve shape primitives for detecting line defects on wafers in semiconductor manufacturing is presented. A supervised learning based neural network which incorporates these shape primitives has been built and tested on more than six months of real data from an Intel fabrication laboratory. Results demonstrate that the new set of shape primitives was very accurate in capturing the line defects.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121223095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Generalized McCulloch-Pitts neuron model with threshold dynamics
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227119
H. Szu, G. Rogers
The McCulloch-Pitts (M-P) model of a neuron is generalized to endow the axon threshold with time-dependent nonlinear dynamics. Two components of the threshold vector can be used to generate a pulsed-coding output with the same qualitative characteristics as real axon hillocks, which could be useful for communications pulse coding. A simple dynamical neuron model that can include internal dynamics involving multiple internal degrees of freedom is proposed. The model reduces to the M-P model for static inputs and no internal dynamical degrees of freedom. The treatment is restricted to a single neuron without learning. Two examples are included.
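
One way to see how threshold dynamics yield a pulsed output is a toy unit whose threshold rises when it fires and decays otherwise. This is purely illustrative and not the paper's equations, but it shows both the pulsing behavior and the reduction to the static M-P model when the dynamics are switched off.

```python
# Toy M-P unit with a dynamic threshold; constant to the paper's spirit only.
def dynamic_threshold_neuron(drive, steps=30, theta0=0.5,
                             rise=0.6, decay=0.85):
    theta, out = theta0, []
    for _ in range(steps):
        y = 1 if drive > theta else 0       # ordinary M-P firing rule
        theta = theta * decay + rise * y    # threshold's own nonlinear dynamics
        out.append(y)
    return out

# A static input nevertheless produces a pulse train; with rise=0 and
# decay=1 the threshold is constant and the unit reduces to the M-P model.
print(dynamic_threshold_neuron(drive=0.8))
```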
{"title":"Generalized McCullouch-Pitts neuron model with threshold dynamics","authors":"H. Szu, G. Rogers","doi":"10.1109/IJCNN.1992.227119","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227119","url":null,"abstract":"The McCullouch-Pitts (M-P) model for a neuron is generalized to endow the axon threshold with a time-dependent nonlinear dynamics. Two components of the threshold vector can be used to generate a pulsed coding output with the same qualitative characteristics as real axon hillocks, which could be useful for communications pulse coding. A simple dynamical neuron model that can include internal dynamics involving multiple internal degrees of freedom is proposed. The model reduces to the M-P model for static inputs and no internal dynamical degrees of freedom. The treatment is restricted to a single neuron without learning. Two examples are included.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121302507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}