Using the Kohonen topology preserving mapping network for learning the minimal environment representation
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.226979
S. Najand, Z. Lo, B. Bavarian
The authors present the application of the Kohonen self-organizing topology-preserving neural network to learning and developing a minimal representation of the open environment for mobile robot navigation. The input to the algorithm consists of the coordinates of randomly selected points in the open environment; no specific knowledge of the size, number, or shape of the obstacles is needed by the network. Parameter selection for the network is discussed: the neighborhood function, the adaptation gain, and the number of training sample points have a direct effect on convergence and on the usefulness of the final representation. The environment dimensions and a measure of environment complexity are used to find approximate bounds and requirements on these parameters.
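The training loop the abstract describes is compact; here is a minimal Python sketch, assuming a rectangular workspace with circular obstacles and linearly decaying schedules. The 10x10 lattice and the schedule constants are illustrative assumptions, not the paper's derived bounds:

```python
import numpy as np

def in_free_space(p, obstacles):
    """True if point p lies outside every (center, radius) obstacle."""
    return all(np.linalg.norm(p - c) > r for c, r in obstacles)

def train_som(env=(10.0, 10.0), obstacles=(), n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.array([(i, j) for i in range(10) for j in range(10)])  # 10x10 lattice
    w = rng.uniform(0.0, 1.0, (len(grid), 2)) * env                  # random initial weights
    for t in range(n_samples):
        while True:  # sample a random point from the open (obstacle-free) environment
            x = rng.uniform(0.0, 1.0, 2) * env
            if in_free_space(x, obstacles):
                break
        alpha = 0.5 * (1.0 - t / n_samples)             # decaying adaptation gain
        sigma = max(3.0 * (1.0 - t / n_samples), 0.5)   # shrinking neighborhood width
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
        h = np.exp(-np.linalg.norm(grid - grid[bmu], axis=1) ** 2 / (2 * sigma ** 2))
        w += alpha * h[:, None] * (x - w)               # Kohonen update toward the sample
    return w  # node positions: the learned free-space representation
```

The gain `alpha`, the neighborhood width `sigma`, and `n_samples` are exactly the parameters whose bounds the paper relates to the environment's dimensions and complexity.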
{"title":"Using the Kohonen topology preserving mapping network for learning the minimal environment representation","authors":"S. Najand, Z. Lo, B. Bavarian","doi":"10.1109/IJCNN.1992.226979","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.226979","url":null,"abstract":"The authors present the application of the Kohonen self-organizing topology-preserving neural network for learning and developing a minimal representation for the open environment in mobile robot navigation. The input to the algorithm consists of the coordinates of randomly selected points in the open environment. No specific knowledge of the size, number, and shape of the obstacles is needed by the network. The parameter selection for the network is discussed. The neighborhood function, adaptation gain, and the number of training sample points have direct effect on the convergence and usefulness of the final representation. The environment dimensions and a measure of environment complexity are used to find approximate bounds and requirements on these parameters.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"315 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114232768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature maps for input normalization and feature integration in a speaker independent isolated digit recognition system
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227096
G.R. De Haan, O. Ececioglu
The use of the topology-preserving properties of feature maps for speaker-independent isolated digit recognition is discussed. Recognition experiments indicate that feature maps can be used effectively for input normalization, which is important for practical implementations of neural-network-based classifiers. Recognition rates increase when a third feature map is trained to integrate the responses of two feature maps, each trained on different transducer-level features. Despite a rudimentary classification scheme, recognition rates exceeded 97% for integrated, feature-map-normalized, transducer-level features.
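A hedged sketch of one way the normalization and integration steps could be realized: a frame is normalized by replacing it with the lattice coordinates of its best-matching unit, and the coordinates from the two maps are concatenated to form the input of the third, integrating map. The helper names and the shared lattice shape are assumptions; the paper's actual features and map sizes are not given here.

```python
import numpy as np

def bmu_coords(frame, weights, grid):
    """Normalize a frame to the lattice coordinates of its best-matching unit."""
    bmu = np.argmin(np.linalg.norm(weights - frame, axis=1))
    return grid[bmu]

def integrated_input(frame_a, frame_b, map_a, map_b, grid):
    """Responses of two feature maps, each trained on different transducer-level
    features, concatenated as the training input of a third, integrating map."""
    return np.concatenate([bmu_coords(frame_a, map_a, grid),
                           bmu_coords(frame_b, map_b, grid)])
```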
{"title":"Feature maps for input normalization and feature integration in a speaker independent isolated digit recognition system","authors":"G.R. De Haan, O. Ececioglu","doi":"10.1109/IJCNN.1992.227096","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227096","url":null,"abstract":"The use of the topology preserving properties of feature maps for speaker-independent isolated digit recognition is discussed. The results of recognition experiments indicate that feature maps can be effectively used for input normalization, which is important for practical implementations of neural-network-based classifiers. Recognition rates can be increased when a third feature map is trained to integrate the responses of two feature maps, each trained with different transducer-level features. Despite the use of a rudimentary classification scheme, recognition rates exceeded 97% for integrated, feature-map-normalized, transducer-level features.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116293243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear system identification using diagonal recurrent neural networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.227048
C. Ku, K.Y. Lee
A recurrent neural network is proposed for the identification of nonlinear dynamic systems. When system identification is coupled with control, real-time operation is essential: a neuro-identifier must converge, its training time must not be excessive, and the network should be simple and easy to implement. A novel neuro-identifier, the diagonal recurrent neural network (DRNN), that fulfills these requirements is proposed, and a generalized algorithm, dynamic backpropagation, is developed to train it. The DRNN was used to identify nonlinear systems, and simulations showed promising results.
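A sketch of the DRNN forward pass under common assumptions (tanh hidden units, a linear output layer); the paper's dynamic backpropagation training rule is not reproduced here:

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_rec, w_out):
    """Diagonal recurrent neural network: each hidden unit feeds back only to
    itself, so the recurrent weights form a vector (a diagonal matrix)."""
    s = np.zeros(w_rec.shape[0])            # hidden states
    outputs = []
    for x in x_seq:
        s = np.tanh(W_in @ x + w_rec * s)   # input drive plus self-recurrence
        outputs.append(w_out @ s)           # linear output layer
    return np.array(outputs)
```

Restricting the recurrent layer to its diagonal cuts the recurrent parameters from n^2 to n, which is what keeps the identifier simple, easy to implement, and fast enough to train for real-time use.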
{"title":"Nonlinear system identification using diagonal recurrent neural networks","authors":"C. Ku, K.Y. Lee","doi":"10.1109/IJCNN.1992.227048","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.227048","url":null,"abstract":"The recurrent neural network is proposed for system identification of nonlinear dynamic systems. When the system identification is coupled with control problems, the real-time feature is very important, and a neuro-identifier must be designed so that it will converge and the training time will not be too long. The neural network should also be simple and implemented easily. A novel neuro-identifier, the diagonal recurrent neural network (DRNN), that fulfils these requirements is proposed. A generalized algorithm, dynamic backpropagation, is developed to train the DRNN. The DRNN was used to identify nonlinear systems, and simulation showed promising results.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"81 13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113943315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ANN bandpass filters for electro-optical implementation
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287215
M. E. Ulug
The design and simulation of a bandpass filter are described, and an electro-optical implementation is proposed. The neural network used in this filter has an architecture similar to the one suggested by Kolmogorov's existence theorem and a data-processing method based on Fourier series. The resulting system, called the orthonormal neural network, can approximate any L2 mapping function between the input and output vectors without using the backpropagation rule or hidden layers. Because the transfer functions of the middle nodes are the terms of the Fourier series, the synaptic link values between the middle and output layers represent the frequency spectrum of the signals at the output nodes. As a result, by training the network autoassociatively with all the middle nodes and testing it with selected ones, it is easy to build a nonlinear bandpass filter. The system is basically a two-layer network consisting of virtual input nodes and output nodes. Because the transfer functions of the output nodes are linear, the network is free from the problem of local minima and has a bowl-shaped error surface, whose sharp slopes make the system tolerant of losses in computational accuracy and suitable for electro-optical implementation.
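Since only the middle-to-output weights are trained and the output nodes are linear, learning reduces to linear least squares, which is why the error surface is bowl-shaped with no local minima. A scalar-input Python sketch under that reading (the autoassociative training and electro-optical details are not modeled):

```python
import numpy as np

def fourier_features(x, n_terms):
    """Middle-node transfer functions: the first terms of a Fourier series."""
    k = np.arange(1, n_terms + 1)
    return np.concatenate(([1.0], np.cos(2 * np.pi * k * x),
                                  np.sin(2 * np.pi * k * x)))

def fit_output_weights(xs, ys, n_terms=8):
    """Train only the linear output weights: a convex least-squares problem."""
    Phi = np.array([fourier_features(x, n_terms) for x in xs])
    w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    return w

# The fitted weights are the frequency spectrum of the output signal; zeroing
# all but a band of them and re-synthesizing implements the bandpass filter.
```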
{"title":"ANN bandpass filters for electro-optical implementation","authors":"M. E. Ulug","doi":"10.1109/IJCNN.1992.287215","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287215","url":null,"abstract":"The design and simulation of a bandpass filter are described, and an electro-optical implementation is proposed. The neural network used in this filter has an architecture similar to the one suggested by Kolmogorov's existence theorem and a data processing method based on Fourier series. The resulting system, called the orthonormal neural network, can approximate any L/sub 2/ mapping function between the input and output vectors without using the backpropagation rule or hidden layers. Because the transfer functions of the middle nodes are the terms of the Fourier series, the synaptic link values between the middle and output layers represent the frequency spectrum of the signals of the output nodes. As a result, by autoassociatively training the network with all the middle nodes and testing it with certain selected ones, it is easy to build a nonlinear bandpass filter. The system is basically a two-layer network consisting of virtual input nodes and output nodes. The transfer functions of the output nodes are linear. As a result, the network is free from the problems of local minima and has a bowl-shaped error surface. The sharp slopes of this surface make the system tolerant to loss of computational accuracy and suitable for electro-optical implementation.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124166186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy neural-logic system
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287128
L. Hsu, H. H. Teh, P. Wang, S. Chan, K. Loe
A realization of fuzzy logic by a neural network is described. Each node in the network represents a premise or a conclusion. Let x be a member of the universal set and let A be a node in the network; the activation value of node A is taken to be the value of the membership function at point x, m_A(x). A logical operation is defined by a set of weights that are independent of x. Given any value of x, a preprocessor determines the values of the membership function for all the premises that correspond to the input nodes, and these are treated as input to the network. A propagation algorithm is used to emulate the inference process; when the network stabilizes, the activation value at an output node represents the value of the membership function indicating the degree to which the given conclusion is true. Weight assignment for the standard logical operations is discussed, and it is shown that the scheme makes it possible to define more general logical operations.
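For illustration, a sketch using the standard min/max realization of fuzzy conjunction and disjunction; the paper instead encodes each operation in a set of x-independent weights, which this example does not reproduce:

```python
import numpy as np

# Node activations are membership values m_A(x) in [0, 1], computed by the
# preprocessor for every premise node.
def fuzzy_and(activations):            # Zadeh conjunction: minimum
    return float(np.min(activations))

def fuzzy_or(activations):             # Zadeh disjunction: maximum
    return float(np.max(activations))

def fuzzy_not(a):                      # standard complement
    return 1.0 - a

# Inference for the conclusion C = (A AND B) OR (NOT A):
m = {"A": 0.7, "B": 0.4}
degree = fuzzy_or([fuzzy_and([m["A"], m["B"]]), fuzzy_not(m["A"])])
print(degree)  # 0.4 -- the degree to which the conclusion is true
```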
{"title":"Fuzzy neural-logic system","authors":"L. Hsu, H. H. Teh, P. Wang, S. Chan, K. Loe","doi":"10.1109/IJCNN.1992.287128","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287128","url":null,"abstract":"A realization of fuzzy logic by a neural network is described. Each node in the network represents a premise or a conclusion. Let x be a member of the universal set, and let A be a node in the network. The value of activation of node A is taken to be the value of the membership function at point x, m/sub A/(x). A logical operation is defined by a set of weights which are independent of x. Given any value of x, a preprocessor will determine the values of the membership function for all the premises that correspond to the input nodes. These are treated as input to the network. A propagation algorithm is used to emulate the inference process. When the network stabilizes, the value of activation at an output node represents the value of the membership function that indicates the degree to which the given conclusion is true. Weight assignment for the standard logical operations is discussed. It is also shown that the scheme makes it possible to define more general logical operations.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124167323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lateral inhibition neural networks for classification of simulated radar imagery
Pub Date: 1992-06-07 | DOI: 10.1109/ijcnn.1992.226975
C. Bachmann, S. Musman, A. Schultz
The use of neural networks for the classification of simulated inverse synthetic aperture radar (ISAR) imagery is investigated. Certain symmetries of the artificial imagery make localized moments a convenient preprocessing tool for the inputs to a neural network. A database of simulated targets is obtained by warping dynamical models to representative angles and generating images with different target motions. Ordinary backpropagation (BP) and several variants of BP that incorporate lateral inhibition achieve generalization rates of up to approximately 78% on novel data not used during training, comparable to the classification accuracy that trained human observers obtained from the unprocessed simulated imagery.
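A sketch of localized-moment preprocessing, assuming low-order central moments computed over a fixed grid of image blocks; the 4x4 grid and the moment orders are illustrative choices, not the paper's:

```python
import numpy as np

def localized_moments(img, grid=(4, 4),
                      orders=((0, 0), (1, 1), (2, 0), (0, 2))):
    """Central moments of each image block, concatenated into one vector."""
    H, W = img.shape
    gh, gw = grid
    feats = []
    for bi in range(gh):
        for bj in range(gw):
            block = img[bi * H // gh:(bi + 1) * H // gh,
                        bj * W // gw:(bj + 1) * W // gw].astype(float)
            ys, xs = np.mgrid[0:block.shape[0], 0:block.shape[1]]
            m00 = block.sum() + 1e-12            # guard against empty blocks
            cy = (ys * block).sum() / m00        # block centroid (row)
            cx = (xs * block).sum() / m00        # block centroid (column)
            feats.extend(((ys - cy) ** p * (xs - cx) ** q * block).sum()
                         for p, q in orders)
    return np.array(feats)  # compact input vector for the classifier
```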
{"title":"Lateral inhibition neural networks for classification of simulated radar imagery","authors":"C. Bachmann, S. Musman, A. Schultz","doi":"10.1109/ijcnn.1992.226975","DOIUrl":"https://doi.org/10.1109/ijcnn.1992.226975","url":null,"abstract":"The use of neural networks for the classification of simulated inverse synthetic aperture radar (ISAR) imagery is investigated. Certain symmetries of the artificial imagery make the use of localized moments a convenient preprocessing tool for the inputs to a neural network. A database of simulated targets is obtained by warping dynamical models to representative angles and generating images with different target motions. Ordinary backward propagation (BP) and some variants of BP which incorporate lateral inhibition obtain a generalization rate of up to approximately 78% for novel data not used during training, a rate which is comparable to the level of classification accuracy that trained human observers obtained from the unprocessed simulated imagery.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126218791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The synthesis of arbitrary stable dynamics in non-linear neural networks. II. Feedback and universality
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287224
M. A. Cohen
A parametrized family of higher-order, gradient-like neural networks that have known arbitrary equilibria with unstable manifolds of known, specified dimension is described. Any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on synthesizing attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is nonconstructive, lacks generality, or leaves the attracting equilibria unspecified. More specifically, a parametrized family of gradient-like neural networks is constructed with a simple feedback rule that generates equilibrium points with unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
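A minimal worked instance of the gradient-like idea (not the paper's higher-order parametrization): trajectories descend a strict Lyapunov function, and the Morse index at an equilibrium sets the dimension of its unstable manifold.

```latex
% Sketch, not the paper's construction: a gradient system with strict
% Lyapunov function V descends V along every trajectory,
\dot{x} = -\nabla V(x), \qquad
\frac{d}{dt} V(x(t)) = -\|\nabla V(x(t))\|^2 \le 0 .
% Near a desired equilibrium p, a local choice with Morse index k,
V(x) = -\frac{1}{2}\sum_{i=1}^{k} (x_i - p_i)^2
       + \frac{1}{2}\sum_{i=k+1}^{n} (x_i - p_i)^2 ,
% has a Hessian with exactly k negative eigenvalues at p, so the unstable
% manifold of p has the specified dimension k.
```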
{"title":"The synthesis of arbitrary stable dynamics in non-linear neural networks. II. Feedback and universality","authors":"M. A. Cohen","doi":"10.1109/IJCNN.1992.287224","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287224","url":null,"abstract":"A parametrized family of higher-order, gradient-like neural networks that have known arbitrary equilibria with unstable manifolds of known specified dimension is described. Any system with hyperbolic dynamics is conjugate to one of the systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is nonconstructive, lacks generality, or has unspecified attracting equilibria. More specifically, a parameterized family of gradient-like neural networks is constructed with a simple feedback rule that will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126570545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Activated hidden connections to accelerate the learning in recurrent neural networks
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287106
R. Kamimura
A method of accelerating learning in recurrent neural networks is considered. Owing to their potentially large number of connections, recurrent neural networks have been expected to converge faster. To activate hidden connections and use hidden units efficiently, a complexity term proposed by D.E. Rumelhart was added to the standard quadratic error function. The complexity term is modified with a parameter so that it acts in the usual way on positive weights, while negative weights are pushed toward larger absolute values. Some hidden connections are thus expected to grow large enough for the hidden units to be exploited, speeding up the learning. The author's experiments confirmed that the complexity term was effective in increasing the variance of the connections, especially the hidden connections, and that eventually some hidden connections became active and large enough for the hidden units to be used in speeding up the learning.
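For reference, a standard form of this kind of complexity term (the paper's sign-dependent modification is not reproduced); the penalty saturates for large weights, so a few connections can grow large while small ones are suppressed:

```latex
% Quadratic error plus Rumelhart-style complexity term, with weight lambda:
E = \frac{1}{2} \sum_{t} \| y(t) - d(t) \|^2
  + \lambda \sum_{i} \frac{w_i^2}{1 + w_i^2},
\qquad
\frac{\partial}{\partial w_i} \frac{w_i^2}{1 + w_i^2}
  = \frac{2 w_i}{(1 + w_i^2)^2} .
% The gradient vanishes for large |w_i|, so established connections are left
% comparatively free, which increases the variance of the weight distribution.
```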
{"title":"Activated hidden connections to accelerate the learning in recurrent neural networks","authors":"R. Kamimura","doi":"10.1109/IJCNN.1992.287106","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287106","url":null,"abstract":"A method of accelerating the learning in recurrent neural networks is considered. Owing to a possible large number of connections, it has been expected that recurrent neural networks will converge faster. To activate hidden connections and use hidden units efficiently, a complexity term proposed by D.E. Rumelhart was added to the standard quadratic error function. A complexity term method is modified with a parameter to be normally effective for positive values, while negative values are pushed toward values with larger absolute values. Thus, some hidden connections are expected to be large enough to use hidden units and to speed up the learning. From the author's experiments, it was confirmed that the complexity term was effective in increasing the variance of connections, especially hidden connections, and that eventually some hidden connections were activated and large enough for hidden units to be used in speeding up the learning.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrete wave machine and Fourier transform
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287187
L. Chang
In biological neural networks, the interaction and communication among neurons can be thought of as a kind of wave correlation; this is the basic idea of the discrete wave machine. A discrete wave machine is described by a complex state space. An energy function of a discrete wave machine with a Hermitian connection determines the convergence of the state evolution and the points of memory. The discrete Fourier transform is directly realized by a discrete wave machine with a special connection.
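A sketch of the quantities the abstract names, under common conventions (the paper's exact definitions may differ): a Hermitian connection makes the quadratic energy real-valued, and the DFT matrix can serve as the special connection.

```latex
% Complex state z \in \mathbb{C}^N with Hermitian connection W:
E(z) = -\tfrac{1}{2} \, z^{\dagger} W z , \qquad
W = W^{\dagger} \;\Rightarrow\; E(z) \in \mathbb{R},
% so E can govern convergence of the state evolution, with memories at its
% minima. A separate "special connection," the unitary DFT matrix
F_{jk} = \frac{1}{\sqrt{N}} \, e^{-2\pi i jk / N}, \qquad j, k = 0, \dots, N-1,
% lets one linear update z \mapsto F z compute the discrete Fourier transform.
```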
{"title":"Discrete wave machine and Fourier transform","authors":"L. Chang","doi":"10.1109/IJCNN.1992.287187","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287187","url":null,"abstract":"In the biological neural network, the interaction and communication among neurons can be thought of as a kind of wave correlation, which is the basic idea of the discrete wave machine. A discrete wave machine is described by a complex state space. An energy function of a discrete wave machine with the Hermitian connection determines the convergence of the state evolution and the points of memory. The discrete Fourier transform is directly described by a discrete wave machine with a special connection.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122246097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A line and edge orientation sensor
Pub Date: 1992-06-07 | DOI: 10.1109/IJCNN.1992.287141
W. O. Camp, J. van der Spiegel, M. Xiao
The authors present an integrated-circuit implementation of a higher-order vision function: determining the orientation of line segments and edges across an image projected onto the chip. The IC includes an array of photoreceptors and analog processing elements consisting of weights arranged in a network. A primary objective of the implementation was a compact and simple design, as a prelude to including even higher levels of visual processing on the chip in the kerf areas between many such arrays.
{"title":"A line and edge orientation sensor","authors":"W. O. Camp, J. van der Spiegel, M. Xiao","doi":"10.1109/IJCNN.1992.287141","DOIUrl":"https://doi.org/10.1109/IJCNN.1992.287141","url":null,"abstract":"The authors show an integrated circuit implementation of a higher-order vision function, that of determining the orientation of line segments and edges across an image projected onto the chip. The IC includes an array of photoreceptors and analog processing elements consisting of weights arranged in a network. A primary objective of the implementation was a compact and simple design, as it would be a prelude to including even higher levels of visual processing on the chip in the kerf areas between many such arrays.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121741570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}