Modeling neural network dynamics using iterative image reconstruction algorithms
R. Steriti, M. Fiddy
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.227312
Image reconstruction problems can be viewed as energy minimization problems and mapped onto a Hopfield neural network. The authors describe the Gerchberg-Papoulis iterative method and the priorized discrete Fourier transform (PDFT) algorithm (C.L. Byrne et al., 1983) for such problems. Both can be mapped onto a Hopfield neural network architecture, with the PDFT incorporating an iterative matrix inversion. The equations describing the operation of the Hopfield neural network are formally equivalent to those used in these iterative reconstruction methods, and the iterative reconstruction algorithms are regularized. The PDFT algorithm is a closed-form solution to the Gerchberg-Papoulis algorithm when image support information is used. The regularized Gerchberg-Papoulis algorithm can be implemented synchronously, from which it follows that the Hopfield neural network implementation can also converge.
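A minimal sketch (Python/NumPy) of the Gerchberg-Papoulis iteration discussed above: the estimate alternates between reimposing the measured Fourier samples and enforcing the known image support. The array shapes, toy data, and iteration count are illustrative assumptions, not values from the paper.

    import numpy as np

    def gerchberg_papoulis(measured_ft, known, support, n_iter=200):
        """measured_ft: Fourier data, zero outside `known`;
        known: boolean mask of measured Fourier samples;
        support: boolean mask where the image may be nonzero."""
        img = np.real(np.fft.ifft2(measured_ft))      # initial estimate
        for _ in range(n_iter):
            img = img * support                       # spatial support constraint
            ft = np.fft.fft2(img)
            ft[known] = measured_ft[known]            # reimpose measured data
            img = np.real(np.fft.ifft2(ft))
        return img

    # toy usage: a 32x32 image recovered from low-frequency samples
    rng = np.random.default_rng(0)
    true_img = np.zeros((32, 32))
    true_img[12:20, 12:20] = rng.random((8, 8))
    known = np.zeros((32, 32), dtype=bool)
    known[:8, :8] = True                              # low-pass corner of the FFT
    data = np.fft.fft2(true_img) * known
    recon = gerchberg_papoulis(data, known, support=(true_img > 0))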
Learning fuzzy rule-based neural networks for function approximation
C. Higgins, R. M. Goodman
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.287127
The authors present a method for inducing fuzzy logic rules to predict a numerical function from samples of the function and its dependent variables. The method uses an information-theoretic approach based on the authors' previous work with discrete-valued data (see Proc. Int. Joint Conf. on Neural Networks, vol. 1, p. 875-80, 1991). The learned rules can then be used in a neural network to predict the function value from its dependent variables. An example of learning a control system function is shown.
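The information-theoretic rule-induction step is not reproduced here; the sketch below (Python/NumPy) only illustrates how a learned fuzzy rule base can be evaluated to approximate a function, using triangular memberships and weighted-average defuzzification. The membership parameters and rule consequents are invented for illustration.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # each rule: (antecedent membership parameters for x, consequent value)
    rules = [((0.0, 0.25, 0.5), 0.2),
             ((0.25, 0.5, 0.75), 0.9),
             ((0.5, 0.75, 1.0), 0.4)]

    def fuzzy_predict(x):
        w = np.array([tri(x, *params) for params, _ in rules])   # firing strengths
        y = np.array([out for _, out in rules])
        return float(w @ y / (w.sum() + 1e-12))                  # defuzzified output

    print(fuzzy_predict(0.4))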
A neural network architecture for load forecasting
H. Bacha, W. Meyer
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.226948
Neural networks offer superior performance for predicting the future behaviour of pseudo-random time series. The authors present a neural network architecture for load forecasting that captures the relevant relationships and weather trends. The proposed architecture is tested by training three neural networks, which in turn are tested with weather data from the same four-day period. The network is made up of a series of subnetworks, each connected to its immediate neighbors in a way that takes into consideration not only current weather conditions but also the weather trend around the hour for which the forecast is being made. The neural network forecasts were very close to the actual values despite the fact that only a small sample was used and there were errors in the data. A more comprehensive study is being contemplated for the next phase; one issue to be addressed is expanding the scope of the research to include data from a complete season (three consecutive months) over several years.
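A sketch of the input wiring the abstract implies: each hourly subnetwork sees the current weather reading plus the trend around the hour being forecast. The window width, the single temperature feature, and the toy data are assumptions for illustration.

    import numpy as np

    def subnet_input(weather, hour, half_window=2):
        """Feature vector for the subnetwork forecasting `hour`:
        the current reading plus hour-to-hour changes around that hour."""
        window = weather[hour - half_window : hour + half_window + 1]
        trend = np.diff(window)                   # local weather trend
        return np.concatenate(([weather[hour]], trend))

    hours = np.arange(24)
    weather = 15 + 8 * np.sin(2 * np.pi * (hours - 6) / 24)   # toy temperatures
    print(subnet_input(weather, hour=12))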
Design and evaluation of a robust dynamic neurocontroller for a multivariable aircraft control problem
T. Troudet, Sanjay Garg, Walter C. Merrill
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.287193
The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network, trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors and of control input commands and rates. The neurocontroller exhibits good robustness, as measured by phase and gain stability margins at the vehicle outputs. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach to flight control design.
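A sketch of the training objective described above: a weighted sum of tracking errors, control input commands, and control rates accumulated over a trajectory. The weights and the toy step-response signals are placeholders, not the paper's values.

    import numpy as np

    def control_cost(y, y_ref, u, w_track=1.0, w_u=0.01, w_du=0.1):
        track = np.sum((y - y_ref) ** 2)          # tracking error term
        effort = np.sum(u ** 2)                   # control command term
        rate = np.sum(np.diff(u, axis=0) ** 2)    # control rate term
        return w_track * track + w_u * effort + w_du * rate

    t = np.linspace(0.0, 1.0, 100)
    y_ref = np.ones_like(t)                       # unit step command
    y = 1.0 - np.exp(-5.0 * t)                    # toy closed-loop response
    u = np.exp(-5.0 * t)                          # toy control history
    print(control_cost(y, y_ref, u))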
Handwritten alpha-numeric recognition by a self-growing neural network 'CombNET-II'
A. Iwata, Y. Suwa, Y. Ino, N. Suzumura
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.227337
CombNET-II is a self-growing four-layer neural network model with a comb structure. The first layer constitutes a stem network, which quantizes the input feature vector space into several subspaces; layers 2-4 constitute branch network modules, which classify the input data in each subspace into specified categories. CombNET-II uses a self-growing neural network learning procedure to train the stem network, and backpropagation to train the branch networks. Each branch module, a three-layer hierarchical network, has a restricted number of output neurons and interconnections, making it easy to train. CombNET-II therefore avoids local minima, since the stem network restricts the complexity of the problem each branch module must solve. CombNET-II correctly classified 99.0% of previously unseen handwritten alpha-numeric characters.
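A minimal sketch of CombNET-II's two-stage flow: the stem network routes an input to the subspace with the nearest reference vector, and that subspace's branch network produces the category. The stem vectors and stand-in branch classifiers below are illustrative; the paper trains the stem with a self-growing procedure and the branches with backpropagation.

    import numpy as np

    def classify(x, stem_vectors, branch_nets):
        """Route x through the stem, then score it with the matching branch."""
        branch = int(np.argmin(np.linalg.norm(stem_vectors - x, axis=1)))
        scores = branch_nets[branch](x)           # branch network forward pass
        return branch, int(np.argmax(scores))

    rng = np.random.default_rng(0)
    stem_vectors = rng.random((4, 16))            # 4 subspaces, 16-dim features
    branch_nets = [(lambda x: rng.random(10)) for _ in range(4)]  # dummy branches
    print(classify(rng.random(16), stem_vectors, branch_nets))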
Nonlinear estimation of torque in switched reluctance motors using grid locking and preferential training techniques on self-organizing neural networks
J.J. Garside, R. Brown, T.L. Ruchti, X. Feng
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.226887
The torque of a switched reluctance motor (SRM) can be estimated using a topology-preserving self-organizing neural network map. Since self-organizing maps tend to contract at region boundaries, a procedure for locking neuron weights at specific locations in a region is presented. A strategy for preferentially training neuron weights on the region boundaries is introduced. As an example of these training techniques, a one-dimensional neural network approximates a nonlinear function. In general, an n-dimensional mapping can be used to approximate an m-dimensional system for n <= m. As a practical implementation of the technique, the modeling of the theoretical torque of an SRM as a function of position and current is presented: a two-dimensional neural network estimates a highly nonlinear three-dimensional surface.
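A sketch of the grid-locking idea: a standard one-dimensional SOM update in which neurons flagged as locked never move, so the map cannot contract at the region boundaries. The learning rate, neighbourhood width, and training data are illustrative assumptions.

    import numpy as np

    def som_update(weights, locked, x, lr=0.1, sigma=1.0):
        winner = np.argmin(np.abs(weights - x))           # best-matching unit
        dist = np.arange(len(weights)) - winner
        h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))       # neighbourhood function
        step = lr * h * (x - weights)
        step[locked] = 0.0                                # locked weights stay put
        return weights + step

    rng = np.random.default_rng(0)
    weights = np.linspace(0.0, 1.0, 10)
    locked = np.zeros(10, dtype=bool)
    locked[[0, -1]] = True                                # lock the boundary neurons
    for x in rng.random(500):
        weights = som_update(weights, locked, x)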
A dynamic approach to improve sparsely encoded associative memory capability
Y.-P. Huang, D. Gustafson
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.287169
A method for improving the storage capacity of sparsely encoded associative memories, based on dynamic thresholding, is presented. Under the dynamic thresholding scheme, the sparse encoding method is shown to have greater storage capacity than the ordinary associative memory. The results are also considered from the standpoint of storage sensitivity. Simulation results are consistent with the quantitative analysis. System capacity is found to depend strongly on the selected threshold. Threshold selection is based on the assumption that each neuron operates close to its threshold, which makes it possible to derive a more realistic storage capacity using only the signal component and the mean noise.
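One simple way to realize an activity-dependent threshold like the one discussed above (Python/NumPy): sparse binary patterns are stored Hebbianly, and at recall the threshold is set dynamically so that exactly k neurons fire, matching the coding level. The pattern count, size, and sparsity are illustrative, and the paper's threshold rule may differ in detail.

    import numpy as np

    def store(patterns):
        """Hebbian storage of sparse binary patterns (sum of outer products)."""
        W = np.zeros((patterns.shape[1], patterns.shape[1]))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)                  # no self-connections
        return W

    def recall(W, probe, k):
        h = W @ probe                             # net input to each neuron
        thresh = np.sort(h)[-k]                   # dynamic threshold: k-th largest input
        return (h >= thresh).astype(float)

    n, k = 64, 4                                  # 64 neurons, 4 active per pattern
    rng = np.random.default_rng(0)
    patterns = np.zeros((5, n))
    for p in patterns:
        p[rng.choice(n, size=k, replace=False)] = 1.0
    W = store(patterns)
    print(recall(W, patterns[0], k) @ patterns[0])   # overlap with the stored pattern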
A genetic approach to the truck backer upper problem and the inter-twined spiral problem
J. Koza
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.227324
The author describes a biologically motivated paradigm, genetic programming, which can solve a variety of problems. When genetic programming solves a problem, it produces a computer program that takes the state variables of the system as input and produces the actions required to solve the problem as output. Genetic programming is explained and applied to two well-known benchmark problems from the field of neural networks: the truck backer upper problem, a multidimensional control problem, and the inter-twined spirals problem, a challenging classification problem.
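A compressed sketch of the genetic programming cycle: programs are expression trees, selection favours low error on the task, and offspring are created by grafting subtrees between parents. This toy evolves a one-variable expression by symbolic regression; it illustrates the paradigm only, not Koza's actual truck backer upper or spiral setups.

    import random

    OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

    def rand_tree(depth=3):
        """Random expression tree over {x, constants, +, -, *}."""
        if depth == 0 or random.random() < 0.3:
            return 'x' if random.random() < 0.5 else random.uniform(-1.0, 1.0)
        return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

    def run(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        op, a, b = tree
        return OPS[op](run(a, x), run(b, x))

    def fitness(tree):
        """Squared error against the target x^2 + x on a grid of sample points."""
        xs = [i / 10.0 for i in range(-10, 11)]
        return sum((run(tree, x) - (x * x + x)) ** 2 for x in xs)

    def crossover(a, b):
        """Crude recombination: graft parent b in as one argument of parent a."""
        if not isinstance(a, tuple):
            return b
        op, left, right = a
        return (op, b, right) if random.random() < 0.5 else (op, left, b)

    random.seed(0)
    pop = [rand_tree() for _ in range(200)]
    for _ in range(30):
        pop.sort(key=fitness)                     # selection: keep the fittest
        parents = pop[:50]
        pop = parents + [crossover(random.choice(parents), random.choice(parents))
                         for _ in range(150)]
    best = min(pop, key=fitness)
    print(fitness(best))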
A fuzzy neural networks technique with fast backpropagation learning
H. Y. Xu, G.Z. Wang, C.B. Baird
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.287133
A fuzzy neural network (FNN) technique based on fuzzy systems and neural network technologies is presented. Utilizing human knowledge and expertise, the FNN technique accelerates learning in a novel backpropagation algorithm in which both the activation function and the learning rate are self-adjusting. The learning speed and quality of the fuzzy neural networks are shown to be superior to those of standard backpropagation and of other methods that use variable learning rates or activation functions. The proposed networks have been developed and implemented in a C language environment. Experimental and analytical results demonstrate that the FNN technique is a novel and potentially powerful approach to intelligent neural networks.
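The paper's self-adjusting learning rate is derived from fuzzy rules built on expert knowledge; the sketch below only shows the general shape of such adaptation, with a simple stand-in heuristic (grow the rate while the error keeps falling, cut it when the error rises). The gains and the toy quadratic objective are assumptions.

    import numpy as np

    def adapt_lr(lr, err, prev_err, grow=1.05, shrink=0.5):
        """Increase the rate on progress, cut it sharply on regress."""
        return lr * (grow if err < prev_err else shrink)

    w, lr, prev_err = 5.0, 0.1, np.inf            # toy 1-D "network weight"
    for _ in range(50):
        err, grad = w ** 2, 2.0 * w               # quadratic stand-in for training error
        lr = adapt_lr(lr, err, prev_err)
        w, prev_err = w - lr * grad, err
    print(w, lr)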
A software reconfigurable multi-networks simulator using a custom associative chip
J. Gascuel, E. Delaunay, L. Montoliu, B. Moobed, M. Weinfeld
Pub Date: 1992-06-07. DOI: 10.1109/IJCNN.1992.226991
A special-purpose simulator is described. It was designed to try various interconnection schemes between several similar associative chips, in order to assess hierarchical assemblies of neural networks. The chips are digital feedback networks with 64 fully interconnected binary neurons, capable of on-chip learning and automatic detection of spurious attractors. The simulator is based on the MCP development board, each of which can house four associative chips. It is designed to transparently address chips not only inside the machine in which it resides but also chips in other machines. All virtual interconnections between chips are made at the neuron level, meaning that the individual components of the binary vectors processed by each chip can be routed to the input, or from the output, of any other chip. The simulator's scheduling allows sequential information processing.
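A sketch of the neuron-level virtual interconnection described above: a routing table maps (source chip, neuron) pairs to (destination chip, neuron) pairs, and a scheduler evaluates every chip, then copies routed outputs into destination inputs. The chip model here is a stub; in the real system each entry would drive one of the 64-neuron associative chips on an MCP board.

    import numpy as np

    N = 64                                        # fully interconnected binary neurons per chip

    def step(chips, routes):
        """Evaluate all chips, then apply the neuron-level routing table."""
        outputs = {cid: c['update'](c['state']) for cid, c in chips.items()}
        for (src_chip, src_neuron), (dst_chip, dst_neuron) in routes:
            chips[dst_chip]['state'][dst_neuron] = outputs[src_chip][src_neuron]

    rng = np.random.default_rng(0)

    def make_chip():
        W = rng.standard_normal((N, N))           # stand-in for learned weights
        return {'state': np.sign(rng.standard_normal(N)),
                'update': lambda s, W=W: np.sign(W @ s)}

    chips = {0: make_chip(), 1: make_chip()}
    routes = [((0, 5), (1, 12)), ((1, 3), (0, 40))]   # (chip, neuron) -> (chip, neuron)
    step(chips, routes)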