Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374513
J. Arcand, Sophie-Julie Pelletier
This article begins by explaining the concept of distributed neural networks. It then presents a program library designed to support the development of such networks. In this context, distributed neural networks are seen as supernetworks comprising a number of subnetworks that can communicate with one another. Such supernetworks are intended to facilitate the modeling of complex and heterogeneous realities. Each subnetwork is trained independently of the others, according to the learning algorithm or algorithms that govern it. Once trained, the subnetworks are interconnected so as to circulate information through the network as a whole. The distributed network library is an application of research in this area. It allows for the creation of distributed networks, the individual training of subnetworks, and communication between subnetworks. The library's interface makes it as much a research tool as a neural-network development program for the uninitiated.
Title: ADN-analysis and development of distributed neural networks for intelligent applications
Venue: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
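The idea of independently trained subnetworks that are later interconnected can be illustrated with a minimal sketch (not the paper's ADN library): two single perceptrons are trained separately on their own tasks, then wired together so one's output feeds the other. The tasks, data, and training settings below are illustrative assumptions.

```python
# Two "subnetworks" trained independently, then interconnected into a
# "supernetwork": an AND perceptron followed by a NOT perceptron yields NAND.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single threshold unit with the classic perceptron rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

# Subnetwork 1: AND, trained on its own data.
and_net = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
# Subnetwork 2: NOT, trained independently.
not_net = train_perceptron([((0,), 1), ((1,), 0)])

# Interconnected supernetwork: information flows AND -> NOT.
nand = lambda a, b: not_net((and_net((a, b)),))
print([nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [1, 1, 1, 0]
```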
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374306
T. Nitta
This paper presents results of an analysis of the decision boundaries of complex-valued neural networks. The main results may be summarized as follows. (a) The weight parameters of a complex-valued neuron are subject to a restriction related to two-dimensional motion. (b) The decision boundary of a complex-valued neuron consists of two hypersurfaces that intersect orthogonally and divide the decision region into four equal sections. The decision boundary of a three-layered complex-valued neural network has this as its basic structure, and its two hypersurfaces intersect orthogonally if the net inputs to each hidden neuron are all sufficiently large.
Title: An analysis on decision boundaries in the complex back-propagation network
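Result (b) can be checked in the simplest case, a single complex-valued neuron with one complex input: the two boundaries Re(z) = 0 and Im(z) = 0 are lines in the input plane whose normals are orthogonal for any weight. The weight and bias values below are arbitrary assumptions.

```python
# Single complex-valued neuron (minimal model): net input z = w*x + theta.
# Writing the input as x = u + i*v, the two decision boundaries
# Re(z) = 0 and Im(z) = 0 are straight lines in the (u, v) plane.

w = complex(1.5, -0.7)      # assumed complex weight, chosen arbitrarily
theta = complex(0.3, 0.2)   # assumed complex bias

# Re(w*x + theta) = 0  <=>  Re(w)*u - Im(w)*v + Re(theta) = 0,
#   with normal vector n1 = (Re(w), -Im(w)).
# Im(w*x + theta) = 0  <=>  Im(w)*u + Re(w)*v + Im(theta) = 0,
#   with normal vector n2 = (Im(w), Re(w)).
n1 = (w.real, -w.imag)
n2 = (w.imag, w.real)

# The normals (hence the boundary lines) are orthogonal for any w:
dot = n1[0] * n2[0] + n1[1] * n2[1]
print(abs(dot) < 1e-12)   # True: the boundaries intersect at a right angle
```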
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374627
B. Hwang
In this paper, an approach based on neural networks for the control system design of a pressurized water reactor (PWR) is presented. A reference model that incorporates a static projective suboptimal control law under various operating conditions is used to generate the data needed to train the neurocontroller. The designed approach is able to control the nuclear reactor in a robust manner. The simulation results presented show that it is feasible to use artificial neural networks to improve the operating characteristics of nuclear power plants.
Title: Intelligent control for a nuclear power plant using artificial neural networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374965
B. L. Vinz, S. J. Graves
This paper describes a concept that integrates a counterpropagation neural network into a video-based vision system employed for automatic spacecraft docking. A brief overview of docking phases, the target orientation problem, and potential benefits resulting from an automated docking system is provided. Issues and challenges of automatic target recognition, as applied to automatic docking, are addressed. Following a review of the architecture, training, and desirable characteristics of the counterpropagation network, an approach for determining the relative orientation of a target spacecraft based on a counterpropagation net is presented.
Title: A counterpropagation neural network for determining target spacecraft orientation
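As a rough illustration of a counterpropagation network's forward pass (not the paper's docking system), the sketch below has a Kohonen layer select the prototype nearest the input and a Grossberg layer emit the orientation stored for that winner. The prototypes and angles are invented for illustration.

```python
import math

# Forward-only counterpropagation sketch: the Kohonen layer is a
# nearest-prototype winner-take-all; the Grossberg layer outputs the
# value associated with the winning unit -- here an orientation angle.

prototypes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # Kohonen weights
angles = [0.0, 90.0, 180.0, 270.0]   # Grossberg-layer outputs per winner

def counterprop(x):
    winner = min(range(len(prototypes)),
                 key=lambda i: math.dist(x, prototypes[i]))
    return angles[winner]

print(counterprop((0.9, 0.1)))   # 90.0: the nearest prototype is (1, 0)
```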
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.375026
J. Park, J. Park, D. Kim, C. Lee, S. Suh, M. Han
Owing to its deterministic nature and difficulty in scaling, a Hopfield-style neural network readily converges to a local minimum during energy-function minimization and cannot escape such undesirable solutions. Researchers seeking the global minimum of the traveling salesman problem (TSP) have introduced various approaches to overcome this, including heuristics, genetic algorithms, and hybrids of several methods. We introduce a simple heuristic algorithm that embeds classical local-search heuristics in the optimization neural network. The proposed algorithm is characterized by best-neighbor selection, used in dynamic scheduling and in ordering the update sequence of neurons, and by a decidability check, used to guarantee a near-optimal solution. The proposed algorithm enhances both convergence speed and solution quality.
Title: Dynamic neural network with heuristics
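One of the classical local-search heuristics such work builds on is 2-opt, which repeatedly reverses a tour segment whenever that shortens the tour. The sketch below is a plain 2-opt pass on random cities, independent of the paper's neural-network embedding; the coordinates and problem size are assumptions.

```python
import math
import random

# 2-opt local search for the TSP: reverse a segment of the tour whenever
# the reversal strictly shortens it, until no improving move remains.

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, cities):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse segment
                if tour_length(cand, cities) < tour_length(tour, cities) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]
start = list(range(12))
best = two_opt(start, cities)
print(tour_length(best, cities) <= tour_length(start, cities))  # True
```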
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374613
Sung-Woo Kim, Sun-Gi Hong, T. Ohm, Jujang Lee
Focuses on a training scheme that lets neural networks learn in regions of unstable equilibrium states, and on identification and control using these networks. This is achieved by introducing a supervisory controller during the networks' learning period. The supervisory controller is designed on the basis of Lyapunov theory and guarantees the boundedness of the system states within the region of interest. The networks can therefore be trained to a sufficiently accurate approximation with uniformly distributed training samples by properly choosing desired states that cover the region of interest. After the networks are successfully trained to identify the system, the controller is designed to cancel the system's nonlinearity.
Title: Neural network identification and control of unstable systems using supervisory control while learning
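A toy sketch of the supervisory idea (much cruder than the paper's Lyapunov-based design): a scalar unstable plant is driven by an untrained "learning" signal, and a supervisory term activates only when the state leaves the region of interest, keeping it bounded. The plant, gains, and bounds are all assumptions.

```python
import random

# Unstable scalar plant x_{k+1} = 1.2*x_k + u_k. A stand-in for the
# still-learning network injects small random inputs; the supervisory
# term overrides it whenever |x| exceeds the region of interest.

random.seed(2)
x, bound, k_sup = 0.5, 2.0, 1.5
trajectory = []
for _ in range(200):
    u_learn = random.uniform(-0.2, 0.2)              # exploring "network" output
    u_sup = -k_sup * x if abs(x) > bound else 0.0    # supervisory override
    x = 1.2 * x + u_learn + u_sup
    trajectory.append(x)

# Without u_sup the plant diverges; with it the state stays bounded
# (|x| <= 2 gives |next| <= 2.6; a triggered step contracts |x| by 0.3).
print(max(abs(v) for v in trajectory))
```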
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374694
K. Koutroumbas, N. Kalouptsidis
In this paper, two algorithms for the construction of pattern-classifier neural architectures are proposed. A comparison with other known similar architectures is given, and simulation results are presented.
Title: Nearest neighbor pattern classification neural networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374316
Sang-Hoon Oh, Youngjik Lee
In this paper, we derive the sensitivity of single hidden-layer networks with threshold functions, called "Madaline", as a function of the trained weights, the input pattern, and the variance of the weight perturbation or the bit-error probability of the binary input pattern. The derived results are verified with a simulation of the Madaline recognizing handwritten digits. Our results show that the sensitivity of a trained network is far different from that of networks with random weights.
Title: Sensitivity of trained neural networks with threshold functions
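The quantity analyzed, the probability that the output flips under weight perturbation, can also be estimated numerically. The Monte-Carlo sketch below perturbs the weights of a randomly initialized Madaline with Gaussian noise; the network size, input, and noise level are illustrative assumptions (the paper derives this analytically, and for trained rather than random weights).

```python
import random

# Madaline: a single hidden layer of threshold units feeding one
# threshold output unit. Sensitivity is estimated as the fraction of
# weight perturbations that flip the output for a fixed input.

def sign(x):
    return 1 if x >= 0 else -1

def madaline(x, W, v):
    hidden = [sign(sum(wi * xi for wi, xi in zip(w, x))) for w in W]
    return sign(sum(vi * hi for vi, hi in zip(v, hidden)))

random.seed(1)
n_in, n_hid = 8, 4
W = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hid)]
v = [random.gauss(0, 1) for _ in range(n_hid)]
x = [random.choice([-1, 1]) for _ in range(n_in)]   # binary input pattern
base = madaline(x, W, v)

sigma, trials = 0.3, 2000   # perturbation std dev and sample count (assumed)
flips = 0
for _ in range(trials):
    Wp = [[w + random.gauss(0, sigma) for w in row] for row in W]
    vp = [a + random.gauss(0, sigma) for a in v]
    flips += madaline(x, Wp, vp) != base

sensitivity = flips / trials   # estimated output-flip probability
print(0.0 <= sensitivity <= 1.0)
```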
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374644
S. Atkins, W. Baker
The primary focus of this paper is to discuss two general approaches for incrementally synthesizing a nonlinear optimal control law, through real-time, closed-loop interactions between the dynamic system, its environment, and a learning control system, when substantial initial model uncertainty exists. Learning systems represent an on-line approach to the incremental synthesis of an optimal control law for situations where initial model uncertainty precludes the use of robust, fixed control laws, and where significant dynamic nonlinearities reduce the level of performance attainable by adaptive control laws. In parallel with the established framework of direct and indirect adaptive control algorithms, a direct/indirect framework is proposed as a means of classifying approaches to learning optimal control laws. Direct learning optimal control implies that the feedback loop which motivates the learning process is closed around system performance. Common properties of direct learning algorithms, including the apparent necessity of approximating two complementary functions, are reviewed. Indirect learning optimal control denotes a class of incremental control law synthesis methods for which the learning loop is closed around the system model. This class is illustrated by developing a simple optimal control law.
Title: Direct and indirect methods for learning optimal control laws
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374437
R. Etienne-Cummings, C. Donham, J. van der Spiegel, P. Mueller
An analog neural network implementation of spatiotemporal feature extraction for real-time visual motion estimation is presented. Visual motion can be represented as an orientation in the space-time domain. Thus, motion estimation translates into orientation detection. The spatiotemporal orientation detector discussed is based on Adelson and Bergen's model with modifications to accommodate the computational limitations of hardware analog neural networks. The analog neural computer used here has the unique property of offering temporal computational capabilities through synaptic time-constants. These time-constants are crucial for implementing the spatiotemporal filters. Analysis, implementation and performance of the motion filters are discussed. The performance of the neural motion filters is found to be consistent with theoretical predictions and the real stimulus motion.
Title: Spatiotemporal computation with a general purpose analog neural computer: real-time visual motion estimation
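The statement that visual motion is an orientation in space-time can be seen in a toy example: a 1-D sinusoid drifting rightward correlates more strongly with a rightward-shifted copy of the next frame than with a leftward-shifted one. This is a Reichardt-style correlation comparison, simpler than the paper's Adelson-Bergen filters; the signal and its speed are assumptions.

```python
import math

# A drifting 1-D grating: I(x, t) = cos(2*pi*(x - t)/8), moving
# rightward at one pixel per frame, so its space-time plot is a
# structure oriented along the direction of motion.

frames, width = 6, 32
I = [[math.cos(2 * math.pi * (x - t) / 8) for x in range(width)]
     for t in range(frames)]

def corr(shift):
    # Correlate frame t with frame t+1 displaced by `shift` pixels
    # (periodic wrap-around, matching the grating's period).
    return sum(I[t][x] * I[t + 1][(x + shift) % width]
               for t in range(frames - 1) for x in range(width))

rightward, leftward = corr(1), corr(-1)
print(rightward > leftward)   # True: the space-time orientation is rightward
```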