Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170511
The negative transfer problem in neural networks: a solution
A. Abunawass
The authors introduce a modified BP (backpropagation) model that can be used in sequential learning to overcome the NT (negative transfer) effect. Simulations were conducted to contrast the performance of the original BP model with the modified one. The results show that the effect of NT can be completely eliminated, and in some cases reversed, by using the modified BP model. The behavior and interactions of the weight matrices are studied over successive training sessions. This work confirms the need for an overall cognitive architecture that goes beyond the basic application of the learning model.
{"title":"The negative transfer problem in neural networks: a solution","authors":"A. Abunawass","doi":"10.1109/IJCNN.1991.170511","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170511","url":null,"abstract":"The authors introduce a modified BP (backpropagation) model that can be used in sequential learning to overcome the NET (negative transfer) effect. Simulations were conducted to contrast the performance of the original BP model with the modified one. The results of the simulations showed that effect of the NT can be completely eliminated, and in some cases reversed, by using the modified BP model. The behavior and interactions of the weight matrices are studied over successive training sessions. This work confirms the need to have an overall cognitive architecture that goes beyond the basic application of the learning model.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132245967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170462
Assessing the reliability of artificial neural networks
G. Bolt
The complex problem of assessing the reliability of a neural network is addressed. This is approached by first examining the style in which neural networks fail, and it is concluded that a continuous measure is required. Various factors are identified which influence the definition of such a reliability measure, and examples of suitable reliability measures for the multilayer perceptron are given for various situations. An assessment strategy for a neural network's reliability is also developed. Two conventional methods are discussed (fault injection and mean-time-before-failure), and certain deficiencies are noted. From this, a more suitable service degradation method is developed. The importance of choosing a reasonable timescale for a simulation environment is also discussed. Examples of each style of simulation method are given for the multilayer perceptron.
{"title":"Assessing the reliability of artificial neural networks","authors":"G. Bolt","doi":"10.1109/IJCNN.1991.170462","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170462","url":null,"abstract":"The complex problem of assessing the reliability of a neural network is addressed. This is approached by first examining the style in which neural networks fail, and it is concluded that a continuous measure is required. Various factors are identified which will influence the definition of such a reliability measure. For various situations, examples are given of suitable reliability measures for the multilayer perceptron. An assessment strategy for a neural network's reliability is also developed. Two conventional methods are discussed (fault injection and mean-time-before-failure), and certain deficiencies are noted. From this, a more suitable service degradation method is developed. The importance of choosing a reasonable timescale for a simulation environment is also discussed. Examples of each style of simulation method are given for the multilayer perceptron.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130188857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170754
Solving four-coloring map problems using strictly digital neural networks
K. Murakami, T. Nakagawa, H. Kitagawa
A parallel algorithm using SDNNs (strictly digital neural networks) for solving the four-coloring map problem, a combinatorial optimization problem, is presented. The problem is formulated as a set selection problem with the k-out-of-n design rule and is solved efficiently by an SDNN software simulator running the parallel algorithm; solving a large instance with a sequential algorithm takes several hours. The simulation results show that four-coloring map problems can be solved within O(n) steps under parallel convergence and O(n^2) steps in sequential simulation, where n is the number of regions. A comparison with two other algorithms shows the efficiency of the SDNN algorithm.
{"title":"Solving four-coloring map problems using strictly digital neural networks","authors":"K. Murakami, T. Nakagawa, H. Kitagawa","doi":"10.1109/IJCNN.1991.170754","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170754","url":null,"abstract":"A parallel algorithm with SDNNs (strictly digital neural networks) for solving four-coloring problems on combinatorial optimization problems is presented. This problem was defined as a set selection problem with the k-out-of-n design rule and was solved efficiently by an SDNN software simulator with the parallel algorithm. Solving this large problem with a sequential algorithm takes several hours. The simulation results of SDNN show that four-colour map problems can be solved not only within O(n) in parallel convergence but also O(n/sup 2/) in sequential simulation, where n is the number of regions. A comparison with two other algorithms shows the efficiency of the SDNN algorithm.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134142273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170671
Autonomous trajectory generation of a biped locomotive robot
Y. Kurematsu, O. Katayama, M. Iwata, S. Kitamura
Introduces a hierarchical structure for motion planning and learning control of a biped locomotive robot. In this system, trajectories for the robot's joints on a flat surface are obtained from an inverted pendulum equation and a Hopfield-type neural network: the equation is simulated for the motion of the robot's center of gravity, and the network solves the inverse kinematics. A multilayered neural network is also used to train walking modes by compensating for the difference between the inverted pendulum model and the robot. Simulation results show the effectiveness of the proposed method in generating various walking patterns. The authors then extend the system to let the robot walk on stairs by defining two walking phases, a single-support phase and a double-support phase; combining these phases yields successful trajectory generation for walking on an uneven surface such as stairs.
{"title":"Autonomous trajectory generation of a biped locomotive robot","authors":"Y. Kurcmatsu, O. Katayama, M. Iwata, S. Kitamura","doi":"10.1109/IJCNN.1991.170671","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170671","url":null,"abstract":"Introduces a hierarchical structure for motion planning and learning control of a biped locomotive robot. In this system, trajectories are obtained for a robot's joints on a flat surface by an inverted pendulum equation and a Hopfield type neural network. The former equation is simulated for the motion of the center of gravity of the robot and the network is used for solving the inverse kinematics. A multi-layered neural networks is also used for training, walking modes by compensating for the difference between the inverted pendulum model and the robot. Simulation results show the effectiveness of the proposed method to generate various walking patterns. Next, the authors improved the system to let the robot walk on stairs. They set up two phases as a walking mode; a single-support phase and a double-support phase. Combination of these two phases yields a successful trajectory generation for the robot's walking on a rough surface such as stairs.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134455684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170549
A neural network algorithm for solving the traffic control problem in multistage interconnection networks
K. T. Sun, H. Fu
The authors propose a neural network algorithm for the traffic control problem (an NP-complete problem) in multistage interconnection networks. The traffic control problem is represented by an energy function whose state is iteratively updated by the authors' parallel algorithm; when the energy function reaches a stable state, that state represents a solution of the problem. Empirical results show the effectiveness of the proposed algorithm, and the time complexity with n^2 neurons is O(n log n). Simulation results show that both the throughput and the number of iteration steps are much better than those of the linear approach. Furthermore, since the traffic control problem can be reduced to the traveling salesman problem, the proposed algorithm can also be applied to other optimization problems.
{"title":"A neural network algorithm for solving the traffic control problem in multistage interconnection networks","authors":"K. T. Sun, H. Fu","doi":"10.1109/IJCNN.1991.170549","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170549","url":null,"abstract":"The authors propose a neural network algorithm for the traffic control problem (an NP-complete problem) in multistage interconnection networks. The traffic control problem can be represented by an energy function, and the state of the energy function is iteratively updated by the authors' parallel algorithm. When the energy function reaches a stable state, the state represents a solution of the problem. Empirical results show the effectiveness of the proposed algorithm, and the time complexity with n/sup 2/ neurons is O(n log n). Simulation results show that both the throughput and iteration steps are much better than in the linear approach. Furthermore, since the traffic control problem can be reduced to the traveling salesman problem. the proposed algorithm can also be applied to other optimization problems.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131777913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170403
Alopex algorithm for training multilayer neural networks
K. P. Venugopal, A. S. Pandya
The use of the Alopex algorithm for training multilayer neural networks is described. Alopex is a biologically influenced stochastic parallel process designed to find the global minimum of error surfaces. It has a number of advantages compared to other algorithms such as backpropagation, reinforcement learning, and the Boltzmann machine. The authors investigate the efficacy of the algorithm for faster convergence by considering different error functions and discuss the specifics of the algorithm for applications involving learning tasks. Results of computer simulations on standard problems such as XOR, parity, symmetry, and encoders of different dimensions are presented and compared with those obtained using backpropagation. A temperature perturbation scheme is proposed which allows the algorithm to escape strong local minima.
{"title":"Alopex algorithm for training multilayer neural networks","authors":"K. P. Venugopal, A. S. Pandya","doi":"10.1109/IJCNN.1991.170403","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170403","url":null,"abstract":"The use of the Alopex algorithm for training multilayer neural networks is described. Alopex is a biologically influenced stochastic parallel process designed to find the global minimum of error surfaces. It has a number of advantages compared to other algorithms, such as backpropagation, reinforcement learning, and the Boltzmann machine. The authors investigate the efficacy of the algorithm for faster convergence by considering different error functions. They discuss the specifics of the algorithm for applications involving learning tasks. Results of computer simulations with standard problems such as XOR, parity, symmetry, and encoders of different dimensions are also presented and compared with those obtained using backpropagation. A temperature perturbation scheme is proposed which allows the algorithm to get out of strong local minima.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129407933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170379
A novel model of associative memory with biorthogonal properties
Ke-Lin Chen, Yu Ting, P. Yan
A novel model of associative memory with biorthogonal properties is presented which can be viewed as an improved version of T. Kohonen's (1977) linear model of associative memory. An iterative algorithm is developed which makes the proposed model directly usable without any limiting condition. Several characteristics of the model that closely resemble biological phenomena are discussed. It is shown that the optimal value of an associative memory can always be obtained in the proposed model. Compared with Kohonen's model, the proposed model has many characteristics closer to human memory function and can be applied more conveniently and unconditionally in any linear physical system.
{"title":"A novel model of associative memory with biorthogonal properties","authors":"Ke-Lin Chen, Yu Ting, P. Yan","doi":"10.1109/IJCNN.1991.170379","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170379","url":null,"abstract":"A novel model of associative memory with biorthogonal properties is presented which can be viewed as an improved version of T. Kohonen's (1977) linear model of associative memory. An iterative algorithm is developed which makes the proposed model directly usable without any limit condition. Several characteristics of the model which are very similar to biological phenomena are discussed. It is shown that the optimal value of an associative memory can always be obtained in the proposed model. Compared with Kohonen's model, the proposed model has many characteristics closer to the human functions of memory, and can be more conveniently and unconditionally applied in any linear physical system.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130704708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170747
Weight value initialization for improving training speed in the backpropagation network
Young-Ik Kim, Jong Beom Ra
A method for initializing the weight values of multilayer feedforward neural networks is proposed to improve the learning speed of a network. The method suggests a minimum bound on the weights based on the dynamics of decision boundaries, derived from the generalized delta rule. Computer simulations on several neural network models showed that proper selection of the initial weight values improves the learning ability and contributes to fast convergence.
{"title":"Weight value initialization for improving training speed in the backpropagation network","authors":"Young-Ik Kim, Jong Beom Ra","doi":"10.1109/IJCNN.1991.170747","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170747","url":null,"abstract":"A method for initialization of the weight values of multilayer feedforward neural networks is proposed to improve the learning speed of a network. The proposed method suggests the minimum bound of the weights based on dynamics of decision boundaries, which is derived from the generalized delta rule. Computer simulation in several neural network models showed that the proper selection of the initial weight values improves the learning ability and contributed to fast convergence.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132850469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170727
Stability and attractivity analysis of bidirectional associative memory from the matched-filtering viewpoint
Zhang Bai-ling, Xu Bing-zheng, Kwong Chung-ping
The authors study the bidirectional associative memory (BAM) model from the matched-filtering viewpoint, gaining an intuitive understanding of its information processing mechanism. They analyze the problem of stability and attractivity in BAM and propose some sufficient conditions. The shortcomings of BAM, namely low memory capacity and weak attractivity, are pointed out. A revised BAM model is proposed in which an exponential function is applied to the correlations between a probing vector and its neighboring library pattern vectors. The analysis shows that stability and attractivity in the modified model are much better than in the original BAM under the same conditions.
{"title":"Stability and attractivity analysis of bidirectional associative memory from the matched-filtering viewpoint","authors":"Zhang Bai-ling, Xu Bing-zheng, Kwong Chung-ping","doi":"10.1109/IJCNN.1991.170727","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170727","url":null,"abstract":"The authors study the bidirectional associative memory (BAM) model from the matched-filtering viewpoint, getting an intuitive understanding of its information processing mechanism. They analyze the problem of stability and attractivity, in BAM and propose some sufficient conditions. The shortcomings of BAM, that is, low memory capacity and weak attractivity, are pointed out. A revised BAM model is proposed by taking an exponential function operating on the related correlations between a probing vector and its neighbor library pattern vectors. From the analysis, it was found that stability and attractivity in the modified model are much better than in the original BAM if all the conditions are the same.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132571230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1991-11-18  DOI: 10.1109/IJCNN.1991.170535
A stability based neural network control method for a class of nonlinear systems
E. Tzirkel-Hancock, F. Fallside
A direct control scheme for a class of continuous-time nonlinear systems using neural networks is presented. The objective of the control is to track a desired reference signal, achieved through input/output linearization of the system with neural networks. Learning, based on a stability-type algorithm, takes place simultaneously with control. As such, the method is closely related to adaptive control methods and to the field of neural network training. In particular, the importance of the persistent excitation property and its implications for learning with networks of localized receptive fields are discussed.
{"title":"A stability based neural network control method for a class of nonlinear systems","authors":"E. Tzirkel-Hancock, F. Fallside","doi":"10.1109/IJCNN.1991.170535","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170535","url":null,"abstract":"A direct control scheme for a class of continuous-time nonlinear systems using neural networks is presented. The objective of the control is to track a desired reference signal. This objective is achieved through input/output linearization of the system with neural networks. Learning, based on a stability type algorithm, takes place simultaneously with control. As such, the method is closely related to adaptive control methods and the field of neural network training. In particular, the importance of the property of persistent excitation and its implications for learning with networks of localized receptive fields are discussed.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133858338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}