
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

The negative transfer problem in neural networks: a solution
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170511
A. Abunawass
The authors introduce a modified BP (backpropagation) model that can be used in sequential learning to overcome the NT (negative transfer) effect. Simulations were conducted to contrast the performance of the original BP model with the modified one. The results of the simulations showed that the effect of NT can be completely eliminated, and in some cases reversed, by using the modified BP model. The behavior and interactions of the weight matrices are studied over successive training sessions. This work confirms the need for an overall cognitive architecture that goes beyond the basic application of the learning model.
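The abstract does not spell out the modification itself, so the sketch below only illustrates the setting in which negative transfer appears: a small backpropagation network trained sequentially on two tasks, with the error on the first task re-measured afterwards. The network size, tasks (XOR and AND), and learning rate are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Minimal sketch (not the authors' modified BP): train a small backprop network
# sequentially on two tasks and measure how much performance on task A degrades
# after a training session on task B (the negative-transfer effect).

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyMLP:
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train(self, X, T, epochs=2000, lr=0.5):
        for _ in range(epochs):
            y = self.forward(X)
            d_out = (y - T) * y * (1 - y)                    # output-layer delta
            d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= lr * self.h.T @ d_out
            self.W1 -= lr * X.T @ d_hid

    def mse(self, X, T):
        return float(np.mean((self.forward(X) - T) ** 2))

# Two small binary mappings standing in for "task A" and "task B" (assumed).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T_A = np.array([[0], [1], [1], [0]], dtype=float)            # XOR
T_B = np.array([[0], [0], [0], [1]], dtype=float)            # AND

net = TinyMLP(2, 4, 1)
net.train(X, T_A)
err_A_before = net.mse(X, T_A)
net.train(X, T_B)                                            # sequential session on task B
err_A_after = net.mse(X, T_A)                                # task A revisited: forgetting
print(f"task A error before/after task B: {err_A_before:.4f} / {err_A_after:.4f}")
```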
Citations: 3
Assessing the reliability of artificial neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170462
G. Bolt
The complex problem of assessing the reliability of a neural network is addressed. This is approached by first examining the way in which neural networks fail, and it is concluded that a continuous measure is required. Various factors are identified that influence the definition of such a reliability measure. For various situations, examples are given of suitable reliability measures for the multilayer perceptron. An assessment strategy for a neural network's reliability is also developed. Two conventional methods are discussed (fault injection and mean time before failure), and certain deficiencies are noted. From this, a more suitable service-degradation method is developed. The importance of choosing a reasonable timescale for a simulation environment is also discussed. Examples of each style of simulation method are given for the multilayer perceptron.
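As an illustration of the fault-injection style of assessment mentioned above, the sketch below zeroes one weight of a small multilayer perceptron at a time and records the resulting output disturbance as a continuous degradation measure. The stuck-at-zero fault model, the random "trained" weights, and the mean-absolute-deviation measure are assumptions for the sake of the example, not the paper's definitions.

```python
import numpy as np

# Fault-injection sketch: zero individual weights of a small MLP and record the
# continuous increase in output disturbance relative to the fault-free network.

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# A small "trained" network is simulated here with fixed random weights (assumed).
W1 = rng.normal(0, 1.0, (3, 5))
W2 = rng.normal(0, 1.0, (5, 2))
X = rng.normal(0, 1.0, (50, 3))

def forward(W1, W2, X):
    return sigmoid(sigmoid(X @ W1) @ W2)

baseline = forward(W1, W2, X)

def degradation_after_fault(layer, i, j):
    """Mean output disturbance when weight (i, j) of a layer is stuck at zero."""
    W1f, W2f = W1.copy(), W2.copy()
    (W1f if layer == 1 else W2f)[i, j] = 0.0
    faulty = forward(W1f, W2f, X)
    return float(np.mean(np.abs(faulty - baseline)))

# Sweep every weight in the first layer and summarise the degradation profile.
profile = [degradation_after_fault(1, i, j)
           for i in range(W1.shape[0]) for j in range(W1.shape[1])]
print(f"mean degradation {np.mean(profile):.4f}, worst case {np.max(profile):.4f}")
```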
Citations: 5
Solving four-coloring map problems using strictly digital neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170754
K. Murakami, T. Nakagawa, H. Kitagawa
A parallel algorithm using SDNNs (strictly digital neural networks) for solving the four-coloring map problem, a combinatorial optimization problem, is presented. The problem was defined as a set selection problem with the k-out-of-n design rule and was solved efficiently by an SDNN software simulator running the parallel algorithm. Solving this large problem with a sequential algorithm takes several hours. The SDNN simulation results show that four-coloring map problems can be solved within O(n) in parallel convergence and O(n^2) in sequential simulation, where n is the number of regions. A comparison with two other algorithms shows the efficiency of the SDNN algorithm.
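A minimal sketch of the 1-out-of-4 (k-out-of-n with k = 1) encoding implied by the abstract is given below: one binary neuron per (region, colour) pair, with exactly one colour winning per region and a cost for adjacent regions sharing a colour. The greedy asynchronous descent is a stand-in, not the paper's SDNN update schedule, and the toy adjacency map is invented.

```python
import numpy as np

# Map colouring as a constraint problem: pick one colour per region so that no
# two adjacent regions share a colour; each region's choice is driven down the
# local conflict count (a simple energy descent, not the SDNN hardware schedule).

rng = np.random.default_rng(2)
n_regions, n_colors = 6, 4
# Adjacency of a small hypothetical map (symmetric 0/1 matrix).
adj = np.zeros((n_regions, n_regions), dtype=int)
for a, b in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 4)]:
    adj[a, b] = adj[b, a] = 1

color = rng.integers(0, n_colors, n_regions)      # one winning neuron per region

def conflicts(color):
    return sum(adj[i, j] and color[i] == color[j]
               for i in range(n_regions) for j in range(i + 1, n_regions))

for sweep in range(20):                           # asynchronous sweeps over regions
    for i in range(n_regions):
        # cost of each candidate colour = number of identically coloured neighbours
        cost = [sum(adj[i, j] and color[j] == c for j in range(n_regions))
                for c in range(n_colors)]
        color[i] = int(np.argmin(cost))
    if conflicts(color) == 0:
        break

print("colouring:", color.tolist(), "conflicts:", conflicts(color))
```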
Citations: 7
Autonomous trajectory generation of a biped locomotive robot
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170671
Y. Kurcmatsu, O. Katayama, M. Iwata, S. Kitamura
The authors introduce a hierarchical structure for motion planning and learning control of a biped locomotive robot. In this system, trajectories for the robot's joints on a flat surface are obtained from an inverted pendulum equation and a Hopfield-type neural network. The equation models the motion of the robot's center of gravity, and the network is used to solve the inverse kinematics. A multilayer neural network is also used to train walking modes by compensating for the difference between the inverted pendulum model and the robot. Simulation results show the effectiveness of the proposed method in generating various walking patterns. The authors then extend the system to let the robot walk on stairs, defining two phases of the walking mode: a single-support phase and a double-support phase. Combining these two phases yields successful trajectory generation for walking on an uneven surface such as stairs.
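The abstract refers to an inverted pendulum equation for the center of gravity; a common linear inverted-pendulum form is sketched below as an illustration. The equation form, CoG height, and initial conditions are assumptions and may differ from the paper's model.

```python
import numpy as np

# Linear inverted-pendulum sketch for the centre of gravity (CoG):
#   x_ddot = (g / z_c) * (x - p)
# with x the horizontal CoG position, z_c a constant CoG height and p the
# support-point position. All numerical values are illustrative assumptions.

g, z_c = 9.81, 0.8            # gravity [m/s^2], CoG height [m]
dt, steps = 0.005, 100
x, x_dot, p = 0.0, 0.3, 0.0   # start above the support point with forward velocity

trajectory = []
for _ in range(steps):
    x_ddot = (g / z_c) * (x - p)      # pendulum dynamics about the support point
    x_dot += x_ddot * dt              # simple Euler integration
    x += x_dot * dt
    trajectory.append(x)

print(f"CoG after {steps * dt:.2f} s: x = {trajectory[-1]:.3f} m")
```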
Citations: 22
A neural network algorithm for solving the traffic control problem in multistage interconnection networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170549
K. T. Sun, H. Fu
The authors propose a neural network algorithm for the traffic control problem (an NP-complete problem) in multistage interconnection networks. The traffic control problem can be represented by an energy function, and the state of the energy function is iteratively updated by the authors' parallel algorithm. When the energy function reaches a stable state, that state represents a solution of the problem. Empirical results show the effectiveness of the proposed algorithm, and the time complexity with n^2 neurons is O(n log n). Simulation results show that both the throughput and the number of iteration steps are much better than in the linear approach. Furthermore, since the traffic control problem can be reduced to the traveling salesman problem, the proposed algorithm can also be applied to other optimization problems.
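A hedged sketch of this kind of energy-function formulation follows: an n x n grid of binary neurons with quadratic row and column penalties (one slot per request, one request per slot), relaxed by simple asynchronous flips. The penalty weights and descent rule are illustrative; the paper's exact energy function is not reproduced.

```python
import numpy as np

# Binary neuron grid V[i, j] = 1 read as "request i is granted slot j" (a
# hypothetical reading of the n^2-neuron encoding). Row and column constraints
# are quadratic penalties; an asynchronous single-flip descent lowers the
# energy until no flip helps.

rng = np.random.default_rng(3)
n = 5
A = B = 2.0                                   # penalty weights (assumed)
V = rng.integers(0, 2, (n, n))

def energy(V):
    row = np.sum((V.sum(axis=1) - 1) ** 2)    # exactly one slot per request
    col = np.sum((V.sum(axis=0) - 1) ** 2)    # exactly one request per slot
    return A * row + B * col

for sweep in range(100):
    improved = False
    for i in range(n):
        for j in range(n):
            before = energy(V)
            V[i, j] ^= 1                      # trial flip of one neuron
            if energy(V) < before:
                improved = True               # keep the flip
            else:
                V[i, j] ^= 1                  # revert
    if not improved:
        break

print("final energy:", energy(V))
print(V)
```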
Citations: 5
Alopex algorithm for training multilayer neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170403
K. P. Venugopal, A. S. Pandya
The use of the Alopex algorithm for training multilayer neural networks is described. Alopex is a biologically influenced stochastic parallel process designed to find the global minimum of error surfaces. It has a number of advantages compared to other algorithms, such as backpropagation, reinforcement learning, and the Boltzmann machine. The authors investigate the efficacy of the algorithm for faster convergence by considering different error functions. They discuss the specifics of the algorithm for applications involving learning tasks. Results of computer simulations with standard problems such as XOR, parity, symmetry, and encoders of different dimensions are also presented and compared with those obtained using backpropagation. A temperature perturbation scheme is proposed which allows the algorithm to get out of strong local minima.
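For concreteness, a sketch of an Alopex-style update loop is given below: each weight moves by a fixed step, and the probability of repeating the previous direction depends on the correlation between the last weight change and the last change in error, scaled by a temperature. The step size, the crude annealing rule, and the tiny XOR network are assumptions rather than the authors' settings.

```python
import numpy as np

# Alopex-style sketch: every weight takes a step of +/- delta; the probability
# of repeating the previous direction is a sigmoid of the correlation between
# that step and the resulting change in error, scaled by an annealed temperature.

rng = np.random.default_rng(4)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
shapes = [(2, 3), (3, 1)]
sizes = [int(np.prod(s)) for s in shapes]

def error(w):
    W1 = w[:sizes[0]].reshape(shapes[0])
    W2 = w[sizes[0]:].reshape(shapes[1])
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2))

n_w = sum(sizes)
w = rng.normal(0, 0.5, n_w)
delta, temp = 0.01, 0.1
dw = rng.choice([-delta, delta], n_w)           # previous step for each weight
prev_err = error(w)

for step in range(20000):
    w = w + dw
    err = error(w)
    corr = dw * (err - prev_err)                # > 0 means the step made things worse
    p_repeat = 1.0 / (1.0 + np.exp(np.clip(corr / temp, -50.0, 50.0)))
    keep = rng.random(n_w) < p_repeat
    dw = np.where(keep, dw, -dw)                # stochastically keep or flip direction
    temp = max(1e-8, 0.95 * temp + 0.05 * float(np.mean(np.abs(corr))))  # crude annealing
    prev_err = err

print(f"final XOR error: {error(w):.4f}")
```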
Citations: 16
A novel model of associative memory with biorthogonal properties
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170379
Ke-Lin Chen, Yu Ting, P. Yan
A novel model of associative memory with biorthogonal properties is presented, which can be viewed as an improved version of T. Kohonen's (1977) linear model of associative memory. An iterative algorithm is developed that makes the proposed model directly usable without any limiting condition. Several characteristics of the model that closely resemble biological phenomena are discussed. It is shown that the optimal value of an associative memory can always be obtained in the proposed model. Compared with Kohonen's model, the proposed model has many characteristics closer to human memory function and can be applied more conveniently and unconditionally in any linear physical system.
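Since the abstract does not give the iterative algorithm, the sketch below only contrasts Kohonen-style correlation recall with a biorthogonal (pseudoinverse-based) read-out, which is one standard way to obtain exact recall for linearly independent keys. The pattern dimensions and random vectors are illustrative.

```python
import numpy as np

# Linear associative memory: Kohonen-style correlation matrix M = Y X^T versus a
# biorthogonal read-out M = Y pinv(X). The rows of pinv(X) are biorthogonal to
# the stored key vectors, so recall of a stored key is exact (up to numerics)
# whenever the keys are linearly independent.

rng = np.random.default_rng(5)
n_key, n_val, n_pairs = 8, 4, 5
X = rng.normal(size=(n_key, n_pairs))     # key vectors as columns
Y = rng.normal(size=(n_val, n_pairs))     # associated value vectors as columns

M_corr = Y @ X.T                          # Kohonen-style correlation memory
M_bi = Y @ np.linalg.pinv(X)              # biorthogonal (pseudoinverse) memory

probe = X[:, 2]                           # recall with an exact stored key
print("correlation recall error :", np.linalg.norm(M_corr @ probe - Y[:, 2]))
print("biorthogonal recall error:", np.linalg.norm(M_bi @ probe - Y[:, 2]))
```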
Citations: 1
Weight value initialization for improving training speed in the backpropagation network
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170747
Young-Ik Kim, Jong Beom Ra
A method for initializing the weight values of multilayer feedforward neural networks is proposed to improve the learning speed of a network. The proposed method suggests a minimum bound on the weights based on the dynamics of decision boundaries, which is derived from the generalized delta rule. Computer simulations of several neural network models showed that proper selection of the initial weight values improves learning ability and contributes to fast convergence.
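The paper's specific minimum bound is not stated in the abstract, so the sketch below only reproduces the kind of experiment implied: the same backpropagation network trained from several initial weight ranges, counting the epochs needed to reach a target error. The ranges, network size, and task (XOR) are arbitrary illustrative choices, not the authors' bound.

```python
import numpy as np

# Compare convergence speed of plain backprop under different uniform initial
# weight ranges (the ranges are illustrative, not the paper's derived bound).

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def epochs_to_converge(init_range, seed=0, lr=0.5, max_epochs=20000, target=0.01):
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-init_range, init_range, (2, 4))
    W2 = rng.uniform(-init_range, init_range, (4, 1))
    for epoch in range(max_epochs):
        h = sigmoid(X @ W1)
        y = sigmoid(h @ W2)
        if np.mean((y - T) ** 2) < target:
            return epoch                          # reached the target error
        d_out = (y - T) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_hid
    return max_epochs                             # did not converge in time

for r in (0.05, 0.5, 2.0):
    print(f"init range +/-{r}: {epochs_to_converge(r)} epochs")
```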
Citations: 69
Stability and attractivity analysis of bidirectional associative memory from the matched-filtering viewpoint
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170727
Zhang Bai-ling, Xu Bing-zheng, Kwong Chung-ping
The authors study the bidirectional associative memory (BAM) model from the matched-filtering viewpoint, gaining an intuitive understanding of its information-processing mechanism. They analyze the problem of stability and attractivity in BAM and propose some sufficient conditions. The shortcomings of BAM, namely low memory capacity and weak attractivity, are pointed out. A revised BAM model is proposed in which an exponential function operates on the correlations between a probing vector and its neighboring library pattern vectors. The analysis shows that, under identical conditions, stability and attractivity in the modified model are much better than in the original BAM.
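A minimal sketch of the exponential-weighting idea follows: recall weights each stored pair by an exponential of the correlation between the probe and that pair's library vector, in place of the plain correlation sum of the original BAM. The base of the exponential and the bipolar toy patterns are assumptions.

```python
import numpy as np

# Exponential-correlation BAM sketch: the forward recall weights each stored
# pair (X[k], Y[k]) by base ** <x_probe, X[k]>, so the best-matching pair
# dominates the weighted sum; the backward pass is symmetric.

rng = np.random.default_rng(6)
n_x, n_y, n_pairs, base = 16, 10, 4, 2.0
X = rng.choice([-1, 1], (n_pairs, n_x))       # bipolar library patterns (assumed)
Y = rng.choice([-1, 1], (n_pairs, n_y))

def recall_forward(x_probe):
    weights = base ** (X @ x_probe)           # exponential of correlations
    return np.sign(weights @ Y)

def recall_backward(y_probe):
    weights = base ** (Y @ y_probe)
    return np.sign(weights @ X)

# Probe with a noisy version of the first stored x pattern (2 bits flipped).
probe = X[0].copy()
probe[[1, 5]] *= -1
y_rec = recall_forward(probe)
x_rec = recall_backward(y_rec)                # one back-and-forth iteration
print("y recovered:", np.array_equal(y_rec, Y[0]))
print("x recovered:", np.array_equal(x_rec, X[0]))
```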
Citations: 4
A stability based neural network control method for a class of nonlinear systems
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170535
E. Tzirkel-Hancock, F. Fallside
A direct control scheme for a class of continuous-time nonlinear systems using neural networks is presented. The objective of the control is to track a desired reference signal. This objective is achieved through input/output linearization of the system with neural networks. Learning, based on a stability-type algorithm, takes place simultaneously with control. As such, the method is closely related to adaptive control methods and the field of neural network training. In particular, the importance of the persistent excitation property and its implications for learning with networks of localized receptive fields are discussed.
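A hedged sketch of this style of scheme is given below for a first-order plant: a Gaussian radial-basis-function network (a network of localized receptive fields) estimates the unknown plant nonlinearity online while a linearizing controller tracks a sinusoidal (persistently exciting) reference. The plant, gains, and adaptation law are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# First-order plant x_dot = f(x) + u with unknown f. An RBF network f_hat(x) is
# adapted online while the control u = -f_hat + x_d_dot - k*e linearises and
# stabilises the tracking error e = x - x_d.

f_true = lambda x: np.sin(x) + 0.5 * x        # unknown plant nonlinearity (assumed)
centers = np.linspace(-3, 3, 15)              # localised receptive fields
width = 0.5
phi = lambda x: np.exp(-((x - centers) ** 2) / (2 * width ** 2))

theta = np.zeros_like(centers)                # network weights, adapted online
k, gamma, dt = 5.0, 20.0, 0.001               # gain, adaptation rate, step (assumed)
x = 0.0

for step in range(int(20.0 / dt)):
    t = step * dt
    x_d, x_d_dot = np.sin(t), np.cos(t)       # persistently exciting reference
    e = x - x_d
    f_hat = theta @ phi(x)                    # network estimate of f(x)
    u = -f_hat + x_d_dot - k * e              # linearising + stabilising control
    x += (f_true(x) + u) * dt                 # plant integration (Euler)
    theta += -gamma * phi(x) * e * dt         # gradient/Lyapunov-style adaptation

print(f"final tracking error |e| = {abs(x - np.sin(20.0)):.4f}")
```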
Citations: 10