
Latest publications: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks

Neural network for the forward kinematics problem in parallel manipulator
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170665
Choon-seng Yee, K. Lim
The parallel manipulator's unique structure makes its forward kinematics an interesting problem, requiring the solution of a system of simultaneous nonlinear equations. The authors exploit the ability of a neural network to learn the relationship between a system's inputs and outputs without a full analytical model of the system. Using the manipulator's simple inverse kinematics solution, a neural network was trained to solve the forward kinematics of the parallel manipulator quite accurately. By adjusting the offset of the result obtained, the network achieves an accuracy of 0.1 mm and 0.5 degrees across the six output values.
Citations: 6
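The data-generation trick described above (use the easy inverse kinematics to produce training pairs for the hard forward problem) can be sketched in miniature. The two-legged planar mechanism, network size, and learning rate below are illustrative assumptions, not the paper's six-output platform:

```python
import math
import random

random.seed(0)

# Toy stand-in for a parallel manipulator: a platform point (x, y)
# supported by two legs anchored at fixed bases. The inverse problem
# (pose -> leg lengths) is just two distance computations.
BASES = [(-1.0, 0.0), (1.0, 0.0)]

def inverse_kinematics(x, y):
    return [math.hypot(x - bx, y - by) for bx, by in BASES]

# Training pairs come "for free" from the easy inverse solution.
data = []
for _ in range(200):
    x, y = random.uniform(-0.5, 0.5), random.uniform(0.5, 1.5)
    data.append((inverse_kinematics(x, y), (x, y)))

# One-hidden-layer network mapping leg lengths -> pose, trained by backprop.
H = 8
W1 = [[random.gauss(0, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(2)]
b2 = [0.0, 0.0]

def run_epoch(lr=0.005):
    total = 0.0
    for inp, target in data:
        hid = [math.tanh(sum(w * v for w, v in zip(row, inp)) + b)
               for row, b in zip(W1, b1)]
        out = [sum(w * v for w, v in zip(row, hid)) + b
               for row, b in zip(W2, b2)]
        err = [o - t for o, t in zip(out, target)]
        total += sum(e * e for e in err)
        # hidden deltas are computed with the pre-update output weights
        dhid = [sum(2 * err[i] * W2[i][j] for i in range(2)) * (1 - hid[j] ** 2)
                for j in range(H)]
        for i in range(2):
            for j in range(H):
                W2[i][j] -= lr * 2 * err[i] * hid[j]
            b2[i] -= lr * 2 * err[i]
        for j in range(H):
            for k in range(2):
                W1[j][k] -= lr * dhid[j] * inp[k]
            b1[j] -= lr * dhid[j]
    return total / len(data)

first_loss = run_epoch()
for _ in range(50):
    last_loss = run_epoch()
print(first_loss, last_loss)
```

The paper's offset correction is omitted here; the sketch only shows the inverse-to-forward training setup.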
Neural networks that teach themselves through genetic discovery of novel examples
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170480
Butong Zhang, G. Veenker
The authors introduce an active learning paradigm for neural networks. In contrast to the passive paradigm, learning in the active paradigm is initiated by the machine learner rather than by its environment or teacher. The authors present a learning algorithm that uses a genetic algorithm to create novel examples for teaching multilayer feedforward networks. The creative learning networks, drawing on their own knowledge, discover new examples, criticize and select useful ones, train themselves, and thereby extend their existing knowledge. Experiments on function extrapolation show that self-teaching neural networks not only reduce the human teaching effort; the genetically created examples also contribute robustly to improved generalization performance and to the interpretation of the connectionist knowledge.
Citations: 54
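The generate-criticize-select loop can be sketched with a deliberately simplified criterion: here candidates are judged by their novelty (distance to the known examples), a stand-in for the paper's self-criticism step, and a queryable oracle function plays the environment. The oracle, mutation scale, and population sizes are all illustrative assumptions:

```python
import random

random.seed(1)

def oracle(x):
    # the "environment" the learner may query for a label (an assumption;
    # the paper's networks generate and criticize examples on their own)
    return x * x

# Existing knowledge: a few labelled examples.
examples = [(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0)]

def novelty(x):
    # criticize a candidate by its distance to the known inputs
    return min(abs(x - ex) for ex, _ in examples)

def evolve(pop, gens=20):
    # tiny genetic algorithm: keep the most novel half, mutate it
    for _ in range(gens):
        pop.sort(key=novelty, reverse=True)
        parents = pop[: len(pop) // 2]
        children = [p + random.gauss(0, 0.2) for p in parents]
        pop = parents + children
    return max(pop, key=novelty)

# Self-teaching loop: generate candidates, select a useful one, learn it.
for _ in range(5):
    pop = [random.uniform(-2, 2) for _ in range(10)]
    x_new = evolve(pop)
    examples.append((x_new, oracle(x_new)))

print(len(examples))
```

The design point the abstract makes is visible here: the learner, not the teacher, decides which example comes next.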
A dynamical network capable of storing sequences of static or periodic patterns
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170430
I. Y. Poteryaiko
The author proposes a modification of the neural network model of B. Baird (1988, 1989) in which the constraint of symmetric interaction between the modes representing the stored patterns is eliminated. This makes it possible to construct a system with ordered transitions between the patterns that were stable attractors in the original model. Although there is no strict proof that the system is free of chaotic behavior, a qualitative investigation and extensive numerical simulations show that its dynamics can be described quite simply in terms of an effective excitation wandering through a closed loop. Such motion implies the sequential activation of the static or periodic patterns stored in the network. Thus, the model can exhibit more complex, but still programmable, behavior than B. Baird originally assumed.
Citations: 0
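The effect of dropping the symmetry constraint can be illustrated with a classic asymmetric-weight construction (a discrete sketch, not Baird's oscillatory model): weights of the form W = Σ p(k+1) p(k)ᵀ drive the state from each stored pattern to its successor around a closed loop. Orthogonal ±1 patterns make the transitions exact:

```python
# Three mutually orthogonal +/-1 patterns (rows of a Sylvester-Hadamard
# matrix), stored as a cycle p0 -> p1 -> p2 -> p0.
def hadamard(n):
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

N = 8
pats = hadamard(N)[1:4]

# Asymmetric weights: W = sum_k p(k+1) p(k)^T / N. No symmetry is imposed,
# so W drives transitions instead of holding fixed points.
W = [[sum(pats[(k + 1) % 3][i] * pats[k][j] for k in range(3)) / N
      for j in range(N)] for i in range(N)]

def step(x):
    # synchronous threshold update
    return [1 if sum(W[i][j] * x[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

x1 = step(pats[0])
x2 = step(x1)
print(x1 == pats[1], x2 == pats[2])
```

Because the patterns are orthogonal, W·p(k) equals p(k+1) exactly, so the excitation circulates through the loop just as the abstract describes for its attractor sequence.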
A Boolean function generator with learning capability
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170506
Y. Chu, C. M. Hsieh
The authors use a neural technique to implement a positive-logic Boolean function or truth table. The technique is a perceptron training algorithm by which a Boolean function or truth table can be generated. The connection weight values in the neural network represent the sum-of-products terms of a Boolean function or the row vectors of a truth table. A neural technique for generating functional-link cells for successful learning is described. The authors then provide an improved algorithm describing the learning steps that generate the logic function and present examples to illustrate these steps. Finally, a function diagram illustrates the overall system function.
Citations: 1
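The perceptron training rule the authors build on can be sketched for a single truth table (standard perceptron learning, not the authors' functional-link extension; the OR table is an arbitrary linearly separable example):

```python
def train_perceptron(table, lr=0.5, epochs=25):
    # classic perceptron rule: w += lr * (target - output) * input
    n = len(table[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for inputs, target in table:
            out = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

# Truth table for two-input OR.
OR_TABLE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_TABLE)
preds = [1 if sum(wi * xi for wi, xi in zip(w, inp)) + b > 0 else 0
         for inp, _ in OR_TABLE]
print(preds)
```

A plain perceptron only reproduces linearly separable tables; non-separable functions such as XOR are where the functional-link cells the paper generates become necessary.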
Parallel implementation of the Kohonen algorithm on transputer
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170672
R. Togneri, Y. Attikiouzel
A parallel implementation of the Kohonen algorithm is proposed based on partitioning the network rather than the data, which allows an exact implementation of the algorithm. Using a simple routing strategy, the parallel Kohonen algorithm was tested on a PC-based transputer network without any special distributed operating system. Execution time was measured for networks of different sizes and varying numbers of transputers. Execution time decreased as the number of transputers increased; however, for comparatively small neural networks the communication overhead caused execution time to rise when more transputers were used. The proposed parallel implementation is therefore not suitable for massively parallel architectures. It is concluded that more than 12 transputers can be used effectively for networks on the order of 3000 neurons or more, but no more than six for networks on the order of 120 neurons.
Citations: 9
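The network-partitioning idea is that each processor owns a slice of the map, finds a local best-matching unit, and a global reduction picks the overall winner; every processor then updates its own neurons. The sketch below simulates the partitioning serially (a 1-D map and scalar inputs are assumptions; there are no actual transputers involved):

```python
import random

random.seed(2)

M, P = 12, 3                       # 12 neurons spread over 3 "processors"
weights = [random.random() for _ in range(M)]
# network partitioning: each processor owns an interleaved slice of neurons
partitions = [list(range(p, M, P)) for p in range(P)]

def local_bmu(part, x):
    # each processor searches only its own slice of the map
    return min(part, key=lambda i: abs(weights[i] - x))

def global_bmu(x):
    # one candidate per processor, then a global reduction
    cands = [local_bmu(part, x) for part in partitions]
    return min(cands, key=lambda i: abs(weights[i] - x))

def train_step(x, lr=0.3, radius=1):
    win = global_bmu(x)
    for i in range(M):             # each processor updates its own neurons
        if abs(i - win) <= radius:
            weights[i] += lr * (x - weights[i])

for _ in range(100):
    train_step(random.random())
```

Unlike data partitioning, this reproduces the sequential Kohonen update exactly; the local-search-plus-reduction gives the same winner as a single-processor search, which is the property the abstract calls "exact".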
Cloud detection based on texture segmentation by neural network methods
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170529
A. Visa, K. Valkealahti, O. Simula
A novel method for detecting and recognizing clouds in remote sensing images is introduced. Detection and recognition are based on texture: the images are partitioned into homogeneously textured regions, and those textures are interpreted using a texture map created with artificial neural network methodology. The use of neural network methods makes it possible to apply an unsupervised learning paradigm and to train the map continuously. The texture map is formed by a self-organizing process over feature vectors, performed in an unsupervised way; labeling is then achieved by a supervised process.
Citations: 30
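The two-phase scheme (unsupervised self-organization of the texture map, then supervised labeling of its nodes) can be sketched end to end. The scalar "features", the class names, and all sizes below are assumptions; the paper uses multi-dimensional texture vectors extracted from satellite image regions:

```python
import random
from collections import Counter

random.seed(3)

# Toy stand-in for texture feature vectors: scalar features from two
# region classes (names are hypothetical).
samples = ([(random.gauss(0.2, 0.05), "cloud") for _ in range(50)] +
           [(random.gauss(0.8, 0.05), "clear") for _ in range(50)])

# Unsupervised phase: a small 1-D self-organizing map.
nodes = [i / 5 for i in range(6)]
for _ in range(200):
    x, _lbl = random.choice(samples)        # labels unused here
    win = min(range(6), key=lambda i: abs(nodes[i] - x))
    for i in range(6):
        if abs(i - win) <= 1:               # neighbourhood of radius 1
            nodes[i] += 0.2 * (x - nodes[i])

# Supervised phase: label each map node by majority vote of the
# training samples that map onto it.
votes = [Counter() for _ in nodes]
for x, lbl in samples:
    votes[min(range(6), key=lambda i: abs(nodes[i] - x))][lbl] += 1
labels = [v.most_common(1)[0][0] if v else None for v in votes]

def classify(x):
    return labels[min(range(6), key=lambda i: abs(nodes[i] - x))]

print(classify(0.2), classify(0.8))
```

The split matters for the continuous-training point in the abstract: the map itself never needs labels, so it can keep adapting to new imagery, with only the cheap labeling pass being supervised.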
A neural searchlight processor that differentiates any images with common features by transitory synchronization
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170706
K. Murase, Y. Nakade, Y. Matsunaga, O. Yamakawa
The neural cocktail-party processor (NCPP) is a theoretical model of visual binding by coherent oscillation of neurons, based on the hypothesis that transitory synchronization of neuronal activities links fragmentarily represented visual information across widely spaced areas of the brain to establish coherent images. However, the NCPP assumes that the images to be recognized share no common features; if common features exist, synchronization among cells is disturbed and the network cannot recognize the images correctly. The authors therefore developed a network, called the neural searchlight processor (NSP), that recognizes images by transitory synchronization while allowing common features between images in the input pattern. The mechanism and computer simulation results of the NCPP are described, and the structure and simulation of the NSP are then explained by comparison with the NCPP.
Citations: 0
Occluded object recognition: an approach which combines neurocomputing and conventional algorithms
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170783
Chung-Mong Lee, D. W. Patterson
A system has been developed that combines the power of neural network learning and computing with conventional vision processing methods. At its heart is a neural network composed of a neocognitron and self-created layer components. During the recognition phase, the network computations are augmented by conventional vision algorithms that perform low- and intermediate-level processing functions. The system is first trained under supervision to recognize several types of nonoccluded objects. It is then used to identify each object appearing in an image even when the objects appear at different locations and are partially occluded or somewhat deformed. A high degree of accuracy has been achieved with the system.
Citations: 0
Analog maximum neural network circuits using the switched capacitor technique
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170652
Y.B. Cho, K.C. Lee, Yoshiyasu Takefuji, N. Funabiki
A circuit for the maximum neural network based on the switched-capacitor technique is proposed. The performance of the circuit was evaluated by SPICE simulation, and the bipartite subgraph problem was solved using it; the SPICE results confirm the function of the network. Because the complexity of the proposed analog circuit is small, an optimization system can be fabricated on a single chip.
Citations: 0
Rapid learning of inverse robot kinematics based on connection assignment and topographical encoding (CATE)
Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170618
J. Hakala, G. Fahner, R. Eckmiller
An adaptive neural structure for robot control based on homogeneous, topographical encoding is developed. An intermediate representation (IRep) is generated adaptively using a novel learning scheme, CATE. The connection assignment rules of CATE keep the number of IRep neurons as small as possible while maintaining the desired mapping accuracy. This adaptive net (CATEnet) was successfully applied to learning the inverse kinematics of a redundant planar robot arm (a four-joint machine) with only a few presentations of the learning set. The mapping solution incorporates local optimization of a cost function to account for a limited joint range and to avoid singularities.
Citations: 11