
Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94): Latest Publications

ADN-analysis and development of distributed neural networks for intelligent applications
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374513
J. Arcand, Sophie-Julie Pelletier
This article begins by explaining the concept of distributed neural networks. It then goes on to present a program library designed to support the development of such networks. In this context, distributed neural networks are seen as supernetworks comprising a number of subnetworks that can communicate with one another. Such supernetworks are intended to facilitate the modeling of complex and heterogeneous realities. Each subnetwork is trained independently of the others, according to the learning algorithm or algorithms that govern it. Once trained, the subnetworks are interconnected in such a way as to circulate information through the network as a whole. The distributed network library is an application of research in this area. It allows for the creation of distributed networks, the individual training of subnetworks, and communication between subnetworks. The library's interface makes it as much a tool for research as it is a program for neural network development for the uninitiated.
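The paper's library is not reproduced here; as a minimal sketch of the supernetwork idea described above (subnetworks trained independently, then interconnected so information circulates), the following Python outline uses entirely hypothetical class and method names:

import numpy as np

class SubNetwork:
    """Stand-in for any independently trained network (hypothetical API)."""
    def __init__(self, train_fn, predict_fn):
        self._train, self._predict = train_fn, predict_fn

    def train(self, x, y):
        self._train(x, y)          # each subnetwork uses its own learning algorithm

    def forward(self, x):
        return self._predict(x)

class SuperNetwork:
    """Connects trained subnetworks so information circulates between them."""
    def __init__(self):
        self.subnets = {}
        self.links = []            # (source_name, target_name) pairs

    def add(self, name, subnet):
        self.subnets[name] = subnet

    def connect(self, src, dst):
        self.links.append((src, dst))

    def forward(self, name, x):
        out = self.subnets[name].forward(x)
        # propagate the output to every subnetwork connected downstream
        for src, dst in self.links:
            if src == name:
                out = self.subnets[dst].forward(out)
        return out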
Citations: 3
An analysis on decision boundaries in the complex back-propagation network
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374306
T. Nitta
This paper presents some results of an analysis of the decision boundaries of complex-valued neural networks. The main results may be summarized as follows. (a) The weight parameters of a complex-valued neuron have a restriction related to two-dimensional motion. (b) The decision boundary of a complex-valued neuron consists of two hypersurfaces which intersect orthogonally, dividing the decision region into four equal sections. The decision boundary of a three-layered complex-valued neural network has this as its basic structure, and its two hypersurfaces intersect orthogonally if the net inputs to each hidden neuron are all sufficiently large.
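As a concrete illustration of result (b), in notation of our own rather than the paper's: for a complex-valued neuron with weights $w_k = w_k^R + i\,w_k^I$, inputs $x_k = x_k^R + i\,x_k^I$, and threshold $\theta$, the net input is $z = \sum_k w_k x_k + \theta = u + iv$, and the activation is applied to the real and imaginary parts separately. The two decision hypersurfaces are

$$u = \operatorname{Re}\Bigl(\sum_k w_k x_k + \theta\Bigr) = 0, \qquad v = \operatorname{Im}\Bigl(\sum_k w_k x_k + \theta\Bigr) = 0 .$$

Viewed in the real coordinates $(x^R, x^I)$, their normal vectors are $(w^R, -w^I)$ and $(w^I, w^R)$, whose inner product is $w^R \cdot w^I - w^I \cdot w^R = 0$; the two hypersurfaces are therefore orthogonal and cut the decision region into four sections, as the abstract states.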
Citations: 5
Intelligent control for a nuclear power plant using artificial neural networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374627
B. Hwang
In this paper, an approach based on neural networks for the control-system design of a pressurized water reactor (PWR) is presented. A reference model which incorporates a static projective suboptimal control law under various operating conditions is used to generate the necessary data for training the neurocontroller. The designed approach is able to control the nuclear reactor in a robust manner. The simulation results presented show that it is feasible to use artificial neural networks to improve the operating characteristics of nuclear power plants.
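The training step the abstract describes is a supervised fit to reference-model data; a minimal sketch, assuming the reference model has already produced arrays of plant states and corresponding suboptimal controls (all names, sizes, and hyperparameters below are illustrative, not the authors'):

import numpy as np

def train_neurocontroller(states, controls, hidden=16, lr=1e-3, epochs=200):
    """Fit a one-hidden-layer network u = W2 tanh(W1 x + b1) + b2 to
    (state, control) pairs generated by a reference model."""
    n_in, n_out = states.shape[1], controls.shape[1]
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(hidden, n_in)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(n_out, hidden)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        h = np.tanh(states @ W1.T + b1)      # hidden activations
        u = h @ W2.T + b2                    # predicted control
        err = u - controls                   # supervised error
        # gradient descent on the mean squared error
        gW2 = err.T @ h / len(states); gb2 = err.mean(0)
        gh = (err @ W2) * (1 - h ** 2)
        gW1 = gh.T @ states / len(states); gb1 = gh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

Once trained, such a network maps a measured plant state directly to a control action, which is what allows it to stand in for the projective suboptimal law across the operating conditions it was trained on.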
Citations: 10
A counterpropagation neural network for determining target spacecraft orientation
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374965
B. L. Vinz, S. J. Graves
This paper describes a concept that integrates a counterpropagation neural network into a video-based vision system employed for automatic spacecraft docking. A brief overview of docking phases, the target orientation problem, and potential benefits resulting from an automated docking system is provided. Issues and challenges of automatic target recognition, as applied to automatic docking, are addressed. Following a review of the architecture, training, and desirable characteristics of the counterpropagation network, an approach for determining the relative orientation of a target spacecraft based on a counterpropagation net is presented.
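The paper gives no code; for orientation, a minimal sketch of the counterpropagation network it relies on (a competitive Kohonen layer followed by a Grossberg outstar layer; class and parameter names are assumptions):

import numpy as np

class CounterpropagationNet:
    """Forward-only counterpropagation net: a competitive (Kohonen) layer
    selects a winner, whose Grossberg outstar weights give the output."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.kohonen = rng.normal(size=(n_hidden, n_in))
        self.grossberg = np.zeros((n_hidden, n_out))

    def winner(self, x):
        # the Kohonen weight vector closest to the input wins
        return np.argmin(np.linalg.norm(self.kohonen - x, axis=1))

    def train(self, x, y, alpha=0.1, beta=0.1):
        j = self.winner(x)
        self.kohonen[j] += alpha * (x - self.kohonen[j])      # unsupervised step
        self.grossberg[j] += beta * (y - self.grossberg[j])   # supervised step

    def predict(self, x):
        return self.grossberg[self.winner(x)]

In the docking application, x would be a feature vector extracted from the video image of the target and y the associated orientation parameters.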
Citations: 2
Dynamic neural network with heuristics
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.375026
J. Park, J. Park, D. Kim, C. Lee, S. Suh, M. Han
Owing to its deterministic nature and difficulty of scaling, a Hopfield-style neural network readily converges to one of the local minima of the energy function being minimized and cannot escape from such undesirable solutions. Many researchers seeking the global minimum of the traveling salesman problem (TSP) have introduced various approaches to overcome this, including heuristics, genetic algorithms, and hybrids of these methods. We introduce a simple heuristic algorithm which embeds classical local search heuristics into the optimization neural network. The proposed algorithm is characterized by best-neighbor selection, used for dynamic scheduling and for ordering the update sequence of neurons, and by a decidability check used to guarantee a near-optimal solution. The proposed algorithm enhances both the convergence speed and the quality of the solutions.
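For reference, Hopfield-style TSP networks of the kind discussed above descend an energy function of the classical Hopfield-Tank form (standard notation, not taken from the paper), with $v_{xi} = 1$ if city $x$ occupies tour position $i$ and $d_{xy}$ the intercity distance:

$$E = \frac{A}{2}\sum_x\sum_i\sum_{j\neq i} v_{xi}v_{xj} + \frac{B}{2}\sum_i\sum_x\sum_{y\neq x} v_{xi}v_{yi} + \frac{C}{2}\Bigl(\sum_x\sum_i v_{xi} - n\Bigr)^{2} + \frac{D}{2}\sum_x\sum_{y\neq x}\sum_i d_{xy}\,v_{xi}\,(v_{y,i+1} + v_{y,i-1}).$$

The first three terms penalize invalid tours and the last measures tour length; the heuristics described in the abstract act on the order in which the neurons $v_{xi}$ are updated during this descent.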
Citations: 11
Neural network identification and control of unstable systems using supervisory control while learning
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374613
Sung-Woo Kim, Sun-Gi Hong, T. Ohm, Jujang Lee
This paper focuses on a training scheme that lets neural networks learn in regions of unstable equilibrium states, and on identification and control using these networks. This is achieved by introducing a supervisory controller during the learning period of the neural networks. The supervisory controller is designed based on Lyapunov theory, and it guarantees the boundedness of the system states within the region of interest. Therefore, by properly choosing desired states covering the region of interest, the neural networks can be trained to approximate the system sufficiently accurately with uniformly distributed training samples. After the networks are successfully trained to identify the system, the controller is designed to cancel out the nonlinearity of the system.
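The final step, cancelling the identified nonlinearity, is commonly written in the following feedback-linearizing form (shown only as an illustration of the idea, not as the authors' exact control law): for a plant $\dot{x} = f(x) + g(x)u$ with neural-network estimates $\hat{f}$ and $\hat{g}$,

$$u = \hat{g}(x)^{-1}\bigl(-\hat{f}(x) + v\bigr) + u_s,$$

where $v$ is a new linear control input and $u_s$ is the Lyapunov-based supervisory term that keeps the state bounded inside the region of interest while the networks are still learning.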
Citations: 2
Nearest neighbor pattern classification neural networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374694
K. Koutroumbas, N. Kalouptsidis
In this paper two algorithms for the construction of pattern-classifier neural architectures are proposed. A comparison with other known similar architectures is given, and simulation results are presented.
Citations: 76
Sensitivity of trained neural networks with threshold functions
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374316
Sang-Hoon Oh, Youngjik Lee
In this paper, we derive the sensitivity of single hidden-layer networks with threshold functions, called "Madaline", as a function of the trained weights, the input pattern, and the variance of the weight perturbation or the bit-error probability of the binary input pattern. The derived results are verified with a simulation of a Madaline recognizing handwritten digits. Our results show that the sensitivity of a trained network is far different from that of networks with random weights.
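Independently of the paper's closed-form derivation, the quantity being studied can be estimated empirically by perturbing a trained Madaline and counting output flips; a hedged sketch (the Gaussian perturbation model and all names are ours):

import numpy as np

def madaline_output(x, W1, W2):
    """Single hidden-layer network with threshold (sign) activations."""
    h = np.sign(W1 @ x)
    return np.sign(W2 @ h)

def empirical_sensitivity(x, W1, W2, sigma=0.05, trials=1000, seed=0):
    """Probability that the output changes under Gaussian weight perturbation."""
    rng = np.random.default_rng(seed)
    y0 = madaline_output(x, W1, W2)
    flips = 0
    for _ in range(trials):
        P1 = W1 + rng.normal(scale=sigma, size=W1.shape)
        P2 = W2 + rng.normal(scale=sigma, size=W2.shape)
        flips += np.any(madaline_output(x, P1, P2) != y0)
    return flips / trials

The same loop, with input bits flipped at a given probability instead of the weights perturbed, estimates the sensitivity to input bit errors mentioned in the abstract.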
Citations: 0
Direct and indirect methods for learning optimal control laws
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374644
S. Atkins, W. Baker
The primary focus of this paper is to discuss two general approaches for incrementally synthesizing a nonlinear optimal control law, through real-time, closed-loop interactions between the dynamic system, its environment, and a learning control system, when substantial initial model uncertainty exists. Learning systems represent an on-line approach to the incremental synthesis of an optimal control law for situations where initial model uncertainty precludes the use of robust, fixed control laws, and where significant dynamic nonlinearities reduce the level of performance attainable by adaptive control laws. In parallel with the established framework of direct and indirect adaptive control algorithms, a direct/indirect framework is proposed as a means of classifying approaches to learning optimal control laws. Direct learning optimal control implies that the feedback loop which motivates the learning process is closed around system performance. Common properties of direct learning algorithms, including the apparent necessity of approximating two complementary functions, are reviewed. Indirect learning optimal control denotes a class of incremental control law synthesis methods for which the learning loop is closed around the system model. This class is illustrated by developing a simple optimal control law.
Citations: 0
Spatiotemporal computation with a general purpose analog neural computer: real-time visual motion estimation
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374437
R. Etienne-Cummings, C. Donham, J. van der Spiegel, P. Mueller
An analog neural network implementation of spatiotemporal feature extraction for real-time visual motion estimation is presented. Visual motion can be represented as an orientation in the space-time domain. Thus, motion estimation translates into orientation detection. The spatiotemporal orientation detector discussed is based on Adelson and Bergen's model with modifications to accommodate the computational limitations of hardware analog neural networks. The analog neural computer used here has the unique property of offering temporal computational capabilities through synaptic time-constants. These time-constants are crucial for implementing the spatiotemporal filters. Analysis, implementation and performance of the motion filters are discussed. The performance of the neural motion filters is found to be consistent with theoretical predictions and the real stimulus motion.
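A minimal software sketch of the space-time oriented filtering the abstract refers to (the Adelson and Bergen motion-energy construction, written here in NumPy rather than in the analog hardware the paper uses; filter shapes and constants are illustrative only):

import numpy as np

def motion_energy(stimulus, fs=0.1, ft=0.1, n_t=16):
    """Direction-selective space-time energy for a stimulus of shape (T, X):
    quadrature pairs of spatial and temporal filters are combined into
    oriented space-time filters, then squared and summed."""
    T, X = stimulus.shape
    x = np.arange(X) - X / 2.0
    t = np.arange(n_t)
    # quadrature pair of spatial filters (even/odd Gabor)
    env_x = np.exp(-(x ** 2) / (2 * (X / 8.0) ** 2))
    sx_e = np.cos(2 * np.pi * fs * x) * env_x
    sx_o = np.sin(2 * np.pi * fs * x) * env_x
    # quadrature pair of causal temporal filters (synaptic time-constants
    # play this role in the analog hardware)
    env_t = np.exp(-t / 5.0)
    tt_e = np.cos(2 * np.pi * ft * t) * env_t
    tt_o = np.sin(2 * np.pi * ft * t) * env_t

    def separable(space, time):
        # filter along space, then along time (causal, truncated to T samples)
        r = np.apply_along_axis(lambda row: np.convolve(row, space, mode="same"), 1, stimulus)
        return np.apply_along_axis(lambda col: np.convolve(col, time, mode="full")[:T], 0, r)

    # oriented filters are sums/differences of the separable responses;
    # the opposite direction uses the other sign combination
    odir_even = separable(sx_e, tt_e) - separable(sx_o, tt_o)
    odir_odd = separable(sx_o, tt_e) + separable(sx_e, tt_o)
    return odir_even ** 2 + odir_odd ** 2   # phase-invariant motion energy

In the analog computer, the temporal filters above are realized by the synaptic time-constants, which is why those time-constants are described as crucial for the spatiotemporal filters.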
Citations: 5