
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks: Latest Publications

Optimizing neural networks for playing tic-tac-toe
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227268
M. Sungur, U. Halici
A neural network approach for playing the game tic-tac-toe is introduced. The problem is considered as a combinatorial optimization problem aiming to maximize the value of a heuristic evaluation function. The proposed design guarantees a feasible solution; in particular, a winning move is never missed and a losing position is always avoided whenever possible. The design has been implemented on a Hopfield network, a Boltzmann machine, and a Gaussian machine. The performance of the models was compared through simulation.
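The abstract does not give the energy function, so the following is only a minimal sketch of how such a move choice might be encoded, assuming one stochastic binary unit per empty cell, a quadratic penalty that enforces a single placement, and a linear reward for a toy heuristic h; the penalty constants A and B, the annealing schedule, and the heuristic itself are illustrative assumptions, not taken from the paper.

```python
import numpy as np

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def heuristic(board, cell, me='X', opp='O'):
    """Toy evaluation: reward an immediate win, then a block, then open lines."""
    trial = board.copy(); trial[cell] = me
    if any(all(trial[i] == me for i in ln) for ln in LINES):
        return 10.0                                    # this move wins outright
    trial[cell] = opp
    if any(all(trial[i] == opp for i in ln) for ln in LINES):
        return 5.0                                     # this move blocks an opponent win
    return float(sum(cell in ln and all(board[i] in (' ', me) for i in ln)
                     for ln in LINES))                 # lines still open for us

def boltzmann_move(board, A=20.0, B=1.0, T0=10.0, Tmin=0.1, cool=0.9, seed=0):
    """Annealed Gibbs sampling of E = A*(sum(x) - 1)**2 - B*sum(h*x) over empty cells."""
    rng = np.random.default_rng(seed)
    empty = [i for i, c in enumerate(board) if c == ' ']
    h = np.array([heuristic(board, i) for i in empty])
    n = len(empty)
    W = -2.0 * A * (np.ones((n, n)) - np.eye(n))       # mutual inhibition: place one mark only
    b = A + B * h                                      # bias favours high-heuristic cells
    x = rng.integers(0, 2, n).astype(float)
    T = T0
    while T > Tmin:
        for i in rng.permutation(n):                   # one Gibbs sweep at temperature T
            z = np.clip((W[i] @ x + b[i]) / T, -60, 60)
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-z)))
        T *= cool
    on = np.flatnonzero(x)
    return empty[int(on[0])] if len(on) == 1 else empty[int(np.argmax(h))]

board = np.array(list('XO X  O  '))
print("suggested move for X:", boltzmann_move(board))
```

The same W and b could drive a deterministic Hopfield update instead; the stochastic sweep is used here because greedy descent can settle on any feasible one-hot state rather than a high-heuristic one, which is one reason the paper also evaluates Boltzmann and Gaussian machines.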
Citations: 1
A one neuron truck backer-upper
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226881
S. Geva, J. Sitte, G. Willshire
The truck backer-upper has been used to demonstrate the ability of neural networks to solve highly nonlinear control problems whose solution is not easily obtained by analytical techniques. The authors demonstrate that good linear solutions to this problem exist, and that it is very easy to find such solutions. It is shown how to design a controller to perform this task, and how it is implemented with a single control neuron. The control neuron requires only two input variables and two weights to produce correct steering signals. The probability that random weights are adequate to solve the problem is so high that a random search is highly successful. It is shown that a single neuron is also sufficient to solve the seemingly more difficult task of backing up a truck with two trailers, and that with a small addition to network complexity, the problem of providing minimum-length backup trajectories can be solved as well.
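The abstract does not state which two inputs or which weights the neuron uses, so this sketch assumes they are the heading error and the lateral offset from the dock line, fed through a saturating activation, and it pairs the controller with toy reversing kinematics; the state variables, gains, and kinematic model are all illustrative assumptions.

```python
import math

def one_neuron_steer(theta, y, w_theta=2.0, w_y=-0.3, max_steer=math.radians(30)):
    """Single control neuron: two inputs, two weights, a saturating (tanh) activation.
    theta: heading error (rad); y: lateral offset from the dock centre line (m).
    The weight signs depend on the kinematic sign convention used below."""
    return max_steer * math.tanh(w_theta * theta + w_y * y)

def back_up(x=30.0, y=5.0, theta=0.5, L=2.5, dt=0.2, steps=600):
    """Toy reversing kinematics for a single truck at unit speed (not the paper's model)."""
    for _ in range(steps):
        u = one_neuron_steer(theta, y)
        x -= math.cos(theta) * dt              # reversing toward the dock plane at x = 0
        y -= math.sin(theta) * dt
        theta -= math.tan(u) / L * dt          # steering changes heading while backing up
        if x <= 0.0:
            break
    return y, theta                            # both should be small when the dock is reached

print("final lateral offset %.3f m, heading error %.3f rad" % back_up())
```

A quick linearization suggests why two weights can be enough: for small angles the closed loop is a damped second-order system, and under this sign convention stability only requires w_theta > 0 and w_y < 0.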
Citations: 16
An algebraic approach to learning in syntactic neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287076
S. Lucas
The algebraic learning paradigm is described in relation to syntactic neural networks. In algebraic learning, each free parameter of the net is given a unique variable name, and the net output is then expressed as a sum of products of these variables, for each training sentence. The expressions are equated to true if the sentence is a positive sample and false if the sentence is a negative sample. A constraint satisfaction procedure is then used to find an assignment to the variables such that all the equations are satisfied. Such an assignment must yield a network that parses all the positive samples and none of the negative samples, and hence a correct grammar. Unfortunately, the algorithm grows exponentially in time and space with respect to string length. A number of ways of countering this growth, using the inference of a tiny subset of context-free English as an example, are explored.
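The constraint-satisfaction step can be illustrated in miniature, as in the sketch below; the product terms are hand-made stand-ins for the sum-of-products expressions that the paper derives from the network's parse paths, and the brute-force search over assignments is only workable because the example is tiny, which is exactly the exponential growth the abstract warns about.

```python
from itertools import product

# Each sentence is represented by its sum-of-products expression over binary weight
# variables w0..w3: a list of product terms, each term the set of weights that must
# all be 1 for one parse of that sentence to succeed. (Hand-made stand-ins.)
positive = [
    [{0, 1}, {2}],      # accepted iff (w0 AND w1) OR w2
    [{1, 3}],           # accepted iff (w1 AND w3)
]
negative = [
    [{0, 3}],           # must be false: NOT (w0 AND w3)
    [{0, 2}],           # must be false: NOT (w0 AND w2)
]

def evaluates_true(expr, assignment):
    """Sum of products is true iff some product term has all its variables set to 1."""
    return any(all(assignment[v] for v in term) for term in expr)

def solve(n_vars=4):
    """Brute-force constraint satisfaction over all 0/1 assignments to the weights."""
    for bits in product((0, 1), repeat=n_vars):
        if all(evaluates_true(e, bits) for e in positive) and \
           not any(evaluates_true(e, bits) for e in negative):
            yield bits

print(list(solve()))   # assignments that parse every positive sentence and no negative one
```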
Citations: 1
Recognition of Japanese words by neural networks using vocal tract area
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227247
H. Kinugasa, H. Kamata, Y. Ishida
The authors present a new system for Japanese word recognition by neural networks using the vocal tract area. They present a method by which the vocal tract area is directly estimated from speech waves. The estimation method applies an adaptive inverse filter to the autocorrelation coefficients. A neural network learning algorithm developed by Y. Ishida et al. (1991), which is based on the conjugate gradient method, is used. The speaker-independent word recognition results for a vocabulary of 10 Japanese words demonstrated the effectiveness of the method.
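The abstract does not spell out the adaptive inverse filter, so the sketch below follows the standard route from a frame's autocorrelation coefficients through the Levinson-Durbin recursion to reflection coefficients and a lossless-tube area function; treating this as equivalent to the authors' estimator is an assumption, and the area-ratio sign convention varies between texts.

```python
import numpy as np

def autocorrelation(frame, order):
    """Biased autocorrelation r[0..order] of one windowed speech frame."""
    frame = frame * np.hamming(len(frame))
    return np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])

def levinson_durbin(r):
    """Return inverse-filter coefficients a[0..p] (a[0] = 1) and reflection coefficients."""
    p = len(r) - 1
    a = np.zeros(p + 1); a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, p + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / e
        ks.append(k)
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1][:m]   # order-m update of the filter
        e *= (1.0 - k * k)                                # residual prediction error
    return a, np.array(ks)

def vocal_tract_areas(ks, lips_area=1.0):
    """Lossless-tube model: successive section areas from the reflection coefficients."""
    areas = [lips_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 + k) / (1.0 - k))   # sign convention varies by text
    return np.array(areas)

# Toy usage on a synthetic vowel-like frame (placeholder for a real speech frame).
fs, f0 = 8000, 120
t = np.arange(0, 0.03, 1.0 / fs)
frame = np.sign(np.sin(2 * np.pi * f0 * t)) + 0.01 * np.random.default_rng(0).standard_normal(len(t))
r = autocorrelation(frame, order=10)
_, ks = levinson_durbin(r)
print(np.round(vocal_tract_areas(ks), 3))
```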
Citations: 0
Construction of neural network classification expert systems using switching theory algorithms
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287195
J. Jaskolski
A new family of neural network (NN) architectures is presented. This family of architectures solves the problem of constructing and training minimal NN classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a NN expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a VLSI circuit. The rules that the NN generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
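The recasting of rule minimization as two-level logic minimization can be shown on a toy truth table. Real switching-theory tools (Quine-McCluskey, Espresso) solve the covering problem exactly; the sketch below only filters out terms that touch the OFF-set and then covers the ON-set greedily, and the three-feature table is invented rather than drawn from the Mushroom database.

```python
from itertools import combinations, product

# Toy classification function on 3 binary features: the ON-set is the set of
# feature vectors labelled "class 1" (an invented example).
N = 3
on_set = {(0, 1, 0), (0, 1, 1), (1, 1, 1)}
off_set = set(product((0, 1), repeat=N)) - on_set

def term_covers(term, vec):
    """A term is a dict {feature index: required value}; it covers vec if all literals match."""
    return all(vec[i] == val for i, val in term.items())

def candidate_terms():
    """Every conjunction of literals over the N features, smallest first."""
    for size in range(N + 1):
        for idxs in combinations(range(N), size):
            for vals in product((0, 1), repeat=size):
                yield dict(zip(idxs, vals))

# Keep only implicants: terms that cover no OFF-set vector.
implicants = [t for t in candidate_terms() if not any(term_covers(t, v) for v in off_set)]

# Greedy cover of the ON-set: repeatedly take the implicant covering most uncovered minterms.
uncovered, rules = set(on_set), []
while uncovered:
    best = max(implicants, key=lambda t: len([v for v in uncovered if term_covers(t, v)]))
    rules.append(best)
    uncovered -= {v for v in uncovered if term_covers(best, v)}

for r in rules:
    print("IF " + " AND ".join(f"x{i}={v}" for i, v in sorted(r.items())) + " THEN class 1")
```

Each printed rule corresponds to one AND gate feeding an OR gate, which is the gate-count view of rule-and-antecedent minimization described in the abstract.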
Citations: 4
Artificial neural networks for 3D nonrigid motion analysis
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227308
T. Chen, W. Lin, C.-T. Chen
A novel approach to 3D nonrigid motion analysis using artificial neural networks is presented. A set of neural networks is proposed to tackle the problem of nonrigidity in 3D motion estimation. Constraints are specified to ensure a stable and globally consistent estimation of local deformations. The assignments of weights between two layers, the initial values of the outputs, and the connections between each network reflect the constraints defined. The objective of the proposed neural networks is to find the optimal deformation matrices that satisfy the constraints for all the points on the surface of the nonrigid object. Experimental results on synthetic and real data are provided.
Citations: 4
Symmetric neural networks and its examples
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287176
Hee-Seung Na, Youngjin Park
The concept of a symmetric neural network, which is not only structurally symmetric but also has symmetric weight distribution, is presented. The concept is further expanded to constrained networks, which may also be applied to some nonsymmetric problems in which there is some prior knowledge of the weight distribution pattern. Because these neural networks cannot be trained by the conventional training algorithm, which destroys the weight structure of the neural networks, a proper training algorithm is suggested. Three examples are shown to demonstrate the applicability of the proposed ideas. Use of the proposed concepts results in improved system performance, reduced network dimension, less computational load, and improved learning for the examples considered.
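The abstract does not give the modified training rule, so the sketch below uses one common way of preserving a symmetric weight matrix during gradient descent: project each gradient onto the symmetric subspace by averaging it with its transpose, so that a symmetric initialisation stays symmetric. Whether this matches the authors' algorithm is an assumption, and the toy network and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer network y = tanh(W x) whose weight matrix is required to stay
# symmetric (W == W.T), trained on a synthetic symmetric target mapping.
n = 4
W_true = rng.standard_normal((n, n)); W_true = 0.5 * (W_true + W_true.T)
X = rng.standard_normal((200, n))
Y = np.tanh(X @ W_true.T)

W = rng.standard_normal((n, n)); W = 0.5 * (W + W.T)    # symmetric initialisation
lr = 0.1
for epoch in range(500):
    P = np.tanh(X @ W.T)                                # forward pass
    err = P - Y                                         # squared-error residual
    G = (err * (1.0 - P ** 2)).T @ X / len(X)           # gradient for an unconstrained W
    G = 0.5 * (G + G.T)                                 # project the step: W stays symmetric
    W -= lr * G

print("symmetry error:", np.max(np.abs(W - W.T)))
print("fit error:", np.mean((np.tanh(X @ W.T) - Y) ** 2))
```

The projection halves the number of free parameters in effect, which is the kind of dimension and computation saving the abstract points to.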
Citations: 1
A subband coding scheme and the Bayesian neural network for EMG function analysis
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226868
K. Cheng, Din-Yuen Chan, Sheeng-Horng Liou
A subband coding scheme and Bayesian neural network (BNN) approach to the analysis of electromyographic (EMG) signals of upper extremity limb functions are presented. Three channels of EMG signals recorded from the biceps, triceps and one muscle of the forearm are used for discriminating six primitive motions associated with the limb. A set of parameters is extracted from the spectrum of the EMG signals in combination with the subband coding technique for data compression. Each sequence of EMG signals is cut into five frames from the primary point located by the energy threshold method. From each frame, the parameters are then obtained by the integration of the subbands. The temporal as well as the spectral characteristics can be implicitly or directly included in the parameters. The BNN is used as a subnet for discriminating one motion. From the results, it is shown that an average recognition rate of 85% may be achieved.
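Only the feature-extraction half is concrete enough to sketch. The code below assumes an energy-threshold onset detector, five frames, and subband energies integrated over fixed FFT bands; the band edges, sampling rate, and the Bayesian neural network classifier are left as placeholders because the abstract does not specify them.

```python
import numpy as np

def onset_index(x, threshold_ratio=0.1, win=64):
    """Energy-threshold onset: first window whose short-time energy exceeds a
    fraction of the maximum window energy."""
    energy = np.array([np.sum(x[i:i + win] ** 2) for i in range(0, len(x) - win, win)])
    idx = np.flatnonzero(energy > threshold_ratio * energy.max())
    return int(idx[0]) * win if len(idx) else 0

def subband_features(x, fs, n_frames=5, band_edges=(0, 50, 100, 200, 400, 500)):
    """Cut the signal into n_frames frames from the onset and integrate the power
    spectrum of each frame over the given frequency bands."""
    x = x[onset_index(x):]
    frames = np.array_split(x, n_frames)
    feats = []
    for frame in frames:
        spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        feats.extend(spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(band_edges[:-1], band_edges[1:]))
    return np.array(feats)              # n_frames * n_bands features per EMG channel

# Toy usage on a synthetic burst standing in for one EMG channel.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
emg = rng.standard_normal(len(t)) * (t > 0.3) * np.exp(-3 * (t - 0.3))
print(subband_features(emg, fs).shape)  # (25,) = 5 frames x 5 bands
```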
Citations: 1
Local analysis of phase transitions in networks with varying connection strengths
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227191
F. McFadden, Y. Peng, J. Reggia
It has been observed in networks with rapidly varying connection strengths that individual node activation levels can grow explosively in a phase where total network activation remains bounded. On the basis of the results reported by F. McFadden et al. (1991), the authors extend the previous research to apply to a more general class of connectionist models, and they identify additional phase transition boundaries not covered by previous research. Sufficient conditions are derived for boundedness of the activation vector of the system, not only for total activation. In addition, sufficient conditions are derived for divergence in the absence of external input. The mathematical results are illustrated by computer simulation results using a competitive activation model, and the simulations are used for exploration of the phase space.
Citations: 0
Design and development of a real-time neural processor using the Intel 80170NX ETANN
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226908
L. R. Kern
The Naval Air Warfare Center Weapons Division is designing and developing a real-time neural processor for missile seeker applications. The system uses a high-speed digital computer as the user interface and as a monitor for processing. The use of a standard digital computer as the user interface allows the user to develop the process in whatever programming environment desired. With the capability to store up to 64 k of output data on each frame, it is possible to process two-dimensional image data in excess of video rates. The real-time communication bus, with user-defined interconnect structures, enables the system to solve a wide variety of problems. The system is best suited to perform local area processing on two-dimensional images. Using this system, each layer has the capacity to represent up to 65536 neurons. The fully operational system may contain up to 12 of these layers, giving the total system a capacity in excess of 745000 neurons.
Citations: 5