
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

Visual inspection of soldered joints by using neural networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170373
S. Jagannathan, S. Balakrishnan, N. Popplewell
The problem of solder joint inspection is viewed as a two-step process of pattern recognition and classification. A modified intelligent histogram regrading technique is used which divides the histogram of the captured image into different modes. Each distinct mode is identified, and the corresponding range of grey levels is separated and regraded by using neural networks. The output pattern of these networks is presented to a second stage of neural networks in order to select and interpret the histogram's features. A learning mechanism based on a backpropagation algorithm is also used to identify and classify the defective solder joints. The proposed technique has the high speed and low computational complexity typical of nonspatial techniques.
Citations: 6
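As a concrete picture of the first, nonspatial stage, here is a minimal sketch (Python with NumPy) of splitting a grey-level histogram into modes, which is the role the regrading networks play in the paper. The empty-bin gap rule and the `min_gap` threshold are invented stand-ins for the neural regrading, not the authors' method.

```python
import numpy as np

def split_histogram_modes(image, min_gap=5):
    """Divide the grey-level histogram into modes separated by runs of empty bins."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    modes, start, gap = [], None, 0
    for level, count in enumerate(hist):
        if count > 0:
            if start is None:
                start = level          # a new mode begins at this grey level
            gap = 0
        else:
            gap += 1
            if start is not None and gap >= min_gap:
                modes.append((start, level - gap))  # close the current mode
                start = None
    if start is not None:
        modes.append((start, 255))
    return modes  # each mode is a (low, high) range of grey levels

# Each (low, high) range would then be rescaled ("regraded") and fed to the
# first-stage networks; a second-stage network classifies the joint.
image = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(split_histogram_modes(image))
```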
Location and stability of the equilibria of nonlinear neural networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170664
M. Vidyasagar
The number, location and stability behavior of the equilibria of arbitrary nonlinear neural networks are analyzed without resorting to energy arguments based on assumptions of symmetric interactions or no self-interactions. The following results are proved. Let H=
Citations: 1
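Since the abstract is cut off before the theorem statement ("Let H="), the following is only a generic numerical illustration of the subject matter, not the paper's results: it locates an equilibrium of a small additive network and tests local stability through the eigenvalues of the Jacobian. The dynamics, the weights, and the SciPy-based root finding are all assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

W = np.array([[0.0, 1.2], [-1.2, 0.0]])   # made-up asymmetric interactions
b = np.array([0.1, -0.1])

def f(x):                                  # network dynamics x' = f(x)
    return -x + W @ np.tanh(x) + b

x_eq = fsolve(f, np.zeros(2))              # equilibrium found near the origin
# Jacobian of f at x_eq: -I + W * diag(sech^2(x_eq))
J = -np.eye(2) + W * (1.0 / np.cosh(x_eq) ** 2)
stable = np.all(np.linalg.eigvals(J).real < 0)
print(x_eq, "locally stable:", stable)
```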
Communication network routing using neural nets-numerical aspects and alternative approaches
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170490
T. Fritsch, W. Mandel
The authors discuss various approaches to using Hopfield networks for routing problems in computer communication networks. It is shown that the classical approach using the original Hopfield network leads to evident numerical problems and hence is not practicable. The heuristic choice of the Lagrange parameters presented in the literature can produce incorrect solutions as the problem dimension varies, or requires a very time-consuming search for the correct parameter sets. The modified method using eigenvalue analysis with predetermined parameters yields recognizable improvements; on the other hand, it is not able to produce correct solutions for different topologies with higher dimensions. From a numerical viewpoint, determining the eigenvalues of the connection matrix involves severe problems, such as stiffness, and shows evident instability of the simulated differential equations. The authors present possible alternative approaches such as the self-organizing feature map and modifications of the Hopfield net, e.g. mean field annealing and the Potts glass model.
Citations: 44
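Of the alternatives named above, mean field annealing is the easiest to sketch. The following is a minimal, generic version for a quadratic energy E(v) = -1/2 v^T W v - theta^T v with neurons in (0, 1); the routing-specific W and theta, which would encode the path constraints through the criticized penalty (Lagrange) parameters, are stubbed with made-up values.

```python
import numpy as np

def mean_field_anneal(W, theta, T0=2.0, T_min=0.05, cooling=0.9, sweeps=50):
    rng = np.random.default_rng(0)
    v = rng.uniform(0.4, 0.6, size=len(theta))  # soft initial state
    T = T0
    while T > T_min:
        for _ in range(sweeps):                 # relax at this temperature
            u = W @ v + theta                   # mean field on each neuron
            v = 0.5 * (1.0 + np.tanh(u / T))    # sigmoidal mean-field update
        T *= cooling                            # cool down gradually
    return v

W = np.array([[0., -2., 1.], [-2., 0., 1.], [1., 1., 0.]])  # symmetric stub
theta = np.array([0.5, 0.5, -0.5])
print(np.round(mean_field_anneal(W, theta)))    # near-binary final state
```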
An implementation of short-timed speech recognition on layered neural nets
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170719
Haizhou Li, Bingzheng Xu
The authors show a new way to handle the sequential nature of speech signals in multilayer perceptrons (MLPs) or other neural net machines. A static model, in the form of state transition probability matrices representing short speech units such as syllables corresponding to Chinese utterances of isolated characters, is adopted to provide the learning patterns for the MLPs. The network architecture and learning algorithms are described. Experimental results on speech recognition are included.
Citations: 0
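A minimal sketch of the static representation described above: estimating a state-transition probability matrix from a frame-level state sequence and flattening it into a fixed-size MLP input. The state inventory and the toy frame sequence are invented for illustration.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized counts of state-to-state transitions in one utterance."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

frames = [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]   # toy state labels for one syllable
P = transition_matrix(frames, n_states=4)
mlp_input = P.flatten()                    # fixed-size pattern for the MLP
print(P)
```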
A new learning approach to enhance the storage capacity of the Hopfield model
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170650
H. Oh, S. Kothari
A new learning technique is introduced to solve the problem of the small and restrictive storage capacity of the Hopfield model. The technique exploits the maximum storage capacity: it fails only if appropriate weights do not exist to store the given set of patterns. The technique is not based on the concept of function minimization, so there is no danger of getting stuck in local minima, and it is free from the step-size and moving-target problems. Learning speed is very fast and depends on the difficulty presented by the training patterns rather than on the parameters of the algorithm. The technique is scalable; its performance does not degrade as the problem size increases. An extensive analysis of the learning technique is provided through simulation results.
Citations: 15
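The abstract does not spell out the learning rule, so the sketch below shows a well-known technique with the same advertised property, perceptron-style training of each weight row: it finds stabilizing weights whenever they exist and therefore exceeds the Hebbian capacity of the standard Hopfield model. Treat it as a stand-in, not the authors' algorithm.

```python
import numpy as np

def train_hopfield(patterns, lr=0.1, max_epochs=200):
    patterns = np.asarray(patterns, dtype=float)   # rows are +/-1 patterns
    P, N = patterns.shape
    W = np.zeros((N, N))
    for _ in range(max_epochs):
        stable = True
        for x in patterns:
            h = W @ x
            for i in range(N):
                if x[i] * h[i] <= 0:               # unit i not yet stabilized
                    W[i] += lr * x[i] * x          # perceptron-style correction
                    W[i, i] = 0.0                  # keep zero self-coupling
                    stable = False
        if stable:
            return W                               # all patterns are fixed points
    return W

pats = [[1, 1, 1, -1, -1, -1],
        [1, -1, 1, 1, -1, 1],
        [-1, 1, 1, -1, 1, 1]]
W = train_hopfield(pats)
x = np.array(pats[0], dtype=float)
print(np.array_equal(np.sign(W @ x), x))           # True once training converges
```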
Improving error tolerance of self-organizing neural nets
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170279
F. Sha, Q. Gan
A hybrid neural net (HNN) is developed that combines the network introduced by G.A. Carpenter and S. Grossberg (1987, 1988) with the Hopfield associative memory (HAM). HAM diminishes noise in the samples and provides the cleaned samples to ART1 as inputs. In order to match the capacity of HAM with that of ART1, a new recalling algorithm (NHAM) is also introduced to enlarge the capacity of HAM. Based on NHAM and HNN, a revised version of HNN (RHNN) is introduced. The difference between RHNN and HNN is that RHNN has feedback loops, while HNN has only feedforward paths. The ART1 in RHNN supplies information for HAM to recall memories. Computer simulation demonstrated that RHNN has several advantages.
Citations: 0
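A minimal sketch of the HAM front end described above: Hebbian autoassociative recall that cleans a noisy sample before it is handed on. ART1 itself is beyond a short sketch, so the patterns and the one-bit noise here are invented, and only the denoising step is shown.

```python
import numpy as np

def ham_recall(W, x, steps=10):
    for _ in range(steps):                 # synchronous Hopfield updates
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break                          # reached a fixed point
        x = x_new
    return x

patterns = np.array([[1, 1, -1, -1, 1, -1],
                     [-1, 1, 1, -1, -1, 1]], dtype=float)
W = patterns.T @ patterns / patterns.shape[1]   # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)

noisy = patterns[0].copy()
noisy[2] *= -1                                  # flip one bit of pattern 0
cleaned = ham_recall(W, noisy)
print(np.array_equal(cleaned, patterns[0]))     # noise removed before ART1
```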
PPNN: a faster learning and better generalizing neural net
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170513
B. Xu, L. Zheng
It is pointed out that the planar topology of the current backpropagation neural network (BPNN) limits the solutions available for the slow convergence rate, local minima, and other problems associated with BPNN. The parallel probabilistic neural network (PPNN), which uses a novel neural network topology called stereotopology, is proposed to overcome these problems. The learning ability and the generalization ability of BPNN and PPNN are compared on several problems. Simulation results show that PPNN was capable of learning various kinds of problems much faster than BPNN, and also generalized better. It is shown that the faster, universal learnability of PPNN is due to the parallel characteristic of PPNN's stereotopology, and the better generalization ability comes from the probabilistic characteristic of PPNN's memory retrieval rule.
Citations: 4
Implementation of visual reconstruction networks-Alternatives to resistive networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170649
D. Mansor, D. Suter
The resistive grid approach has been adopted by the Harris coupled depth-slope analog network and generalized for regularization involving arbitrary degrees of smoothness. The authors consider implementations of arbitrary-order regularization networks which do not require resistive grids. The approach is to generalize the original formulation of J.G. Harris (1987) and then to follow the alternative paths to analog circuit realization that the generalization allows.
Citations: 2
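A minimal sketch of the computation such regularization networks settle to, assuming the standard discrete formulation: minimize |u - d|^2 + lambda |D^k u|^2, where k = 1 is the membrane (resistive grid) case and k = 2 the thin-plate case targeted by the coupled depth-slope network. The direct linear solve here stands in for the analog circuit, which would relax to the same minimum.

```python
import numpy as np

def regularize(d, order=2, lam=10.0):
    n = len(d)
    D = np.eye(n)
    for _ in range(order):                 # k-th finite-difference operator
        D = np.diff(D, axis=0)
    A = np.eye(n) + lam * D.T @ D          # normal equations of the energy
    return np.linalg.solve(A, d)

t = np.linspace(0, 1, 50)
d = t ** 2 + 0.05 * np.random.default_rng(1).standard_normal(50)  # noisy data
u = regularize(d, order=2)                 # smooth reconstruction of t^2
print(np.mean((u - t ** 2) ** 2), "vs noisy", np.mean((d - t ** 2) ** 2))
```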
Neural network training using homotopy continuation methods
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170769
J. Chow, L. Udpa, S. Udpa
Neural networks are widely used to perform classification tasks. The networks are traditionally trained using gradient methods to minimize the training error; these techniques, however, are highly susceptible to getting trapped in local minima. The authors propose an innovative approach to obtain the global minimum of the training error: the globally optimum solution is obtained by employing the homotopy continuation method to minimize the classification error during training. Two different approaches are considered; the first involves polynomial modeling of the nodal activation function and the second involves the traditional sigmoid function. Results illustrating the superiority of the homotopy method over the gradient descent method are presented.
Citations: 6
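A minimal sketch of the continuation idea (not the authors' exact formulation, which uses polynomial or sigmoid activation models): deform an easy convex objective with a known minimum into the true training error by sweeping t from 0 to 1, re-minimizing at each step so the tracked solution is carried along the path. The tiny network, the data, and the BFGS inner solver are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # XOR-like labels

def net(w, X):                                # tiny 2-2-1 sigmoid network
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

w0 = rng.standard_normal(9)                   # anchor of the easy convex problem

def homotopy_loss(w, t):
    hard = np.mean((net(w, X) - y) ** 2)      # true training error
    easy = 0.5 * np.sum((w - w0) ** 2)        # convex start with known minimum w0
    return t * hard + (1.0 - t) * easy

w = w0.copy()
for t in np.linspace(0.0, 1.0, 21):           # track the minimum along the path
    w = minimize(homotopy_loss, w, args=(t,), method="BFGS").x
print("final training MSE:", np.mean((net(w, X) - y) ** 2))
```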
Hopfield network with O(N) complexity using a constrained backpropagation learning
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170606
G. Martinelli, R. Prefetti
A novel associative memory model is presented, derived from the Hopfield discrete neural network. Its architecture is greatly simplified because the number of interconnections grows only linearly with the dimensionality of the stored patterns. It makes use of a modified backpropagation algorithm as a learning tool. During the retrieval phase the network operates as an autoassociative BAM (bidirectional associative memory), which searches for a minimum of an appropriate energy function. Computer simulations point out the good performance of the proposed learning method in terms of capacity and number of spurious stable states.
Citations: 1