
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: latest publications

A parallel Kalman algorithm for fast learning of multilayer neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170644
C.-M. Cho, H.-S. Don
A fast learning algorithm is proposed for training multilayer feedforward neural networks, based on a combination of optimal linear Kalman filtering theory and error propagation. In this algorithm, all the information available from the start of the training process to the current training sample is exploited in the update procedure for the weight vector of each neuron in the network, via an efficient parallel recursive method. This innovation is a massively parallel implementation and has better convergence properties than the conventional backpropagation learning technique. Its performance is illustrated on examples such as an XOR logical operation and a nonlinear mapping of two continuous signals.
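The per-neuron recursive update the abstract describes can be illustrated with a standard Kalman-style (recursive least squares) weight update for a single linear neuron. This is a generic sketch with an assumed measurement-noise variance `r`, not the authors' exact parallel algorithm:

```python
import numpy as np

def kalman_update(w, P, x, target, r=1e-2):
    """One recursive update of a neuron's weight vector.

    w: weight vector, P: error covariance, x: input vector,
    target: desired linear output, r: assumed noise variance.
    """
    y = w @ x                      # neuron's linear response
    k = P @ x / (x @ P @ x + r)    # Kalman gain
    w = w + k * (target - y)       # correct weights with the innovation
    P = P - np.outer(k, x @ P)     # shrink the covariance
    return w, P

# Usage: recover a known linear mapping from streaming samples.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
w, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(50):
    x = rng.normal(size=2)
    w, P = kalman_update(w, P, x, true_w @ x)
```

Unlike a plain gradient step, each update here weighs the current sample against all previous ones through the covariance `P`, which is the source of the fast convergence the paper claims.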
Citations: 6
Dynamic competitive learning for centroid estimation
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170507
S. Kia, G. Coghill
Presents an analog version of an artificial neural network, termed a differentiator, based on a variation of the competitive learning method. The network is trained in an unsupervised fashion and can be used for estimating the centroids of clusters of patterns. A dynamic competition is held among the competing neurons in adaptation to the input patterns, with the aid of a novel type of neuron called a control neuron. The output of the control neurons provides feedback reinforcement signals to modify the weight vectors during training. The training algorithm differs from conventional competitive learning methods in that all the weight vectors are modified at each step of training. Computer simulation results are presented which demonstrate the behavior of the differentiator in estimating the class centroids. The results indicate the high power of dynamic competitive learning as well as the fast convergence rates of the weight vectors.
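The key point — every weight vector moves at every step, driven by a feedback reinforcement signal — can be sketched as soft competitive learning, where the reinforcement is a softmax over negative distances. This mimics the idea but is not the authors' control-neuron rule; `lr` and `beta` are assumed hyperparameters:

```python
import numpy as np

def soft_competitive_step(W, x, lr=0.1, beta=5.0):
    """Move ALL unit weight vectors toward x, weighted by a
    distance-based reinforcement signal (closest unit moves most)."""
    d = np.linalg.norm(W - x, axis=1)   # distance of each unit to the input
    g = np.exp(-beta * d)
    g /= g.sum()                        # reinforcement per unit, sums to 1
    W += lr * g[:, None] * (x - W)      # every weight vector is updated
    return W

# Usage: two well-separated Gaussian clusters; units drift to the centroids.
rng = np.random.default_rng(1)
a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))
data = np.vstack([a, b])
W = np.vstack([a[0], b[0]])             # seed one unit near each cluster
for x in data[rng.permutation(len(data))]:
    W = soft_competitive_step(W, x)
```

Because the reinforcement signal is shared across units rather than winner-take-all, no unit is ever starved of updates, which is the property the abstract contrasts with conventional competitive learning.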
Citations: 2
Speaker-independent syllable recognition by a pyramidical neural net
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170712
Shulin Yang, Youan Ke, Zhong Wang
The application of the pyramidical multilayered neural net to speaker-independent recognition of isolated Chinese syllables was investigated. The feature extraction algorithm is described. Experiments involving 90 speakers from 25 provinces of China show that accuracies of 82.7% and 87.1% can be achieved, respectively, for ten isolated digits and seven typical syllables, and a cross-sex recognition rate of over 75% can be obtained. The results indicate that this neural net technique can be applied to speaker-independent syllable recognition and that its performance is comparable to that of the hidden Markov model method.
Citations: 1
An enhancement to MLP model to enforce closed decision regions
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170486
R. Gemello, F. Mana
Describes a modification of the basic MLP (multilayer perceptron) model implemented to improve its ability to enforce closed decision regions. The authors' proposal is to use hyperspheres instead of hyperplanes in the first hidden layer, and in turn to combine them through the next layers. After training, the decision regions are naturally closed because they are built on simple computational elements which fire only if the pattern falls within their hypersphere receptive fields. Training is achieved by applying a modification of the basic backpropagation error without the use of ad hoc algorithms. A two-dimensional example is reported.
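One plausible form of such a hidden unit (an assumed formulation, not necessarily the authors' exact one) is a sigmoid applied to the signed distance from a ball boundary: the unit is active only when the input lies inside a hypersphere of radius `r` around center `c`, so any region built from these units is bounded:

```python
import numpy as np

def hypersphere_unit(x, c, r, k=4.0):
    """Activation near 1 inside the ball ||x - c|| < r, near 0 outside.
    k is an assumed sharpness parameter for the sigmoid boundary."""
    return 1.0 / (1.0 + np.exp(-k * (r**2 - np.sum((x - c)**2))))

# A point just inside the unit ball fires strongly; a distant point does not.
inside  = hypersphere_unit(np.array([0.1, 0.0]), np.zeros(2), 1.0)
outside = hypersphere_unit(np.array([3.0, 0.0]), np.zeros(2), 1.0)
```

Since the activation is a smooth function of `c` and `r`, both can be trained with ordinary backpropagation, which matches the paper's claim that no ad hoc algorithm is needed.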
Citations: 3
Bidirectional optical learnable neural networks for OEIC
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170667
W. Kawakami, K. Kitayama
A novel configuration of an optical bidirectional learnable neural network is proposed, in which the recall and learning processes are carried out by transmitting light encoding the synaptic weights and the error signal, respectively, in opposite directions between two facing OEICs (optoelectronic integrated circuits). Thus, both the vector-matrix operation for recall and the outer product for modifying the synaptic weights are performed optically and bidirectionally. This compact configuration is especially suitable for neurochips. The feasibility of a three-dimensional neurochip is experimentally investigated through a learning experiment using a 2*2 optical neuro-breadboard.
Citations: 0
A cognitively-based neural network for determining paragraph coherence
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170577
P. Carlson, A. The
The authors report on an effort in artificial neural network (ANN) technology to use content-independent elements of prose as predictors of paragraph logic structures. They intend to embed the trained network in an intelligent tutor that teaches writing skills. An attempt is made to find patterns in the nonambiguous lexical and syntactic features of discourse that predict the semantic/cognitive level of interpretation. An NN implementation of the modified Christensen method is considered. It is noted that ANN technology's ability to deal with fuzzy logic, feature extraction, classification, and predictive modeling makes a neural network the best choice for the present application.
Citations: 1
Discovering production rules with higher order neural networks: a case study. II
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170457
A. Kowalczyk, H. Ferrá, K. Gardiner
It is demonstrated by example that neural networks can be used successfully for the automatic extraction of production rules from empirical data. The case considered is a popular public-domain database of 8124 mushrooms. Using a term selection algorithm, a number of very accurate mask perceptrons (a kind of high-order network or polynomial classifier) were developed. Rounding of the synaptic weights was then applied, in many cases yielding networks with integer weights that were subsequently converted to production rules. It is also shown that focusing the network's attention on a smaller subset of useful attributes, ordered by decreasing discriminating ability, helps significantly in accurate rule generation.
Citations: 14
Feature selection for neural network recognition
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170481
T. Adachi, R. Furuya, Stephan Greene, K. Mikuriya
Presents a system designed to help in the development of image recognition applications, using a general neural-network classifier and an algorithm for selecting effective image features given a small number of samples. Input to the system consists of a number of primitive image features computed directly from pixel values. The feature selection subsystem generates an image recognition feature vector by operations on the primitive features. It uses a combination of rule-based techniques and statistical heuristics to select the best features. The authors propose a quality statistic function based on sample values for each primitive feature. The parameters of this function were determined, and the authors experimented on several different target image groups using it. Recognition rates were perfect in each case.
Citations: 7
Image transformation by spatial inhibition and local association
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170472
T. Omori
The author proposes a model of image transformation that can modulate any unlearned object with a general transformation; that is, the transformation is independent of an object's shape. The local associative neural network model can transform a figure represented by a local feature set. The model transforms a figure subject to constraints, given as external inhibition, and to completion conditions that any figure should satisfy to have a reasonable shape. The basic methods are figure representation with local features, feature transformation with spatial inhibition, and figure restoration through their interactions. With this model, one can realize an elemental function leading to a general figure transformation model without learning or experience.
Citations: 2
FastProp: a selective training algorithm for fast error propagation
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170635
F. Wong
An improved backpropagation algorithm, called FastProp, for training a feedforward neural network is described. The unique feature of the algorithm is selective training based on the instantaneous causal relationship between the input and output signals during the training process. The causal relationship is calculated from the error backpropagated to the input layer. The accumulated errors, referred to as accumulated error indices (AEIs), are used to rank the input signals according to their correlation with the output signals. An entire set of time series data can be clustered into several situations based on the current input signal with the highest AEI, and the neurons can be activated according to the current situation. Experimental results showed that a significant reduction in training time can be achieved with the selective training algorithm compared to the traditional backpropagation algorithm.
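The AEI idea — rank inputs by the error magnitude backpropagated to them over training — can be illustrated on a toy one-layer linear model. This reduced setting (a single linear neuron, plain SGD, a synthetic target) is an assumption for illustration, not the paper's network:

```python
import numpy as np

# Synthetic data: input 0 drives the target strongly, input 1 not at all,
# input 2 only weakly.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 2]

w = np.zeros(3)
aei = np.zeros(3)                   # accumulated error index per input
for xi, yi in zip(X, y):
    err = w @ xi - yi
    aei += np.abs(err * xi)         # error backpropagated to each input
    w -= 0.05 * err * xi            # ordinary gradient step

ranking = np.argsort(-aei)          # inputs ordered by relevance
```

Inputs whose fluctuations co-occur with large output errors accumulate the largest indices, so `ranking` surfaces the causally relevant input first — the signal FastProp uses to decide which situation is active and which neurons to train.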
Citations: 6