
Latest publications: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)

Scale-based clustering using the radial basis function network
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374299
S. Chakravarthy, Joydeep Ghosh
Adaptive learning dynamics of the radial basis function network (RBFN) are compared with a scale-based clustering technique, and a relationship between the two is pointed out. Using this link, it is shown how scale-based clustering can be done with the RBFN, using the radial basis function (RBF) width as the scale parameter. The technique suggests the "right" scale at which the given data set should be clustered and obviates the need to know the number of clusters beforehand. We show how this method solves the problem of determining the number of RBF units and the widths required to get a good network solution.
Citations: 114
Feedforward neural networks to learn drawing lines
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374218
Yiwei Chen, F. Bastani
The paper examines the capability and performance of one-hidden-layer feedforward neural networks with multi-activation product (MAP) units, through the application of drawing digital line segments. The MAP unit is a recently proposed multi-dendrite neuron model. The centroidal function is chosen as the MAP unit's base activation function because it demonstrates superior performance over sigmoidal functions. A network of MAP units with more than one dendrite converges statistically faster during the learning phase with randomly selected training patterns. The generalization to the entire sample space is shown to be proportional to the size of the training set.
Citations: 0
Benchmarking of the CM-5 and the Cray machines with a very large backpropagation neural network
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374132
Xiao Liu, G. Wilcox
In this paper, we present a new, efficient implementation of the backpropagation (BP) algorithm on the CM-5 that takes full advantage of its Control Network to avoid explicit message-passing. The nodes in the input and output layers are evenly distributed across all processors; all nodes in the hidden layer(s) are replicated in each processor, and the weights are distributed to the processors holding the corresponding nodes. We have implemented this algorithm on the CM-5 in MIMD mode using the C programming language. For a case study of protein tertiary structure prediction, we obtained a performance of 76 million weight updates per second (WUPS) with the machine partitioned into 512 processors without vector units. Experiments using different-sized partitions indicated an almost linear relationship between computation time and the number of processors, indicating good parallelization. We have also implemented the backpropagation algorithm on the Cray machines using the C programming language. The Cray-2 implementation yields 10 million WUPS; the Cray X-MP EA implementation yields 18 million WUPS; and the Cray Y-MP M92 implementation yields 40 million WUPS.
Citations: 18
Preprocessing of training set for backpropagation algorithm: histogram equalization
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374200
T. Kwon, Ehsan H. Feroz, Hui Cheng
This paper introduces a data preprocessing algorithm that can improve the efficiency of the standard backpropagation (BP) algorithm. The basic approach is to transform the input data into the range associated with the high-slope region of the sigmoid, where relatively large weight modifications occur. This helps the network escape early trapping in premature saturation. However, a simple uniform transformation to such a desired range can lead to slow learning if the data have a heavily skewed distribution. To improve the performance of the BP algorithm on such distributions, the authors propose a modified histogram equalization technique that enhances the spacing between data points in the heavily concentrated regions of a skewed distribution. The authors' simulation study shows that this modified histogram equalization can significantly speed up BP training as well as improve the generalization capability of the trained network.
Citations: 3
Are modified back-propagation algorithms worth the effort?
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374227
D. Alpsan, M. Towsey, O. Ozdamar, A. Tsoi, D. Ghista
A wide range of modifications and extensions to the backpropagation (BP) algorithm have been tested on a real-world medical problem. Our results show that: 1) proper tuning of the learning parameters of standard BP not only increases the speed of learning but also has a significant effect on generalisation; 2) parameter combinations and training options that lead to fast learning do not usually yield good generalisation, and vice versa; 3) standard BP may be fast enough when its parameters are finely tuned; 4) modifications developed on artificial problems for faster learning do not necessarily give faster learning on real-world problems, and when they do, it may be at the expense of generalisation; and 5) even when modified BP algorithms perform well, they may require extensive fine-tuning to achieve this performance. For our problem, none of the modifications could justify the effort to implement them.
Citations: 4
New results of Quick Learning for Bidirectional Associative Memory having high capacity
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374333
M. Hattori, M. Hagiwara, M. Nakagawa
Several important characteristics of Quick Learning for Bidirectional Associative Memory (QLBAM) are introduced. QLBAM uses two-stage learning: the BAM is first trained by Hebbian learning and then by the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB). The following features of the QLBAM are made clear: it is insensitive to correlation of the training pairs; it is robust to noisy inputs; the minimum absolute value of the net inputs indexes a noise margin; and the memory capacity is greatly improved, the maximum capacity in our simulation being about 2.2N.
Citations: 7
Remarks on neural network controller using different sigmoid functions
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374636
T. Yamada, T. Yabuta
Many studies, such as Kawato's work (1987), have been undertaken in order to apply both the flexibility and learning ability of neural networks to dynamic system controllers. Most of them used a fixed-shape sigmoid function. We have confirmed that it is useful to change the sigmoid function shape to improve the nonlinear mapping capability of neural network controllers. This paper introduces a new concept for autotuning the sigmoid function shapes of neural network servo controllers. Three tuning methods are proposed in order to improve the nonlinear mapping capability. The first type uses a uniform sigmoid function shape. With the second type, the sigmoid function shapes within one layer are the same, and the shapes are tuned layer by layer. With the third type, the sigmoid function shape of each neuron is different and is tuned individually. Their characteristics are confirmed by simulation.
Citations: 9
Application of genetic algorithms in graph matching
Pub Date: 1994-12-01 DOI: 10.1109/ICNN.1994.374829
M. Krcmár, A. Dhawan
Genetic algorithms (GAs) can be exploited for optimal graph matching. Graphs are a powerful method for the formal description of patterns. Globally optimal graph matching is an NP-complete problem. Pattern distortions and noise increase the difficulty of the optimal search, which can be tackled using a GA. This paper describes the results of a simple GA applied to a graph matching problem. In conclusion, a suitable GA for optimal graph "isomorphism" and "monomorphism" is proposed. The coding used resembles that of the travelling salesman problem (TSP); consequently, the performance of ordering operators has been tested. In contrast to the TSP, the fitness function depends on the positioning of chromosome values, not their ordering, which leads to differences between the optimal GA configuration for graph matching and that for the TSP.
Citations: 15
The capacity of convergence-zone episodic memory
Pub Date: 1994-10-05 DOI: 10.1109/ICNN.1994.375017
Mark Moll, R. Miikkulainen, Jonathan Abbey
Human episodic memory provides seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. This paper presents a computational model of episodic memory inspired by Damasio's idea of convergence zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer and stored in the weights between the layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. A worst-case analysis shows that with realistic-size layers, the memory capacity of the model is several times larger than the number of units in the model, and could account for the large capacity of human episodic memory.
Citations: 7
Classification of elongated and contracted images using new regular moments
Pub Date: 1994-10-01 DOI: 10.1109/ICNN.1994.374880
P. Raveendran, S. Jegannathan, S. Omatu
This paper presents a technique to classify images that have been elongated or contracted. The problem is formulated using conventional regular moments. It is shown that the conventional regular moment invariants no longer remain invariant when the image is scaled unequally in the x- and y-directions. A method is proposed to form moment invariants that do not change under such unequal scaling. Results of computer simulations on images are also included, verifying the validity of the proposed method.
Citations: 8