
Latest publications from [Proceedings] 1991 IEEE International Joint Conference on Neural Networks

Computational modelling of learning and behaviour in small neuronal systems
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170439
T. W. Scutt, R. Damper
It is noted that almost all attempts to model neural and brain function have fallen into one of two categories: artificial neural networks using (ideally) large numbers of simple but densely interconnected processing elements, or detailed physiological models of single neurons. The authors report on their progress in formulating a computational model which functions at a level between these two extremes. Individual neurons are considered at the level of membrane potential; this allows outputs from the model to be compared directly with physiological data obtained in intracellular recording. An object-oriented programming language has been used to produce a model where each object equates to a neuron. The benefits of using an object-oriented language are two-fold. The program has been tested by modeling the learning and behavior of the gill-withdrawal reflex in Aplysia. The use of a parameter-based system has made it possible to specify appropriate characteristics for the particular neurons participating in this reflex and to simulate some of the subcircuits involved.
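The object-per-neuron design described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the class name, the leaky-integrator dynamics, and all parameter values are assumptions chosen only to show one object tracking one membrane potential.

```python
class Neuron:
    """One object per neuron, tracking its membrane potential directly.
    The leaky-integrator dynamics and parameter values are illustrative."""

    def __init__(self, rest=-70.0, threshold=-55.0, leak=0.1):
        self.rest = rest              # resting potential (mV)
        self.threshold = threshold    # firing threshold (mV)
        self.leak = leak              # fraction of deviation decayed per step
        self.v = rest                 # current membrane potential
        self.fired = False
        self.synapses = []            # (presynaptic Neuron, weight) pairs

    def connect(self, pre, weight):
        self.synapses.append((pre, weight))

    def step(self, external=0.0):
        """Integrate synaptic and external input, leak toward rest;
        return True if the neuron fires this step."""
        drive = external + sum(w for pre, w in self.synapses if pre.fired)
        self.v += drive - self.leak * (self.v - self.rest)
        self.fired = self.v >= self.threshold
        if self.fired:
            self.v = self.rest        # reset after a spike
        return self.fired
```

Because the model variable is the membrane potential itself, a trace of `n.v` over successive `step` calls is directly comparable, in spirit, with an intracellular recording.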
Citations: 17
On using backpropagation for prediction: an empirical study
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170574
S. Srirengan, C. Looi
The authors describe the results of initial efforts in applying backpropagation to the prediction of future values of four time series, namely, the sunspot series, a monthly department store sales time series, and two financial index time series. They describe various ways of customizing the backpropagation network for prediction and discuss some experimental results. They also propose a modified learning rule based on optimizing correct predictions of upward and downward trends in a time series.
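The windowing that turns a series into training pairs, and the directional scoring that the modified learning rule optimizes, can be sketched independently of any particular network. Function names and the window width are hypothetical, not taken from the paper.

```python
def make_windows(series, width):
    """Sliding windows: each input is `width` past values,
    and the target is the value that follows them."""
    X, y = [], []
    for i in range(len(series) - width):
        X.append(series[i:i + width])
        y.append(series[i + width])
    return X, y

def trend_accuracy(actual, predicted, last_inputs):
    """Fraction of steps where the predicted direction (up/down relative
    to the last observed value) matches the actual direction -- the
    quantity a trend-oriented learning rule would reward."""
    hits = sum(
        1 for a, p, last in zip(actual, predicted, last_inputs)
        if (a - last) * (p - last) > 0      # same sign = same trend
    )
    return hits / len(actual)
```

A predictor can score well on squared error yet poorly on `trend_accuracy`, which is why optimizing trend hits directly is a distinct objective.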
Citations: 9
A new approach to the design of Hopfield associative memory
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170666
J. Hao, S. Tan, J. Vandewalle
The authors present a novel method for constructing the weight matrix of the Hopfield associative memory. The most important feature of this method is the explicit introduction of the size of the attraction basin as a main design parameter; the weight matrix is obtained by optimizing this parameter. Another feature is that all connection weights assume only three values, -1, +1, and 0, which facilitates VLSI implementation of the weights. Compared to the widely used Hebbian rule, the method guarantees that all given patterns are stored at least as fixed points, regardless of the internal structure of the patterns. The proposed design method is illustrated by a few examples.
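The paper's weights come from optimizing the attraction-basin size. As a stand-in, the sketch below quantizes the familiar Hebbian outer-product matrix to the same ternary {-1, 0, +1} format and shows the recall dynamics such a matrix plugs into; it is not the authors' construction, only an illustration of the weight format.

```python
import numpy as np

def ternary_hebbian_weights(patterns):
    """Hebbian outer-product weight matrix quantized to {-1, 0, +1}.
    (The paper obtains its ternary weights by optimization instead.)"""
    P = np.asarray(patterns)            # rows are +/-1 patterns
    W = P.T @ P                         # sum of outer products
    np.fill_diagonal(W, 0)              # no self-connections
    return np.sign(W)                   # ternary quantization

def recall(W, state, steps=10):
    """Synchronous Hopfield update until a fixed point or step limit."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s
```

Restricting weights to three values means each connection needs only a sign and an on/off state, which is what makes the VLSI implementation straightforward.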
Citations: 1
Rotational quadratic function neural networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170509
K. Cheung, C. Leung
The authors present a novel architecture, known as the rotational quadratic function neuron (RQFN), to implement the quadratic function neuron (QFN). Although it gives up some degrees of freedom in boundary formation, the RQFN possesses some attributes that are unique compared to the QFN. In particular, the architecture of the RQFN is modular, which facilitates VLSI implementation. Moreover, by replacing the QFN with the RQFN in a multilayer perceptron (MP), the fan-in and the interconnection volume are reduced to those of an MP using linear neurons. In terms of learning, the RQFN also offers varieties such as the separate learning paradigm and the constrained learning paradigm. Single-layer MPs using RQFNs have been demonstrated to form more desirable boundaries than the normal MP. This is essential in scenarios where closed boundaries or boundaries of higher order are required.
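The boundary-closure property that motivates quadratic neurons is easy to demonstrate. The sketch below implements a generic QFN firing rule (the RQFN additionally restricts its quadratic term to a rotated, modular form, which is not reproduced here); the matrix and threshold values are assumptions chosen to give a closed circular boundary, something no single linear neuron can form.

```python
import numpy as np

def qfn_fires(x, A, w, b):
    """Generic quadratic function neuron: fires when x'Ax + w.x + b >= 0.
    (The paper's RQFN constrains A; this is the QFN it specializes.)"""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x + w @ x + b) >= 0.0

# A closed circular boundary x1^2 + x2^2 <= 1: take A = -I, w = 0, b = 1
A, w, b = -np.eye(2), np.zeros(2), 1.0
```

A linear neuron's decision region is always a half-plane, so the inside-the-circle region above is the simplest example of why closed boundaries need the quadratic term.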
Citations: 7
A parallel neural network computing for the maximum clique problem
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170515
K.C. Lee, N. Funabiki, Y.B. Cho, Yoshiyasu Takefuji
A novel computational model for large-scale maximum clique problems is proposed and tested. The maximum clique problem is first formulated as an unconstrained quadratic zero-one programming problem, which is solved by minimizing the weight summation over the same partition in a newly constructed graph. The proposed maximum neural network has the following advantages: (1) coefficient-parameter tuning in the motion equation, which conventional neural networks suffer from, is not required; (2) the equilibrium state of the maximum neural network is clearly defined so that the algorithm can be terminated, whereas existing neural networks lack such a definition; and (3) the maximum neural network always allows the state of the system to converge to a feasible solution, which existing neural networks cannot guarantee. For large problems, the proposed parallel algorithm outperforms the best known algorithms in computation time with much the same solution quality, in a regime where the conventional branch-and-bound method cannot be used because of its exponentially increasing computation time.
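The zero-one formulation can be made concrete on a toy graph. The objective below rewards selected vertices and penalizes selected non-adjacent pairs, so maximizing it over {0,1}^n selects a maximum clique whenever the penalty exceeds 1. The paper minimizes an equivalent energy with its maximum neural network; the exhaustive search here only exhibits the formulation, not the algorithm, and the function names are illustrative.

```python
from itertools import product

def clique_objective(x, n, edges, penalty=2.0):
    """Unconstrained quadratic zero-one objective for maximum clique:
    +1 per selected vertex, -penalty per selected non-adjacent pair."""
    edge_set = {frozenset(e) for e in edges}
    score = sum(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[i] and x[j] and frozenset((i, j)) not in edge_set:
                score -= penalty        # selected pair violates cliqueness
    return score

def best_clique(n, edges):
    """Brute-force maximizer of the objective (tiny graphs only)."""
    return max(product((0, 1), repeat=n),
               key=lambda x: clique_objective(x, n, edges))
```

With penalty > 1, dropping either vertex of a non-adjacent selected pair always raises the objective, so every maximizer is a clique.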
Citations: 4
Fault tolerant analysis of associative memories
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170335
Y.-P. Huang, D. Gustafson
The performance of fault tolerant associative memories is investigated. Rather than presenting simulation results, the authors show mathematically that the one-step retrieval probability in most cases decreases as the error ratio, the number of error bits, and the number of stored patterns increase. In the case of faulty resistances, however, performance under a positive weight change surpasses the error-free situation. This holds not only for the Hopfield interconnection topology but also in the exponential correlation case.
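The paper derives the one-step retrieval probability analytically; the Monte-Carlo sketch below only illustrates the trend it describes (more input errors lower the one-step retrieval rate) for a Hebbian Hopfield memory. All parameters and names here are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_step_retrieval_rate(n_bits, n_patterns, flip_bits, trials=200):
    """Monte-Carlo estimate of the probability that one synchronous
    update of a Hebbian Hopfield memory restores a stored pattern
    after `flip_bits` of its bits have been corrupted."""
    hits = 0
    for _ in range(trials):
        P = rng.choice([-1.0, 1.0], size=(n_patterns, n_bits))
        W = P.T @ P
        np.fill_diagonal(W, 0)          # no self-connections
        probe = P[0].copy()
        idx = rng.choice(n_bits, size=flip_bits, replace=False)
        probe[idx] *= -1                # corrupt the input
        out = np.where(W @ probe >= 0, 1.0, -1.0)
        hits += np.array_equal(out, P[0])
    return hits / trials
```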
Citations: 1
The effect of the dimensionality of interconnections on the storage capacity of a threshold controlled neural network
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170764
A. Hartstein
The author investigates the effect of the dimensionality of the interconnections in a Hopfield-type network on the storage capacity of the network. The analysis is performed for 1D, 2D, 3D and 4D interconnection geometries. The capacity was found to be independent of the dimensionality of the interconnections and to depend only on the total number of interconnections available in a given network. In addition, no evidence of any instabilities was observed, in contrast to physical systems of reduced dimensionality.
Citations: 0
K-means competitive learning for non-stationary environments
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170277
C. Chinrungrueng, C. Séquin
A modified k-means competitive learning algorithm that can perform efficiently in situations where the input statistics are changing, such as in nonstationary environments, is presented. This modified algorithm is characterized by the membership indicator that attempts to balance the variations of all clusters and by the learning rate that is dynamically adjusted based on the estimated deviation of the current partition from an optimal one. Simulations comparing this new algorithm with other k-means competitive learning algorithms on stationary and nonstationary problems are presented.
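The competitive core that this algorithm modifies can be sketched as an online update in which each input moves only its winning center. The paper's contributions (the variance-balancing membership indicator and the dynamically adjusted rate) are not reproduced; a fixed rate is used here for brevity, and all names are illustrative.

```python
def competitive_kmeans(stream, centers, rate=0.1):
    """Online k-means competitive learning: for each input, find the
    nearest center (the winner) and move only it toward the input.
    A fixed learning rate stands in for the paper's adaptive rule."""
    centers = [list(c) for c in centers]
    for x in stream:
        # winner = center with minimal squared Euclidean distance
        win = min(range(len(centers)),
                  key=lambda k: sum((xi - ci) ** 2
                                    for xi, ci in zip(x, centers[k])))
        centers[win] = [ci + rate * (xi - ci)
                        for xi, ci in zip(x, centers[win])]
    return centers
```

A fixed rate is exactly what fails in nonstationary settings (too small to track drift, or too large to converge), which is the gap the adaptive rate in the paper targets.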
Citations: 3
Approximations of mappings and application to translational invariant networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170730
P. Koiran
The author studies the approximation of continuous mappings and dichotomies by one-hidden-layer networks, from a computational point of view. The approach is based on a new approximation method, specially designed for constructing small networks. Upper bounds are given on the size of these networks. These results are specialized to the case of translational invariant networks, i.e., networks whose outputs are unchanged when their inputs are subjected to a translation.
Citations: 2
Neural activities and cluster-formation in a random neural network
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170707
N. Matsui, E. Bamba
An approach to a macroscopic description of a cluster-formation algorithm driven by neural activities in a random neural network is considered. The activity interaction between clusters of neurons and the network entropy, mediated by the activity parameter x(p) for the input pattern p, are introduced as a system energy. Using a neural state transition rule similar to that of the Boltzmann network and some simple stochastic assumptions, the cluster formation of neurons was simulated. The relations between cluster sizes, or the simulated activity, and the setting of the activity parameter are shown. The validity of this macroscopic description is also discussed.
Citations: 0