
Latest publications: [Proceedings 1992] IJCNN International Joint Conference on Neural Networks

On the relations between radial basis function networks and fuzzy systems
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287132
P. A. Jokinen
Numerical estimators of nonlinear functions can be constructed using systems based on fuzzy logic, artificial neural networks, and nonparametric regression methods. Some interesting similarities between fuzzy systems and some types of neural network models that use radial basis functions are discussed. Both these methods can be regarded as structural numerical estimators, because a rough interpretation can be given in terms of pointwise (local) rules. This explanation capability is important if the models are used as building blocks of expert systems. Most of the neural network models currently lack this capability, which the structural numerical estimators have intrinsically.
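To make the notion of pointwise (local) rules concrete, here is a minimal sketch (not taken from the paper) of an RBF network read as a collection of local rules, assuming Gaussian basis functions with hand-chosen centers, widths, and rule weights; the normalized-sum aggregation mirrors how fuzzy rule outputs are commonly combined.

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """Evaluate a radial basis function network at input x.

    Each (center, width, weight) triple can be read as a local rule:
    'if x is near center (to within width), contribute weight'.
    This local readability is what the abstract calls a structural
    numerical estimator.
    """
    # Gaussian activations: one per hidden unit / rule
    act = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    # Normalized weighted sum, analogous to fuzzy rule aggregation
    return np.dot(weights, act) / (np.sum(act) + 1e-12)

# Toy example: three local rules roughly approximating y = x^2 on [0, 2]
centers = np.array([[0.0], [1.0], [2.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([0.0, 1.0, 4.0])   # rule outputs at the centers
print(rbf_predict(np.array([1.5]), centers, widths, weights))
```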
Citations: 12
Why tanh: choosing a sigmoidal function
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227257
B. Kalman, S. Kwasny
As hardware implementations of backpropagation and related training algorithms are anticipated, the choice of a sigmoidal function should be carefully justified. Attention should focus on choosing an activation function in a neural unit that exhibits the best properties for training. The authors argue for the use of the hyperbolic tangent. While the exact shape of the sigmoidal makes little difference once the network is trained, it is shown that the hyperbolic tangent possesses particular properties that make it appealing for use while training. By paying attention to scaling, it is illustrated that tanh(1.5x) has the additional advantage of equalizing training over layers. This result can easily generalize to several standard sigmoidal functions in common use.
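As an illustration of the scaling point, the sketch below (an assumption-laden example, not the paper's analysis) compares the slope at the origin of a gain-scaled tanh, such as tanh(1.5x), with that of the logistic sigmoid; the gain value and the comparison are only illustrative.

```python
import numpy as np

def scaled_tanh(x, gain=1.5):
    """Hyperbolic tangent with an input gain, e.g. tanh(1.5*x)."""
    return np.tanh(gain * x)

def scaled_tanh_deriv(x, gain=1.5):
    """Derivative of tanh(gain*x): gain * (1 - tanh(gain*x)**2).

    A larger gain steepens the slope near zero, which affects how
    strongly error gradients propagate during backpropagation.
    """
    t = np.tanh(gain * x)
    return gain * (1.0 - t * t)

# Compare slopes at the origin: logistic sigmoid vs gain-scaled tanh
logistic_slope = 0.25                       # d/dx 1/(1+e^-x) at x = 0
print(scaled_tanh(0.5))                     # sample activation value
print(scaled_tanh_deriv(0.0), logistic_slope)  # 1.5 vs 0.25
```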
Citations: 209
A net for automatic detection of minimal correlation order in contextual pattern recognition
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227213
P. Castiglione, G. Basti, Stefano Fusi, G. Morgavi, A. Perrone
The authors propose a neural net able to recognize input pattern sequences by memorizing only one of the transformed patterns, the prototype forming the sequence. This capacity depends on an automatic control of the minimal correlation order needed to perform recognition tasks and, in ambiguous cases, on a type of context-dependent memory recall. The neural net model can use noise constructively to continuously modify the learned prototype pattern in view of a contextual recognition of input pattern sequences. In this way, the net is able to deduce, by itself, from the prototype pattern the hypotheses by which it can recognize highly corrupted static patterns or sequences of transformed patterns.
Citations: 0
Adaptive feedforward control of cyclic movements using artificial neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226884
J. Abbas, H. Chizeck
An adaptive neural network control system has been designed for the purpose of controlling cyclic movements of nonlinear dynamic systems with input time delays (as found in functional neuromuscular stimulation). The adaptive feedforward (FF) controller is implemented as a two-stage neural network. The first stage, the pattern generator (PG), generates a cyclic pattern of activity. The signals from the PG are adaptively filtered by the second stage, the pattern shaper (PS). This stage uses modifications to standard artificial neural network learning algorithms to adapt its filter properties. The control system is evaluated in computer simulation on a musculoskeletal model which consists of two muscles acting on a swinging pendulum. The control system provides automated customization of the FF controller parameters for a given musculoskeletal system as well as online adaptation of the FF controller parameters to account for changes in the musculoskeletal system.
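A rough sketch of the two-stage structure described above, under stated assumptions: the pattern generator is reduced to a phase signal, and the pattern shaper is assumed to be a weighted sum of Gaussian bumps over the cycle phase, adapted by a simple error-driven rule; neither the basis-function form nor the update rule is taken from the paper.

```python
import numpy as np

class PatternGenerator:
    """Stage 1: emits a cyclic phase signal phi in [0, 1)."""
    def __init__(self, period):
        self.period = period
    def phase(self, t):
        return (t % self.period) / self.period

class PatternShaper:
    """Stage 2: adaptive filter of the PG output.

    Assumed form: a weighted sum of Gaussian bumps spread over one
    cycle; the weights are adapted from an error signal with a simple
    gradient-style rule, used here only for illustration.
    """
    def __init__(self, n_basis=10, width=0.05, lr=0.1):
        self.centers = np.linspace(0.0, 1.0, n_basis, endpoint=False)
        self.width = width
        self.lr = lr
        self.weights = np.zeros(n_basis)

    def basis(self, phi):
        # Circular distance so the pattern wraps around the cycle
        d = np.minimum(np.abs(phi - self.centers),
                       1.0 - np.abs(phi - self.centers))
        return np.exp(-d ** 2 / (2.0 * self.width ** 2))

    def output(self, phi):
        return float(np.dot(self.weights, self.basis(phi)))

    def adapt(self, phi, error):
        # Credit the error to the basis functions active at this phase
        self.weights += self.lr * error * self.basis(phi)

# Toy loop: shape the output toward a desired cyclic trajectory
pg, ps = PatternGenerator(period=1.0), PatternShaper()
for t in np.arange(0.0, 50.0, 0.01):
    phi = pg.phase(t)
    desired = np.sin(2 * np.pi * phi)      # stand-in for the desired movement
    error = desired - ps.output(phi)       # stand-in for a tracking error
    ps.adapt(phi, error)
```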
Citations: 3
A feature selection method for multi-class-set classification
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227114
Bin Yu, Baozong Yuan
A versatile technique for set-feature selection from class features, without any prior knowledge, for multi-class-set classification is presented. A class set is a group of classes in which the patterns represented with class features can be classified with an existing classifier. The features used to classify patterns between classes within a class set are referred to as class features, and the ones used to classify patterns between class sets as set features. A set-feature set is produced from class-feature sets under the criterion of minimizing the encounter zones between class sets in set-feature space. The performance of this technique was illustrated with an experiment on the understanding of circuit diagrams.
Citations: 2
On the application of feed forward neural networks to channel equalization
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226870
W. R. Kirkland, D. Taylor
The application of feedforward neural networks to adaptive channel equalization is examined. The Rummler channel model is used to model the digital microwave radio channel. In applying neural networks to the channel equalization problem, complex-valued neurons are used. This allows a frequency interpretation of the weights of the neurons in the first hidden layer. The channel model allows examination of binary signaling in two dimensions (4-quadrature amplitude modulation, or 4-QAM) as well as higher-level signaling (16-QAM). Results show that while neural nets provide a significant performance increase in the case of binary signaling in two dimensions (4-QAM), this performance is not reflected in the results for the higher-level signaling schemes. In this case the neural net equalizer's performance tends to parallel that of the linear transversal equalizer.
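The sketch below illustrates, under assumptions, what a hidden layer of complex-valued neurons might look like for 4-QAM equalization: a complex weight matrix applied to a window of received samples, a split tanh nonlinearity on the real and imaginary parts, and a hard 4-QAM decision. The specific nonlinearity and output stage are illustrative choices, not the authors' design.

```python
import numpy as np

def complex_neuron_layer(x, W, b):
    """One hidden layer with complex-valued weights.

    x : complex input vector (e.g. a window of received channel samples)
    W : complex weight matrix; its rows can be read as frequency-selective
        taps, which is the interpretation mentioned in the abstract.
    The activation is applied separately to the real and imaginary parts
    (a common split-activation choice; the paper's exact nonlinearity is
    not assumed here).
    """
    z = W @ x + b
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def slice_4qam(y):
    """Hard decision for 4-QAM: sign of the real and imaginary parts."""
    return np.sign(y.real) + 1j * np.sign(y.imag)

# Toy usage: map a 3-tap window of received samples to one symbol decision
rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)          # received samples
W = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))  # hidden weights
b = np.zeros(4, dtype=complex)
h = complex_neuron_layer(x, W, b)
w_out = rng.normal(size=4) + 1j * rng.normal(size=4)       # output weights
print(slice_4qam(np.dot(w_out, h)))
```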
Citations: 9
Enhancements to probabilistic neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287095
D. Specht
Probabilistic neural networks (PNNs) learn quickly from examples in one pass and asymptotically achieve the Bayes-optimal decision boundaries. The major disadvantage of a PNN stems from the fact that it requires one node or neuron for each training pattern. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center. The correct choice of clustering technique will depend on the data distribution, data rate, and hardware implementation. Adaptation of kernel shape provides a tradeoff of increased accuracy for increased complexity and training time. The technique described also provides a basis for automatic feature selection and dimensionality reduction.
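For context, a minimal sketch of a standard PNN decision rule: a Parzen-window classifier with one Gaussian node per training pattern and a single shared smoothing parameter. This is the baseline whose per-pattern node cost the clustering and kernel-shape enhancements aim to reduce; the enhancements themselves are not shown.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Standard probabilistic neural network (Parzen-window) decision.

    Every training pattern contributes one Gaussian kernel node, which
    is the per-pattern cost that clustering (one node per cluster
    center) and adapted kernel shapes are meant to reduce.
    """
    d2 = np.sum((train_X - x) ** 2, axis=1)      # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))         # kernel activations
    classes = np.unique(train_y)
    # The average kernel response per class approximates the class density
    scores = np.array([k[train_y == c].mean() for c in classes])
    return classes[np.argmax(scores)]

# Toy usage: two 2-D classes
train_X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
train_y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.8, 0.9]), train_X, train_y))
```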
Citations: 197
Improving the performance of probabilistic neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287147
M. Musavi, K. Kalantri, W. Ahmed
A methodology for selecting appropriate widths or covariance matrices for the Gaussian functions in implementations of PNN (probabilistic neural network) classifiers is presented. The Gram-Schmidt orthogonalization process is employed to find these matrices. It is shown that the proposed technique improves the generalization ability of PNN classifiers over the standard approach. The result can be applied to other Gaussian-based classifiers, such as radial basis functions.
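To show what moving from a scalar width to a full covariance matrix means for a Gaussian node, here is a hedged sketch that evaluates a kernel with a Mahalanobis distance; the covariance here is a plain sample estimate, not the paper's Gram-Schmidt-based selection.

```python
import numpy as np

def gaussian_kernel_full_cov(x, center, cov):
    """Gaussian PNN node with a full covariance matrix.

    Replaces the single scalar width with a Mahalanobis distance, so the
    kernel can stretch along directions of high class variance.
    (The covariance estimate below is a plain sample estimate, not the
    Gram-Schmidt-based selection described in the paper.)
    """
    d = x - center
    inv_cov = np.linalg.inv(cov)
    norm = np.sqrt(((2 * np.pi) ** len(x)) * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ inv_cov @ d) / norm)

# Toy usage: kernel centered on the mean of a small 2-D class sample
samples = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.3], [1.2, 0.2]])
center = samples.mean(axis=0)
cov = np.cov(samples.T) + 1e-6 * np.eye(2)   # regularize for invertibility
print(gaussian_kernel_full_cov(np.array([0.5, 0.2]), center, cov))
```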
Citations: 9
Two dimensional curve shape primitives for detecting line defects in silicon wafers
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227110
D. Sikka
A new set of two-dimensional curve shape primitives for detecting line defects on wafers in semiconductor manufacturing is presented. A supervised learning based neural network which incorporates these shape primitives has been built and tested on more than six months of real data from an Intel fabrication laboratory. Results demonstrate that the new set of shape primitives was very accurate in capturing the line defects.
Citations: 5
Generalized McCullouch-Pitts neuron model with threshold dynamics
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227119
H. Szu, G. Rogers
The McCullouch-Pitts (M-P) model for a neuron is generalized to endow the axon threshold with a time-dependent nonlinear dynamics. Two components of the threshold vector can be used to generate a pulsed coding output with the same qualitative characteristics as real axon hillocks, which could be useful for communications pulse coding. A simple dynamical neuron model that can include internal dynamics involving multiple internal degrees of freedom is proposed. The model reduces to the M-P model for static inputs and no internal dynamical degrees of freedom. The treatment is restricted to a single neuron without learning. Two examples are included.
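A small sketch of the idea, with an assumed threshold law: a McCulloch-Pitts-style unit whose threshold decays toward a resting value and jumps after each firing, so a constant input produces a pulsed output train. The decay/jump rule is illustrative and not the authors' dynamics.

```python
import numpy as np

def pulsed_neuron(inputs, weights, theta0=1.0, jump=1.5, decay=0.3, steps=50):
    """McCulloch-Pitts-style unit with a dynamic threshold.

    The weighted input is static, but the threshold has its own dynamics:
    it relaxes toward a resting value theta0 and jumps each time the unit
    fires.  For a sufficiently large constant input this yields a pulsed
    output train rather than a constant 0/1 output.
    (The relaxation/jump rule here is an illustrative assumption.)
    """
    drive = float(np.dot(weights, inputs))   # static net input
    theta = theta0
    spikes = []
    for _ in range(steps):
        fired = 1 if drive >= theta else 0
        spikes.append(fired)
        # Threshold dynamics: exponential relaxation toward theta0, plus a
        # refractory jump whenever the unit fires.
        theta = theta0 + (theta - theta0) * (1.0 - decay) + jump * fired
    return spikes

print(pulsed_neuron(inputs=[1.0, 1.0], weights=[0.8, 0.7]))
```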
Citations: 8