
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks: Latest Publications

Comparative performance measures of fuzzy ARTMAP, learned vector quantization, and back propagation for handwritten character recognition
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287090
G. Carpenter, S. Grossberg, K. Iizuka
The authors compare the performance of fuzzy ARTMAP with that of learned vector quantization and backpropagation on a handwritten character recognition task. Training with fuzzy ARTMAP to a fixed criterion used many fewer epochs. Voting with fuzzy ARTMAP yielded the highest recognition rates.
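To make the voting step concrete: in the fuzzy ARTMAP voting strategy (a detail from the broader fuzzy ARTMAP literature, not spelled out in this abstract), several networks are trained on different orderings of the training set and their predictions are pooled by majority vote. A minimal sketch of the pooling logic follows; the predict interface of the trained networks is a hypothetical stand-in.

```python
import numpy as np

def majority_vote(classifiers, X):
    """Pool predictions from several trained classifiers by voting.

    `classifiers` is a list of objects exposing predict(X) -> integer
    class labels (e.g., fuzzy ARTMAP networks trained on different
    orderings of the training set; the interface is hypothetical).
    """
    # Stack per-classifier predictions: shape (n_voters, n_samples).
    preds = np.stack([clf.predict(X) for clf in classifiers])
    # For each sample, return the label chosen by the most voters.
    return np.array([np.bincount(col).argmax() for col in preds.T])
```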
Citations: 50
On the convergence of a block-gradient algorithm for back-propagation learning
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227082
H. Paugam-Moisy
A block-gradient algorithm is defined, in which the weight matrix is updated after every presentation of a block of b examples. The total and stochastic gradients are recovered as special cases of the block-gradient algorithm for particular values of b. Experimental laws on the speed of convergence are stated as a function of block size. The first law indicates that an adaptive learning rate must follow an exponentially decreasing function of the number of examples presented between two successive weight updates. The second law states that, with an adaptive learning rate, the number of epochs grows linearly with the size of the example blocks. The last law shows how the number of epochs needed to reach a given level of performance depends on the learning rate.
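In modern terms the block-gradient algorithm is mini-batch gradient descent. A minimal sketch, with the model and loss left as placeholders: b = 1 recovers the stochastic gradient, and b = len(X) recovers the total (batch) gradient.

```python
def block_gradient_descent(w, X, y, grad_fn, lr, b, epochs):
    """Update the weights after every block of b examples.

    grad_fn(w, Xb, yb) must return the gradient of the loss on the
    block (Xb, yb).  b == 1 gives the stochastic gradient;
    b == len(X) gives the total (batch) gradient.
    """
    n = len(X)
    for _ in range(epochs):
        for start in range(0, n, b):
            Xb, yb = X[start:start + b], y[start:start + b]
            w = w - lr * grad_fn(w, Xb, yb)   # one update per block
    return w
```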
Citations: 2
An extended back-propagation learning algorithm by using heterogeneous processing units
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227071
C.-L. Chen, R. S. Nutter
Based on the idea of using heterogeneous processing units (PUs) in a network, a variation of the backpropagation (BP) learning algorithm is presented. Three parameters, adjustable in the same way as connection weights, are incorporated into each PU to increase its autonomous capability by enhancing the output function. The extended BP learning algorithm is thus developed by updating the three parameters as well as the connection weights. The extended BP is intended not only to improve learning speed but also to reduce the occurrence of local minima. The algorithm has been intensively tested on the XOR problem. With carefully chosen learning rates, results show that the extended BP appears to have advantages over standard BP in terms of faster learning and fewer local minima.
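The abstract does not state the exact output function or the meaning of the three parameters. As an assumption-labeled illustration, the sketch below uses a sigmoid with trainable amplitude a, gain g, and offset c, and shows the gradients an extended BP step would use to update them alongside the weights.

```python
import numpy as np

def output(net, a, g, c):
    """Sigmoid with three adjustable per-unit parameters: amplitude a,
    gain g, and offset c (a hypothetical choice; the paper's exact
    output function is not given in the abstract)."""
    return a / (1.0 + np.exp(-g * (net - c)))

def param_grads(net, a, g, c, err):
    """Gradients of a squared-error term 0.5 * err**2, err = y - t,
    with respect to the three unit parameters."""
    s = 1.0 / (1.0 + np.exp(-g * (net - c)))   # plain sigmoid
    dy_da = s
    dy_dg = a * s * (1 - s) * (net - c)
    dy_dc = -a * s * (1 - s) * g
    # Each parameter is then updated like a weight, e.g. a -= lr * grad.
    return err * dy_da, err * dy_dg, err * dy_dc
```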
Citations: 27
Learning of the Coulomb energy network on the variation of the temperature
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287097
Hee-Sook Choi, K. Lee, Yung Hwan Kim, Won Don Lee
A method is proposed for the Coulomb energy network (CEN) that minimizes the energy function with respect to the variation not only of the weights but also of the temperature. The proposed method is compared with the traditional learning method, which uses weight variation only. It is shown that learning is done more efficiently and accurately with the proposed method. Since weight and temperature can be learned in parallel, the speed of learning might be doubled given appropriate hardware support. The concept of distance is used to solve linearly nonseparable classification problems, which cannot be solved by the traditional supervised CEN.
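The abstract leaves the CEN energy function unspecified, so the sketch below only illustrates the structural idea: descend a single energy E(w, T) with respect to both the weights and the temperature in parallel. The energy callable and learning rates are placeholders, and numerical differentiation keeps the sketch self-contained.

```python
import numpy as np

def joint_descent(w, T, energy, lr_w, lr_T, steps, eps=1e-5):
    """Gradient descent on an energy E(w, T) with respect to both the
    weight vector w and the temperature T, updated in parallel.
    `energy` is a placeholder callable; the CEN's exact form is not
    given in the abstract."""
    for _ in range(steps):
        # Central-difference gradients, one coordinate at a time.
        gw = np.array([(energy(w + eps * e, T) - energy(w - eps * e, T))
                       / (2 * eps) for e in np.eye(len(w))])
        gT = (energy(w, T + eps) - energy(w, T - eps)) / (2 * eps)
        w, T = w - lr_w * gw, T - lr_T * gT   # parallel updates
    return w, T
```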
Citations: 0
High-order attention-shifting networks for relational structure matching
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227270
K. R. Miller, P. Zunde
The Hopfield-Tank optimization network has been applied to the model-image matching problem in computer vision using a graph matching formulation. However, the network has been criticized for unreliable convergence to feasible solutions and for poor solution quality, and the graph matching formulation cannot represent matching problems with multiple object types, multiple relations, and high-order relations. The Hopfield-Tank network dynamics is generalized to provide a basis for reliable convergence to feasible solutions, for finding high-quality solutions, and for solving a broad class of optimization problems. The extensions include a new technique called attention-shifting, the introduction of high-order connections in the network, and relaxation of the unit-hypercube restriction.
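For context, the baseline dynamics being generalized can be sketched as follows: the standard Hopfield-Tank network integrates du/dt = -u/tau + Wv + b with v = sigmoid(u), descending an energy for a quadratic cost. W and b encode the particular matching problem; the paper's high-order connections and attention-shifting extension are not shown here.

```python
import numpy as np

def hopfield_tank(W, b, steps=1000, dt=0.01, tau=1.0, gain=5.0):
    """Euler-integrate the standard Hopfield-Tank dynamics
        du/dt = -u/tau + W v + b,   v = sigmoid(gain * u),
    which performs gradient descent on a quadratic energy.
    (Baseline only; this paper's extension modifies these dynamics.)"""
    u = np.zeros(len(b))
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-gain * u))
        u += dt * (-u / tau + W @ v + b)
    return 1.0 / (1.0 + np.exp(-gain * u))   # final unit outputs
```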
Citations: 1
Silicon implementation of an artificial dendritic tree
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287143
J. G. Elias, H.-H. Chu, S. M. Meshreki
The silicon implementation of an artificial passive dendritic tree that can be used to process and classify dynamic signals is described. The electrical circuit architecture is modeled after complex neurons in the vertebrate brain, which have spatially extensive dendritic tree structures supporting large numbers of synapses. The circuit is primarily analog and, as in the biological model system, is virtually immune to process variations and other factors that often plague more conventional circuits. The nonlinear circuit is sensitive to both temporal and spatial signal characteristics but does not make use of the conventional neural network concept of weights, and as such does not use multipliers, adders, or other complex computational devices. As in biological neuronal circuits, a high degree of local connectivity is required. However, unlike biology, connections are multiplexed to reduce the number of conductors to a reasonable level for standard packages.
Citations: 17
Auditory orienting: automatic detection of auditory change over brief intervals of time: a neural net model of evoked brain potentials
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227058
J. Antrobus, S. Alankar, D. Deacon, W. Ritter
The human auditory system has a neurophysiological component, mismatch negativity (MMN), that automatically registers change over time in a variety of simple auditory features, e.g., loudness, pitch, duration, and spatial location. A neural network automatic auditory orienting (AAO-MMN) model that simulates the MMN response is described. The main assumption of the proposed AAO-MMN model is that the broad-range characteristic of MMN is achieved by local inhibition of the nonlocal thalamic sources of distributed neural activation. The model represents this activation source by a single thalamic (T) unit that is always fully active. The second assumption is that the buildup of MMN over several repetitions of the standard stimulus is accomplished by a local cumulative activation function. All the local accumulator neurons inhibit the nonlocal, steady-state thalamic activation represented by T.
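A toy rendering of these two assumptions (every name and constant below is invented for illustration): one accumulator per stimulus feature builds up over repetitions and inhibits the tonically active unit T, so repeated standards suppress T while a deviant, whose accumulator is still empty, releases an MMN-like response.

```python
def mmn_trace(stimuli, gain=0.3, decay=0.05):
    """Toy sketch of the AAO-MMN assumptions: a local accumulator per
    feature value builds up over repetitions (cumulative activation)
    and inhibits a tonically active thalamic unit T.  All names and
    constants are invented for illustration."""
    acc = {}                                      # one accumulator per feature
    T_out = []
    for s in stimuli:
        acc[s] = acc.get(s, 0.0) + gain           # cumulative activation
        for k in acc:                             # passive decay
            acc[k] = max(0.0, acc[k] - decay)
        T_out.append(max(0.0, 1.0 - acc[s]))      # T inhibited by acc[s]
    return T_out

# Repeated standards suppress T; a deviant ('B') releases it:
print(mmn_trace(['A'] * 6 + ['B']))
```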
Citations: 1
A parallel network for the computation of structure from long-range motion
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227161
R. Laganière, F. Labrosse, P. Cohen
The authors propose a parallel architecture for computing the 3-D structure of a moving scene from a long image sequence, using a principle known as the incremental rigidity scheme. At each instant an internal model of the 3-D structure is updated, based upon the observations accumulated up to that time. The updating process favors rigid transformations but tolerates a limited deviation from rigidity. This deviation eventually leads the internal model to converge towards the actual 3-D structure of the scene. The main advantage of this architecture is its ability to estimate the 3-D structure of the scene accurately at a low computational cost. Testing has been performed successfully on synthetic data as well as on real image sequences.
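The incremental rigidity principle itself can be sketched serially (the paper's contribution is a parallel network realization, which this is not). Assuming orthographic projection and tracked image points, each update chooses new depths that minimize the change in pairwise 3-D distances, i.e., the deviation from rigidity.

```python
import numpy as np
from scipy.optimize import minimize

def pairwise_d(P):
    """All pairwise Euclidean distances between 3-D points (n, 3)."""
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def update_depths(xy_new, Z_old, xy_old):
    """One incremental-rigidity step (sketch, orthographic camera):
    given new image positions xy_new (n, 2) of tracked points, pick
    new depths Z minimizing the change in pairwise 3-D distances
    relative to the current internal model (xy_old, Z_old)."""
    D_old = pairwise_d(np.column_stack([xy_old, Z_old]))

    def deviation(Z):
        D_new = pairwise_d(np.column_stack([xy_new, Z]))
        return ((D_new - D_old) ** 2).sum()

    return minimize(deviation, Z_old).x   # updated depth estimates
```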
Citations: 0
Convergence of recurrent networks as contraction mappings
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227131
J. Steck
Three theorems are presented that establish an upper bound on the magnitude of the weights guaranteeing convergence of the network to a stable, unique fixed point. It is shown that the bound on the weights is inversely proportional to the product of the number of neurons in the network and the maximum slope of the neuron activation functions. The location of the fixed point is determined by the network architecture, the weights, and the external input values. The proofs are constructive, consisting of representing the network as a contraction mapping and then applying the contraction mapping theorem from point-set topology. The resulting sufficient conditions for network stability are shown to be general enough to allow the network to have nontrivial fixed points.
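The shape of the bound can be reconstructed from this statement (the specific update form and norms below are an assumed reconstruction, not taken from the paper). Write the network update as f(x) = \sigma(Wx + b) for an n-neuron network whose activation slopes are at most \sigma'_{\max}. Then

\[
\|f(x)-f(y)\|_\infty \le \sigma'_{\max}\,\|W\|_\infty\,\|x-y\|_\infty,
\qquad
\|W\|_\infty \le n \max_{i,j}|w_{ij}|,
\]

so f is a contraction, and the contraction mapping theorem yields a unique, globally attracting fixed point, whenever

\[
\max_{i,j}|w_{ij}| < \frac{1}{n\,\sigma'_{\max}},
\]

an upper bound on the weight magnitudes inversely proportional to the product of the number of neurons and the maximum activation slope, as stated.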
Citations: 20
A what-and-where neural network for invariant image preprocessing
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227157
G. Carpenter, S. Grossberg, G. Lesher
A feedforward neural network for invariant image preprocessing is proposed that represents the position, orientation, and size of an image figure (where it is) in a multiplexed spatial map. This map is used to generate an invariant representation of the figure that is insensitive to position, orientation, and size for purposes of pattern recognition (what it is). Image recognition is based upon the output of the what channel. A multiscale array of oriented filters, followed by competition between orientations and scales, is used to define the where filter.
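A sketch of the where filter as described: oriented filters at several scales, with a winner-take-all competition across position, orientation, and scale. Gabor kernels and all parameter values below are assumed stand-ins for the paper's oriented filters.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(scale, theta, ratio=0.5):
    """Odd-symmetric Gabor kernel (illustrative parameter choices)."""
    r = int(3 * scale)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (ratio * yr) ** 2) / (2 * scale ** 2))
    return envelope * np.sin(2 * np.pi * xr / (4 * scale))

def where_channel(image, scales=(4, 8, 16), n_orient=8):
    """Estimate (position, orientation, size) of the dominant figure by
    letting oriented-filter responses compete across position,
    orientation, and scale (winner-take-all)."""
    best_resp, best = -np.inf, None
    for s in scales:
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            resp = np.abs(convolve(image.astype(float),
                                   gabor_kernel(s, theta)))
            pos = np.unravel_index(resp.argmax(), resp.shape)
            if resp[pos] > best_resp:          # competition across channels
                best_resp, best = resp[pos], (pos, theta, s)
    return best   # (position, orientation, size)
```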
Citations: 18