
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks — Latest Publications

A product-of-norms model for recurrent neural networks
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.287164
J. Hou, F. Salam
The authors present a model for recurrent artificial neural networks that can store any number of prespecified patterns as local minima of an energy function. All of the prespecified patterns can therefore be stored and retrieved. The authors summarize the model's stability properties and then give two examples showing how the model can be used in image recognition and association.
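The abstract does not spell out the energy function, so the sketch below is an assumption rather than the paper's formulation: one natural reading of "product of norms" is an energy defined as the product of squared distances to the stored patterns, which vanishes at every prespecified pattern and is positive elsewhere, so each pattern is a minimum by construction (the name `product_of_norms_energy` is invented for illustration).

```python
import numpy as np

def product_of_norms_energy(x, patterns):
    """Energy as the product of squared distances to each stored pattern.

    E(x) = prod_i ||x - p_i||^2 is zero exactly at the stored patterns,
    so every prespecified pattern is an energy minimum by construction.
    """
    return float(np.prod([np.sum((x - p) ** 2) for p in patterns]))

patterns = [np.array([1.0, -1.0, 1.0]), np.array([-1.0, 1.0, 1.0])]
assert product_of_norms_energy(patterns[0], patterns) == 0.0
assert product_of_norms_energy(np.array([0.5, 0.5, 0.5]), patterns) > 0.0
```

Because the energy is exactly zero at every stored pattern, the construction places no limit on how many patterns can be stored, matching the abstract's "any number of any prespecified patterns" claim.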
Citations: 2
Learning probabilities for causal networks
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227283
Y. Peng
The author presents an unsupervised method for learning the probabilities of random events. Learning is done by letting variables respond adaptively to positive and negative environmental stimuli. The basic learning rule is applied to learn prior and conditional probabilities for causal networks. By incorporating a stochastic factor, the method is extended to learn probabilities of hidden causations, a type of event important in modeling causal relationships. In contrast to many existing neural network learning paradigms, the probabilistic knowledge learned by this method is independent of any particular type of task. The method is especially suited to acquiring and updating knowledge in systems based on traditional artificial-intelligence representation techniques.
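The paper's exact learning rule is not reproduced in the abstract; a minimal sketch in the same spirit (an assumption, not the author's rule) lets an estimate drift toward 1 on each positive stimulus and toward 0 on each negative one, so it settles near the event's relative frequency.

```python
import random

def learn_probability(stimuli, rate=0.002, q0=0.5):
    """Adapt an estimate q toward 1 on positive stimuli (s=1) and toward 0
    on negative ones (s=0); q converges near the event's true probability."""
    q = q0
    for s in stimuli:
        q += rate * (s - q)
    return q

random.seed(0)
true_p = 0.7
stimuli = [1 if random.random() < true_p else 0 for _ in range(20000)]
q = learn_probability(stimuli)
assert abs(q - true_p) < 0.05
```

A small rate keeps the estimate stable once converged; a larger rate tracks nonstationary probabilities faster at the cost of more jitter.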
Citations: 1
Is the distribution-free sample bound for generalization tight?
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227076
C. Ji
For a network trained on random samples drawn from a specific class of distributions, a general relationship is developed between two sharp transition points: the statistical capacity, which represents memorization, and the universal sample bound for generalization. This relationship indicates that generalization happens after memorization. One example shows that the sample complexity needed for generalization can coincide with the capacity point. In the worst case, the sample complexity for generalization can be on the order of the distribution-free bound, whereas for a more structured case it can be smaller than the worst-case bound. The analysis sheds light on why, in practice, the number of samples needed for generalization can be smaller than the bound given in terms of the VC-dimension.
Citations: 3
An unsupervised learning and fuzzy logic approach for software category identification and capacity planning
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227148
R. A. Clinkenbeard, X. Feng
A hybrid unsupervised neural network and fuzzy logic approach is presented to achieve the primary goals of software categorization and feature interpretation. The method permits new software applications to be evaluated quickly for capacity-planning and project-management purposes. Fuzzy logic techniques were successfully applied to interpret the internal structure of the trained network, leading to an understanding of which application attributes most clearly distinguish the resulting categories. The resulting fuzzy membership functions can be used as inputs to subsequent analysis. These techniques can derive useful categories based on broad, external attributes of the software, which makes the approach useful to users of off-the-shelf software and to developers in the early stages of program specification. Experiments explicitly demonstrated the advantages of this method.
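The abstract does not define the membership functions. As a hedged illustration only, a triangular membership function centered on a category's learned attribute centroid is one common way to turn a crisp attribute value into a degree of membership; the paper may use a different shape.

```python
def triangular_membership(x, center, width):
    """Degree (0..1) to which attribute value x belongs to a fuzzy set
    centered on a category's centroid; drops linearly to 0 at +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

assert triangular_membership(5.0, 5.0, 2.0) == 1.0   # at the centroid
assert triangular_membership(6.0, 5.0, 2.0) == 0.5   # halfway out
assert triangular_membership(8.0, 5.0, 2.0) == 0.0   # outside the set
```

Such graded memberships are what make the categories interpretable: an attribute whose membership differs sharply between categories is one that "most clearly distinguishes" them.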
Citations: 7
A general scheme for minimising Bayes risk and incorporating priors applicable to supervised learning systems
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227075
D. McMichael
BARTIN (Bayesian real-time network) is a general structure for learning Bayesian minimum-risk decision schemes. It comprises two unspecified supervised learning nets and associated elements. The structure allows separate prior compensation and risk minimization and is thus able to learn Bayesian minimum-risk decision schemes accurately from training data and priors alone. The design provides a new mechanism (the prior compensator) for correcting discrepancies between class probabilities in training and recall. The same mechanism can be adapted to bias output decisions. The general structure of BARTIN is described, and the enumerative and Gaussian specific forms are presented. The enumerative form of BARTIN was applied to a visual inspection problem and compared with the multilayer perceptron.
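The BARTIN prior compensator itself is not specified in the abstract. A standard way to correct for a mismatch between training and recall class priors (an assumption here, not necessarily the paper's exact mechanism) is to reweight the learned posteriors by the ratio of the recall-time priors to the training priors, then pick the action that minimizes expected loss.

```python
import numpy as np

def compensate_priors(posterior, train_priors, true_priors):
    """Reweight posteriors learned under one class balance for use under
    another: p'(c|x) ~ p(c|x) * pi_true(c) / pi_train(c), renormalized."""
    w = posterior * (true_priors / train_priors)
    return w / w.sum()

def bayes_decision(posterior, loss):
    """Pick the action minimizing expected loss; loss[a, c] is the cost of
    taking action a when the true class is c."""
    return int(np.argmin(loss @ posterior))

p = np.array([0.6, 0.4])                   # posterior under balanced training
p_adj = compensate_priors(p, np.array([0.5, 0.5]), np.array([0.2, 0.8]))
loss = np.array([[0.0, 1.0], [1.0, 0.0]])  # plain 0-1 loss
assert bayes_decision(p_adj, loss) == 1    # the prior shift flips the decision
```

With the uncorrected posterior the same 0-1 loss would pick class 0; separating prior compensation from risk minimization, as the abstract describes, lets either be changed without retraining the nets.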
Citations: 0
A self-learning visual pattern explorer and recognizer using a higher order neural network
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227069
G. Linhart, G. Dorffner
A proposal by M. B. Reid et al. (1989) for improving the efficiency of higher-order neural networks was built into a pattern recognition system that autonomously learns to categorize and recognize patterns independently of their position in an input image. It does this by combining higher-order with first-order networks and mechanisms known from ART. Recognition is based on a 16*16-pixel input containing a section of the image found by a separate centering mechanism. With this system, position-invariant recognition can be implemented efficiently while combining all the advantages of the subsystems.
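Reid et al.'s efficiency proposal is not detailed in the abstract. A sketch of the underlying higher-order idea (assumed for illustration) is a second-order unit whose pair weights depend only on the relative offset between inputs; sharing weights this way both cuts the parameter count and makes the response invariant to (circular) shifts of the pattern.

```python
import numpy as np

def shift_invariant_unit(x, w):
    """Second-order unit whose pair weight depends only on the circular
    offset (j - i) mod n, so circular shifts of x leave the output unchanged."""
    n = len(x)
    s = sum(w[(j - i) % n] * x[i] * x[j]
            for i in range(n) for j in range(n))
    return float(np.tanh(s))

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0])
w = rng.standard_normal(len(x))            # one weight per relative offset
shifted = np.roll(x, 2)
assert abs(shift_invariant_unit(x, w) - shift_invariant_unit(shifted, w)) < 1e-9
```

Weight sharing reduces the n*n pair weights to n offset weights, which is the kind of constraint that makes higher-order networks practical for a 16*16 input.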
Citations: 6
Prediction of software reliability using feedforward and recurrent neural nets
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.287089
N. Karunanithi, L. D. Whitley
The authors present an adaptive modeling approach based on connectionist networks and demonstrate how both feedforward and recurrent networks, under various training regimes, can be applied to predict software reliability. They make an empirical comparison between this new approach and five well-known software reliability growth prediction models, using data sets from 14 different software projects. The results suggest that connectionist networks adapt well to different data sets and exhibit better overall long-term predictive accuracy than the analytic models. This observation holds not only for the aggregate data but for each individual item of data as well. The connectionist approach offers a distinct advantage for software reliability modeling in that model development is automatic if one uses a training algorithm such as cascade correlation. Two important characteristics of connectionist models are the easy construction of appropriate models and good adaptability to different data sets (i.e., different software projects).
Citations: 21
Structural properties of network attractor associated with neuronal dynamics transition
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227120
M. Nakao, K. Watanabe, T. Takahashi, Y. Mizutani, M. Yamamoto
It was found that single-neuron activities in various regions of the brain commonly exhibit a distinct dynamics transition from white to 1/f spectral profiles during the sleep cycle in cats. The dynamics transition was simulated using a symmetrically connected neural network model with a globally applied inhibitory input. The structure of the network attractor was suggested to vary with the change in inhibitory level. To examine the robustness of the dynamics transition, the symmetric network structure is extended to an asymmetrically connected network model. The asymmetry follows a rule that approximately reflects the characteristics of synaptic contacts between neurons. Computer simulations showed that the inhibitory input could change the neuronal dynamics from white to 1/f profiles under more realistic conditions. The geometry of the network attractor realizing the dynamics transition is discussed.
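The spectral profiles in question can be characterized by the slope of the power spectrum on log-log axes: roughly 0 for white noise and about -1 for a 1/f profile. A minimal sketch of such a slope estimate (not the authors' analysis code) is:

```python
import numpy as np

def spectral_slope(x, fs=1.0):
    """Fit the log-log slope of the power spectrum of a signal;
    ~0 indicates a white profile, ~-1 indicates a 1/f profile."""
    x = np.asarray(x, dtype=float)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = freqs > 0                       # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)
assert abs(spectral_slope(white)) < 0.2    # white noise: slope near zero
```

Applied to simulated spike-rate series at different inhibitory levels, a slope drifting from near 0 toward -1 would register the white-to-1/f transition the abstract describes.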
Citations: 5
An integrated associative structure for vision
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227159
A. Cerrato, G. Parodi, R. Zunino
An associative architecture for mapping input images onto a set of predefined bit patterns (messages) is described. The general methodology exploits memory content-addressability to perform robust vision tasks. A noiselike-coding associative memory works out message samples from input images, while a superimposed feedforward network filters out memory crosstalk and provides clean message patterns. The integrated structure combines the generalization power of neural networks with the massive processing capability of associative memories. Tests involved image sets that stress the system's discrimination efficacy, and the experimental results confirmed the system's robustness and flexibility. The overall structure can be regarded as a general, domain-independent method for visual stimulus-response mapping.
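The paper's noiselike coding scheme is not given in the abstract. A hedged sketch of the general idea of a heteroassociative, content-addressable memory that maps images to predefined bit patterns is the classical outer-product construction (all data, sizes, and names below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
images = rng.choice([-1.0, 1.0], size=(3, 64))    # bipolar "image" codes
messages = rng.choice([-1.0, 1.0], size=(3, 8))   # predefined bit patterns

# Outer-product heteroassociative memory: M = sum_i m_i x_i^T
M = messages.T @ images

def recall(image):
    """Content-addressed recall: threshold M @ image to a bipolar message."""
    return np.sign(M @ image)

noisy = images[0].copy()
noisy[:2] *= -1                                    # corrupt a couple of pixels
assert np.array_equal(recall(noisy), messages[0])  # still recalls the message
```

In this toy setting a simple sign threshold already suppresses the crosstalk between stored pairs; in the paper, a superimposed feedforward network plays that filtering role.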
Citations: 2
Training algorithm based on Newton's method with dynamic error control
Pub Date: 1992-06-07 DOI: 10.1109/IJCNN.1992.227085
S. J. Huang, S. N. Koh, H. K. Tang
The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. Theoretically, Newton's method can be proved to converge at second order, while the most widely used steepest-descent method converges at first order. This suggests that Newton's method might be a faster training algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties with reference to the error-surface characteristics. The common benchmark XOR problem is used to compare the performance of the methods.
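The dynamic error control itself is not described in the abstract, but the Newton-versus-steepest-descent comparison can be illustrated on a toy one-dimensional loss (an assumption for illustration, not the paper's network or data): Newton scales each step by the local curvature, while steepest descent uses a fixed learning rate.

```python
def newton_minimize(df, d2f, w, steps=10):
    """Newton's method: each step is the gradient scaled by the inverse
    of the local curvature (second derivative)."""
    for _ in range(steps):
        w -= df(w) / d2f(w)
    return w

def gradient_minimize(df, w, lr=0.05, steps=10):
    """Steepest descent: fixed-rate steps along the negative gradient."""
    for _ in range(steps):
        w -= lr * df(w)
    return w

# Toy loss f(w) = (w - 2)^4 with its minimum at w = 2.
df = lambda w: 4 * (w - 2) ** 3
d2f = lambda w: 12 * (w - 2) ** 2
w_newton = newton_minimize(df, d2f, 0.0)
w_gd = gradient_minimize(df, 0.0)
assert abs(w_newton - 2) < abs(w_gd - 2)   # Newton gets closer in 10 steps
```

In practice the full Hessian of a BP network is expensive and can be indefinite or singular away from a minimum, which is part of why safeguards such as the paper's dynamic error control are needed.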
Citations: 6