
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks: latest publications

Intracellular mechanisms in neuronal learning: adaptive models
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287218
J. Dayhoff, S. Hameroff, R. Lahoz-Beltra, C. Swenberg
The cytoskeletal intraneuronal structure and some candidate mechanisms for signaling within nerve cells are described. Models were developed for the interaction of the cytoskeleton with cell membranes and synapses, together with an internal signaling model that renders back-error propagation biologically plausible. Orientation-selective units observed in the primate motor cortex may be organized by such internal signaling mechanisms. The impact on sensorimotor systems and learning is discussed. It is concluded that the cytoskeleton's anatomical presence suggests that it plays a potentially key role in neuronal learning. The cytoskeleton could participate in synaptic processes by supporting the synapse and possibly by sending intracellular signals as well. Paradigms for adaptational mechanisms and information processing can be modeled utilizing the cytoskeleton and cytoskeletal signals.
Citations: 7
Helicopter fault detection and classification with neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226865
R.M. Kuczewski, D.R. Eames
The application of neural networks to helicopter drive train fault detection and classification is discussed. A practical approach to the problem is outlined, including preprocessing and network design issues. Two different neural networks are designed, constructed, and demonstrated. The results indicate that a low-resolution fast Fourier transform (FFT) may provide a sufficiently rich feature set for fault detection and classification if combined with a properly structured and controlled neural network. Future directions for this work are discussed, including more data, a longer time window, channel synchronization to the pulse, and additional layers of cross-checking class neurons.
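The feature pipeline described here is simple enough to sketch. The following is a minimal illustration of the general idea, not the authors' implementation: the window, the number of coarse spectral bins, the network sizes, and the synthetic data are all assumptions.

```python
# Minimal sketch of the general idea, not the authors' implementation: coarse
# FFT-magnitude features of a vibration segment feeding a small MLP classifier.
import numpy as np

def fft_features(segment, n_bins=32):
    """Average the windowed FFT magnitude spectrum down to n_bins coarse bins."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    edges = np.linspace(0, len(spectrum), n_bins + 1, dtype=int)
    return np.log1p([spectrum[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

class TinyMLP:
    """One-hidden-layer softmax classifier trained by plain gradient descent."""
    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        z = self.h @ self.W2
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train(self, X, y, lr=0.1, epochs=500):
        Y = np.eye(self.W2.shape[1])[y]
        for _ in range(epochs):
            dz = (self.forward(X) - Y) / len(X)          # softmax cross-entropy gradient
            self.W2 -= lr * self.h.T @ dz
            self.W1 -= lr * X.T @ ((dz @ self.W2.T) * (1 - self.h**2))

# Synthetic stand-in data: "faulty" segments carry an extra spectral line.
rng = np.random.default_rng(0)
t = np.arange(1024)
segments = [np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size) for _ in range(20)] + \
           [np.sin(0.2 * t) + 0.6 * np.sin(0.55 * t) + 0.1 * rng.normal(size=t.size) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
X = np.stack([fft_features(s) for s in segments])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)        # standardize the features
net = TinyMLP(n_in=32, n_hidden=16, n_classes=2)
net.train(X, labels)
print("training accuracy:", (net.forward(X).argmax(axis=1) == labels).mean())
```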
Citations: 9
Joint optimization of classifier and feature space in speech recognition
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227235
G. Kuhn
The author presents a feedforward network which classifies the spoken letter names 'b', 'd', 'e', and 'v' with 88.5% accuracy. For many poorly discriminated training examples, the outputs of this network are unstable or sensitive to perturbations of the values of the input features. This residual sensitivity is exploited by inserting into the network a new first hidden layer with localized receptive fields. The new layer gives the network a few additional degrees of freedom with which to optimize the input feature space for the desired classification. The benefit of further, joint optimization of the classifier and the input features was suggested in an experiment in which recognition accuracy was raised to 89.6%.
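The abstract does not give the exact form of the localized receptive fields; a common reading, sketched below purely as an assumption, is a first hidden layer in which each unit connects only to a small window of neighbouring input features.

```python
# Minimal sketch under assumed details: a first hidden layer whose units each
# connect to a small window of neighbouring input features, giving the network
# extra degrees of freedom to reshape the feature space before classification.
import numpy as np

def local_receptive_layer(x, weights, window=5, stride=1):
    """Apply one localized unit per input window (a 1-D convolution-like layer).

    x       : (n_features,) feature vector of one utterance
    weights : (n_units, window) one weight vector per localized unit
    """
    out = np.empty(len(weights))
    for i, w in enumerate(weights):
        start = i * stride
        out[i] = np.tanh(w @ x[start:start + window])
    return out

# Hypothetical shapes: 20 input features -> 16 localized units with window 5.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
W_local = rng.normal(0, 0.1, size=(16, 5))
hidden = local_receptive_layer(x, W_local)   # would feed the existing classifier layers
print(hidden.shape)                          # (16,)
```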
Citations: 4
Automatic extraction of strokes by quadratic neural nets
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287153
M. Alder, Y. Attikiouzel
The authors present a preliminary exploration of some ideas from syntactic pattern recognition theory and some insights of D.A. Marr (1970). The use of quadratic neural nets for the automatic extraction of strokes is examined. The concrete problem of optical character recognition (OCR) of handwritten characters is considered. That human OCR of cursive script entails both upwriting and downwriting into strokes and presumably other structures is eminently plausible, as an examination of the differences between human and machine OCR makes clear. That this is accomplished by arrays of neurons in the central nervous system is indisputable.
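The abstract names quadratic neural nets without defining the units. One common formulation, shown here only as an illustrative assumption (it may not match the authors' units), is a neuron whose net input is a quadratic form of the input rather than a plain weighted sum.

```python
# Illustrative assumption only: one common form of a "quadratic" unit, whose
# net input is the quadratic form x^T A x + w^T x + b rather than a plain
# weighted sum, letting a single unit respond to oriented, elongated structure.
import numpy as np

def quadratic_unit(x, A, w, b):
    """Quadratic neuron applied to a small vector of local image measurements."""
    return np.tanh(x @ A @ x + w @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)            # e.g. local measurements around one pixel
A = rng.normal(0, 0.1, (4, 4))    # second-order weights
w = rng.normal(0, 0.1, 4)         # first-order weights
print(quadratic_unit(x, A, w, b=0.0))
```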
Citations: 2
Speech recognition using dynamic neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227230
N. M. Botros, S. Premnath
The authors present an algorithm for isolated-word recognition that takes into consideration the duration variability of the different utterances of the same word. The algorithm is based on extracting acoustical features from the speech signal and using them as the input to a sequence of multilayer perceptron neural networks. The networks were implemented as predictors for the speech samples for a certain duration of time. The networks were trained by a combination of the back-propagation and the dynamic time warping (DTW) techniques. The DTW technique was implemented to normalize the duration variability. The networks were trained to recognize the correct words and to reject the wrong words. The training set consisted of ten words, each uttered seven times by three different speakers. The test set consisted of three utterances of each of the ten words. The results show that all these words could be recognized.
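The dynamic time warping component used here to normalize duration variability is standard and can be sketched as follows; the frame features and word list in the example are illustrative, not the authors' acoustic features or vocabulary.

```python
# Standard dynamic time warping sketch; the frame features and the word list
# below are illustrative, not the authors' acoustic features or vocabulary.
import numpy as np

def dtw_cost(a, b):
    """Cumulative alignment cost between two (n_frames, n_features) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])      # local frame distance
            D[i, j] = d + min(D[i - 1, j],               # insertion
                              D[i, j - 1],               # deletion
                              D[i - 1, j - 1])           # match
    return D[n, m]

# Hypothetical usage: score a test utterance against each word template and
# pick the word whose warped distance is smallest.
rng = np.random.default_rng(0)
test = rng.normal(size=(40, 12))                         # 40 frames, 12 features
templates = {"yes": rng.normal(size=(35, 12)), "no": rng.normal(size=(50, 12))}
print(min(templates, key=lambda word: dtw_cost(test, templates[word])))
```

Because the cumulative cost is taken over the best warping path, utterances of different lengths can be compared directly, which is the duration normalization the abstract refers to.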
Citations: 10
Method of deciding ANNs parameters for pattern recognition
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227295
S. Watanabe, N. Iijima, M. Sone, H. Mitsui, Y. Yoshida
A method for tuning artificial neural network (ANN) parameters for pattern recognition is described. A pattern recognition experiment carried out for phoneme recognition of English pure vowels in ANNs is presented. The significant parameters that seriously affect the recognition rate are identified. To determine the influence of these parameters on the recognition rate, a tuning method is given. The tuning method is independent of the recognition rate.
Citations: 3
Plastic network for predicting the Mackey-Glass time series
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226866
W. Hsu, M. F. Tenorio
A novel plastic network is introduced as a tool for predicting chaotic time series. When the goal is prediction accuracy for chaotic time series, local-in-time and local-in-state-space plastic networks can outperform the traditional global methods. The key ingredient of a plastic network is a model selection criterion that allows it to self-organize by choosing among a collection of candidate models. Among the advantages of the plastic network for the prediction of (chaotic) time series are the simplicity of the models used, accuracy, relatively small data requirements, online usage, and ease of understanding of the algorithms. When reporting prediction results on chaotic time series, a careful analysis of the data is recommended. Specifically for the Mackey-Glass time series, the authors find that different forward lead sizes can result in different prediction accuracy.
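The abstract does not spell out the plastic network's selection criterion; the sketch below only illustrates the local-in-state-space idea on an Euler-integrated Mackey-Glass series, choosing per query among candidate neighbourhood sizes with a crude stand-in criterion. All specifics are assumptions.

```python
# Hedged illustration only, not the authors' plastic-network criterion: a
# local-in-state-space predictor that, for each query, chooses among candidate
# local models (here, different neighbourhood sizes for a nearest-neighbour
# average) using a crude stand-in selection criterion.
import numpy as np

def mackey_glass(n=2000, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0):
    """Euler-integrated Mackey-Glass series with the usual benchmark parameters."""
    x = np.full(n + tau, 1.2)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** p) - gamma * x[t])
    return x[tau:]

def local_predict(history, query, lead=6, candidate_ks=(4, 8, 16), dim=4):
    """Delay-embed the history and predict `lead` steps past the query window."""
    emb = np.stack([history[i:i + dim] for i in range(len(history) - dim - lead)])
    targets = history[dim + lead - 1:len(history) - 1]
    dists = np.linalg.norm(emb - query, axis=1)
    best = None
    for k in candidate_ks:                        # competing candidate models
        idx = np.argsort(dists)[:k]
        pred = targets[idx].mean()
        score = targets[idx].var()                # crude stand-in selection criterion
        if best is None or score < best[0]:
            best = (score, pred)
    return best[1]

series = mackey_glass()
history, query = series[:1000], series[996:1000]          # query = last dim=4 samples
print(local_predict(history, query), series[1005])        # prediction vs. value 6 steps ahead
```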
Citations: 5
A neural computational scheme for extracting optical flow from the Gabor phase differences of successive images
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227303
Tien-Ren Tsao, V. C. Chen
The authors propose a neurobiologically plausible representation of the Gabor phase information, and present a neural computation scheme for extracting visual motion information from the Gabor phase information. The scheme can compute visual motion accurately from a scene with illumination changes, while other neural schemes for optical flow must assume stable brightness. The computational tests on synthetic and natural image data showed that the scheme was robust to the natural scenes. An architecture is presented of a neural network system based on the Gabor phase representation of visual motion.
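The core phase-difference idea can be illustrated in one dimension (the paper's 2-D network architecture is not reproduced here): displacement is roughly the temporal Gabor-phase difference divided by the spatial derivative of phase, and a global brightness scaling changes response magnitude but not phase, which is why such schemes tolerate illumination changes.

```python
# One-dimensional illustration of the phase-difference idea (the paper's 2-D
# network architecture is not reproduced): displacement is estimated from the
# temporal Gabor-phase difference divided by the spatial derivative of phase.
import numpy as np

def gabor_response(signal, freq=0.1, sigma=8.0):
    """Complex Gabor filtering of a 1-D signal."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * t)
    return np.convolve(signal, kernel, mode="same")

# Two "frames": the same pattern shifted by a known amount.
x = np.arange(256)
true_shift = 3.0
frame1 = np.sin(2 * np.pi * 0.1 * x) + 0.3 * np.sin(2 * np.pi * 0.03 * x)
frame2 = np.sin(2 * np.pi * 0.1 * (x - true_shift)) + 0.3 * np.sin(2 * np.pi * 0.03 * (x - true_shift))

r1, r2 = gabor_response(frame1), gabor_response(frame2)
i = 128                                                    # estimate motion at the centre
dphi_t = np.angle(r2[i] * np.conj(r1[i]))                  # temporal phase difference
dphi_x = np.angle(r1[i + 1] * np.conj(r1[i - 1])) / 2.0    # spatial phase derivative
print(-dphi_t / dphi_x)                                    # approximately true_shift
```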
Citations: 1
Wavelets as basis functions for localized learning in a multi-resolution hierarchy
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227017
B. R. Bakshi, G. Stephanopoulos
An artificial neural network with one hidden layer of nodes, whose basis functions are drawn from a family of orthonormal wavelets, is developed. Wavelet networks, or wave-nets, are based on firm theoretical foundations of functional analysis. The good localization characteristics of the basis functions, in both the input and frequency domains, allow hierarchical, multi-resolution learning of input-output maps from experimental data. Wave-nets allow explicit estimation of global and local prediction error-bounds, and thus lend themselves to a rigorous and transparent design of the network. Computational complexity arguments prove that the training and adaptation efficiency of wave-nets is at least an order of magnitude better than that of other networks. The mathematical framework for the development of wave-nets is presented and various aspects of their practical implementation are discussed. The problem of predicting a chaotic time series is solved as an illustrative example.
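As a rough illustration of a wavelet-basis network (not the paper's wave-net construction, which uses an orthonormal family and hierarchical error bounds), the sketch below places Mexican-hat wavelets at a few dyadic scales in the hidden layer and fits the output weights by least squares; the target function and data sizes are invented for the example.

```python
# Rough sketch of a wavelet-basis network, not the paper's wave-net (which uses
# an orthonormal family): Mexican-hat wavelets at a few dyadic scales form the
# hidden layer and the output weights are fitted by least squares.
import numpy as np

def mexican_hat(u):
    """Second-derivative-of-Gaussian mother wavelet."""
    return (1 - u**2) * np.exp(-u**2 / 2)

def design_matrix(x, levels=4):
    """Columns = wavelets psi((x - b) / a) at dyadic scales a = 2**-j on [0, 1]."""
    cols = [np.ones_like(x)]                      # coarse constant term
    for j in range(levels):
        a = 2.0 ** (-j)
        for b in np.arange(0.0, 1.0, a):          # translations at this scale
            cols.append(mexican_hat((x - b) / a))
    return np.stack(cols, axis=1)

# Illustrative 1-D regression problem on [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(6 * np.pi * x) * np.exp(-x) + 0.05 * rng.normal(size=200)

Phi = design_matrix(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # hidden-to-output weights
x_test = np.linspace(0, 1, 5)
print(design_matrix(x_test) @ w)                  # network predictions at test points
```

Each additional level halves the wavelet width and doubles the number of translations, which is what gives the hierarchical, multi-resolution character described in the abstract.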
Citations: 26
Fuzzy ARTMAP: an adaptive resonance architecture for incremental learning of analog maps
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227156
G. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, D. B. Rosen
Fuzzy ARTMAP achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks. Fuzzy ARTMAP realizes a new minimax learning rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or hidden units, to meet accuracy criteria. Improved prediction is achieved by training the system several times using different orderings of the input set, and then voting. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Simulations illustrate fuzzy ARTMAP performance compared with benchmark back-propagation and genetic algorithm systems.
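A minimal sketch of the fuzzy ART module underlying fuzzy ARTMAP is given below; the map field linking the two ART modules and the match-tracking rule are omitted, and the parameter values are illustrative.

```python
# Minimal sketch of the fuzzy ART module at the heart of fuzzy ARTMAP; the map
# field linking the two ART modules and the match-tracking rule are omitted,
# and the parameter values are illustrative. Fuzzy AND is the component-wise
# minimum; the vigilance test decides whether a category may learn the input.
import numpy as np

def complement_code(a):
    """ARTMAP-style complement coding: concatenate a with 1 - a."""
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """Present one complement-coded input; return updated weights and the category."""
    choice = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(choice)[::-1]:            # search categories by choice value
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:                          # vigilance test passed
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return weights, j
    weights.append(I.copy())                      # nothing matched: recruit a new category
    return weights, len(weights) - 1

weights = [np.ones(4)]                            # one uncommitted category
for a in ([0.2, 0.9], [0.25, 0.85], [0.8, 0.1]):
    weights, j = fuzzy_art_step(complement_code(np.array(a)), weights)
    print("input", a, "-> category", int(j))
```

Raising the vigilance rho makes the match test stricter, so more and finer categories are created; this is the parameter that match tracking adjusts in the full ARTMAP system to correct predictive errors.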
Citations: 49