
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339): Latest Publications

Using RBF neural networks and a fuzzy logic controller to stabilize wood pulp freeness
J. Bard, J. Patton, M. Musavi
The quality of paper produced in a papermaking process is largely dependent on the properties of the wood pulp used. One important property is pulp freeness. Ideally, a constant, predetermined level of freeness is desired to achieve the highest quality of paper possible. The focus of this paper is on developing a system to control the wood pulp freeness. A radial basis function (RBF) artificial neural network was used to model the freeness and a fuzzy logic controller was used to control the input parameters to maintain a desired level of freeness. Ideally, the controller will reduce pulp freeness fluctuations in order to improve overall paper sheet quality and production.
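As a rough illustration of the control scheme described above (the paper does not give its network size, membership functions, or process variables, so every name and constant below is invented), the following sketch pairs a small RBF freeness model with a three-rule fuzzy correction of a single hypothetical input. The RBF stands in for the plant in the closed loop; in the paper the controller would act on the real process while the network predicts freeness.

```python
# Hypothetical sketch: an RBF freeness model plus a fuzzy correction of one process
# input. All variable names, constants and rules are illustrative, not from the paper.
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations for scalar inputs x (shape [n])."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Fit an RBF model freeness = f(refiner_load) on synthetic data.
rng = np.random.default_rng(0)
load = rng.uniform(0.0, 1.0, 200)                         # hypothetical input, normalized
freeness = 600.0 - 250.0 * load + rng.normal(0, 5, 200)   # synthetic freeness response
centers = np.linspace(0.0, 1.0, 10)
w, *_ = np.linalg.lstsq(rbf_features(load, centers, 0.12), freeness, rcond=None)

def predict_freeness(u):
    return (rbf_features(np.array([u]), centers, 0.12) @ w)[0]

def tri(x, a, b, c):
    """Triangular fuzzy membership function."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_adjust(error):
    """Map the freeness error (setpoint - measured) to a small change in load."""
    neg, zero, pos = tri(error, -200, -100, 0), tri(error, -25, 0, 25), tri(error, 0, 100, 200)
    actions = np.array([+0.05, 0.0, -0.05])   # higher load lowers freeness in this toy model
    weights = np.array([neg, zero, pos])
    return float(actions @ weights / (weights.sum() + 1e-9))

# Closed loop: drive the predicted freeness toward a setpoint.
setpoint, u = 450.0, 0.2
for _ in range(20):
    y = predict_freeness(u)                   # the model stands in for the real process
    u = float(np.clip(u + fuzzy_adjust(setpoint - y), 0.0, 1.0))
print(f"load {u:.3f}, predicted freeness {predict_freeness(u):.1f}")
```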
{"title":"Using RBF neural networks and a fuzzy logic controller to stabilize wood pulp freeness","authors":"J. Bard, J. Patton, M. Musavi","doi":"10.1109/IJCNN.1999.830848","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830848","url":null,"abstract":"The quality of paper produced in a papermaking process is largely dependent on the properties of the wood pulp used. One important property is pulp freeness. Ideally, a constant, predetermined level of freeness is desired to achieve the highest quality of paper possible. The focus of this paper is on developing a system to control the wood pulp freeness. A radial basis function (RBF) artificial neural network was used to model the freeness and a fuzzy logic controller was used to control the input parameters to maintain a desired level of freeness. Ideally, the controller will reduce pulp freeness fluctuations in order to improve overall paper sheet quality and production.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133459266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Analysis of autoassociative mapping neural networks
S. Ikbal, Hemant Misra, B. Yegnanarayana
In this paper we analyse the mapping behavior of an autoassociative neural network (AANN). The mapping in an AANN is achieved by a dimension reduction followed by a dimension expansion. One of the major results of the analysis is that the network performs better autoassociation as its size increases. This is because a network of a given size can deal with only a certain level of nonlinearity. The performance of autoassociative mapping is illustrated with 2D examples. We also show the utility of the mapping feature of an AANN for speaker verification.
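A minimal sketch of the reduction-followed-by-expansion structure (the layer sizes, data and training schedule below are invented, not the authors'): a 2-8-1-8-2 autoassociator trained with plain gradient descent to reconstruct 2-D points lying near a curve, in the spirit of the paper's 2D examples. The 1-unit middle layer plays the role of the reduced representation in this sketch.

```python
# Illustrative AANN sketch (not the authors' exact architecture): a 2-8-1-8-2
# autoassociator trained to reconstruct 2-D points that lie near a 1-D curve.
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, (500, 1))
X = np.hstack([t, t ** 2]) + 0.02 * rng.normal(size=(500, 2))   # data near a parabola

def init(n_in, n_out):
    return rng.normal(0, 1.0 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out)

W1, b1 = init(2, 8)    # compression (tanh)
W2, b2 = init(8, 1)    # 1-unit bottleneck (linear)
W3, b3 = init(1, 8)    # expansion (tanh)
W4, b4 = init(8, 2)    # linear reconstruction

lr = 0.05
for epoch in range(2000):
    H1 = np.tanh(X @ W1 + b1)
    Z  = H1 @ W2 + b2                 # reduced 1-D representation
    H2 = np.tanh(Z @ W3 + b3)
    Y  = H2 @ W4 + b4                 # reconstruction of the input
    err = Y - X
    # Backpropagation of the mean squared reconstruction error.
    dY  = 2 * err / len(X)
    dH2 = (dY @ W4.T) * (1 - H2 ** 2)
    dZ  = dH2 @ W3.T
    dH1 = (dZ @ W2.T) * (1 - H1 ** 2)
    W4 -= lr * H2.T @ dY;  b4 -= lr * dY.sum(0)
    W3 -= lr * Z.T @ dH2;  b3 -= lr * dH2.sum(0)
    W2 -= lr * H1.T @ dZ;  b2 -= lr * dZ.sum(0)
    W1 -= lr * X.T @ dH1;  b1 -= lr * dH1.sum(0)

print("mean reconstruction error:", float(np.mean(err ** 2)))
```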
{"title":"Analysis of autoassociative mapping neural networks","authors":"S. Ikbal, Hemant Misra, B. Yegnanarayana","doi":"10.1109/IJCNN.1999.836037","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836037","url":null,"abstract":"In this paper we analyse the mapping behavior of an autoassociative neural network (AANN). The mapping in an AANN is achieved by using a dimension reduction followed by a dimension expansion. One of the major results of the analysis is that, the network performs better autoassociation as the size increases. This is because, a network of a given size can deal with only a certain level of nonlinearity. Performance of autoassociative mapping is illustrated with 2D examples. We have shown the utility of the mapping feature of an AANN for speaker verification.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127850949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
Robust regularized learning using distributed approximating functional networks
Zhuoer Shi, Desheng Zhang, D. Kouri, D. Hoffman
We present novel polynomial functional neural networks using distributed approximating functional (DAF) wavelets (infinitely smooth filters in both the time and frequency regimes) for signal estimation and surface fitting. The remarkable advantage of these polynomial nets is that the functional-space smoothness is identical to the smoothness of the state space (consisting of the weighting vectors). The constrained cost energy function, obtained by optimal regularization programming, endows the networks with a natural time-varying filtering feature. Theoretical analysis and an application show that the approach is extremely stable and efficient for signal processing and curve/surface fitting.
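The distributed approximating functionals themselves are specific smooth filters that are not reproduced here; the sketch below keeps only the regularized-cost idea (data term plus penalty) using ordinary Gaussian basis functions as a stand-in, so it should be read as an assumption-laden analogy rather than the authors' DAF construction.

```python
# Generic regularized basis-function fit; a stand-in for the paper's DAF-wavelet
# networks that keeps only the "constrained cost = data term + regularizer" idea.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)   # noisy signal to estimate

centers = np.linspace(0, 1, 30)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))  # Gaussian basis

lam = 1e-2   # regularization weight in  J(w) = ||Phi w - y||^2 + lam ||w||^2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

y_hat = Phi @ w
print("residual RMS:", float(np.sqrt(np.mean((y_hat - y) ** 2))))
```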
{"title":"Robust regularized learning using distributed approximating functional networks","authors":"Zhuoer Shi, Desheng Zhang, D. Kouri, D. Hoffman","doi":"10.1109/IJCNN.1999.836169","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836169","url":null,"abstract":"We present a novel polynomial functional neural networks using distributed approximating functional (DAF) wavelets (infinitely smooth filters in both time and frequency regimes), for signal estimation and surface fitting. The remarkable advantage of these polynomial nets is that the functional space smoothness is identical to the state space smoothness (consisting of the weighting vectors). The constrained cost energy function using optimal regularization programming endows the networks with a natural time-varying filtering feature. Theoretical analysis and an application show that the approach is extremely stable and efficient for signal processing and curve/surface fitting.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127064061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Intelligent reconfigurable control of robot manipulators
J. Chung, S. Velinsky
One approach towards improving the reliability of high-performance robotic systems is to allow for the automatic reconfiguration of the robot's control system to accommodate actuator failure and/or damage. The new concept of the extended plant and its identification in closed loop is introduced for developing a reconfigurable robot manipulator controller. It is made possible through the use of artificial neural networks. A simulation study demonstrates the effectiveness of the developed control algorithm.
{"title":"Intelligent reconfigurable control of robot manipulators","authors":"J. Chung, S. Velinsky","doi":"10.1109/IJCNN.1999.832690","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832690","url":null,"abstract":"One approach towards improving the reliability of high-performance robotic systems is to allow for the automatic reconfiguration of the robot's control system to accommodate actuator failure and/or damage. The new concept of the extended plant and its identification in closed loop is introduced for developing a reconfigurable robot manipulator controller. It is made possible through the use of artificial neural networks. A simulation study demonstrates the effectiveness of the developed control algorithm.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115582471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Biophysical basis of neural memory
A. Radchenko
The model of the neural membrane describes the interaction of gating charges (GC), their conformational mobility and their immobilization during excitation. The volt-conformational and current-voltage characteristics (VCC and CVC) of the membrane are derived analytically. Inactivation is shown to change these characteristics during excitation; this is caused by GC immobilization rather than the reverse. The VCC and CVC have hysteretic properties, owing to which the electroexcitable units of the somato-dendritic (SD) membrane form a memory medium well suited to recording, retaining and reconstructing afferent information. GC immobilization underlies the consolidation of memory traces. A theory of quasi-holographic associative memory is constructed in which the role of the memory medium is played by synaptically addressed units of the electroexcitable mosaics of SD membranes. Small changes of membrane potential (slow potentials) select the mode of such a memory: if the working point on the VCC is displaced inside the hysteretic loop, the neuron is in writing mode; if outside, it is in reading mode. The current distribution of slow potentials divides the neuron population into writing, reading and intermediate sets (short-term memory), which remain in a relative dynamic (metabolism-dependent) balance.
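Purely as a toy illustration of the write/read distinction described above (thresholds, state rules and scales are invented), the following sketch models a single hysteretic unit whose mode depends on whether the slow bias potential places the working point inside the hysteresis loop.

```python
# Toy sketch of a single hysteretic memory unit; thresholds and rules are invented,
# only the "inside the loop -> write, outside -> read" distinction follows the abstract.
def mode(bias, lower=-0.2, upper=0.2):
    """Working point inside the hysteresis loop -> 'write'; outside -> 'read'."""
    return "write" if lower < bias < upper else "read"

class HystereticUnit:
    def __init__(self, state=-1):
        self.state = state                    # stored binary trace (+1 / -1)

    def drive(self, bias, pulse, lower=-0.2, upper=0.2):
        if mode(bias, lower, upper) == "write":
            total = bias + pulse              # afferent pulse can flip the trace
            if total > upper:
                self.state = +1
            elif total < lower:
                self.state = -1               # inside the band the old trace persists
        return self.state                     # read mode: the stored trace is reproduced

u = HystereticUnit()
print(mode(0.0), u.drive(bias=0.0, pulse=0.5))   # write mode: pulse sets the trace to +1
print(mode(0.5), u.drive(bias=0.5, pulse=-0.9))  # read mode: trace unchanged, just read out
```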
{"title":"Biophysical basis of neural memory","authors":"A. Radchenko","doi":"10.1109/IJCNN.1999.831448","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831448","url":null,"abstract":"The model of neural membrane describes interaction of gating charges (GC), their conformational mobility and immobilization during excitation. Volt-conformational and current-voltage characteristic (VCC and CVC) of the membrane are analytically derived. Inactivation is shown to change these characteristics during excitation; this is caused by GC immobilization, instead of the contrary. VCC and CVC have hysteretic properties. Due to them electroexcitable units of the somato-dendritic (SD) membrane arrange a memory medium well adapted to record, keep and reconstruct afferent information. GC immobilization underlies consolidation of memory traces. The theory of quasi-holographic associative memory is constructed where role of memory medium is carried out by synaptic addressed units of electroexcitable mosaics of SD-membranes. Small changes of membrane potential (slow potentials) select modes of such memory: if the working point on VCC is displaced inside the hysteretic loop, then the neuron is in writing mode, if outside then in a reading mode. Current distribution of slow potentials shares neuron population on writing, reading and intermediate sets (short-term memory), they are in relative dynamic (metabolic dependent) balance.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115590790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-source neural networks for speech recognition
R. Gemello, D. Albesano, F. Mana
In speech recognition, the most widely used technology (hidden Markov models) is constrained by the condition of stochastic independence of its input features. That limits the simultaneous use of features derived from the speech signal with different processing algorithms. In contrast, artificial neural networks (ANNs) are capable of incorporating multiple heterogeneous input features, which do not need to be treated as independent, finding the optimal combination of these features for classification. The purpose of this work is to exploit this characteristic of ANNs to improve speech recognition accuracy through the combined use of input features coming from different sources (different feature extraction algorithms). We integrate two input sources: the Mel-based cepstral coefficients (MFCC) derived from the FFT and the RASTA-PLP cepstral coefficients. The results show that this integration leads to an error reduction of 26% on a telephone-quality test set.
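A structural sketch of the multi-source idea only: two per-frame feature streams (random placeholders below stand in for MFCC and RASTA-PLP; the real extraction is not shown) are concatenated into one input vector for a single frame classifier, so training can weight the sources jointly. Sizes, labels and the single softmax layer are invented for the example, so the printed accuracy is meaningless.

```python
# Sketch of the multi-source idea: concatenate two per-frame feature streams and
# feed the joint vector to one frame classifier, letting training weight the sources.
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_mfcc, n_plp, n_phones = 1000, 13, 13, 10
mfcc_feats = rng.normal(size=(n_frames, n_mfcc))     # placeholder for FFT-based MFCCs
plp_feats = rng.normal(size=(n_frames, n_plp))       # placeholder for RASTA-PLP cepstra
labels = rng.integers(0, n_phones, n_frames)

X = np.hstack([mfcc_feats, plp_feats])               # the multi-source input vector

# One softmax layer stands in for the full network, trained by gradient descent.
W = np.zeros((n_mfcc + n_plp, n_phones))
b = np.zeros(n_phones)
Y = np.eye(n_phones)[labels]
for _ in range(200):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - Y) / n_frames                        # cross-entropy gradient
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(0)
print("training accuracy (placeholder features):", float((p.argmax(1) == labels).mean()))
```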
{"title":"Multi-source neural networks for speech recognition","authors":"R. Gemello, D. Albesano, F. Mana","doi":"10.1109/IJCNN.1999.835942","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.835942","url":null,"abstract":"In speech recognition the most diffused technology (hidden Markov models) is constrained by the condition of stochastic independence of its input features. That limits the simultaneous use of features derived from the speech signal with different processing algorithms. On the contrary artificial neural networks (ANN) are capable of incorporating multiple heterogeneous input features, which do not need to be treated as independent, finding the optimal combination of these features for classification. The purpose of this work is the exploitation of this characteristic of ANNs to improve the speech recognition accuracy through the combined use of input features coming from different sources (different feature extraction algorithms). We integrate two input sources: the Mel based cepstral coefficients (MFCC) derived from FFT and the RASTA-PLP cepstral coefficients. The results show that this integration leads to an error reduction of 26% on a telephone quality test set.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123899650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
A neural network controller based on the rule of bang-bang control
Chungyong Tsai, Chih-Chi Chang
Applying neural networks or fuzzy systems to the field of optimal control encounters the difficulty of locating adequate samples that can be used to train the neural networks or modify the fuzzy rules such that the optimal control value for a given state can be produced. Instead of an exhaustive search, this work presents a simple method based on the rule of bang-bang control to locate the training samples for time optimal control. Although the samples obtained by the proposed method can be learned by multilayer perceptrons and radial basis networks, a neural network deemed appropriate for learning these samples is proposed as well. Simulation results demonstrate the effectiveness of the proposed method.
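The paper does not name its plant, so as an assumed illustration the sketch below uses the classical double integrator, for which the bang-bang rule is the well-known switching-curve condition; each sampled state is labelled with its time-optimal control, giving the kind of (state, control) training pairs the method collects before fitting a network.

```python
# Assumed illustration (the paper does not name its plant): time-optimal bang-bang
# samples for a double integrator x1' = x2, x2' = u, |u| <= 1. The classical
# switching-curve rule labels each sampled state with its optimal control +-1.
import numpy as np

rng = np.random.default_rng(4)

def bang_bang(x1, x2):
    """Time-optimal control for the double integrator (drive the state to the origin)."""
    s = x1 + 0.5 * x2 * abs(x2)          # switching function: s = 0 on the switching curve
    if abs(s) > 1e-9:
        return -np.sign(s)
    return -np.sign(x2) if x2 != 0 else 0.0

states = rng.uniform(-2, 2, (500, 2))                     # sampled plant states
controls = np.array([bang_bang(x1, x2) for x1, x2 in states])
print("sample:", states[0], "->", controls[0])
# 'states' and 'controls' would then train an MLP or RBF net to reproduce the rule.
```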
{"title":"A neural network controller based on the rule of bang-bang control","authors":"Chungyong Tsai, Chih-Chi Chang","doi":"10.1109/IJCNN.1999.833412","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.833412","url":null,"abstract":"Applying neural networks or fuzzy systems to the field of optimal control encounters the difficulty of locating adequate samples that can be used to train the neural networks or modify the fuzzy rules such that the optimal control value for a given state can be produced. Instead of an exhaustive search, this work presents a simple method based on the rule of bang-bang control to locate the training samples for time optimal control. Although the samples obtained by the proposed method can be learned by multilayer perceptrons and radial basis networks, a neural network deemed appropriate for learning these samples is proposed as well. Simulation results demonstrate the effectiveness of the proposed method.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124226923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multilayer perceptron based dimensionality reduction
R. Lotlikar, R. Kothari
Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional manifold and is typically used for visualization or as a preprocessing step in classification applications. From a classification viewpoint, the rate of increase of the Bayes error serves as an ideal choice for measuring the loss of information relevant to classification. Motivated by that, we present a multilayer perceptron which produces the lower-dimensional representation as its output. The multilayer perceptron is trained so as to minimize the classification error in the subspace. It thus differs from the autoassociative-like multilayer perceptrons which have been proposed and used for dimensionality reduction. We examine the performance of the proposed method of dimensionality reduction and the effect that varying the parameters has on the algorithm.
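A minimal numpy sketch of the idea (data, layer sizes and the learning rate are invented): the hidden layer has the target dimension and feeds a softmax classifier, and both are trained on the classification loss, so the learned 2-D projection is shaped by class separability rather than by reconstruction error. Swapping the cross-entropy objective for a reconstruction objective would give the autoassociative variant the abstract contrasts with.

```python
# Minimal sketch of classification-driven dimensionality reduction: a hidden layer
# of the target dimension feeds a softmax classifier, trained on classification loss.
# Data, sizes, and learning rate are invented for the example.
import numpy as np

rng = np.random.default_rng(5)
n_per, d_in, d_red, n_cls = 200, 5, 2, 3
means = rng.normal(0, 3, (n_cls, d_in))
X = np.vstack([rng.normal(means[c], 1.0, (n_per, d_in)) for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)
Y = np.eye(n_cls)[y]

W1 = rng.normal(0, 0.1, (d_in, d_red)); b1 = np.zeros(d_red)   # projection to 2-D
W2 = rng.normal(0, 0.1, (d_red, n_cls)); b2 = np.zeros(n_cls)  # classifier on top
lr = 0.5
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                  # the reduced representation
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
    G = (P - Y) / len(X)                      # softmax cross-entropy gradient
    dH = (G @ W2.T) * (1 - H ** 2)
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print("accuracy in the 2-D subspace:", float((P.argmax(1) == y).mean()))
```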
{"title":"Multilayer perceptron based dimensionality reduction","authors":"R. Lotlikar, R. Kothari","doi":"10.1109/IJCNN.1999.832629","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832629","url":null,"abstract":"Dimensionality reduction is the process of mapping high dimensional patterns to a lower dimensional manifold and is typically used for visualization or as a preprocessing step in classification applications. From a classification viewpoint, the rate of increase of Bayes error serves as an ideal choice to measure the loss of information relevant to classification. Motivated by that, we present a multilayer perceptron which produces as output the lower dimensional representation. The multilayer perceptron is trained so as to minimize the classification error in the subspace. It thus differs from autoassociative like multilayer perceptrons which have been proposed and used for dimensionality reduction. We examine the performance of the proposed method of dimensionality reduction and the effect that varying the parameters have on the algorithm.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Fingerprint recognition using wavelet transform and probabilistic neural network
S. Lee, B. Nam
In fingerprint recognition, preprocessing such as smoothing, binarization and thinning is needed, after which the fingerprint minutiae features are extracted. Some fingerprint identification algorithms (such as those using the FFT) may require so much computation as to be impractical. A wavelet-based algorithm may be the key to a low-cost fingerprint identification system that can operate on a small computer. We present a fast and effective method to identify fingerprints.
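The paper's exact wavelet and preprocessing pipeline is not given here, so the sketch below is only a stand-in: a one-level 2-D Haar transform supplies sub-band energy features and a probabilistic neural network in its standard Parzen-window form assigns the class; the input images are random placeholders rather than fingerprints.

```python
# Sketch only: one-level 2-D Haar sub-band energies feed a probabilistic neural
# network (Gaussian Parzen-window classifier, the standard PNN form). Images are
# random stand-ins; the paper's preprocessing and minutiae handling are not shown.
import numpy as np

def haar2d_energies(img):
    """One-level 2-D Haar transform; return the energy of the LL, LH, HL, HH bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return np.array([np.sum(b ** 2) for b in (ll, lh, hl, hh)])

def pnn_predict(x, train_feats, train_labels, sigma=1.0):
    """PNN: per-class sum of Gaussian kernels around the stored training patterns."""
    scores = {}
    for c in np.unique(train_labels):
        d2 = np.sum((train_feats[train_labels == c] - x) ** 2, axis=1)
        scores[c] = np.sum(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Random 16x16 "fingerprint blocks" from two synthetic classes, as placeholders.
rng = np.random.default_rng(6)
imgs = np.concatenate([rng.normal(0, 1, (20, 16, 16)), rng.normal(0, 2, (20, 16, 16))])
labels = np.array([0] * 20 + [1] * 20)
feats = np.array([haar2d_energies(im) for im in imgs])
feats /= feats.std(axis=0)                               # simple feature scaling
print("predicted class:", pnn_predict(feats[0], feats[1:], labels[1:], sigma=0.5))
```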
{"title":"Fingerprint recognition using wavelet transform and probabilistic neural network","authors":"S. Lee, B. Nam","doi":"10.1109/IJCNN.1999.836183","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836183","url":null,"abstract":"In the recognition of fingerprint, preprocessing such as smoothing, binarization and thinning is needed. Then fingerprint minutiae feature is extracted. Some fingerprint identification algorithm (such as using FFT etc.) may require so much computation as to be impractical. Wavelet based algorithm may be the key to making a low cost fingerprint identification system that would operate on a small computer. We present a fast and effective method to identify fingerprint.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124452834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs
D. Phatak
It is demonstrated that fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension are inter-related attributes. It is well known that the generalization error, if plotted as a function of the VC dimension h, exhibits a well-defined minimum corresponding to an optimal value of h, say h_opt. We show that if the VC dimension h of an ANN satisfies h <= h_opt (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h > h_opt (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared towards improving fault tolerance can also lead to better generalization, and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.
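For reference, one common form of the VC generalization bound (quoted from standard VC theory, not necessarily the exact expression this paper uses) makes the trade-off explicit: the empirical term typically falls as the capacity h grows while the confidence term rises with h, which is what produces the minimum at h_opt. With probability at least 1 - eta over N training samples,

```latex
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  \;+\; \sqrt{\frac{h\left(\ln\tfrac{2N}{h} + 1\right) \;-\; \ln\tfrac{\eta}{4}}{N}}
```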
{"title":"Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs","authors":"D. Phatak","doi":"10.1109/IJCNN.1999.831587","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831587","url":null,"abstract":"It is demonstrated that fault tolerance, generalization and the Vapnik-Chertonenkis (VC) dimension are inter-related attributes. It is well known that the generalization error if plotted as a function of the VC dimension h, exhibits a well defined minimum corresponding to an optimal value of h, say h/sub opt/. We show that if the VC dimension h of an ANN satisfies h/spl les/h/sub opt/ (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h>h/sub opt/ (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared towards improving the fault tolerance can also lead to better generalization and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114572249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18