PCA and Gaussian noise in MLP neural network training improve generalization in problems with small and unbalanced data sets

Icamaan B. Viegas da Silva, P. Adeodato
DOI: 10.1109/IJCNN.2011.6033567
Journal: The 2011 International Joint Conference on Neural Networks
Publication date: 2011-10-03
Citations: 22

Abstract

Machine learning approaches have been successfully applied for automatic decision support in several domains. The quality of these systems, however, degrades severely in classification problems with small and unbalanced data sets for knowledge acquisition. Inherent to several real-world problems, data sets with these characteristics are the reality that learning algorithms must tackle, but the small amount of data limits the classifiers' generalization power, while the imbalance in class distribution biases the classifiers towards the larger classes. Previous work addressed these data constraints by adding Gaussian noise to the input patterns' variables during the iterative training of a MultiLayer Perceptron (MLP) neural network (NN). This paper improves the quality of such a classifier by decorrelating the input variables via a Principal Component Analysis (PCA) transformation of the original input space before applying additive Gaussian noise to each transformed variable of each input pattern. The PCA transformation prevents the conflicting effect of adding decorrelated noise to correlated variables, an effect that grows with the noise level. Three public data sets from a well-known benchmark (Proben1) were used to validate the proposed approach. Experimental results indicate that the proposed methodology improves on the previous approach and is statistically better than the traditional training method (95% confidence) in further experimental set-ups.
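The core pipeline the abstract describes — decorrelate the inputs with PCA, then inject fresh additive Gaussian noise into the transformed variables at every training iteration — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy one-hidden-layer MLP, the hyperparameters, and the function names are all assumptions made for the example.

```python
import numpy as np

def pca_transform(X):
    # Decorrelate inputs: rotate centered data onto the eigenvectors
    # of its covariance matrix (principal components).
    mu = X.mean(axis=0)
    Xc = X - mu
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]          # components by decreasing variance
    V = vecs[:, order]
    return Xc @ V, mu, V

def train_mlp_with_noise(X, y, hidden=8, epochs=300, lr=0.1, sigma=0.3, seed=0):
    # Toy 1-hidden-layer MLP (tanh hidden, sigmoid output, cross-entropy).
    # Fresh Gaussian noise is added to the (already decorrelated) inputs
    # at every epoch, as in the noise-injection training scheme.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        Xn = X + rng.normal(0, sigma, X.shape)      # additive Gaussian noise
        h = np.tanh(Xn @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        g_out = (out - y) / n                       # sigmoid + cross-entropy grad
        g_h = (g_out @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
        W2 -= lr * (h.T @ g_out);  b2 -= lr * g_out.sum(axis=0)
        W1 -= lr * (Xn.T @ g_h);   b1 -= lr * g_h.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2))) > 0.5).astype(int).ravel()
```

Because the PCA rotation diagonalizes the sample covariance, the isotropic noise added afterwards no longer conflicts with correlations between input variables — which is the effect the paper argues grows with the noise level.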