Title: An analysis of underfitting in MLP networks
Authors: S. Narayan, G. Tagliarini
DOI: 10.1109/IJCNN.2005.1555986
Published in: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.
Publication date: 2005-12-27
Citations: 15
Abstract
The generalization ability of an MLP network has been shown to be related to both the number and magnitudes of the network weights. Thus, there exists a tension between employing networks with few weights that have relatively large magnitudes, and networks with a greater number of weights with relatively small magnitudes. The analysis presented in this paper indicates that large magnitudes for network weights potentially increase the propensity of a network to interpolate poorly. Experimental results indicate that when bounds are imposed on network weights, the backpropagation algorithm is capable of discovering networks with small weight magnitudes that retain their expressive power and exhibit good generalization.
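The bounded-weight training the abstract describes can be viewed as projected gradient descent: after each backpropagation update, every weight is clipped back into a fixed interval, so the network must find a low-error solution with small weight magnitudes. The following is a minimal NumPy sketch of that idea; the bound value, architecture, learning rate, and toy regression task are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# One-hidden-layer MLP (1 -> 8 -> 1) with tanh activation
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

BOUND = 1.0  # hypothetical magnitude bound on each weight
LR = 0.05

def project(w):
    # Clip each weight back into [-BOUND, BOUND] after the update
    return np.clip(w, -BOUND, BOUND)

for epoch in range(2000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    loss = float(np.mean(err ** 2))

    # Backward pass (mean-squared-error gradients)
    dP = 2.0 * err / len(X)
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = (dP @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)

    # Gradient step followed by projection onto the weight bound
    W1 = project(W1 - LR * dW1); b1 -= LR * db1
    W2 = project(W2 - LR * dW2); b2 -= LR * db2

print(f"final loss: {loss:.4f}")
print(f"max |weight|: {max(np.abs(W1).max(), np.abs(W2).max()):.4f}")
```

Because the projection runs after every update, the final weights are guaranteed to respect the bound, while gradient descent is still free to reduce the training error within that constrained region.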