Optimization of Probabilistic Neural Networks Based on Center Neighbor
Lianzhong Liu, Chunfang Li, Lipu Qian
2010 International Conference on E-Business and E-Government, 2010-05-07
DOI: 10.1109/ICEE.2010.366
Citations: 5
Abstract
Probabilistic Neural Networks (PNN) learn quickly from examples in a single pass and asymptotically achieve the Bayes-optimal decision boundaries. The major disadvantage of PNN is that it requires one node, or neuron, for each training sample. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center, but decision boundaries built from cluster centers are only an approximation to those built from the full training set. A new optimization of PNN is investigated here: the centers of each class's still-unrecognized samples are computed iteratively, and their nearest neighbors are added to the pattern layer. This algorithm takes into account not only the approximation of the probability density but also the requirements of classification. To compensate in part for the loss of generalization accuracy, an ensemble learning technique is introduced to boost accuracy on the test dataset. Experiments on UCI datasets show an appropriate tradeoff among training time, number of nodes, and generalization ability.
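The selection loop described in the abstract can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: all function names, the Gaussian kernel width `sigma`, and the seeding of one node per class centroid are assumptions made for illustration.

```python
import numpy as np

def pnn_predict(X, nodes, node_labels, classes, sigma=0.5):
    """Classify rows of X with a Gaussian-kernel PNN whose pattern layer
    stores the samples in `nodes` (one node per stored sample)."""
    preds = np.empty(len(X), dtype=node_labels.dtype)
    for i, x in enumerate(X):
        d2 = np.sum((nodes - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))           # kernel activations
        scores = [k[node_labels == c].sum() for c in classes]
        preds[i] = classes[int(np.argmax(scores))]
    return preds

def center_neighbor_pnn(X, y, sigma=0.5, max_nodes=None):
    """Grow a reduced pattern layer: repeatedly classify the training set,
    then for each class add the sample nearest the center of its
    still-misclassified ("unrecognized") samples."""
    classes = np.unique(y)
    # Seed with the sample nearest each class centroid (an assumption).
    idx = []
    for c in classes:
        Xc = np.where(y == c)[0]
        center = X[Xc].mean(axis=0)
        idx.append(Xc[np.argmin(np.sum((X[Xc] - center) ** 2, axis=1))])
    limit = max_nodes or len(X)
    while len(idx) < limit:
        preds = pnn_predict(X, X[idx], y[idx], classes, sigma)
        wrong = preds != y
        if not wrong.any():
            break                                    # everything recognized
        added = False
        for c in classes:
            bad = np.where(wrong & (y == c))[0]
            if len(bad) == 0:
                continue
            center = X[bad].mean(axis=0)             # center of unrecognized
            j = bad[np.argmin(np.sum((X[bad] - center) ** 2, axis=1))]
            if j not in idx:
                idx.append(j)                        # nearest neighbor joins
                added = True                         # the pattern layer
        if not added:
            break
    return np.array(idx)
```

The returned indices define the reduced pattern layer. The ensemble step mentioned in the abstract could then be approximated by training several such networks (e.g. over different `sigma` values or bootstrap resamples) and voting, though the paper's exact ensemble scheme is not specified here.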