A pruning algorithm with L1/2 regularizer for extreme learning machine
Ye-tian Fan, Wei Wu, Wenyu Yang, Qin-wei Fan, Jian Wang
Journal of Zhejiang University-Science C (Computers & Electronics), 15(1): 119-125, February 2014. DOI: 10.1631/jzus.C1300197
Citations: 12
Abstract
Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine (ELM) offers much faster learning and requires less human intervention, and it has therefore been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune the network. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
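The sketch below illustrates the general idea described in the abstract, not the authors' exact algorithm: an ELM whose random hidden layer stays fixed, output weights trained by gradient descent on a squared error plus an L1/2 penalty, a step size that shrinks when the gradient is large (standing in for the "variable learning coefficient"), and pruning of hidden nodes whose output weights are driven toward zero. The function name `elm_l12_prune`, the smoothing constant `eps`, the penalty strength `lam`, and the pruning threshold `prune_tol` are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_l12_prune(X, y, n_hidden=50, lam=1e-3, lr=0.1, epochs=500,
                  eps=1e-8, prune_tol=1e-3, seed=0):
    """Minimal sketch of ELM pruning with an L1/2 penalty on the output weights."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # ELM: input weights and biases are chosen randomly and never trained.
    W = rng.normal(size=(d, n_hidden))
    b = rng.normal(size=n_hidden)
    H = sigmoid(X @ W + b)                        # hidden-layer output matrix
    beta = rng.normal(scale=0.1, size=n_hidden)   # trainable output weights

    for _ in range(epochs):
        err = H @ beta - y
        # Gradient of 0.5*MSE plus a smoothed L1/2 penalty lam * sum(|beta|^(1/2)).
        grad = H.T @ err / n + lam * 0.5 * np.sign(beta) / np.sqrt(np.abs(beta) + eps)
        # Variable learning coefficient: shrink the step when the gradient is large,
        # so the learning increment never becomes too big.
        step = lr / (1.0 + np.linalg.norm(grad))
        beta -= step * grad

    # Prune hidden nodes whose output weights have been driven (near) zero.
    keep = np.abs(beta) > prune_tol
    return W[:, keep], b[keep], beta[keep]

if __name__ == "__main__":
    # Toy regression problem to exercise the sketch.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    W, b, beta = elm_l12_prune(X, y)
    print("hidden nodes kept:", beta.size)
```

After pruning, predictions use only the retained hidden nodes, e.g. `sigmoid(X @ W + b) @ beta`; the L1/2 penalty tends to push more output weights exactly toward zero than an L2 penalty would, which is why it yields a smaller network.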