A New Extreme Learning Machine Optimized by Firefly Algorithm
Authors: Qiang Zhang, Hongxin Li, Changnian Liu, Wei Hu
Published in: 2013 Sixth International Symposium on Computational Intelligence and Design, 2013-10-28
DOI: 10.1109/ISCID.2013.147
Extreme learning machine (ELM) is a type of feedforward neural network. Compared with traditional single-hidden-layer feedforward neural networks, ELM trains faster and yields smaller error. However, because its input weights and hidden biases are assigned at random, ELM may need a large number of hidden neurons to reach reasonable accuracy. This paper proposes a new ELM learning algorithm optimized by the Firefly Algorithm (FA): FA selects the input weights and hidden-layer biases, after which the output weights are computed analytically. To validate the proposed method, a simulation experiment on approximating the SINC function was conducted. The results show that the proposed algorithm achieves better performance with fewer hidden neurons than other, similar methods.
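The training procedure the abstract describes can be sketched as follows. This is a minimal illustration of a standard single-hidden-layer ELM, not the paper's code: the input weights and hidden biases are drawn once at random, and only the output weights are solved for in closed form via the Moore-Penrose pseudoinverse. In the paper's method, FA would search over those random input weights and biases rather than accept a single draw; the FA search itself is omitted here. The SINC target, the tanh activation, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def sinc(x):
    # SINC target function, sin(x)/x with the removable singularity at 0
    # patched to 1 (an assumption about the paper's test function).
    return np.where(x == 0.0, 1.0, np.sin(x) / np.where(x == 0.0, 1.0, x))

def elm_train(X, y, n_hidden, rng):
    # Random input weights and hidden biases -- in the paper's method these
    # would be selected by the Firefly Algorithm instead of a single draw.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y    # output weights, solved analytically
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit the SINC curve on a 1-D grid, as in the paper's simulation setup.
rng = np.random.default_rng(0)
X = np.linspace(-10.0, 10.0, 400).reshape(-1, 1)
y = sinc(X).ravel()
W, b, beta = elm_train(X, y, n_hidden=40, rng=rng)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because only `beta` is learned, and by a single linear solve, training is essentially one matrix factorization, which is the source of ELM's speed advantage over gradient-based training of the same network.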