Efficient Mini-batch Training for Echo State Networks
Authors: Chunyuan Zhang, Chao Liu, Jie Zhao
DOI: 10.1145/3449301.3449341 (https://doi.org/10.1145/3449301.3449341)
Published in: Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20
Citations: 1
Abstract
Echo state networks (ESNs) are generally optimized with the ordinary recursive least squares (ORLS) algorithm. Although ORLS converges quickly, it processes only one sample per iteration, which makes ESNs difficult to scale to large datasets. To tackle this problem, a novel mini-batch RLS (MRLS) algorithm is proposed in this paper. On this basis, to prevent overfitting during ESN training, an L1-regularization method is suggested for MRLS. In addition, to make ESNs more suitable for time-varying tasks, an adaptive forgetting-factor scheme is also introduced for MRLS. Experimental results on two time-series problems show that ESNs achieve faster processing speed and better convergence with MRLS than with ORLS.

CCS CONCEPTS: • Computing methodologies → Machine learning → Machine learning approaches → Neural networks; • Computing methodologies → Machine learning → Learning settings → Batch learning.
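The core idea the abstract describes, extending the per-sample RLS update so that a whole mini-batch of reservoir states is absorbed per iteration, can be sketched with the standard block form of the matrix-inversion lemma. The sketch below is a minimal illustration under assumed conventions (samples as rows of `X`, a forgetting factor `lam`, and a large initial inverse-correlation matrix `P`); it shows only the basic mini-batch update, not the paper's L1-regularization or adaptive forgetting-factor extensions, and the function names are hypothetical.

```python
import numpy as np

def mrls_update(w, P, X, y, lam=1.0):
    """One mini-batch RLS step for a linear readout.

    w   : (d,)   current readout weights
    P   : (d, d) inverse correlation matrix estimate
    X   : (m, d) mini-batch of feature (reservoir state) vectors, one per row
    y   : (m,)   targets for the mini-batch
    lam : forgetting factor in (0, 1]; lam = 1 means no forgetting
    """
    m = X.shape[0]
    PXt = P @ X.T                                   # (d, m)
    # Block gain: K = P X^T (lam I + X P X^T)^(-1); an m x m solve
    # replaces the m rank-1 updates that per-sample ORLS would need.
    K = np.linalg.solve((lam * np.eye(m) + X @ PXt).T, PXt.T).T  # (d, m)
    err = y - X @ w                                 # batch prediction error
    w = w + K @ err
    P = (P - K @ X @ P) / lam
    return w, P

# Usage sketch on synthetic noiseless data: the estimate should approach
# the true weights after a few mini-batches.
rng = np.random.default_rng(0)
d, m, n_batches = 3, 5, 10
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(d)
P = 1e6 * np.eye(d)          # large P ~ weak initial regularization
for _ in range(n_batches):
    X = rng.standard_normal((m, d))
    y = X @ w_true
    w, P = mrls_update(w, P, X, y, lam=1.0)
```

With `lam = 1.0` this reduces to block-wise ordinary RLS; values slightly below 1 discount old samples, which is what makes the forgetting factor relevant for the time-varying tasks the abstract mentions.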