Title: Forecasting Method based upon GRU-based Deep Learning Model
Authors: A. Almalki, P. Wocjan
DOI: 10.1109/CSCI51800.2020.00096
Published in: 2020 International Conference on Computational Science and Computational Intelligence (CSCI), December 2020
Citations: 2
Abstract
In this research, the RNN component of the world model is implemented with a bi-directional gated recurrent unit (BGRU) instead of the traditional long short-term memory (LSTM) network. A BGRU tends to use less memory and to train faster than an LSTM because it has fewer trainable parameters, whereas an LSTM generally achieves greater accuracy on datasets with longer sequences. In our practical implementation, the BGRU model produced better performance results. In a GRU, the memory cell is merged with the hidden state of the network, and there are no separate forget and input gates: they are combined into a single update gate, which is the primary reason for the parameter reduction.
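The parameter reduction described above can be made concrete with a back-of-envelope count. An LSTM cell has four gate-like weight blocks (input, forget, and output gates plus the candidate state), while a GRU has three (update and reset gates plus the candidate), so a GRU layer of the same size carries roughly 3/4 of the LSTM's parameters. The sketch below uses the standard single-bias gate equations and hypothetical layer sizes chosen for illustration; framework implementations (e.g. PyTorch) use two bias vectors per gate, so actual counts differ slightly.

```python
def rnn_param_count(input_size: int, hidden_size: int, num_gate_blocks: int,
                    bidirectional: bool = False) -> int:
    """Parameters of one recurrent layer: each gate block has an
    input-to-hidden matrix, a hidden-to-hidden matrix, and a bias."""
    per_direction = num_gate_blocks * (
        hidden_size * (input_size + hidden_size)  # W_x and W_h weights
        + hidden_size                             # bias
    )
    return per_direction * (2 if bidirectional else 1)

# Hypothetical sizes for illustration only.
INPUT, HIDDEN = 64, 128

lstm = rnn_param_count(INPUT, HIDDEN, num_gate_blocks=4)  # 4 blocks in LSTM
gru = rnn_param_count(INPUT, HIDDEN, num_gate_blocks=3)   # 3 blocks in GRU

print(f"LSTM: {lstm}, GRU: {gru}, ratio: {gru / lstm:.2f}")  # ratio is 0.75
```

Note that making the GRU bidirectional doubles its count, so the memory saving the abstract reports holds when both models share the same directionality.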