{"title":"基于经验模态分解和两种注意机制的综合时间序列预测模型","authors":"Xianchang Wang, Siyu Dong, Rui Zhang","doi":"10.3390/info14110610","DOIUrl":null,"url":null,"abstract":"In the prediction of time series, Empirical Mode Decomposition (EMD) generates subsequences and separates short-term tendencies from long-term ones. However, a single prediction model, including attention mechanism, has varying effects on each subsequence. To accurately capture the regularities of subsequences using an attention mechanism, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. This model combines the results of three networks—LSTM, LSTM-self-attention, and LSTM-temporal attention—all trained using subsequences obtained from EMD. Additionally, since previous research on EMD has been limited to single series analysis, this paper includes multiple series by employing two data pre-processing methods: ‘overall normalization’ and ‘respective normalization’. Experimental results on various datasets demonstrate that compared to models without attention mechanisms, temporal attention improves the prediction accuracy of short- and medium-term decomposed series by 15~28% and 45~72%, respectively; furthermore, it reduces the overall prediction error by 10~17%. The integrated model with temporal attention achieves a reduction in error of approximately 0.3%, primarily when compared to models utilizing only general forms of attention mechanisms. Moreover, after normalizing multiple series separately, the predictive performance is equivalent to that achieved for individual series.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms\",\"authors\":\"Xianchang Wang, Siyu Dong, Rui Zhang\",\"doi\":\"10.3390/info14110610\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the prediction of time series, Empirical Mode Decomposition (EMD) generates subsequences and separates short-term tendencies from long-term ones. However, a single prediction model, including attention mechanism, has varying effects on each subsequence. To accurately capture the regularities of subsequences using an attention mechanism, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. This model combines the results of three networks—LSTM, LSTM-self-attention, and LSTM-temporal attention—all trained using subsequences obtained from EMD. Additionally, since previous research on EMD has been limited to single series analysis, this paper includes multiple series by employing two data pre-processing methods: ‘overall normalization’ and ‘respective normalization’. Experimental results on various datasets demonstrate that compared to models without attention mechanisms, temporal attention improves the prediction accuracy of short- and medium-term decomposed series by 15~28% and 45~72%, respectively; furthermore, it reduces the overall prediction error by 10~17%. The integrated model with temporal attention achieves a reduction in error of approximately 0.3%, primarily when compared to models utilizing only general forms of attention mechanisms. 
Moreover, after normalizing multiple series separately, the predictive performance is equivalent to that achieved for individual series.\",\"PeriodicalId\":38479,\"journal\":{\"name\":\"Information (Switzerland)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2023-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information (Switzerland)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/info14110610\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information (Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14110610","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
In time series prediction, Empirical Mode Decomposition (EMD) generates subsequences that separate short-term tendencies from long-term ones. However, a single prediction model, even one incorporating an attention mechanism, performs differently on each subsequence. To capture the regularities of the subsequences accurately, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. The model combines the results of three networks (LSTM, LSTM-self-attention, and LSTM-temporal attention), all trained on subsequences obtained from EMD. In addition, since previous research on EMD has been limited to single-series analysis, this paper covers multiple series by employing two data pre-processing methods: 'overall normalization' and 'respective normalization'. Experimental results on several datasets show that, compared to models without attention mechanisms, temporal attention improves the prediction accuracy of the short- and medium-term decomposed series by 15–28% and 45–72%, respectively, and reduces the overall prediction error by 10–17%. The integrated model with temporal attention achieves a further error reduction of approximately 0.3% compared to models that use only the general form of attention. Moreover, after normalizing multiple series respectively, the predictive performance matches that achieved on individual series.
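The decomposition step can be illustrated with a short sketch. The snippet below uses the third-party PyEMD package (pip install EMD-signal) as one readily available EMD implementation; the paper does not tie itself to a particular library, and the toy signal is invented for illustration.

```python
# Minimal sketch of the EMD preprocessing step, using PyEMD as one
# available implementation (an assumption; the paper names no library).
import numpy as np
from PyEMD import EMD

# Toy series: a fast oscillation riding on a slow wave, a trend, and noise.
t = np.linspace(0, 1, 500)
series = (np.sin(40 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)
          + 0.1 * t + 0.05 * np.random.randn(500))

# EMD splits the series into Intrinsic Mode Functions (IMFs), ordered from
# the highest-frequency (short-term) component down to the residual trend.
imfs = EMD()(series)          # shape: (n_imfs, len(series))

for i, imf in enumerate(imfs):
    print(f"IMF {i}: std = {imf.std():.3f}")
# Each IMF (subsequence) is predicted by its own network, and the per-IMF
# forecasts are summed to reconstruct the forecast of the raw series.
```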
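The abstract names three subsequence predictors: a plain LSTM, an LSTM with self-attention, and an LSTM with temporal attention. The PyTorch sketch below shows one plausible reading of the temporal-attention variant, in which a softmax over the time axis weights the LSTM hidden states before the forecast head; the layer sizes, scoring function, and window length are assumptions, not the authors' exact configuration.

```python
# A minimal sketch of an "LSTM + temporal attention" one-step forecaster.
# This is one plausible reading of the architecture named in the abstract;
# hidden size, scorer, and window length are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMTemporalAttention(nn.Module):
    def __init__(self, input_size=1, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)   # scores each time step
        self.head = nn.Linear(hidden_size, 1)    # one-step-ahead forecast

    def forward(self, x):                        # x: (batch, seq_len, input_size)
        h, _ = self.lstm(x)                      # h: (batch, seq_len, hidden)
        # Temporal attention: a softmax over the time axis decides how much
        # each past hidden state contributes to the forecast.
        w = torch.softmax(self.score(h), dim=1)  # (batch, seq_len, 1)
        context = (w * h).sum(dim=1)             # (batch, hidden)
        return self.head(context)                # (batch, 1)

# One such network is trained per EMD subsequence; the plain LSTM and the
# LSTM-self-attention variant are trained the same way, and their outputs
# are combined across subsequences to form the integrated prediction.
model = LSTMTemporalAttention()
window = torch.randn(8, 30, 1)                   # 8 windows of 30 past values
print(model(window).shape)                       # torch.Size([8, 1])
```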
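For multi-series training, the two pre-processing schemes named in the abstract can be contrasted in a few lines. The sketch below uses min-max scaling as the concrete normalization, which is an assumption; the point is only the difference in scope: one shared scale across all series ('overall') versus a per-series scale ('respective').

```python
# Sketch of the two pre-processing schemes named in the abstract, realized
# here as min-max scaling (the exact normalization used is an assumption).
import numpy as np

def overall_normalize(series_list):
    """'Overall normalization': one scale shared by all series."""
    lo = min(s.min() for s in series_list)
    hi = max(s.max() for s in series_list)
    return [(s - lo) / (hi - lo) for s in series_list]

def respective_normalize(series_list):
    """'Respective normalization': each series scaled by its own range."""
    return [(s - s.min()) / (s.max() - s.min()) for s in series_list]

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
print(overall_normalize([a, b])[0])      # a is compressed by b's larger range
print(respective_normalize([a, b])[0])   # a uses its own min/max: [0, 0.5, 1]
```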