Transformer-based deep learning architecture for time series forecasting

G.H. Harish Nayak, Md Wasi Alam, G. Avinash, Rajeev Ranjan Kumar, Mrinmoy Ray, Samir Barman, K.N. Singh, B. Samuel Naik, Nurnabi Meherul Alam, Prasenjit Pal, Santosha Rathod, Jaiprakash Bisen

Software Impacts, Volume 22, November 2024, Article 100716. DOI: 10.1016/j.simpa.2024.100716
Time series forecasting is challenging because the data are often non-stationary, nonlinear, and chaotic. Traditional deep learning models such as RNNs, LSTMs, and GRUs process data sequentially, which makes them inefficient for long sequences. To overcome these limitations, we propose a transformer-based deep learning architecture whose attention mechanism processes all time steps in parallel, improving both prediction accuracy and computational efficiency. This paper presents user-friendly code implementing the proposed architecture.
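The authors' released code is not reproduced here; the following is a minimal, hypothetical sketch of an encoder-only transformer forecaster in PyTorch that illustrates the attention-based, parallel-processing design the abstract describes. The layer sizes, window length, positional-embedding choice, and toy sine-wave data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    """Encoder-only transformer that maps a window of past values to the next value.
    Hypothetical sketch: hyperparameters are illustrative, not from the paper."""
    def __init__(self, n_features=1, d_model=32, n_heads=4, n_layers=2, window=30):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        # Learned positional embedding: self-attention alone has no notion of order.
        self.pos_embed = nn.Parameter(torch.zeros(1, window, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=64, dropout=0.1,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (batch, window, n_features)
        h = self.input_proj(x) + self.pos_embed
        h = self.encoder(h)                    # all time steps attended to in parallel
        return self.head(h[:, -1, :]).squeeze(-1)

if __name__ == "__main__":
    # Toy data: one-step-ahead forecasting of a noisy sine wave from 30-step windows.
    series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
    window = 30
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
    y = torch.tensor(y, dtype=torch.float32)

    model = TimeSeriesTransformer(window=window)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(3):                                    # short demo run
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: mse={loss.item():.4f}")
```

Unlike a recurrent model, the encoder's self-attention lets every time step in the window attend to every other step in a single parallel pass, which is the property the abstract highlights for handling long sequences efficiently.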