Multivariate Time-Series Modeling and Forecasting With Parallelized Convolution and Decomposed Sparse-Transformer

Shusen Ma; Yun-Bo Zhao; Yu Kang; Peng Bai
{"title":"利用并行卷积和分解稀疏变换器进行多变量时间序列建模和预测","authors":"Shusen Ma;Yun-Bo Zhao;Yu Kang;Peng Bai","doi":"10.1109/TAI.2024.3410934","DOIUrl":null,"url":null,"abstract":"Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time-series prediction models encounter certain limitations. First, they struggle with mapping the multidimensional information present in each time step to high dimensions, resulting in information coupling and increased prediction difficulty. Second, these models fail to effectively decompose the intertwined temporal patterns within the time series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and decomposed sparse-Transformer (PCDformer). PCDformer achieves the decoupling of input sequences by parallelizing the convolutional layers, enabling the simultaneous processing of different variables within the input sequence. To decompose distinct temporal patterns, PCDformer incorporates a temporal decomposition module within the encoder–decoder structure, effectively separating the input sequence into predictable seasonal and trend components. Additionally, to capture the correlation between variables and mitigate the impact of irrelevant information, PCDformer utilizes a sparse self-attention mechanism. Extensive experimentation conducted on five diverse datasets demonstrates the superior performance of PCDformer in LSTF tasks compared to existing approaches, particularly outperforming encoder–decoder-based models.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multivariate Time-Series Modeling and Forecasting With Parallelized Convolution and Decomposed Sparse-Transformer\",\"authors\":\"Shusen Ma;Yun-Bo Zhao;Yu Kang;Peng Bai\",\"doi\":\"10.1109/TAI.2024.3410934\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time-series prediction models encounter certain limitations. First, they struggle with mapping the multidimensional information present in each time step to high dimensions, resulting in information coupling and increased prediction difficulty. Second, these models fail to effectively decompose the intertwined temporal patterns within the time series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and decomposed sparse-Transformer (PCDformer). PCDformer achieves the decoupling of input sequences by parallelizing the convolutional layers, enabling the simultaneous processing of different variables within the input sequence. To decompose distinct temporal patterns, PCDformer incorporates a temporal decomposition module within the encoder–decoder structure, effectively separating the input sequence into predictable seasonal and trend components. 
Additionally, to capture the correlation between variables and mitigate the impact of irrelevant information, PCDformer utilizes a sparse self-attention mechanism. Extensive experimentation conducted on five diverse datasets demonstrates the superior performance of PCDformer in LSTF tasks compared to existing approaches, particularly outperforming encoder–decoder-based models.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10552140/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10552140/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time-series prediction models encounter certain limitations. First, they struggle with mapping the multidimensional information present in each time step to high dimensions, resulting in information coupling and increased prediction difficulty. Second, these models fail to effectively decompose the intertwined temporal patterns within the time series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and decomposed sparse-Transformer (PCDformer). PCDformer achieves the decoupling of input sequences by parallelizing the convolutional layers, enabling the simultaneous processing of different variables within the input sequence. To decompose distinct temporal patterns, PCDformer incorporates a temporal decomposition module within the encoder–decoder structure, effectively separating the input sequence into predictable seasonal and trend components. Additionally, to capture the correlation between variables and mitigate the impact of irrelevant information, PCDformer utilizes a sparse self-attention mechanism. Extensive experimentation conducted on five diverse datasets demonstrates the superior performance of PCDformer in LSTF tasks compared to existing approaches, particularly outperforming encoder–decoder-based models.
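
The abstract names three mechanisms: parallelized convolution that processes each input variable separately, a seasonal/trend decomposition inside the encoder–decoder, and sparse self-attention. The sketch below illustrates one plausible minimal form of each in PyTorch. The module names, hyperparameters, and design choices (moving-average trend extraction, depthwise grouped convolution, top-k score sparsification) are assumptions made for illustration, not the authors' released PCDformer implementation.

```python
# Hedged sketch only: names, shapes, and hyperparameters are illustrative
# assumptions, not the PCDformer reference code.
import torch
import torch.nn as nn


class SeriesDecomposition(nn.Module):
    """Split a series into a trend part (moving average) and a seasonal
    part (residual) -- one common way to separate temporal patterns."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1,
                                padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x: torch.Tensor):
        # x: (batch, length, n_vars)
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        seasonal = x - trend
        return seasonal, trend


class ParallelConv(nn.Module):
    """Embed each variable with its own 1-D convolution (groups=n_vars),
    so variables are processed in parallel without being mixed."""
    def __init__(self, n_vars: int, d_per_var: int = 16, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, n_vars * d_per_var, kernel_size,
                              padding=kernel_size // 2, groups=n_vars)

    def forward(self, x: torch.Tensor):
        # (batch, length, n_vars) -> (batch, length, n_vars * d_per_var)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)


def topk_sparse_attention(q, k, v, top_k: int = 8):
    """Each query attends only to its top_k highest-scoring keys -- one way
    to suppress weakly correlated (irrelevant) positions."""
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    kth = scores.topk(min(top_k, scores.size(-1)), dim=-1).values[..., -1:]
    scores = scores.masked_fill(scores < kth, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v


if __name__ == "__main__":
    x = torch.randn(32, 96, 7)               # 32 series, length 96, 7 variables
    seasonal, trend = SeriesDecomposition()(x)
    emb = ParallelConv(n_vars=7)(seasonal)   # (32, 96, 112)
    out = topk_sparse_attention(emb, emb, emb)
    print(seasonal.shape, trend.shape, emb.shape, out.shape)
```

A depthwise (grouped) convolution is one natural reading of "parallelizing the convolutional layers," since each variable gets its own filters and no channel mixing occurs; the paper's actual layer layout, embedding widths, and sparsity rule may differ.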