MR-Transformer: Multiresolution Transformer for Multivariate Time Series Prediction.

IF 10.2 | CAS Region 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-11-06 | DOI: 10.1109/TNNLS.2023.3327416
Siying Zhu, Jiawei Zheng, Qianli Ma
Citations: 0

Abstract


Multivariate time series (MTS) prediction has been studied extensively and is widely applied in real-world settings. Recently, transformer-based methods have shown potential for this task thanks to their strong sequence-modeling ability. Despite this progress, these methods pay little attention to extracting short-term information from the context, even though short-term patterns play an essential role in reflecting local temporal dynamics. Moreover, we argue that multiple variables exhibit both consistent and variable-specific characteristics, and both should be fully considered in MTS modeling. To this end, we propose a multiresolution transformer (MR-Transformer) for MTS prediction that models MTS at both the temporal and the variable resolution. For the temporal resolution, we design a long short-term transformer: the sequence is first split adaptively into nonoverlapping segments, short-term patterns are then extracted within segments, and long-term patterns are captured by the transformer's inherent attention mechanism; the two are aggregated together to capture temporal dependencies. For the variable resolution, besides the variable-consistent features learned by the long short-term transformer, we also design a temporal convolution module to capture the specific features of each variable individually. MR-Transformer enhances MTS modeling by combining multiresolution features across both time steps and variables. Extensive experiments on real-world time series datasets show that MR-Transformer significantly outperforms state-of-the-art MTS prediction models, and visualization analysis further demonstrates the effectiveness of the proposed model.
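The two temporal resolutions described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: fixed-length non-overlapping segmentation with mean pooling stands in for the paper's adaptive segmentation and learned short-term extraction, and a single unparameterized self-attention pass stands in for the long-term branch; the function names and the segment length are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def long_short_term_features(series, seg_len=4):
    """Toy illustration of the two temporal resolutions.

    series: (T, C) multivariate time series (T time steps, C variables).
    Short-term branch: mean-pool within non-overlapping segments of
    seg_len steps (the paper learns the split adaptively; a fixed
    length is assumed here).
    Long-term branch: one pass of single-head self-attention over all
    time steps (stand-in for the transformer's attention mechanism).
    """
    T, C = series.shape
    n_seg = T // seg_len
    # Short-term patterns: one pooled feature vector per segment.
    segments = series[: n_seg * seg_len].reshape(n_seg, seg_len, C)
    short_term = segments.mean(axis=1)            # (n_seg, C)
    # Long-term patterns: attention weights over all time steps.
    scores = series @ series.T / np.sqrt(C)       # (T, T)
    long_term = softmax(scores, axis=-1) @ series # (T, C)
    return short_term, long_term

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 3))          # 12 time steps, 3 variables
s, l = long_short_term_features(x)
print(s.shape, l.shape)               # (3, 3) (12, 3)
```

In the actual model the two branches would be aggregated (and a per-variable temporal convolution added for the variable resolution); here they are simply returned side by side to show the short-term (segment-level) and long-term (step-level) views of the same sequence.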

Source journal: IEEE Transactions on Neural Networks and Learning Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published per year: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.