D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding

Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leung
{"title":"D2Vformer:基于时间位置嵌入的灵活时间序列预测模型","authors":"Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leungc","doi":"arxiv-2409.11024","DOIUrl":null,"url":null,"abstract":"Time position embeddings capture the positional information of time steps,\noften serving as auxiliary inputs to enhance the predictive capabilities of\ntime series models. However, existing models exhibit limitations in capturing\nintricate time positional information and effectively utilizing these\nembeddings. To address these limitations, this paper proposes a novel model\ncalled D2Vformer. Unlike typical prediction methods that rely on RNNs or\nTransformers, this approach can directly handle scenarios where the predicted\nsequence is not adjacent to the input sequence or where its length dynamically\nchanges. In comparison to conventional methods, D2Vformer undoubtedly saves a\nsignificant amount of training resources. In D2Vformer, the Date2Vec module\nuses the timestamp information and feature sequences to generate time position\nembeddings. Afterward, D2Vformer introduces a new fusion block that utilizes an\nattention mechanism to explore the similarity in time positions between the\nembeddings of the input sequence and the predicted sequence, thereby generating\npredictions based on this similarity. Through extensive experiments on six\ndatasets, we demonstrate that Date2Vec outperforms other time position\nembedding methods, and D2Vformer surpasses state-of-the-art methods in both\nfixed-length and variable-length prediction tasks.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding\",\"authors\":\"Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leungc\",\"doi\":\"arxiv-2409.11024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Time position embeddings capture the positional information of time steps,\\noften serving as auxiliary inputs to enhance the predictive capabilities of\\ntime series models. However, existing models exhibit limitations in capturing\\nintricate time positional information and effectively utilizing these\\nembeddings. To address these limitations, this paper proposes a novel model\\ncalled D2Vformer. Unlike typical prediction methods that rely on RNNs or\\nTransformers, this approach can directly handle scenarios where the predicted\\nsequence is not adjacent to the input sequence or where its length dynamically\\nchanges. In comparison to conventional methods, D2Vformer undoubtedly saves a\\nsignificant amount of training resources. In D2Vformer, the Date2Vec module\\nuses the timestamp information and feature sequences to generate time position\\nembeddings. Afterward, D2Vformer introduces a new fusion block that utilizes an\\nattention mechanism to explore the similarity in time positions between the\\nembeddings of the input sequence and the predicted sequence, thereby generating\\npredictions based on this similarity. 
Through extensive experiments on six\\ndatasets, we demonstrate that Date2Vec outperforms other time position\\nembedding methods, and D2Vformer surpasses state-of-the-art methods in both\\nfixed-length and variable-length prediction tasks.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Time position embeddings capture the positional information of time steps, often serving as auxiliary inputs that enhance the predictive capabilities of time series models. However, existing models exhibit limitations in capturing intricate time positional information and in effectively utilizing these embeddings. To address these limitations, this paper proposes a novel model called D2Vformer. Unlike typical prediction methods that rely on RNNs or Transformers, this approach can directly handle scenarios where the predicted sequence is not adjacent to the input sequence or where its length changes dynamically. Compared with conventional methods, D2Vformer therefore saves a substantial amount of training resources. In D2Vformer, the Date2Vec module uses timestamp information and feature sequences to generate time position embeddings. D2Vformer then introduces a new fusion block that uses an attention mechanism to measure the similarity in time position between the embeddings of the input sequence and those of the predicted sequence, generating predictions from this similarity. Through extensive experiments on six datasets, we demonstrate that Date2Vec outperforms other time position embedding methods, and that D2Vformer surpasses state-of-the-art methods in both fixed-length and variable-length prediction tasks.
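
The abstract describes Date2Vec only at a high level, so the following is a minimal, hypothetical PyTorch sketch of what such a time position embedding could look like, assuming a Time2Vec-style construction (one linear channel plus learned sinusoidal channels over per-step timestamp features). The class and parameter names (Date2VecSketch, num_time_features, d_embed) are illustrative, not the paper's.

```python
import torch
import torch.nn as nn


class Date2VecSketch(nn.Module):
    """Hypothetical Date2Vec-style time position embedding.

    Follows the well-known Time2Vec recipe (one non-periodic linear
    channel plus sinusoidal channels); the paper's exact formulation
    may differ.
    """

    def __init__(self, num_time_features: int, d_embed: int):
        super().__init__()
        self.linear = nn.Linear(num_time_features, 1)               # trend channel
        self.periodic = nn.Linear(num_time_features, d_embed - 1)   # periodic channels

    def forward(self, timestamps: torch.Tensor) -> torch.Tensor:
        # timestamps: (batch, seq_len, num_time_features), e.g. normalized
        # [month, day, weekday, hour, minute] for each time step.
        trend = self.linear(timestamps)                # (B, L, 1)
        season = torch.sin(self.periodic(timestamps))  # (B, L, d_embed - 1)
        return torch.cat([trend, season], dim=-1)      # (B, L, d_embed)
```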
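Likewise, a hedged sketch of the attention-based fusion idea: the embeddings of the target timestamps act as queries, the embeddings of the observed timestamps as keys, and the observed series as values, so predictions follow from time-position similarity alone. Because the target window enters only through its timestamps, it need not be adjacent to the input and its length may vary without retraining, which matches the flexibility the abstract claims. FusionBlockSketch and the shapes below are assumptions, not the paper's exact design; the sketch reuses Date2VecSketch from above.

```python
class FusionBlockSketch(nn.Module):
    """Hypothetical attention fusion over time-position similarity."""

    def __init__(self, d_embed: int):
        super().__init__()
        self.scale = d_embed ** -0.5

    def forward(self, q_emb: torch.Tensor, k_emb: torch.Tensor,
                values: torch.Tensor) -> torch.Tensor:
        # q_emb:  (B, H, d) embeddings of the H target time steps
        # k_emb:  (B, L, d) embeddings of the L observed time steps
        # values: (B, L, C) observed feature sequence with C variables
        scores = torch.einsum("bhd,bld->bhl", q_emb, k_emb) * self.scale
        weights = scores.softmax(dim=-1)  # similarity between time positions
        return torch.einsum("bhl,blc->bhc", weights, values)  # (B, H, C)


# Toy usage: predict 24 steps of a 7-variable series from 96 observed steps.
d2v = Date2VecSketch(num_time_features=5, d_embed=16)
fusion = FusionBlockSketch(d_embed=16)
x_time = torch.rand(8, 96, 5)    # timestamp features of the observed window
y_time = torch.rand(8, 24, 5)    # timestamp features of an arbitrary target window
x_vals = torch.randn(8, 96, 7)   # observed values
y_hat = fusion(d2v(y_time), d2v(x_time), x_vals)  # (8, 24, 7)
```

Note that nothing in the sketch ties y_time to the 24 steps immediately following the input, or to a fixed horizon: changing the target timestamps changes the prediction window, which is the mechanism the abstract credits for handling non-adjacent and variable-length prediction.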