Implicit Reasoning in Deep Time Series Forecasting

Willa Potosnak, Cristian Challu, Mononito Goswami, Michał Wiliński, Nina Żukowska
{"title":"Implicit Reasoning in Deep Time Series Forecasting","authors":"Willa Potosnak, Cristian Challu, Mononito Goswami, Michał Wiliński, Nina Żukowska","doi":"arxiv-2409.10840","DOIUrl":null,"url":null,"abstract":"Recently, time series foundation models have shown promising zero-shot\nforecasting performance on time series from a wide range of domains. However,\nit remains unclear whether their success stems from a true understanding of\ntemporal dynamics or simply from memorizing the training data. While implicit\nreasoning in language models has been studied, similar evaluations for time\nseries models have been largely unexplored. This work takes an initial step\ntoward assessing the reasoning abilities of deep time series forecasting\nmodels. We find that certain linear, MLP-based, and patch-based Transformer\nmodels generalize effectively in systematically orchestrated\nout-of-distribution scenarios, suggesting underexplored reasoning capabilities\nbeyond simple pattern memorization.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10840","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recently, time series foundation models have shown promising zero-shot forecasting performance on time series from a wide range of domains. However, it remains unclear whether their success stems from a true understanding of temporal dynamics or simply from memorizing the training data. While implicit reasoning in language models has been studied, similar evaluations for time series models have been largely unexplored. This work takes an initial step toward assessing the reasoning abilities of deep time series forecasting models. We find that certain linear, MLP-based, and patch-based Transformer models generalize effectively in systematically orchestrated out-of-distribution scenarios, suggesting underexplored reasoning capabilities beyond simple pattern memorization.
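The abstract does not spell out the evaluation protocol, so the following is a minimal sketch of what a "systematically orchestrated out-of-distribution scenario" might look like in practice: train a forecaster on one family of temporal dynamics, then test it zero-shot on a systematically shifted family it never saw. The sinusoid generator, frequency ranges, and direct least-squares forecaster below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Illustrative sketch of an out-of-distribution (OOD) forecasting
# evaluation. All task parameters here are assumptions, not the
# paper's actual experimental design.

rng = np.random.default_rng(0)
CONTEXT, HORIZON = 96, 24

def make_series(freq, n=CONTEXT + HORIZON):
    # Noisy sinusoid with a given frequency (stand-in "temporal dynamics").
    t = np.arange(n)
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(n)

def make_split(freqs, n_series=200):
    # Sample series from a frequency family; split into context and target.
    X, Y = [], []
    for _ in range(n_series):
        s = make_series(rng.choice(freqs))
        X.append(s[:CONTEXT])
        Y.append(s[CONTEXT:])
    return np.array(X), np.array(Y)

# Systematic shift: training frequencies vs. a disjoint, held-out range,
# so the OOD test exposes dynamics never seen during training.
train_freqs = np.linspace(0.02, 0.08, 7)
ood_freqs = np.linspace(0.10, 0.14, 5)

X_tr, Y_tr = make_split(train_freqs)
X_id, Y_id = make_split(train_freqs)  # in-distribution test
X_od, Y_od = make_split(ood_freqs)    # out-of-distribution test

# A simple direct multi-step linear forecaster (least squares), standing
# in for the linear models the abstract mentions.
W, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)

def mae(X, Y):
    return np.abs(X @ W - Y).mean()

print(f"in-distribution MAE:     {mae(X_id, Y_id):.3f}")
print(f"out-of-distribution MAE: {mae(X_od, Y_od):.3f}")
```

Under this kind of protocol, a model that merely memorizes training patterns degrades sharply on the held-out dynamics, whereas a model that has captured the underlying temporal structure keeps the two error figures close, which is the distinction the paper sets out to measure.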