Seasonal Decomposition-Enhanced Deep Learning Architecture for Probabilistic Forecasting
Keyan Jin, Francisco Javier Blanco-Encomienda

Time series decomposition is widely used as a general preprocessing step in time series forecasting. However, since the future is unknown, this preprocessing practice is of limited use in realistic forecasting, especially in real-time scenarios. In this paper, we propose a framework with both time series decomposition and probabilistic forecasting capabilities. Unlike models that rely on pre-decomposed series, our framework decomposes the series into trend and seasonal components in real time, enabling end-to-end forecasting. We apply the framework to four state-of-the-art deep time series models and test their performance on four synthetic datasets and the WTI oil price dataset. The results show that the seasonal decomposition-based framework significantly improves the point and probabilistic forecasting accuracy of the original models.

Journal of Forecasting 45(2): 880–891. DOI: 10.1002/for.70065
A Novel Approach to Regionalize Country-Level GDP Projections
Riccardo Curtale, Matteo Schiavone, Filipe Batista e Silva

Socioeconomic projections are policy support tools that are often limited to country-level data, making them insufficient for policy areas that require a more nuanced, sub-national perspective. For granular geographical analyses in a multicountry setting, international organizations often rely on straightforward regionalization techniques, such as assuming that observed regional shares remain constant over the projection period. This approach fails to capture varying economic performance across regions, making the resulting regional projections unrealistic. In this paper, we propose a novel regionalization method for GDP projections based on (1) changes in population and (2) econometrically estimated factors of regional GDP per capita growth. We test our approach on the EU27 Member States for the period 2000–2019 by downscaling observed GDP from the national to the regional (NUTS3) level. Results show that our model substantially outperforms alternative regionalization techniques, improving skill scores by up to 18%. The performance of the proposed methodology increases for longer estimation and projection periods. Our regionalization approach demonstrates the benefit of incorporating demographic dynamics and regional growth factors when regionalizing national GDP values, especially for downscaling long-term GDP projections.

Journal of Forecasting 45(2): 867–879. DOI: 10.1002/for.70052
A Two-Stage NLP-Driven Framework for Interval-Valued Carbon Price Prediction Using Sentiment Analysis and Error Correction
Di Sha, Xianyi Zeng, Arne Johannssen, Ruolin Wang, Kim Phuc Tran

Accurate predictions of carbon prices are essential for the efficient administration and stable operation of carbon markets. Previous studies have mostly focused on point or interval predictions based on point-valued data; these approaches insufficiently capture the full extent of market volatility. In contrast, interval-valued data, which contain maximum and minimum values, enable more meaningful interval-valued predictions and thus provide a more comprehensive assessment of uncertainty. As previous research in this direction is limited, this study proposes a two-stage framework for interval-valued prediction using interval-valued data. In the initial prediction stage, natural language processing (NLP) techniques are employed to analyze textual data from social media and assess market sentiment. These unstructured data (UD) are then combined with structured data (SD) and fed into a convolutional neural network, bidirectional long short-term memory, and attention (CNN-BiLSTM-Attention) model to generate an initial prediction. In the error correction (EC) stage, deviations between the actual and initially predicted values are calculated; these error sequences are then predicted and incorporated into the initial prediction to refine the final results. Trading simulations indicate that the proposed SD-UD-CNN-BiLSTM-Attention-EC model can reduce trading risk and improve trading returns.

Journal of Forecasting 45(2): 806–818. DOI: 10.1002/for.70059
Investigation of Social Media Metrics With Respect to Demand Modeling for Promotional Products
Yvonne Badulescu, Fernan Cañas, Ari-Pekka Hameri, Naoufel Cheikhrouhou

Social media (SM) has revolutionized the way companies connect with customers, enabling more personalized marketing strategies and enhanced engagement. With platforms like Facebook offering detailed user data, businesses can create more targeted advertising campaigns. This paper proposes an approach to categorizing SM variables based on their SM marketing objectives, with respect to demand modeling for promotional products, which is particularly challenging due to limited historical data. A taxonomy of the Facebook marketing metrics that drive consumer behavior with respect to product demand is developed. The study then explores how these groups of SM marketing metrics affect short-term demand modeling for promotional products in a real case study, and finds that paid Facebook metrics, which are generated from paid advertising efforts on the platform, are the best predictors of demand for promotional products.

Journal of Forecasting 45(2): 850–866. DOI: 10.1002/for.70064
Validating Explainer Methods: A Functionally Grounded Approach for Numerical Forecasting
Felix Haag, Konstantin Hopf, Thorsten Staake

Forecasting systems have a long tradition of providing outputs accompanied by explanations. While the vast majority of such explanations rely on inherently interpretable linear statistical models, research has put forth eXplainable Artificial Intelligence (XAI) methods to improve the comprehensibility of nonlinear machine learning models. As explanations constitute important building blocks of forecasting systems, the validation of explainer methods is an essential part of system selection, parameterization, and adoption. Current research on explainer method assessment focuses on metrics for classification rather than numerical forecasting, and predominantly assesses explanation quality through time-consuming, costly, and subjective studies involving humans. Given that the functional validation of explanations is of core interest to forecasting research, our paper makes three contributions. First, we establish an approach for functionally grounded validation of explainer methods for numerical forecasting. Second, we propose computational rules for the metrics consistency, stability, and faithfulness. Third, we demonstrate our approach on the forecasting case of electricity demand estimation for energy benchmarks and compare a linear statistical approach with the state-of-the-art XAI methods SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Explainable Boosting Machine (EBM). Our work allows research and practice to validate and compare the quality of explainer methods on a functionally grounded level.

Journal of Forecasting 45(2): 819–836. DOI: 10.1002/for.70060
A Novel Approach to Forecasting After Large Forecast Errors
Jennifer L. Castle, Jurgen A. Doornik, David F. Hendry

A sequence of increasingly large same-sign 1-step-ahead forecast errors is most likely due to a sudden, unexpected shift. Large forecast errors can be expensive, but they also contain valuable information. Impulse indicators acting as intercept corrections to set forecasts back on track can be quickly tested to correct for outliers, a location shift, or a broken trend, greatly improving forecast accuracy. The analysis is applied to forecasting the UK's annual consumer price inflation, which rose rapidly from mid-2021 to over 9% in 2022 after a series of essentially unpredictable shocks led to large forecast errors by the Bank of England.

Journal of Forecasting 45(2): 837–849. DOI: 10.1002/for.70062
Scaling-Aware Rating of Poisson-Limited Demand Forecasts
Malte C. Tichy, Illia Babounikau, Nikolas Wolke, Stefan Ulbrich, Michael Feindt

Forecast quality should be assessed in the context of what is possible in theory and what is reasonable to expect in practice. Often, one can identify an approximate upper bound to a probabilistic forecast's sharpness, which sets a lower, not necessarily achievable, limit to error metrics. In retail forecasting, a simple but often unconquerable sharpness limit is given by the Poisson distribution. When evaluating forecasts using traditional metrics such as mean absolute error, it is hard to judge whether a given achieved value reflects unavoidable Poisson noise or truly indicates an overdispersed prediction model. Moreover, every evaluation metric suffers from precision scaling: the metric's value is mostly determined by the selling rate and the resulting rate-dependent Poisson noise, and only secondarily by the forecast quality. Comparing two groups of forecasted products thus often yields "the slow movers are performing worse than the fast movers" or vice versa, which we call the naïve scaling trap. To distill the intrinsic quality of a forecast, we stratify predictions into buckets of approximately equal predicted values and evaluate metrics separately per bucket. By comparing the achieved value per bucket to benchmarks defined by the theoretical expectation value of the metric, we obtain an intuitive visualization of forecast quality. This representation can be summarized by a single rating that makes forecast quality comparable across different products or even industries. The resulting scaling-aware forecast rating is applied to forecasting models used on the M5 competition dataset as well as to real-life forecasts provided by Blue Yonder's Demand Edge for Retail solution for grocery products in Sainsbury's supermarkets in the United Kingdom. The results permit a clear interpretation and high-level understanding of model quality by nonexperts.

Journal of Forecasting 45(2): 787–805. DOI: 10.1002/for.70055
Forecasting With Machine Learning Shadow-Rate VARs
Michael Grammatikopoulos

Interest rates are fundamental to macroeconomic modeling. Recent studies integrate the effective lower bound (ELB) into vector autoregressions (VARs). This paper studies shadow-rate VARs, which treat interest rates near the ELB as latent variables whose shadow-rate values must be estimated. The study explores machine learning models, such as the Bayesian LASSO, and extends the analysis to homoscedastic and stochastic volatility shadow-rate VARs. It also examines the integration of the shadow rate with vintage-specific long-run assumptions derived from the Survey of Professional Forecasters (SPF). The paper analyzes 16 shadow-rate VARs with 20 US variables, using real-time data from 2005 to 2019, and assesses their predictive accuracy for both point and density forecasts. The findings indicate that shadow-rate models can enhance predictive accuracy at both short and longer horizons across macroeconomic and financial variables. These models could be of use to central banks and policymakers.

Journal of Forecasting 45(2): 770–786. DOI: 10.1002/for.70041
Threshold MIDAS Forecasting of Canadian Inflation Rate
Chaoyi Chen, Yiguo Sun, Yao Rao

We propose several threshold mixed data sampling (TMIDAS) autoregressive models to forecast the Canadian inflation rate using predictors observed at different frequencies. These models take two low-frequency variables and a high-frequency index as threshold variables. We compare our TMIDAS models to commonly used benchmark models, evaluating their in-sample and out-of-sample forecasts. Our results demonstrate the good forecasting performance of the TMIDAS models. In particular, the in-sample results highlight that the TMIDAS model using the high-frequency index as the threshold variable outperforms the other models. Using unconditional superior predictive ability (USPA) and conditional superior predictive ability (CSPA) tests for out-of-sample evaluation, we find that no single model consistently outperforms the others, although at least one of our TMIDAS models remains competitive in most cases.

Journal of Forecasting 45(2): 749–769. DOI: 10.1002/for.70040