Age-specific life-table death counts observed over time are examples of densities. Nonnegativity and summability are constraints that sometimes require modifications of standard linear statistical methods. The centered log-ratio transformation provides a mapping from this constrained space to a less constrained one. With a time series of densities, recent data are more relevant to forecasts than data from the distant past. We introduce a weighted compositional functional data analysis for modeling and forecasting life-table death counts. Our extension assigns higher weights to more recent data and provides a modeling scheme that is easily adapted to constraints. We illustrate our method using age-specific Swedish life-table death counts from 1751 to 2020. Compared with its unweighted counterpart, the weighted compositional data analytic method improves short-term point and interval forecast accuracy. The improved forecast accuracy could help actuaries improve the pricing of annuities and the setting of reserves.
{"title":"Weighted compositional functional data analysis for modeling and forecasting life-table death counts","authors":"Han Lin Shang, Steven Haberman","doi":"10.1002/for.3171","DOIUrl":"10.1002/for.3171","url":null,"abstract":"<p>Age-specific life-table death counts observed over time are examples of densities. Nonnegativity and summability are constraints that sometimes require modifications of standard linear statistical methods. The centered log-ratio transformation presents a mapping from a constrained to a less constrained space. With a time series of densities, forecasts are more relevant to the recent data than the data from the distant past. We introduce a weighted compositional functional data analysis for modeling and forecasting life-table death counts. Our extension assigns higher weights to more recent data and provides a modeling scheme easily adapted for constraints. We illustrate our method using age-specific Swedish life-table death counts from 1751 to 2020. Compared with their unweighted counterparts, the weighted compositional data analytic method improves short-term point and interval forecast accuracies. The improved forecast accuracy could help actuaries improve the pricing of annuities and setting of reserves.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"43 8","pages":"3051-3071"},"PeriodicalIF":3.4,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/for.3171","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How do professionals forecast in uncertain times, when the relationships between variables that held in the past may no longer be useful for forecasting the future? For inflation forecasting, we answer this question by measuring survey respondents' adherence to their pre-COVID-19 Phillips curve models during the pandemic. We also ask whether professionals ought to have put their trust in their Phillips curve models over the COVID-19 period. We address these questions while allowing for heterogeneity in respondents' forecasts and in their perceptions of the Phillips curve relationship.
{"title":"Survey respondents' inflation forecasts and the COVID period","authors":"Michael P. Clements","doi":"10.1002/for.3169","DOIUrl":"https://doi.org/10.1002/for.3169","url":null,"abstract":"<p>How do professionals forecast in uncertain times, when the relationships between variables that held in the past may no longer be useful for forecasting the future? For inflation forecasting, we answer this question by measuring survey respondents' adherence to their pre-COVID-19 Phillips curve models during the pandemic. We also ask whether professionals <i>ought</i> to have put their trust in their Phillips curve models over the COVID-19 period. We address these questions allowing for heterogeneity in respondents' forecasts and in their perceptions of the Phillips curve relationship.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"43 8","pages":"3035-3050"},"PeriodicalIF":3.4,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/for.3169","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Widely used volatility forecasting methods are usually based on low-frequency time series models. Although some of them employ high-frequency observations, these intraday data are often summarized into low-frequency point statistics, for example, daily realized measures, before being incorporated into a forecasting model. This paper contributes to the volatility forecasting literature by instead predicting the next-period intraday volatility curve via a functional time series forecasting approach. Asymptotic theory related to the estimation of latent volatility curves via functional principal component analysis is formally established, laying a solid theoretical foundation for the proposed forecasting method. In contrast with nonfunctional methods, the proposed functional approach fully exploits the rich intraday information and hence leads to more accurate volatility forecasts. This is confirmed by extensive comparisons between the proposed method and widely used nonfunctional methods in both Monte Carlo simulations and an empirical study on a number of stocks and equity indices from the Chinese market.
{"title":"Functional volatility forecasting","authors":"Yingwen Tan, Zhensi Tan, Yinfen Tang, Zhiyuan Zhang","doi":"10.1002/for.3170","DOIUrl":"https://doi.org/10.1002/for.3170","url":null,"abstract":"<p>Widely used volatility forecasting methods are usually based on low-frequency time series models. Although some of them employ high-frequency observations, these intraday data are often summarized into low-frequency <i>point</i> statistics, for example, daily realized measures, before being incorporated into a forecasting model. This paper contributes to the volatility forecasting literature by instead predicting the next-period intraday volatility curve via a <i>functional</i> time series forecasting approach. Asymptotic theory related to the estimation of latent volatility curves via functional principal analysis is formally established, laying a solid theoretical foundation of the proposed forecasting method. In contrast with nonfunctional methods, the proposed functional approach fully exploits the rich intraday information and hence leads to more accurate volatility forecasts. This is confirmed by extensive comparisons between the proposed method and those widely used nonfunctional methods in both Monte Carlo simulations and an empirical study on a number of stocks and equity indices from the Chinese market.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"43 8","pages":"3009-3034"},"PeriodicalIF":3.4,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Market participants who need to trade a significant number of securities within a given period can face high transaction costs. In this paper, we document how improvements in intraday liquidity forecasts can help reduce total transaction costs. We compare various approaches for forecasting intraday transaction costs, including autoregressive and machine learning models, using comprehensive ultra-high-frequency limit order book data for a sample of NYSE stocks from 2002 to 2012. Our results indicate that improved liquidity forecasts can significantly decrease total transaction costs. Simple models capturing seasonality in market liquidity tend to outperform alternative models.
{"title":"Reducing transaction costs using intraday forecasts of limit order book slopes","authors":"Chahid Ahabchane, Tolga Cenesizoglu, Gunnar Grass, Sanjay Dominik Jena","doi":"10.1002/for.3164","DOIUrl":"10.1002/for.3164","url":null,"abstract":"<p>Market participants who need to trade a significant number of securities within a given period can face high transaction costs. In this paper, we document how improvements in intraday liquidity forecasts can help reduce total transaction costs. We compare various approaches for forecasting intraday transaction costs, including autoregressive and machine learning models, using comprehensive ultra-high-frequency limit order book data for a sample of NYSE stocks from 2002 to 2012. Our results indicate that improved liquidity forecasts can significantly decrease total transaction costs. Simple models capturing seasonality in market liquidity tend to outperform alternative models.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"43 8","pages":"2982-3008"},"PeriodicalIF":3.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/for.3164","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141352674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a simple yet effective modification to bootstrap aggregation (bagging) and boosting techniques, aimed at addressing substantial errors arising from parameter estimation, particularly prevalent in macroeconomic and financial forecasting. We propose “egalitarian” bagging and boosting algorithms, where forecasts are derived through an equally weighted combination scheme following variable selection procedures, rather than relying on estimated model parameters. Our empirical work focuses on volatility forecasting, where our approach is applied to a hierarchical model that aggregates a diverse array of volatility components over different time intervals. Significant improvements in predictive accuracy are observed when conventional bagging and boosting approaches are replaced by their “egalitarian” counterparts, across a range of assets and forecast horizons. Notably, these improvements are most pronounced during periods of financial market turmoil, particularly for medium- to long-term predictions. In contrast to boosting, which often yields a sparse model specification, bagging effectively leverages a diverse range of volatility cascades to capture rich information without succumbing to increasing estimation errors. The proposed “egalitarian” algorithm plays a crucial role in facilitating this process, contributing to the superior performance of bagging over other competing approaches.
{"title":"Harnessing volatility cascades with ensemble learning","authors":"Mingmian Cheng","doi":"10.1002/for.3166","DOIUrl":"https://doi.org/10.1002/for.3166","url":null,"abstract":"<p>This paper introduces a simple yet effective modification to bootstrap aggregation (bagging) and boosting techniques, aimed at addressing substantial errors arising from parameter estimation, particularly prevalent in macroeconomic and financial forecasting. We propose “egalitarian” bagging and boosting algorithms, where forecasts are derived through an equally weighted combination scheme following variable selection procedures, rather than relying on estimated model parameters. Our empirical work focuses on volatility forecasting, where our approach is applied to a hierarchical model that aggregates a diverse array of volatility components over different time intervals. Significant improvements in predictive accuracy are observed when conventional bagging and boosting approaches are replaced by their “egalitarian” counterparts, across a range of assets and forecast horizons. Notably, these improvements are most pronounced during periods of financial market turmoil, particularly for medium- to long-term predictions. In contrast to boosting, which often yields a sparse model specification, bagging effectively leverages a diverse range of volatility cascades to capture rich information without succumbing to increasing estimation errors. The proposed “egalitarian” algorithm plays a crucial role in facilitating this process, contributing to the superior performance of bagging over other competing approaches.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"43 8","pages":"2954-2981"},"PeriodicalIF":3.4,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a novel kernel-based generalized random interval multilayer perceptron (KG-iMLP) method for predicting high-volatility interval-valued returns of crude oil. The KG-iMLP model is constructed by utilizing the