A. Belke, Dominik Kronen, Thomas Osowski (2015-12-21). "Planned Fiscal Consolidations and Growth Forecast Errors – New Panel Evidence on Fiscal Multipliers". DOI: 10.2139/ssrn.2706425.
This paper analyzes the effect of planned fiscal consolidation on GDP growth forecast errors for the years 2010–2013 using cross-section analyses and fixed-effects estimations. Our main findings are that fiscal multipliers were underestimated in most instances for 2011, while we find little to no evidence of underestimation for 2010 and, especially, for the later years 2012/13. Since the underestimation of fiscal multipliers seems to have decreased over time, this may indicate learning effects among forecasters. However, the implications for fiscal policy should be considered with caution: a mis-assessed multiplier does not establish that austerity is the wrong fiscal approach; it only suggests that multipliers were assessed too optimistically for 2011.
V. Corradi, M. Silvapulle, Norman R. Swanson (2015-12-06). "Testing for Jumps and Jump Intensity Path Dependence". DOI: 10.2139/ssrn.2998255.
In this paper, we develop a “jump test” for the null hypothesis that the probability of a jump is zero, building on earlier work by Ait-Sahalia (2002). The test is based on realized third moments and uses observations over an increasing time span. It offers an alternative to standard finite-time-span tests and is designed to detect jumps in the data-generating process rather than realized jumps over a fixed time span. More specifically, we make two contributions. First, we introduce our largely model-free jump test for the null hypothesis of zero jump intensity. Second, under the maintained assumption of strictly positive jump intensity, we introduce two “self-excitement” tests for the null of constant jump intensity against the alternative of path-dependent intensity. These tests have power against autocorrelation in the jump component and are direct tests for Hawkes diffusions (see, e.g., Ait-Sahalia et al. (2015)). The limiting distributions of the proposed statistics are analyzed via a double asymptotic scheme in which the time span goes to infinity and the discrete sampling interval approaches zero; the limiting distributions are normal and half-normal. The results from a Monte Carlo study indicate that the tests have reasonable finite-sample properties. An empirical illustration based on 11 stock price series indicates the prevalence of jumps and self-excitation.
A. Galvão (2015-11-25). "Data Revisions and DSGE Models". DOI: 10.2139/ssrn.2545388.
The typical estimation of DSGE models requires data on a set of macroeconomic aggregates, such as output, consumption and investment, which are subject to data revisions. The conventional approach estimates on the time series currently available for these aggregates, implying that the most recent observations are still subject to many rounds of revision. This paper proposes a release-based approach that uses revised data for all observations to estimate DSGE models while keeping the model useful for real-time forecasting. The new approach accounts for data uncertainty when predicting future values of macroeconomic variables subject to revision, thus providing policy-makers and professional forecasters with both backcasts and forecasts. Applying the approach to a medium-sized DSGE model improves the accuracy of density forecasts, particularly the coverage of predictive intervals, for US real macro variables. The application also shows that the estimated relative importance of business cycle sources varies with data maturity.
Achal Bassamboo, Ruomeng Cui, Antonio Moreno (2015-10-25). "Wisdom of Crowds in Operations: Forecasting Using Prediction Markets". DOI: 10.2139/ssrn.2679663.
Prediction is an important activity in various business processes, but it becomes difficult when historical information is not available, such as when forecasting demand for a new product. One approach that can be applied in such situations is to crowdsource opinions from employees and the public. Our paper studies the application of crowd forecasting in operations management. In particular, we study how efficient crowds are at estimating parameters important for the operational decisions that companies make, including sales forecasts, commodity price forecasts, and predictions of popular product features. We focus on a widely adopted class of crowd-based forecasting tools referred to as prediction markets: virtual markets created to aggregate crowds' opinions that operate in a way similar to stock markets. We partnered with Cultivate Labs, a leading company that provides a prediction market engine, to test the forecast accuracy of prediction markets using the firm's data from its public markets and several corporate prediction markets, including a chemical company, a retail company, and an automotive company. Using information extracted from employees and public crowds, we show that prediction markets produce well-calibrated forecasts. In addition, we run a field experiment to study the conditions under which groups work well. Specifically, we explore how group size affects forecast accuracy and find that large groups (e.g., 18 participants) perform substantially better than smaller groups (e.g., 8 participants), highlighting the importance of group size and quantifying the sizes needed to produce a good forecast using such mechanisms.
J. Caldeira, G. V. Moura, F. Nogales, Andre A. P. Santos (2015-09-22). "Combining Multivariate Volatility Forecasts: An Economic-Based Approach". DOI: 10.2139/ssrn.2664128.
We devise a novel approach to combining predictions of high-dimensional conditional covariance matrices using economic criteria based on portfolio selection. The combination scheme takes into account not only the portfolio objective function but also the portfolio characteristics in order to define the mixing weights. Three important advantages are that (i) it does not require a proxy for the latent conditional covariance matrix, (ii) it does not require optimization of the combination weights, and (iii) it can be calibrated to adjust the influence of the best-performing models. An empirical application involving a data set of 50 assets over a 10-year span shows that the proposed economic-based combinations of multivariate volatility forecasts lead to mean–variance portfolios with higher risk-adjusted performance in terms of the Sharpe ratio, as well as to minimum-variance portfolios with lower out-of-sample risk, relative to a number of benchmark specifications.
Dimitris Korobilis (2015-05-25). "Quantile Forecasts of Inflation Under Model Uncertainty". DOI: 10.2139/ssrn.2610253.
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging into quantile regressions, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile-regression BMA methods can help reduce uncertainty about future inflation outcomes by providing superior predictive densities compared to mean regression models with and without BMA.
P. Bianchi, B. Deschamps, Khurshid M. Kiani (2015-05-01). "Fiscal Balance and Current Account in Professional Forecasts". DOI: 10.1111/roie.12165.
This paper investigates the relationship between financial institutions' expectations of the current account and the fiscal balance. Using professional macroeconomic forecasts for the G-7 countries, we find a positive relationship between forecasts of the cyclically adjusted fiscal deficit and forecasts of the current account deficit, indicating that professional forecasts embody the links implied by the twin deficits hypothesis. In assessing the relationship between forecasts of the fiscal deficit and the current account, we find that forecasters correctly distinguish between the effects of discretionary fiscal policy and automatic stabilizers.
Dongkoo Kim, Tae-hwan Rhee, K. Ryu, Changmock Shin (2015-03-15). "Crowdsourcing of Economic Forecast – Combination of Forecasts using Bayesian Model Averaging". DOI: 10.2139/ssrn.2618394.
Economic forecasts are essential in our daily lives, which is why many research institutions periodically produce and publish forecasts of the main economic indicators. We ask (1) whether we can consistently obtain a better prediction by combining multiple forecasts of the same variable and (2), if we can, what the optimal method of combination is. We linearly combine multiple linear combinations of existing forecasts to form a new forecast ('combination of combinations'), with the weights given by Bayesian model averaging. For forecasts of Germany's real GDP growth rate, this new forecast dominates any single forecast in terms of root-mean-square prediction error.
Marcin Kolasa, Michał Rubaszek (2014-12-15). "How Frequently Should We Re-Estimate DSGE Models?". DOI: 10.2139/ssrn.2646625.
A common practice in policy-making institutions that use DSGE models for forecasting is to re-estimate them only occasionally rather than every forecasting round. In this paper we ask how this practice affects the accuracy of DSGE model-based forecasts. To this end, we use a canonical medium-sized New Keynesian model and compare how its quarterly real-time forecasts for the US economy vary with the interval between consecutive re-estimations. We find that updating the model parameters only once a year usually does not lead to any significant deterioration in the accuracy of point forecasts. On the other hand, there are some gains from increasing the frequency of re-estimation if one is interested in the quality of density forecasts.
Clément Marsilli (2014-11-01). "Variable Selection in Predictive MIDAS Models". DOI: 10.2139/ssrn.2531339.
In short-term forecasting, it is essential to take into account all available information on the current state of economic activity. Yet the fact that different time series are sampled at different frequencies prevents an efficient use of the available data. In this respect, the Mixed-Data Sampling (MIDAS) model has proved to outperform existing tools by combining data series of different frequencies. However, major issues remain regarding the choice of explanatory variables. The paper first addresses this point by developing MIDAS-based dimension-reduction techniques and by introducing two novel approaches based on either penalized variable selection or Bayesian stochastic search variable selection. These features integrate a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. The techniques are then assessed with regard to their ability to forecast US economic growth over the period 2000-2013 using daily and monthly data jointly. Our model succeeds in identifying leading indicators and constructing an objective variable selection procedure with broad applicability.