Title: The Vector Error Correction Index Model: Representation, Estimation and Identification
Authors: Gianluca Cubadda, Marco Mazzali
Econometrics Journal, doi:10.1093/ectj/utad023, published 2023-10-23

Abstract: This paper extends the multivariate index autoregressive model of Reinsel (1983) to cointegrated time series of order (1,1). In this new model, the Vector Error-Correction Index Model (VECIM), the first differences of the series are driven by a set of linear combinations of the variables, the indexes. When the indexes are significantly fewer than the variables, the VECIM achieves a substantial dimension reduction relative to the Vector Error Correction Model. We show that the VECIM allows one to decompose the reduced-form errors into sets of common and uncommon shocks, and that the former can be further decomposed into permanent and transitory shocks. Moreover, we offer a switching algorithm for optimal estimation of the VECIM. Finally, we document the practical value of the proposed approach by both simulations and an empirical application, in which we search for the shocks that drive aggregate fluctuations at different frequency bands in the US.
Title: Double Robustness for Complier Parameters and a Semiparametric Test for Complier Characteristics
Authors: Rahul Singh, Liyang Sun
Econometrics Journal, doi:10.1093/ectj/utad019, published 2023-10-09

Summary: We propose a semi-parametric test to evaluate (a) whether different instruments induce subpopulations of compliers with the same observable characteristics, on average; and (b) whether compliers have observable characteristics that are, on average, the same as those of the full population, the treated subpopulation, or the untreated subpopulation. The test is a flexible robustness check for the external validity of instruments. To justify the test, we characterise the doubly robust moment for Abadie's class of complier parameters, and we analyse a machine-learning update to weighting that we call the automatic $\kappa$ weight. We use the test to reinterpret the different local average treatment effect estimates that Angrist and Evans obtained using different instrumental variables.
Title: Revealing priors from posteriors with an application to inflation forecasting in the UK
Authors: Masako Ikefuji, Jan R Magnus, Takashi Yamagata
Econometrics Journal, doi:10.1093/ectj/utad021, published 2023-10-03

Abstract: A Bayesian typically uses data and a prior to produce a posterior. We shall follow the opposite route, using data and the posterior information to reveal the prior. We then apply this theory to inflation forecasts by the Bank of England and the National Institute of Economic and Social Research, in an attempt to gain some insight into the prior beliefs of the policy makers in these two institutions, especially under the uncertainties surrounding the Brexit referendum, the Covid-19 lockdown, and the Russian invasion of Ukraine.
Title: Penalized quasi-likelihood estimation and model selection with parameters on the boundary of the parameter space
Authors: Heino Bohn Nielsen, Anders Rahbek
Econometrics Journal, doi:10.1093/ectj/utad022, published 2023-10-01

Abstract: We consider penalized likelihood-based estimation and model selection for econometric time series models that allow for non-negativity (boundary) constraints on some or all of the parameters. We establish that joint model selection and estimation yields estimators with standard asymptotic Gaussian distributions. This contrasts with non-penalized estimation, which, as is well known, leads to non-standard asymptotic distributions that depend on the unknown number of parameters on the boundary of the parameter space. We apply our results to the rich class of autoregressive conditional heteroskedastic (ARCH) models for time-varying volatility. For ARCH models, simulations show that penalized estimation and model selection work surprisingly well, even for models with a large number of parameters. An empirical illustration with stock-market return data shows the ability of penalized estimation to select ARCH models that fit the empirical autocorrelation function well, and confirms the stylized fact of long memory in such financial time series data.
Title: Identifying the elasticity of substitution with biased technical change - a structural panel GMM estimator
Authors: Thomas von Brasch, Arvid Raknerud, Trond C Vigtel
Econometrics Journal, doi:10.1093/ectj/utad020, published 2023-09-28

Abstract: This paper provides a structural panel GMM (P-GMM) estimator of the elasticity of substitution between capital and labour that does not depend on external instruments and can be applied in the presence of biased technical change. We identify the conditions under which the P-GMM estimator is consistent and compare it to a fixed effects estimator. In a Monte Carlo study, we find that the P-GMM estimator is nearly unbiased provided the number of time periods (T) is not too small. We show analytically how the small-T bias is related to metrics of weak identification. In an application to manufacturing firms in Norway, we estimate the elasticity of substitution to be 1.9 using the P-GMM estimator and 1.0 using the fixed effects estimator. Neglecting simultaneity may thus lead to the conclusion that capital and labour are complements or can be described by Cobb-Douglas technology when, in fact, they are substitutes.
Title: Estimation of Large Covariance Matrices with Mixed Factor Structures
Authors: Runyu Dai, Yoshimasa Uematsu, Yasumasa Matsuda
Econometrics Journal, doi:10.1093/ectj/utad018, published 2023-09-27

Abstract: We extend the Principal Orthogonal complEment Thresholding (POET) framework of Fan, Liao, and Mincheva (2013) to estimate large covariance matrices with a "mixed" structure of observable and unobservable strong/weak factors, and we call this method the extended POET (ePOET). In particular, the weak factor structure allows for covariance-matrix eigenvalues that diverge much more slowly than in the strong-factor case, a pattern frequently observed in real data. Under mild conditions, we derive the uniform consistency of the proposed estimator both with and without observable factors. Furthermore, simulation studies show that the ePOET achieves good finite-sample performance whether the data have a strong, weak, or mixed factor structure. Finally, we conduct empirical studies to demonstrate the practical usefulness of the ePOET.
{"title":"Royal Economic Society Annual Conference 2022 Special Issue on The New Difference-in-Differences","authors":"","doi":"10.1093/ectj/utad017","DOIUrl":"https://doi.org/10.1093/ectj/utad017","url":null,"abstract":"","PeriodicalId":50555,"journal":{"name":"Econometrics Journal","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135389722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Simple approaches to nonlinear difference-in-differences with panel data
Authors: Jeffrey M Wooldridge
Econometrics Journal, doi:10.1093/ectj/utad016, published 2023-08-24

Summary: I derive simple, flexible strategies for difference-in-differences settings where the nature of the response variable may warrant a nonlinear model. I allow for general staggered interventions, with and without covariates. Under an index version of parallel trends, I show that average treatment effects on the treated (ATTs) are identified for each cohort and calendar time period in which a cohort was subjected to the intervention. The pooled quasi-maximum likelihood estimators in the linear exponential family extend pooled ordinary least squares estimation of linear models. By using the conditional mean associated with the canonical link function, imputation and pooling across the entire sample produce identical estimates. Generally, pooled estimation results in very simple computation of the ATTs and their standard errors. The leading cases are a logit functional form for binary and fractional outcomes, combined with the Bernoulli quasi-log-likelihood (QLL), and an exponential mean combined with the Poisson QLL.
Title: Testing for parameter change epochs in GARCH time series
Authors: Stefan Richter, Weining Wang, Wei Biao Wu
Econometrics Journal, doi:10.1093/ectj/utad006, published 2023-02-01

Summary: We develop a uniform test for detecting and dating integrated or mildly explosive behaviour of a strictly stationary generalized autoregressive conditional heteroskedasticity (GARCH) process. Namely, we test the null hypothesis of a globally stable GARCH process with constant parameters against the alternative that there is an 'abnormal' period with changed parameter values. During this period, the parameter change may lead to integrated or mildly explosive behaviour of the volatility process. Both the magnitude and the timing of the breaks are assumed unknown. We develop a double-supremum test for the existence of breaks, and then provide an algorithm to identify the periods of change. Our theoretical results hold under mild moment assumptions on the innovations of the GARCH process. Technically, existing properties of quasi-maximum likelihood estimation in the GARCH model need to be re-established to hold uniformly over all possible periods of change. The key results involve a uniform weak Bahadur representation for the estimated parameters, which leads to weak convergence of the test statistic to the supremum of a Gaussian process. Simulations in the Appendix show that the test has good size and power for reasonably long time series. We apply the test to conventional early-warning indicators of both the financial market and a representative of the emerging Fintech market, namely Bitcoin returns.
Title: Disentangling the effect of measures, variants, and vaccines on SARS-CoV-2 infections in England: A dynamic intensity model
Authors: Otilia Boldea, Adriana Cornea-Madeira, João Madeira
Econometrics Journal, doi:10.1093/ectj/utad004, published 2023-01-23

Summary: In this paper, we estimate the path of daily SARS-CoV-2 infections in England from the beginning of the pandemic until the end of 2021. We employ a dynamic intensity model in which the mean intensity, conditional on the past, depends on both past intensity and past realized infections. The model parameters are time-varying, and we employ a multiplicative specification along with logistic transition functions to disentangle the time-varying effects of nonpharmaceutical policy interventions, of different variants, and of the (waning) protection of vaccines and boosters. Our results indicate that earlier interventions and vaccinations are key to containing an infection wave. We consider several scenarios that account for more infectious variants and different levels of vaccine/booster protection. These scenarios suggest that, as vaccine protection wanes, containing a new wave of infections and the associated increase in hospitalizations in the near future may require further booster campaigns and/or nonpharmaceutical interventions.