We investigate a test of equal predictive ability delineated in Giacomini and White (2006; Econometrica). In contrast to a claim made in the paper, we show that their test statistic need not be asymptotically Normal when a fixed window of observations is used to estimate model parameters. An example is provided in which, instead, the test statistic diverges with probability one under the null. Simulations reinforce our analytical results.
{"title":"Tests of Conditional Predictive Ability: A Comment","authors":"Michael W. McCracken","doi":"10.20955/wp.2019.018","DOIUrl":"https://doi.org/10.20955/wp.2019.018","url":null,"abstract":"We investigate a test of equal predictive ability delineated in Giacomini and White (2006; Econometrica). In contrast to a claim made in the paper, we show that their test statistic need not be asymptotically Normal when a fixed window of observations is used to estimate model parameters. An example is provided in which, instead, the test statistic diverges with probability one under the null. Simulations reinforce our analytical results.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130480430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper considers multiple changes in the factor loadings of a high dimensional factor model occurring at dates that are unknown but common to all subjects. Since the factors are unobservable, the problem is converted to estimating and testing structural changes in the second moments of the pseudo factors. We consider both joint and sequential estimation of the change points and show that the distance between the estimated and the true change points is O_p(1). We find that the estimation error contained in the estimated pseudo factors has no effect on the asymptotic properties of the estimated change points as the cross-sectional dimension N and the time dimension T go to infinity jointly. No N-T ratio condition is needed. We also propose (i) tests for the null of no change versus the alternative of l changes and (ii) tests for the null of l changes versus the alternative of l + 1 changes, and show that using estimated factors asymptotically has no effect on their limit distributions if √T/N → 0. These tests allow us to make inference on the presence and number of structural changes. Simulation results show good performance of the proposed procedure. In an application to US quarterly macroeconomic data, we detect two possible breaks.
{"title":"Estimating and Testing High Dimensional Factor Models With Multiple Structural Changes","authors":"B. Baltagi, C. Kao, Fa Wang","doi":"10.2139/ssrn.3531662","DOIUrl":"https://doi.org/10.2139/ssrn.3531662","url":null,"abstract":"This paper considers multiple changes in the factor loadings of a high dimensional factor model occurring at dates that are unknown but common to all subjects. Since the factors are unobservable, the problem is converted to estimating and testing structural changes in the second moments of the pseudo factors. We consider both joint and sequential estimation of the change points and show that the distance between the estimated and the true change points is Op(1). We find that the estimation error contained in the estimated pseudo factors has no effect on the asymptotic properties of the estimated change points as the cross-sectional dimension N and the time dimension T go to infinity jointly. No N-T ratio condition is needed. We also propose (i) tests for the null of no change versus the alternative of l changes (ii) tests for the null of l changes versus the alternative of l + 1 changes, and show that using estimated factors asymptotically has no effect on their limit distributions if √T/N→0. These tests allow us to make inference on the presence and number of structural changes. Simulation results show good performance of the proposed procedure. In an application to US quarterly macroeconomic data we detect two possible breaks.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"21 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120927700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bayes factor is introduced for the normal linear regression model that can be used to estimate, from the data, bounds on the treatment effect on the dependent variable. This is done while accounting for hidden omitted-variable bias due to an unobserved covariate and adjusting for any other observed covariates. The Bayes factor measures how much the data have changed the odds for some specified hidden bias versus no hidden bias, and is defined by a ratio of residual sums of squares raised to a power proportional to half the sample size. The estimated bounds for the treatment effect can therefore be determined by the values of the hidden bias parameter that attain non-small Bayes factors, while the Bayes factor itself can be computed quickly in closed form. The Bayes factor is illustrated through the analysis of real and simulated data sets. Software code for the method is provided as Supplemental Material (available upon request from the author).
{"title":"A Bayes Factor for Bounding the Treatment Effect to Address Hidden Bias in Linear Regression","authors":"G. Karabatsos","doi":"10.2139/ssrn.3128627","DOIUrl":"https://doi.org/10.2139/ssrn.3128627","url":null,"abstract":"A Bayes factor is introduced for the normal linear regression model, which can be used to estimate bounds of the treatment effect on the dependent variable, from the data. This is done while accounting for hidden omitted-variable bias, due to an unobserved covariate, and adjusting for any other observed covariates. The Bayes factor measures how much the data have changed the odds for some specified hidden bias versus no hidden bias, and is defined by a ratio of residual sums-of-squares raised to a power proportional to half the sample size. Therefore, the estimated bounds for the treatment effect can be determined by values of the hidden bias parameter that attain non-small Bayes factors, while the Bayes factor can be quickly computed in closed-form. The Bayes factor is illustrated through the analysis of real data and simulated data sets. Software code for the Bayes factor method is provided as Supplemental Material (available upon request of the author).","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115292843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessments of statistical significance are ubiquitous in damage quantification practice. Little, however, can be concluded from them about the magnitude of the true effect: statistical significance (against zero) supports the conclusion that the true effect is not zero, but nothing more; and lack of statistical significance does not support the conclusion that the true effect is zero. What, then, can be learned? In this note I describe an extension of significance testing, severe testing, which does allow valid conclusions about effect sizes after significance testing. It does so on an epistemically appealing, yet technically familiar (p-value) basis. It also makes a difference: loosely speaking, severe testing shifts the evidential weight from the centre of the confidence interval, as often assumed in prevailing practice, to its lower or upper edges.
{"title":"What Can Be Concluded from Statistical Significance? Severe Testing as an Appealing Extension to Our Standard Toolkit","authors":"Christopher Milde","doi":"10.2139/ssrn.3413808","DOIUrl":"https://doi.org/10.2139/ssrn.3413808","url":null,"abstract":"Assessments of statistical significance are ubiquitous in damage quantification practice. Little, however, can be concluded from them on the magnitude of the true effect: statistical significance (against zero) allows the conclusion that the true effect is not zero, but nothing else; and lack of statistical significance does not allow the conclusion that the true effect is zero. Thus, what can be learned? In this note I describe an extension to significance testing, SEVERE TESTING, which does allow valid conclusions on effect sizes after significance testing. It does so on an epistemically appealing, yet technically familiar (p-value) basis. It also makes a difference: loosely speaking, severe testing shifts the evidential weight from the centre of the confidence interval, as often assumed in prevailing practice, to its lower or upper edges.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116951435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given the complex relationships between patients’ demographics, underlying health needs, and outcomes, establishing the causal effects of health policy and delivery interventions on health outcomes is often empirically challenging. The single interrupted time series (SITS) design has become a popular evaluation method in contexts where a randomized controlled trial is not feasible. In this paper, we formalize the structure and assumptions underlying the SITS design and show that it is significantly more vulnerable to confounding than is often acknowledged and, as a result, can produce misleading results. We illustrate this empirically using the Oregon Health Insurance Experiment, showing that an evaluation using a SITS design instead of the randomized controlled trial would have produced large and statistically significant results of the wrong sign. We discuss the pitfalls of the SITS design and suggest circumstances in which it is, and is not, likely to be reliable.
{"title":"Testing the Validity of the Single Interrupted Time Series Design","authors":"Katherine Baicker, Theodore Svoronos","doi":"10.2139/ssrn.3424248","DOIUrl":"https://doi.org/10.2139/ssrn.3424248","url":null,"abstract":"Given the complex relationships between patients’ demographics, underlying health needs, and outcomes, establishing the causal effects of health policy and delivery interventions on health outcomes is often empirically challenging. The single interrupted time series (SITS) design has become a popular evaluation method in contexts where a randomized controlled trial is not feasible. In this paper, we formalize the structure and assumptions underlying the single ITS design and show that it is significantly more vulnerable to confounding than is often acknowledged and, as a result, can produce misleading results. We illustrate this empirically using the Oregon Health Insurance Experiment, showing that an evaluation using a single interrupted time series design instead of the randomized controlled trial would have produced large and statistically significant results of the wrong sign. We discuss the pitfalls of the SITS design, and suggest circumstances in which it is and is not likely to be reliable.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128339702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A large strand of the literature on panel data models has focused on explicitly modelling the cross-section dependence between panel units, and factor augmented approaches have been proposed to deal with this issue. Under a mild restriction on the correlation of the factor loadings, we show that factor augmented panel data models can be encompassed by a standard two-way fixed effects model. This highlights the importance of verifying whether the factor loadings are correlated, which, we argue, is an important hypothesis to test in practice. As our main contribution, we propose a Hausman-type test for the presence of correlated factor loadings in panels with interactive effects. Furthermore, we develop two nonparametric variance estimators that are robust to heteroscedasticity, autocorrelation, and slope heterogeneity. Via Monte Carlo simulations, we demonstrate desirable size and power performance of the proposed test, even in small samples. Finally, we provide extensive empirical evidence in favour of uncorrelated factor loadings in panels with interactive effects.
{"title":"Testing for Correlated Factor Loadings in Cross Sectionally Dependent Panels","authors":"G. Kapetanios, L. Serlenga, Y. Shin","doi":"10.2139/ssrn.3401745","DOIUrl":"https://doi.org/10.2139/ssrn.3401745","url":null,"abstract":"A large strand of the literature on panel data models has focused on explicitly modelling the cross-section dependence between panel units. Factor augmented approaches have been proposed to deal with this issue. Under a mild restriction on the correlation of the factor loadings, we show that factor augmented panel data models can be encompassed by a standard two-way fixed effect model. This highlights the importance of verifying whether the factor loadings are correlated, which, we argue, is an important hypothesis to be tested, in practice. As a main contribution, we propose a Hausman-type test that determines the presence of correlated factor loadings in panels with interactive effects. Furthermore, we develop two nonparametric variance estimators that are robust to the presence of heteroscedasticity, autocorrelation as well as slope heterogeneity. Via Monte Carlo simulations, we demonstrate desirable size and power performance of the proposed test, even in small samples. Finally, we provide extensive empirical evidence in favour of uncorrelated factor loadings in panels with interactive effects.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124308239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce new equivalence tests for approximate independence in two-way contingency tables. The critical values are calculated asymptotically. The finite sample performance of the tests is improved by means of the bootstrap. An estimator of boundary points is developed to make the bootstrap-based tests statistically efficient and computationally feasible. We compare the performance of the proposed tests for different table sizes by simulation and then apply the tests to real data sets.
{"title":"New Equivalence Tests for Approximate Independence in Contingency Tables","authors":"V. Ostrovski","doi":"10.3390/stats2020018","DOIUrl":"https://doi.org/10.3390/stats2020018","url":null,"abstract":"We introduce new equivalence tests for approximate independence in two-way contingency tables. The critical values are calculated asymptotically. The finite sample performance of the tests is improved by means of the bootstrap. An estimator of boundary points is developed to make the bootstrap based tests statistically efficient and computationally feasible. We compare the performance of the proposed tests for different table sizes by simulation. Then we apply the tests to real data sets.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116037899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we consider the estimation of break points in high-dimensional factor models where the unobserved factors are estimated by principal component analysis (PCA). The factor loading matrix is assumed to have a structural break at an unknown time. We establish conditions under which the least squares (LS) estimator is consistent for the break date; our consistency result holds for both large and small breaks. We also derive the LS estimator’s asymptotic distribution. Simulation results confirm that the break date can be accurately estimated by LS even if the breaks are small. In two empirical applications, we implement our method to estimate break points in the U.S. stock market and the U.S. macroeconomy, respectively.
{"title":"Estimation and Inference of Change Points in High Dimensional Factor Models","authors":"Jushan Bai, Xu Han, Yutang Shi","doi":"10.2139/ssrn.2875193","DOIUrl":"https://doi.org/10.2139/ssrn.2875193","url":null,"abstract":"In this paper, we consider the estimation of break points in high-dimensional factor models where the unobserved factors are estimated by principal component analysis (PCA). The factor loading matrix is assumed to have a structural break at an unknown time. We establish the conditions under which the least squares (LS) estimator is consistent for the break date. Our consistency result holds for both large and smaller breaks. We also find the LS estimator’s asymptotic distribution. Simulation results confirm that the break date can be accurately estimated by the LS even if the breaks are small. In two empirical applications, we implement our method to estimate break points in the U.S. stock market and U.S. macroeconomy, respectively.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130235924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is a well-established fact that testing a null hypothesis on the boundary of the parameter space, with an unknown number of nuisance parameters at the boundary, is infeasible in practice in the sense that the limiting distributions of standard test statistics are non-pivotal. In particular, likelihood ratio statistics have limiting distributions which can be characterized in terms of quadratic forms minimized over cones, where the shape of the cones depends on the unknown location of the (possibly multiple) model parameters not restricted by the null hypothesis. We propose to solve this inference problem by a novel bootstrap, which we show to be valid under general conditions, irrespective of the presence of (unknown) nuisance parameters on the boundary. That is, the new bootstrap replicates the unknown limiting distribution of the likelihood ratio statistic under the null hypothesis and is bounded (in probability) under the alternative. The new bootstrap approach, which is very simple to implement, is based on shrinkage of the parameter estimates used to generate the bootstrap sample toward the boundary of the parameter space at an appropriate rate. As an application of our general theory, we treat the problem of inference in finite-order ARCH models with coefficients subject to inequality constraints. Extensive Monte Carlo simulations illustrate that the proposed bootstrap has attractive finite sample properties both under the null and under the alternative hypothesis.
{"title":"Bootstrap Inference on the Boundary of the Parameter Space with Application to Conditional Volatility Models","authors":"Giuseppe Cavaliere, Heino Bohn Nielsen, R. Pedersen, Anders Rahbek","doi":"10.2139/ssrn.3282935","DOIUrl":"https://doi.org/10.2139/ssrn.3282935","url":null,"abstract":"It is a well-established fact that testing a null hypothesis on the boundary of the parameter space, with an unknown number of nuisance parameters at the boundary, is infeasible in practice in the sense that limiting distributions of standard test statistics are non-pivotal. In particular, likelihood ratio statistics have limiting distributions which can be characterized in terms of quadratic forms minimized over cones, where the shape of the cones depends on the unknown location of the (possibly mulitiple) model parameters not restricted by the null hypothesis. We propose to solve this inference problem by a novel bootstrap, which we show to be valid under general conditions, irrespective of the presence of (unknown) nuisance parameters on the boundary. That is, the new bootstrap replicates the unknown limiting distribution of the likelihood ratio statistic under the null hypothesis and is bounded (in probability) under the alternative. The new bootstrap approach, which is very simple to implement, is based on shrinkage of the parameter estimates used to generate the bootstrap sample toward the boundary of the parameter space at an appropriate rate. As an application of our general theory, we treat the problem of inference in ?nite-order ARCH models with coefficients subject to inequality constraints. Extensive Monte Carlo simulations illustrate that the proposed bootstrap has attractive ?nite sample properties both under the null and under the alternative hypothesis.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117135855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We relate models based on costs of switching beliefs (e.g. due to inattention) to hypothesis tests. Specifically, for an inference problem with a penalty for mistakes and for switching the inferred value, a band of inaction is optimal. We show this band is equivalent to a confidence interval, and therefore to a two-sided hypothesis test.
{"title":"Switching Cost Models as Hypothesis Tests","authors":"Samuel N. Cohen, Timo Henckel, G. Menzies, Johannes Muhle‐Karbe, D. J. Zizzo","doi":"10.2139/ssrn.3245004","DOIUrl":"https://doi.org/10.2139/ssrn.3245004","url":null,"abstract":"We relate models based on costs of switching beliefs (e.g. due to inattention) to hypothesis tests. Specifically, for an inference problem with a penalty for mistakes and for switching the inferred value, a band of inaction is optimal. We show this band is equivalent to a confidence interval, and therefore to a two-sided hypothesis test.","PeriodicalId":425229,"journal":{"name":"ERN: Hypothesis Testing (Topic)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132750620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}