Multivariate spatiotemporal models with low rank coefficient matrix
Dan Pu, Kuangnan Fang, Wei Lan, Jihai Yu, Qingzhao Zhang
Pub Date: 2024-11-01 | DOI: 10.1016/j.jeconom.2024.105897
Multivariate spatiotemporal data arise frequently in practical applications, often involving complex dependencies across cross-sectional units, time points, and variables. Few studies in the literature jointly model dependence in all three dimensions. To simultaneously model the cross-sectional, dynamic, and cross-variable dependence, we propose a multivariate reduced-rank spatiotemporal model. By imposing a low-rank assumption on the spatial influence matrix, the proposed model achieves substantial dimension reduction and admits a natural interpretation, especially for financial data. Due to the innate endogeneity, we propose the quasi-maximum likelihood estimator (QMLE) to estimate the unknown parameters. A ridge-type ratio estimator is also developed to determine the rank of the spatial influence matrix. We establish the asymptotic distribution of the QMLE and the rank-selection consistency of the ridge-type ratio estimator. The proposed methodology is further illustrated via extensive simulation studies and two applications, to a stock market dataset and an air pollution dataset.
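The abstract does not spell out the ridge-type ratio criterion, but a minimal sketch of how such a rank selector typically works is below, assuming the generic form r̂ = argmin_k (σ_{k+1} + c)/(σ_k + c) over the singular values σ_k of the estimated spatial influence matrix. The function name, the ridge constant c, and the toy data are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ridge_ratio_rank(S_hat, c=None):
    """Pick the rank at the sharpest drop in the (ridged) singular-value
    ratios of an estimated spatial influence matrix. Generic sketch; the
    paper's exact criterion and ridge sequence may differ."""
    sv = np.linalg.svd(S_hat, compute_uv=False)  # singular values, descending
    if c is None:
        c = 0.01 * sv[0]   # assumed ridge term; theory requires c -> 0 with n
    ratios = (sv[1:] + c) / (sv[:-1] + c)
    return int(np.argmin(ratios)) + 1

# toy check on a noisy rank-2 matrix
rng = np.random.default_rng(0)
A, B = rng.normal(size=(30, 2)), rng.normal(size=(30, 2))
print(ridge_ratio_rank(A @ B.T + 0.05 * rng.normal(size=(30, 30))))  # expect 2
```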
{"title":"Multivariate spatiotemporal models with low rank coefficient matrix","authors":"Dan Pu , Kuangnan Fang , Wei Lan , Jihai Yu , Qingzhao Zhang","doi":"10.1016/j.jeconom.2024.105897","DOIUrl":"10.1016/j.jeconom.2024.105897","url":null,"abstract":"<div><div>Multivariate spatiotemporal data arise frequently in practical applications, often involving complex dependencies across cross-sectional units, time points and multivariate variables. In the literature, few studies jointly model the dependence in three dimensions. To simultaneously model the cross-sectional, dynamic and cross-variable dependence, we propose a multivariate reduced-rank spatiotemporal model. By imposing the low-rank assumption on the spatial influence matrix, the proposed model achieves substantial dimension reduction and has a nice interpretation, especially for financial data. Due to the innate endogeneity, we propose the quasi-maximum likelihood estimator (QMLE) to estimate the unknown parameters. A ridge-type ratio estimator is also developed to determine the rank of the spatial influence matrix. We establish the asymptotic distribution of the QMLE and the rank selection consistency of the ridge-type ratio estimator. The proposed methodology is further illustrated via extensive simulation studies and two applications to a stock market dataset and an air pollution dataset.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"246 1","pages":"Article 105897"},"PeriodicalIF":9.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GLS under monotone heteroskedasticity
Yoichi Arai, Taisuke Otsu, Mengshan Xu
Pub Date: 2024-11-01 | DOI: 10.1016/j.jeconom.2024.105899
Generalized least squares (GLS) is one of the most basic tools in regression analysis. A major issue in implementing GLS is the estimation of the conditional variance function of the error term, which typically requires a restrictive functional-form assumption for parametric estimation or smoothing parameters for nonparametric estimation. In this paper, we propose an alternative approach that estimates the conditional variance function under nonparametric monotonicity constraints by utilizing isotonic regression. Our GLS estimator is shown to be asymptotically equivalent to the infeasible GLS estimator with knowledge of the conditional error variance, and it involves only some tuning to trim boundary observations, not only for point estimation but also for interval estimation and hypothesis testing. Simulation studies and an empirical example illustrate the excellent finite-sample performance of the proposed method.
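A minimal two-step sketch of this idea, assuming a univariate covariate and a simulated design (all numbers illustrative): fit OLS, run an isotonic regression of squared residuals on the covariate to get a monotone variance estimate, then reweight, trimming a few boundary observations as the abstract describes.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 500
x = np.sort(rng.uniform(0, 1, n))
var_true = 0.5 + 2.0 * x**2                       # monotone conditional variance
y = 1.0 + 2.0 * x + np.sqrt(var_true) * rng.normal(size=n)

# Step 1: OLS residuals, then isotonic regression of squared residuals on x
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ beta_ols) ** 2
var_hat = IsotonicRegression(increasing=True).fit_transform(x, e2)

# Step 2: feasible GLS, trimming boundary observations where the isotonic
# fit is unreliable (the only tuning the approach needs)
keep = slice(5, n - 5)
w = np.sqrt(1.0 / var_hat[keep])
beta_gls, *_ = np.linalg.lstsq(X[keep] * w[:, None], y[keep] * w, rcond=None)
print(beta_ols, beta_gls)
```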
{"title":"GLS under monotone heteroskedasticity","authors":"Yoichi Arai , Taisuke Otsu , Mengshan Xu","doi":"10.1016/j.jeconom.2024.105899","DOIUrl":"10.1016/j.jeconom.2024.105899","url":null,"abstract":"<div><div>The generalized least square (GLS) is one of the most basic tools in regression analyses. A major issue in implementing the GLS is estimation of the conditional variance function of the error term, which typically requires a restrictive functional form assumption for parametric estimation or smoothing parameters for nonparametric estimation. In this paper, we propose an alternative approach to estimate the conditional variance function under nonparametric monotonicity constraints by utilizing the isotonic regression method. Our GLS estimator is shown to be asymptotically equivalent to the infeasible GLS estimator with knowledge of the conditional error variance, and involves only some tuning to trim boundary observations, not only for point estimation but also for interval estimation or hypothesis testing. Simulation studies and an empirical example illustrate excellent finite sample performances of the proposed method.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"246 1","pages":"Article 105899"},"PeriodicalIF":9.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inference in predictive quantile regressions
Alex Maynard, Katsumi Shimotsu, Nina Kuriyama
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105875
This paper studies inference in predictive quantile regressions when the predictive regressor has a near-unit root. We derive asymptotic distributions for the quantile regression estimator and its heteroskedasticity and autocorrelation consistent (HAC) t-statistic in terms of functionals of Ornstein–Uhlenbeck processes. We then propose a switching-fully modified (FM) predictive test for quantile predictability. The proposed test employs an FM-style correction with a Bonferroni bound for the local-to-unity parameter when the predictor has a near-unit root. It switches to a standard predictive quantile regression test with a slightly conservative critical value when the largest root of the predictor lies in the stationary range. Simulations indicate that the test has reliable size in small samples and good power. We employ this new methodology to test the ability of three commonly employed, highly persistent and endogenous lagged valuation regressors – the dividend price ratio, earnings price ratio, and book-to-market ratio – to predict the median, shoulders, and tails of the stock return distribution.
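The switching-FM correction itself requires the local-to-unity machinery developed in the paper, but the object being corrected, a predictive quantile regression with a persistent regressor, can be sketched with standard tools. The data-generating process and all numbers below are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
T = 600
# persistent (near-unit-root) predictor, e.g. a log dividend-price ratio
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + rng.normal()
ret = 0.05 * x[:-1] + rng.standard_t(df=5, size=T - 1)  # next-period returns

X = sm.add_constant(x[:-1])
for q in (0.1, 0.5, 0.9):  # tails and median of the return distribution
    fit = QuantReg(ret, X).fit(q=q)
    print(q, fit.params[1], fit.pvalues[1])
```

The naive p-values printed here are exactly what becomes unreliable when the predictor's largest root is near unity; the switching-FM test is designed to repair that.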
{"title":"Inference in predictive quantile regressions","authors":"Alex Maynard , Katsumi Shimotsu , Nina Kuriyama","doi":"10.1016/j.jeconom.2024.105875","DOIUrl":"10.1016/j.jeconom.2024.105875","url":null,"abstract":"<div><div>This paper studies inference in predictive quantile regressions when the predictive regressor has a near-unit root. We derive asymptotic distributions for the quantile regression estimator and its heteroskedasticity and autocorrelation consistent (HAC) <span><math><mi>t</mi></math></span>-statistic in terms of functionals of Ornstein–Uhlenbeck processes. We then propose a switching-fully modified (FM) predictive test for quantile predictability. The proposed test employs an FM style correction with a Bonferroni bound for the local-to-unity parameter when the predictor has a near unit root. It switches to a standard predictive quantile regression test with a slightly conservative critical value when the largest root of the predictor lies in the stationary range. Simulations indicate that the test has a reliable size in small samples and good power. We employ this new methodology to test the ability of three commonly employed, highly persistent and endogenous lagged valuation regressors – the dividend price ratio, earnings price ratio, and book-to-market ratio – to predict the median, shoulders, and tails of the stock return distribution.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105875"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142651238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the spectral density of fractional Ornstein–Uhlenbeck processes
Shuping Shi, Jun Yu, Chen Zhang
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105872
This paper introduces a novel and easy-to-implement method for accurately approximating the spectral density of discretely sampled fractional Ornstein–Uhlenbeck (fOU) processes. The method offers a substantial reduction in approximation error, particularly within the rough region of the fractional parameter H ∈ (0, 0.5). This approximate spectral density has the potential to enhance the performance of estimation methods and hypothesis tests that make use of spectral densities. We introduce the approximate Whittle maximum likelihood (AWML) method for discretely sampled fOU processes, utilizing the approximate spectral density, and demonstrate that the AWML estimator exhibits consistency and asymptotic normality when H ∈ (0, 1), akin to the conventional Whittle maximum likelihood method. Through extensive simulation studies, we show that AWML outperforms existing methods in terms of estimation accuracy in finite samples. We then apply the AWML method to the trading volume of 40 financial assets. Our empirical findings reveal that the estimated Hurst parameters for these assets fall within the range of 0.10 to 0.21, indicating rough dynamics.
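A generic Whittle estimation sketch is below. The paper's improved approximation to the discretely sampled fOU spectral density is not reproduced here; the continuous-time fOU density is used as a crude stand-in, and the placeholder white-noise data represent an observed series such as log trading volume. Everything not in the abstract is an assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def fou_spec_cont(lam, H, kappa, sigma2):
    """Continuous-time fOU spectral density (crude stand-in; the paper's
    contribution is a far better approximation under discrete sampling)."""
    return (sigma2 / (2 * np.pi)) * gamma(2 * H + 1) * np.sin(np.pi * H) \
        * np.abs(lam) ** (1 - 2 * H) / (lam ** 2 + kappa ** 2)

def whittle_nll(theta, I, lam):
    H, kappa, sigma2 = theta
    f = fou_spec_cont(lam, H, kappa, sigma2)
    return np.sum(np.log(f) + I / f)   # Whittle objective over frequencies

x = np.random.default_rng(3).normal(size=1024)   # placeholder data
n = len(x)
j = np.arange(1, n // 2)
lam = 2 * np.pi * j / n                          # Fourier frequencies
I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)  # periodogram
res = minimize(whittle_nll, x0=[0.3, 1.0, 1.0], args=(I, lam),
               bounds=[(0.01, 0.99), (1e-3, 10), (1e-3, 10)], method="L-BFGS-B")
print(res.x)   # estimates of (H, kappa, sigma2)
```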
{"title":"On the spectral density of fractional Ornstein–Uhlenbeck processes","authors":"Shuping Shi , Jun Yu , Chen Zhang","doi":"10.1016/j.jeconom.2024.105872","DOIUrl":"10.1016/j.jeconom.2024.105872","url":null,"abstract":"<div><div>This paper introduces a novel and easy-to-implement method for accurately approximating the spectral density of discretely sampled fractional Ornstein–Uhlenbeck (fOU) processes. The method offers a substantial reduction in approximation error, particularly within the rough region of the fractional parameter <span><math><mrow><mi>H</mi><mo>∈</mo><mrow><mo>(</mo><mn>0</mn><mo>,</mo><mn>0</mn><mo>.</mo><mn>5</mn><mo>)</mo></mrow></mrow></math></span>. This approximate spectral density has the potential to enhance the performance of estimation methods and hypothesis testing that make use of spectral densities. We introduce the approximate Whittle maximum likelihood (AWML) method for discretely sampled fOU processes, utilizing the approximate spectral density, and demonstrate that the AWML estimator exhibits properties of consistency and asymptotic normality when <span><math><mrow><mi>H</mi><mo>∈</mo><mrow><mo>(</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>)</mo></mrow></mrow></math></span>, akin to the conventional Whittle maximum likelihood method. Through extensive simulation studies, we show that AWML outperforms existing methods in terms of estimation accuracy in finite samples. We then apply the AWML method to the trading volume of 40 financial assets. Our empirical findings reveal that the estimated Hurst parameters for these assets fall within the range of 0.10 to 0.21, indicating a rough dynamic.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105872"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142528271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why are replication rates so low?
Patrick Vu
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105868
Many explanations have been offered for why replication rates are low in the social sciences, including selective publication, p-hacking, and treatment effect heterogeneity. This article emphasizes that issues with the most commonly used approach for setting sample sizes in replication studies may also play an important role. Theoretically, I show in a simple model of the publication process that we should expect the replication rate to fall below its nominal target, even when original studies are unbiased. The main mechanism is that the most commonly used approach for setting the replication sample size does not properly account for the fact that original effect sizes are estimated. Specifically, it sets the replication sample size to achieve a nominal power target under the assumption that estimated effect sizes correspond to fixed true effects. However, since there are non-linearities in the replication power function linking original effect sizes to power, ignoring the fact that effect sizes are estimated leads to systematically lower replication rates than intended. Empirically, I find that a parsimonious model accounting only for these issues can fully explain observed replication rates in experimental economics and social science, and two-thirds of the replication gap in psychology. I conclude with practical recommendations for replicators.
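The mechanism is concrete enough to reproduce numerically. In the sketch below (all numbers assumed), each replication's standard error is set so that power at the original point estimate is 90%; averaging the true power over the sampling distribution of the published original estimates, the non-linearity of the power function (and its truncation at one) pushes the realized replication rate below the nominal target. The significance filter mimics the publication process in the abstract's model.

```python
import numpy as np
from scipy.stats import norm

z = norm.ppf(0.975)            # two-sided 5% test
theta, se0 = 0.5, 0.2          # true effect and SE of the original study
rng = np.random.default_rng(4)
theta_hat = rng.normal(theta, se0, size=100_000)
theta_hat = theta_hat[theta_hat / se0 > z]   # published = significant originals

# replication SE chosen so that power AT THE POINT ESTIMATE equals 90%
se_rep = theta_hat / (z + norm.ppf(0.9))
# actual power of each replication under the TRUE effect theta
power_true = 1 - norm.cdf(z - theta / se_rep) + norm.cdf(-z - theta / se_rep)
print(power_true.mean())       # systematically below the nominal 0.90
```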
{"title":"Why are replication rates so low?","authors":"Patrick Vu","doi":"10.1016/j.jeconom.2024.105868","DOIUrl":"10.1016/j.jeconom.2024.105868","url":null,"abstract":"<div><div>Many explanations have been offered for why replication rates are low in the social sciences, including selective publication, <span><math><mi>p</mi></math></span>-hacking, and treatment effect heterogeneity. This article emphasizes that issues with the most commonly used approach for setting sample sizes in replication studies may also play an important role. Theoretically, I show in a simple model of the publication process that we should expect the replication rate to fall below its nominal target, even when original studies are unbiased. The main mechanism is that the most commonly used approach for setting the replication sample size does not properly account for the fact that original effect sizes are estimated. Specifically, it sets the replication sample size to achieve a nominal power target under the assumption that estimated effect sizes correspond to fixed true effects. However, since there are non-linearities in the replication power function linking original effect sizes to power, ignoring the fact that effect sizes are estimated leads to systematically lower replication rates than intended. Empirically, I find that a parsimonious model accounting only for these issues can fully explain observed replication rates in experimental economics and social science, and two-thirds of the replication gap in psychology. I conclude with practical recommendations for replicators.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105868"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142528270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polar amplification in a moist energy balance model: A structural econometric approach to estimation and testing
William A. Brock, J. Isaac Miller
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105885
Poleward transport of atmospheric moisture and heat plays a major role in the magnification of warming in poleward latitudes per degree of global warming, a phenomenon known as polar amplification (PA). We derive a time series econometric framework, a system of equations with error-correction mechanisms restricted across equations, together with an identification strategy, to estimate and recover the parameters of a moist energy balance model (MEBM) similar to those in the recent climate science literature. This framework enables the climate econometrician to estimate and forecast temperature rise in latitude belts as cumulative emissions continue to grow, as well as to account for the effects of increases in atmospheric moisture implied by the Clausius–Clapeyron equation, a driver of spatial non-uniformity in climate change. Non-uniformity is important for two reasons: climate change has unequal economic consequences that need to be better understood, and amplification of temperatures in polar latitudes may trigger irreversible climate tipping points, which are disproportionately located in those regions.
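The abstract does not state the system, but a stylized two-belt version conveys the structure; the notation, signs, and restriction below are assumptions for illustration only, not the paper's specification.

```latex
% Stylized two-belt error-correction system (assumed notation). T_{1t}, T_{2t}:
% belt temperatures; F_t: forcing from cumulative emissions; the transport term
% delta (T_{2,t-1} - T_{1,t-1}) is restricted across both equations.
\begin{aligned}
\Delta T_{1t} &= \alpha_1\bigl(T_{1,t-1}-\beta_1 F_{t-1}\bigr)
               + \delta\,(T_{2,t-1}-T_{1,t-1}) + \varepsilon_{1t},\\
\Delta T_{2t} &= \alpha_2\bigl(T_{2,t-1}-\beta_2 F_{t-1}\bigr)
               - \delta\,(T_{2,t-1}-T_{1,t-1}) + \varepsilon_{2t}.
\end{aligned}
```

In a system of this shape, polar amplification appears as a larger long-run warming response per unit forcing in the poleward belt, while the opposite signs on the shared transport term reflect that moisture and heat leaving one belt arrive in the other.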
{"title":"Polar amplification in a moist energy balance model: A structural econometric approach to estimation and testing","authors":"William A. Brock , J. Isaac Miller","doi":"10.1016/j.jeconom.2024.105885","DOIUrl":"10.1016/j.jeconom.2024.105885","url":null,"abstract":"<div><div>Poleward transport of atmospheric moisture and heat play major roles in the magnification of warming in poleward latitudes per degree of global warming, a phenomenon known as polar amplification (PA). We derive a time series econometric framework using a system of equations that have error-correction mechanisms restricted across equations to estimate and an identification strategy to recover the parameters of a moist energy balance model (MEBM) similar to those in the recent climate science literature. This framework enables the climate econometrician to estimate and forecast temperature rise in latitude belts as cumulative emissions continue to grow as well as account for effects of increases in atmospheric moisture suggested by the Clausius–Clapeyron equation, a driver of spatial non-uniformity in climate change. Non-uniformity is important for two reasons: climate change has unequal economic consequences that need to be better understood and amplification of temperatures in polar latitudes may trigger irreversible climate tipping points, which are disproportionately located in those regions.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105885"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142651239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inference in cluster randomized trials with matched pairs
Yuehao Bai, Jizhou Liu, Azeem M. Shaikh, Max Tabord-Meehan
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105873
This paper studies inference in cluster randomized trials where treatment status is determined according to a “matched pairs” design. Here, by a cluster randomized experiment, we mean one in which treatment is assigned at the level of the cluster; by a “matched pairs” design, we mean that a sample of clusters is paired according to baseline, cluster-level covariates and, within each pair, one cluster is selected at random for treatment. We study the large-sample behavior of a weighted difference-in-means estimator and derive two distinct sets of results depending on whether the matching procedure does or does not match on cluster size. We then propose a single variance estimator which is consistent in either regime. Combining these results establishes the asymptotic exactness of tests based on these estimators. Next, we consider the properties of two common testing procedures based on t-tests constructed from linear regressions, and argue that both are generally conservative in our framework. We additionally study the behavior of a randomization test which permutes the treatment status for clusters within pairs, and establish its finite-sample and asymptotic validity for testing specific null hypotheses. Finally, we propose a covariate-adjusted estimator which adjusts for additional baseline covariates not used for treatment assignment, and establish conditions under which such an estimator leads to strict improvements in precision. A simulation study confirms the practical relevance of our theoretical results.
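A minimal simulation sketch of the design, a size-weighted difference-in-means, and the within-pair randomization test is below. The particular weighting and all numbers are illustrative assumptions rather than the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(5)

# simulated matched-pairs cluster experiment (all numbers illustrative)
J = 100                                    # number of matched pairs of clusters
sizes = rng.integers(20, 80, size=(J, 2))  # cluster sizes within each pair
mu = rng.normal(size=J)                    # pair-level baseline captured by matching
tau = 0.3                                  # true treatment effect
ybar1 = mu + tau + rng.normal(0, 0.3, J)   # treated-cluster outcome means
ybar0 = mu + rng.normal(0, 0.3, J)         # control-cluster outcome means
w = sizes.sum(axis=1)                      # one natural size-based weighting

def estimate(d):
    return np.sum(w * d) / np.sum(w)

theta = estimate(ybar1 - ybar0)

# randomization test: re-draw which cluster in each pair counts as treated
signs = rng.integers(0, 2, size=(5000, J)) * 2 - 1
null_dist = np.array([estimate(s * (ybar1 - ybar0)) for s in signs])
pval = np.mean(np.abs(null_dist) >= abs(theta))
print(theta, pval)
```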
{"title":"Inference in cluster randomized trials with matched pairs","authors":"Yuehao Bai , Jizhou Liu , Azeem M. Shaikh , Max Tabord-Meehan","doi":"10.1016/j.jeconom.2024.105873","DOIUrl":"10.1016/j.jeconom.2024.105873","url":null,"abstract":"<div><div>This paper studies inference in cluster randomized trials where treatment status is determined according to a “matched pairs” design. Here, by a cluster randomized experiment, we mean one in which treatment is assigned at the level of the cluster; by a “matched pairs” design, we mean that a sample of clusters is paired according to baseline, cluster-level covariates and, within each pair, one cluster is selected at random for treatment. We study the large-sample behavior of a weighted difference-in-means estimator and derive two distinct sets of results depending on if the matching procedure does or does not match on cluster size. We then propose a single variance estimator which is consistent in either regime. Combining these results establishes the asymptotic exactness of tests based on these estimators. Next, we consider the properties of two common testing procedures based on <span><math><mi>t</mi></math></span>-tests constructed from linear regressions, and argue that both are generally conservative in our framework. We additionally study the behavior of a randomization test which permutes the treatment status for clusters within pairs, and establish its finite-sample and asymptotic validity for testing specific null hypotheses. Finally, we propose a covariate-adjusted estimator which adjusts for additional baseline covariates not used for treatment assignment, and establish conditions under which such an estimator leads to strict improvements in precision. A simulation study confirms the practical relevance of our theoretical results.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105873"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142528269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing for strong exogeneity in Proxy-VARs
Martin Bruns, Sascha A. Keweloh
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105876
Proxy variables have gained widespread prominence as indispensable tools for identifying structural VAR models. Analogous to instrumental variables, proxies need to be exogenous, i.e. uncorrelated with all non-target shocks. Assessing the exogeneity of proxies has traditionally relied on economic arguments rather than statistical tests. We argue that the economic rationale underlying the construction of commonly used proxy variables aligns with a stronger form of exogeneity. Specifically, proxies are typically constructed as variables not containing any information on the expected value of non-target shocks. We show conditions under which this enhanced concept of proxy exogeneity is testable without additional identifying assumptions.
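A stylized version of the testable implication: mean independence restricts every function of the proxy, not just its level, to be uncorrelated with the non-target shocks. The sketch below (placeholder shock series, polynomial terms, a plain F-test) illustrates the idea only; it is not the paper's test statistic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 400
z = rng.normal(size=T)        # the proxy
eps_nt = rng.normal(size=T)   # placeholder for estimated non-target shocks

# Mean independence E[eps_nt | z] = 0 implies z, z^2, z^3, ... are all
# uncorrelated with eps_nt, not just z itself; test the slopes jointly.
X = sm.add_constant(np.column_stack([z, z**2, z**3]))
fit = sm.OLS(eps_nt, X).fit()
print(fit.f_pvalue)   # a small p-value would flag a failure of strong exogeneity
```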
{"title":"Testing for strong exogeneity in Proxy-VARs","authors":"Martin Bruns , Sascha A. Keweloh","doi":"10.1016/j.jeconom.2024.105876","DOIUrl":"10.1016/j.jeconom.2024.105876","url":null,"abstract":"<div><div>Proxy variables have gained widespread prominence as indispensable tools for identifying structural VAR models. Analogous to instrumental variables, proxies need to be exogenous, i.e. uncorrelated with all non-target shocks. Assessing the exogeneity of proxies has traditionally relied on economic arguments rather than statistical tests. We argue that the economic rationale underlying the construction of commonly used proxy variables aligns with a stronger form of exogeneity. Specifically, proxies are typically constructed as variables not containing any information on the expected value of non-target shocks. We show conditions under which this enhanced concept of proxy exogeneity is testable without additional identifying assumptions.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105876"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142651240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Varying-coefficient spatial dynamic panel data models with fixed effects: Theory and application
Han Hong, Gaosheng Ju, Qi Li, Karen X. Yan
Pub Date: 2024-10-01 | DOI: 10.1016/j.jeconom.2024.105883
This paper considers a varying-coefficient spatial dynamic panel data model with fixed effects. We show that a two-point approximation method poses a potential weak identification problem. We propose a robust modified estimator to address this issue. Our two-step estimation procedure incorporates both linear and quadratic moment conditions. We also extend our analysis to a partially linear varying-coefficient model and develop a consistent test for this specification. We establish the asymptotic properties of the proposed estimators. Simulations indicate that our estimators and the test statistic perform well in finite samples. We apply the partially linear varying-coefficient model to study how the sales of liquor producers respond to those of neighboring competitors in China. We find spatial dependence among liquor producers and show that the spatial effects vary with competition intensity.
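The abstract leaves the model implicit; a generic varying-coefficient spatial dynamic panel form, with assumed notation, is sketched below for orientation.

```latex
% Generic varying-coefficient SDPD form (assumed notation): w_{ij} are
% spatial weights, z_{it} an index variable such as competition intensity,
% mu_i a fixed effect; every coefficient is a smooth function of z.
y_{it} = \lambda(z_{it}) \sum_{j \ne i} w_{ij}\, y_{jt}
       + \gamma(z_{it})\, y_{i,t-1}
       + x_{it}^{\top}\beta(z_{it}) + \mu_i + \varepsilon_{it}
```

In the liquor application, spatial effects that vary with competition intensity correspond to the spatial coefficient λ(·) being a non-constant function of z.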
{"title":"Varying-coefficient spatial dynamic panel data models with fixed effects: Theory and application","authors":"Han Hong , Gaosheng Ju , Qi Li , Karen X. Yan","doi":"10.1016/j.jeconom.2024.105883","DOIUrl":"10.1016/j.jeconom.2024.105883","url":null,"abstract":"<div><div>This paper considers a varying-coefficient spatial dynamic panel data model with fixed effects. We show that a two-point approximation method poses a potential weak identification problem. We propose a robust modified estimator to address this issue. Our two-step estimation procedure incorporates both linear and quadratic moment conditions. We also extend our analysis to a partially linear varying-coefficient model and develop a consistent test for this specification. We establish the asymptotic properties of the proposed estimators. Simulations indicate that our estimators and the test statistic perform well in finite samples. We apply the partially linear varying-coefficient model to study how the sales of liquor producers respond to those of neighboring competitors in China. We find spatial dependence among liquor producers and show that the spatial effects vary with competition intensity.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"245 1","pages":"Article 105883"},"PeriodicalIF":9.9,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tuning-parameter-free propensity score matching approach for causal inference under shape restriction
Yukun Liu, Jing Qin
Pub Date: 2024-08-01 | DOI: 10.1016/j.jeconom.2024.105829
Propensity score matching (PSM) is a pseudo-experimental method that uses statistical techniques to construct an artificial control group by matching each treated unit with one or more untreated units of similar characteristics. To date, the problem of determining the optimal number of matches per unit, which plays an important role in PSM, has not been adequately addressed. We propose a tuning-parameter-free PSM approach to causal inference based on nonparametric maximum-likelihood estimation of the propensity score under the monotonicity constraint. The estimated propensity score is piecewise constant, and therefore automatically groups the data. Hence, our proposal is free of tuning parameters. The proposed causal effect estimator is asymptotically semiparametrically efficient when the covariate is univariate or when the outcome and the propensity score depend on the covariate through the same index X⊤β. We conclude that matching methods based on the propensity score alone cannot, in general, be efficient.
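A minimal sketch of the pipeline the abstract describes, for a univariate covariate: isotonic regression of the treatment indicator on X is the nonparametric MLE of a monotone propensity score, its flat pieces group the data automatically, and within-group outcome contrasts are aggregated. The data-generating process and the clipping constants are assumptions.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.uniform(-2, 2, n)
p = 1 / (1 + np.exp(-x))                      # true monotone propensity score
d = (rng.uniform(size=n) < p).astype(float)
y = 1.0 + 0.5 * d + x + rng.normal(size=n)    # true treatment effect 0.5

# isotonic MLE of the propensity score: piecewise constant, no tuning parameter
ps = IsotonicRegression(increasing=True, y_min=1e-3,
                        y_max=1 - 1e-3).fit_transform(x, d)

# the flat pieces define the automatic matching groups
theta, N = 0.0, 0
for level in np.unique(ps):
    g = ps == level
    if d[g].any() and (1 - d[g]).any():       # need both arms in the group
        theta += g.sum() * (y[g][d[g] == 1].mean() - y[g][d[g] == 0].mean())
        N += g.sum()
print(theta / N)    # roughly 0.5
```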
{"title":"Tuning-parameter-free propensity score matching approach for causal inference under shape restriction","authors":"Yukun Liu , Jing Qin","doi":"10.1016/j.jeconom.2024.105829","DOIUrl":"10.1016/j.jeconom.2024.105829","url":null,"abstract":"<div><p>Propensity score matching (PSM) is a pseudo-experimental method that uses statistical techniques to construct an artificial control group by matching each treated unit with one or more untreated units of similar characteristics. To date, the problem of determining the optimal number of matches per unit, which plays an important role in PSM, has not been adequately addressed. We propose a tuning-parameter-free PSM approach to causal inference based on the nonparametric maximum-likelihood estimation of the propensity score under the monotonicity constraint. The estimated propensity score is piecewise constant, and therefore automatically groups data. Hence, our proposal is free of tuning parameters. The proposed causal effect estimator is asymptotically semiparametric efficient when the covariate is univariate or the outcome and the propensity score depend on the covariate through the same index <span><math><mrow><msup><mrow><mi>X</mi></mrow><mrow><mo>⊤</mo></mrow></msup><mi>β</mi></mrow></math></span>. We conclude that matching methods based on the propensity score alone cannot, in general, be efficient.</p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"244 1","pages":"Article 105829"},"PeriodicalIF":9.9,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141945609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}