Statistical inference for the low dimensional parameters of linear regression models in the presence of high-dimensional data: An orthogonal projection approach
Cheng Hsiao, Qiankun Zhou
Pub Date: 2025-11-01 · DOI: 10.1016/j.jeconom.2024.105851
We consider estimation and statistical inference for the low dimensional parameters of a regression model with covariates whose dimension increases with the sample size. We suggest a computationally simple one-stage orthogonal projection approach to estimate the low dimensional parameters under strict or approximate sparsity conditions. The orthogonal projection approach is simple to implement, and inference for the low dimensional parameters is straightforward to derive whether the high dimensional function is linear or nonlinear. It also avoids the complicated regularization bias issues commonly associated with two-stage estimation methods. Monte Carlo simulations and empirical applications investigate the finite sample performance of the proposed estimator versus the double/debiased estimators of Belloni et al. (2014) and Chernozhukov et al. (2018).
{"title":"Statistical inference for the low dimensional parameters of linear regression models in the presence of high-dimensional data: An orthogonal projection approach","authors":"Cheng Hsiao , Qiankun Zhou","doi":"10.1016/j.jeconom.2024.105851","DOIUrl":"10.1016/j.jeconom.2024.105851","url":null,"abstract":"<div><div>We consider the estimation and statistical inference for low dimensional parameters for a regression model with covariates whose dimension increases with sample size. We suggest a computationally simple one stage orthogonal projection approach to estimate the low dimensional parameters under strict or approximate sparsity conditions. The orthogonal projection approach is simple to implement and the inference for the low dimensional parameters is straightforward to derive whether the high dimensional function is linear or nonlinear. It also avoids the complicated regularization bias issues commonly associated with two stage estimation methods. Monte Carlo simulations and empirical applications are also conducted to investigate the finite sample performance of the proposed estimator vs the double/debiased estimator of Belloni et al. (2014) and Chernozhukov et al. (2018).</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 105851"},"PeriodicalIF":4.0,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145620490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making distributionally robust portfolios feasible in high dimension
Ruike Wu, Yanrong Yang, Han Lin Shang, Huanjun Zhu
Pub Date: 2025-10-21 · DOI: 10.1016/j.jeconom.2025.106118
Robust estimation for modern portfolio selection over a large set of assets has become increasingly important because empirical inference on big data is prone to large deviations. We propose a distributionally robust methodology for high-dimensional mean–variance portfolio problems, aiming to select an optimal conservative portfolio allocation under distributional uncertainty. With the help of a factor structure, we extend the distributionally robust mean–variance problem investigated by Blanchet et al. (2022) to the high-dimensional scenario and transform it into a new penalized risk minimization problem. Furthermore, we propose a data-adaptive method to quantify both the uncertainty size and the lowest acceptable target return. Since the selection of these quantities requires knowledge of certain unknown population parameters, we further develop an estimation procedure and establish its asymptotic consistency. Our Monte Carlo simulation results show that the estimated uncertainty size and target return from the proposed procedure are very close to the corresponding oracle levels, and the newly proposed robust portfolio achieves a high out-of-sample Sharpe ratio. Finally, we conduct empirical studies based on the components of the S&P 500 and Russell 2000 indices to demonstrate the superior return–risk performance of our proposed portfolio selection in comparison with various existing strategies.
{"title":"Making distributionally robust portfolios feasible in high dimension","authors":"Ruike Wu , Yanrong Yang , Han Lin Shang , Huanjun Zhu","doi":"10.1016/j.jeconom.2025.106118","DOIUrl":"10.1016/j.jeconom.2025.106118","url":null,"abstract":"<div><div>Robust estimation for modern portfolio selection on a large set of assets becomes more important due to the large deviation of empirical inference on big data. We propose a distributionally robust methodology for high-dimensional mean–variance portfolio problems, aiming to select an optimal conservative portfolio allocation by considering distributional uncertainty. With the help of factor structure, we extend the distributionally robust mean–variance problem investigated by Blanchet et al. (2022) to the high-dimensional scenario and transform it to a new penalized risk minimization problem. Furthermore, we propose a data-adaptive method to quantify both the uncertainty size and the lowest acceptable target return. Since the selection of these quantities requires knowledge of certain unknown population parameters, we further develop an estimation procedure, and establish its corresponding asymptotic consistency. Our Monte-Carlo simulation results show that the estimated uncertainty size and target return from the proposed procedure are very close to the corresponding oracle level, and the newly proposed robust portfolio achieves high out-of-sample Sharpe ratio. Finally, we conduct empirical studies based on the components of the S&P 500 index and the Russell 2000 index to demonstrate the superior return–risk performance of our proposed portfolio selection, in comparison with various existing strategies.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106118"},"PeriodicalIF":4.0,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145358346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shrinkage methods for treatment choice
Takuya Ishihara, Daisuke Kurisu
Pub Date: 2025-10-16 · DOI: 10.1016/j.jeconom.2025.106117
This study examines the problem of determining whether to treat individuals based on observed covariates. The most common decision rule is the conditional empirical success (CES) rule proposed by Manski (2004), which assigns individuals to the treatments that yield the best experimental outcomes conditional on the observed covariates. By contrast, shrinkage estimators, which shrink unbiased but noisy preliminary estimates toward the average of those estimates, are a common device in statistical estimation problems because it is well known that they may have smaller mean squared errors than unshrunk estimators. Inspired by this idea, we propose a computationally tractable shrinkage rule that selects the shrinkage factor by minimizing an upper bound of the maximum regret. We then compare the maximum regret of the proposed shrinkage rule with those of the CES and pooling rules when the space of conditional average treatment effects (CATEs) is correctly specified or misspecified. Our theoretical results demonstrate that the shrinkage rule performs well in many cases, and these findings are further supported by numerical experiments. Specifically, we show that the maximum regret of the shrinkage rule can be strictly smaller than those of the CES and pooling rules in certain cases when the space of CATEs is correctly specified. In addition, we find that the shrinkage rule is robust against misspecification of the space of CATEs. Finally, we apply our method to experimental data from the National Job Training Partnership Act Study.
{"title":"Shrinkage methods for treatment choice","authors":"Takuya Ishihara , Daisuke Kurisu","doi":"10.1016/j.jeconom.2025.106117","DOIUrl":"10.1016/j.jeconom.2025.106117","url":null,"abstract":"<div><div>This study examines the problem of determining whether to treat individuals based on observed covariates. The most common decision rule is the conditional empirical success (CES) rule proposed by Manski (2004), which assigns individuals to treatments that yield the best experimental outcomes conditional on the observed covariates. Conversely, using shrinkage estimators, which shrink unbiased but noisy preliminary estimates toward the average of these estimates, is a common approach in statistical estimation problems because it is well-known that shrinkage estimators may have smaller mean squared errors than unshrunk estimators. Inspired by this idea, we propose a computationally tractable shrinkage rule that selects the shrinkage factor by minimizing an upper bound of the maximum regret. Then, we compare the maximum regret of the proposed shrinkage rule with those of the CES and pooling rules when the space of conditional average treatment effects (CATEs) is correctly specified or misspecified. Our theoretical results demonstrate that the shrinkage rule performs well in many cases and these findings are further supported by numerical experiments. Specifically, we show that the maximum regret of the shrinkage rule can be strictly smaller than those of the CES and pooling rules in certain cases when the space of CATEs is correctly specified. In addition, we find that the shrinkage rule is robust against misspecification of the space of CATEs. Finally, we apply our method to experimental data from the National Job Training Partnership Act Study.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106117"},"PeriodicalIF":4.0,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GMM estimation with Brownian kernels applied to income inequality measurement
Jin Seo Cho, Peter C.B. Phillips
Pub Date: 2025-10-14 · DOI: 10.1016/j.jeconom.2025.106110
In GMM estimation, it is well known that if the moment dimension grows with the sample size, the asymptotics of GMM differ from the standard finite dimensional case. The present work examines the asymptotic properties of infinite dimensional GMM estimation when the weight matrix is formed by inverting Brownian motion or Brownian bridge covariance kernels. These kernels arise in econometric work such as minimum Cramér–von Mises distance estimation when testing distributional specification. The properties of GMM estimation are studied under different environments where the moment conditions converge to a smooth Gaussian or non-differentiable Gaussian process. Conditions are also developed for testing the validity of the moment conditions by means of a suitably constructed J-statistic. In case these conditions are invalid, we propose another test called the U-test. As an empirical application of these infinite dimensional GMM procedures, the evolution of cohort labor income inequality indices is studied using the Continuous Work History Sample database. The findings show that labor income inequality indices are maximized at early career years, implying that economic policies to reduce income inequality should be more effective when designed for workers at an early stage in their career cycles.
{"title":"GMM estimation with Brownian kernels applied to income inequality measurement","authors":"Jin Seo Cho , Peter C.B. Phillips","doi":"10.1016/j.jeconom.2025.106110","DOIUrl":"10.1016/j.jeconom.2025.106110","url":null,"abstract":"<div><div>In GMM estimation, it is well known that if the moment dimension grows with the sample size, the asymptotics of GMM differ from the standard finite dimensional case. The present work examines the asymptotic properties of infinite dimensional GMM estimation when the weight matrix is formed by inverting Brownian motion or Brownian bridge covariance kernels. These kernels arise in econometric work such as minimum Cramér–von Mises distance estimation when testing distributional specification. The properties of GMM estimation are studied under different environments where the moment conditions converge to a smooth Gaussian or non-differentiable Gaussian process. Conditions are also developed for testing the validity of the moment conditions by means of a suitably constructed <span><math><mi>J</mi></math></span>-statistic. In case these conditions are invalid we propose another test called the <span><math><mi>U</mi></math></span>-test. As an empirical application of these infinite dimensional GMM procedures the evolution of cohort labor income inequality indices is studied using the Continuous Work History Sample database. The findings show that labor income inequality indices are maximized at early career years, implying that economic policies to reduce income inequality should be more effective when designed for workers at an early stage in their career cycles.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106110"},"PeriodicalIF":4.0,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145324449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weighted residual empirical processes, martingale transformations, and model specification tests for regressions with diverging number of parameters
Falong Tan, Xu Guo, Lixing Zhu
Pub Date: 2025-10-13 · DOI: 10.1016/j.jeconom.2025.106113
This paper explores hypothesis testing for the parametric forms of the mean and variance functions in regression models under diverging-dimension settings. To mitigate the curse of dimensionality, we introduce weighted residual empirical process-based tests, both with and without martingale transformations. The asymptotic properties of these tests are derived from the behavior of weighted residual empirical processes and their martingale transformations under the null and alternative hypotheses. The proposed tests without martingale transformations achieve the fastest possible rate of detecting local alternatives, specifically of order n^{-1/2}, which is unaffected by dimensionality. However, these tests are not asymptotically distribution-free. To address this limitation, we propose a smooth residual bootstrap approximation and establish its validity in diverging-dimension settings. In contrast, tests incorporating martingale transformations are asymptotically distribution-free but exhibit an unexpected limitation: they can only detect local alternatives converging to the null at a much slower rate of order n^{-1/4}, which remains independent of dimensionality. This finding reveals a theoretical advantage in the power of tests based on weighted residual empirical processes without martingale transformations over their martingale-transformed counterparts, challenging the conventional wisdom of existing asymptotically distribution-free tests based on martingale transformations. To validate our approach, we conduct simulation studies and apply the proposed tests to a real-world dataset, demonstrating their practical effectiveness.
{"title":"Weighted residual empirical processes, martingale transformations, and model specification tests for regressions with diverging number of parameters","authors":"Falong Tan , Xu Guo , Lixing Zhu","doi":"10.1016/j.jeconom.2025.106113","DOIUrl":"10.1016/j.jeconom.2025.106113","url":null,"abstract":"<div><div>This paper explores hypothesis testing for the parametric forms of the mean and variance functions in regression models under diverging-dimension settings. To mitigate the curse of dimensionality, we introduce weighted residual empirical process-based tests, both with and without martingale transformations. The asymptotic properties of these tests are derived from the behavior of weighted residual empirical processes and their martingale transformations under the null and alternative hypotheses. The proposed tests without martingale transformations achieve the fastest possible rate of detecting local alternatives, specifically of order <span><math><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup></math></span>, which is unaffected by dimensionality. However, these tests are not asymptotically distribution-free. To address this limitation, we propose a smooth residual bootstrap approximation and establish its validity in diverging-dimension settings. In contrast, tests incorporating martingale transformations are asymptotically distribution-free but exhibit an unexpected limitation: they can only detect local alternatives converging to the null at a much slower rate of order <span><math><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mn>4</mn></mrow></msup></math></span>, which remains independent of dimensionality. This finding reveals a theoretical advantage in the power of tests based on weighted residual empirical process without martingale transformations over their martingale-transformed counterparts, challenging the conventional wisdom of existing asymptotically distribution-free tests based on martingale transformations. To validate our approach, we conduct simulation studies and apply the proposed tests to a real-world dataset, demonstrating their practical effectiveness.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106113"},"PeriodicalIF":4.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of spatial autoregressive panel data models with nonparametric endogenous effect
Zixin Yang, Xiaojun Song, Jihai Yu
Pub Date: 2025-10-13 · DOI: 10.1016/j.jeconom.2025.106112
This paper proposes a sieve generalized method of moments (GMM) approach for the estimation of spatial autoregressive panel data models with a nonparametric endogenous effect. The new estimator incorporates both linear moments, based on the orthogonality of the exogenous regressors with the model disturbances, and quadratic moments, based on the properties of the idiosyncratic errors. We establish the consistency and asymptotic normality of the sieve GMM estimator and show that it is more efficient than the sieve instrumental variable estimator due to the additional quadratic moments. We also put forward two new test statistics for testing the linearity of the endogenous effect. Both test statistics are shown to be asymptotically normal under the null and under a sequence of local alternatives after proper standardization. Monte Carlo simulations show that the proposed estimators and tests perform well in finite samples. We also apply our method to estimate the environmental Kuznets curve in China and the knowledge spillover effect among 61 countries.
{"title":"Estimation of spatial autoregressive panel data models with nonparametric endogenous effect","authors":"Zixin Yang , Xiaojun Song , Jihai Yu","doi":"10.1016/j.jeconom.2025.106112","DOIUrl":"10.1016/j.jeconom.2025.106112","url":null,"abstract":"<div><div>This paper proposes a sieve generalized method of moments (GMM) method for the estimation of spatial autoregressive panel data models with nonparametric endogenous effect. The new estimator incorporates both linear moments based on the orthogonality of the exogenous regressors with the model disturbances and quadratic moments based on the properties of idiosyncratic errors. We establish the consistency and asymptotic normality of the sieve GMM estimator and show that it is more efficient than the sieve instrumental variable estimator due to additional quadratic moments. We also put forward two new test statistics for testing the linearity of the endogenous effect. Both test statistics are shown to be asymptotic normal under the null and a sequence of local alternatives after proper standardization. Monte Carlo simulations show that the proposed estimators and tests perform well in finite samples. We also apply our method to estimate the environmental Kuznets curve in China and the knowledge spillover effect among 61 countries.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106112"},"PeriodicalIF":4.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145324448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification- and many moment-robust inference via invariant moment conditions
Tom Boot, Johannes W. Ligtenberg
Pub Date: 2025-10-13 · DOI: 10.1016/j.jeconom.2025.106114
Identification-robust hypothesis tests are commonly based on the continuous updating GMM objective function. When the number of moment conditions grows proportionally with the sample size, the large-dimensional weighting matrix prohibits the use of conventional asymptotic approximations and the behavior of these tests remains unknown. We show that the structure of the weighting matrix opens up an alternative route to asymptotic results when, under the null hypothesis, the distribution of the moment conditions satisfies a symmetry condition known as reflection invariance. We provide several examples in which the invariance follows from standard assumptions. Our results show that existing tests will be asymptotically conservative, and we propose an adjustment to attain nominal size in large samples. We illustrate our findings through simulations for various linear and nonlinear models, and an empirical application on the effect of the concentration of financial activities in banks on systemic risk.
{"title":"Identification- and many moment-robust inference via invariant moment conditions","authors":"Tom Boot , Johannes W. Ligtenberg","doi":"10.1016/j.jeconom.2025.106114","DOIUrl":"10.1016/j.jeconom.2025.106114","url":null,"abstract":"<div><div>Identification-robust hypothesis tests are commonly based on the continuous updating GMM objective function. When the number of moment conditions grows proportionally with the sample size, the large-dimensional weighting matrix prohibits the use of conventional asymptotic approximations and the behavior of these tests remains unknown. We show that the structure of the weighting matrix opens up an alternative route to asymptotic results when, under the null hypothesis, the distribution of the moment conditions satisfies a symmetry condition known as reflection invariance. We provide several examples in which the invariance follows from standard assumptions. Our results show that existing tests will be asymptotically conservative, and we propose an adjustment to attain nominal size in large samples. We illustrate our findings through simulations for various linear and nonlinear models, and an empirical application on the effect of the concentration of financial activities in banks on systemic risk.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106114"},"PeriodicalIF":4.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145324450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk premia from the cross-section of individual assets
Frank Kleibergen, Zhaoguo Zhan
Pub Date: 2025-10-13 · DOI: 10.1016/j.jeconom.2025.106108
We propose the continuous updating estimator (CUE) for estimating ex-post risk premia from large cross-sections of individual asset returns over limited time periods. We analyze its properties while also allowing for an unknown number of unobserved factors. The CUE then provides an estimator of its so-called pseudo-true value, i.e., the risk premia on the observed factors, without assuming that they comprise all priced factors. We develop size-correct procedures for testing hypotheses on the estimand of the CUE, which are more precise than existing ones. The proposed methodology is used to examine risk factors widely analyzed using a small number of portfolios. We find that market, size, and momentum factors carry largely positive risk premia, while many other factors do so to a much lesser extent. Different factors therefore stand out in the cross-section of individual assets.
{"title":"Risk premia from the cross-section of individual assets","authors":"Frank Kleibergen , Zhaoguo Zhan","doi":"10.1016/j.jeconom.2025.106108","DOIUrl":"10.1016/j.jeconom.2025.106108","url":null,"abstract":"<div><div>We propose the continuous updating estimator (CUE) for estimating ex-post risk premia from large cross-sections of individual asset returns over limited time periods. We analyze its properties while also allowing for an unknown number of unobserved factors. The CUE then provides an estimator of its, so-called, pseudo-true value, <span><math><mrow><mi>i</mi><mo>.</mo><mi>e</mi><mo>.</mo><mo>,</mo></mrow></math></span> the risk premia on the observed factors without assuming that they comprise all priced factors. We develop size-correct procedures for testing hypotheses on the estimand of the CUE, which are more precise than existing ones. The proposed methodology is used to examine risk factors widely analyzed using a small number of portfolios. Our findings are that market, size, and momentum factors carry largely positive risk premia, while many other factors much less so. Different factors therefore stand out in the cross-section of individual assets.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106108"},"PeriodicalIF":4.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145324447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weak identification with bounds in a class of minimum distance models
Gregory Fletcher Cox
Pub Date: 2025-10-10 · DOI: 10.1016/j.jeconom.2025.106111
When parameters are weakly identified, bounds on the parameters may provide a valuable source of information. Existing weak identification estimation and inference results are unable to combine weak identification with bounds. Within a class of minimum distance models, this paper proposes identification-robust inference that incorporates information from bounds when parameters are weakly identified. The value of the bounds and the identification-robust inference is demonstrated in a simple latent factor model, a simple GARCH model, and an empirical application: a factor model for parental investments in children.
{"title":"Weak identification with bounds in a class of minimum distance models","authors":"Gregory Fletcher Cox","doi":"10.1016/j.jeconom.2025.106111","DOIUrl":"10.1016/j.jeconom.2025.106111","url":null,"abstract":"<div><div>When parameters are weakly identified, bounds on the parameters may provide a valuable source of information. Existing weak identification estimation and inference results are unable to combine weak identification with bounds. Within a class of minimum distance models, this paper proposes identification-robust inference that incorporates information from bounds when parameters are weakly identified. This paper demonstrates the value of the bounds and identification-robust inference in a simple latent factor model and a simple GARCH model. This paper also demonstrates the identification-robust inference in an empirical application, a factor model for parental investments in children.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106111"},"PeriodicalIF":4.0,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantile graphical models: Prediction and conditional independence with applications to systemic risk
Alexandre Belloni, Mingli Chen, Victor Chernozhukov
Pub Date: 2025-10-09 · DOI: 10.1016/j.jeconom.2025.106100
We propose two types of Quantile Graphical Models: (i) Conditional Independence Quantile Graphical Models (CIQGMs), which characterize conditional independence by evaluating the distributional dependence structure at each quantile index and can therefore be used to validate the graph structure in causal graphical models; and (ii) Prediction Quantile Graphical Models (PQGMs), which characterize statistical dependencies through the graphs of the best linear predictors under asymmetric loss functions. PQGMs make weaker assumptions than CIQGMs, as they allow for misspecification. One advantage of these models is that they apply to large collections of variables driven by non-Gaussian and non-separable shocks. Because QGMs handle large collections of variables and focus on specific parts of the distributions, they can be used to quantify tail interdependence. The resulting tail risk network can be used to measure systemic risk contributions, which helps make inroads in understanding international financial contagion and the dependence structure of returns under downside market movements.
We develop estimation and inference methods focusing on the high-dimensional case, where the number of nodes in the graph is large as compared to the number of observations. For CIQGMs, these results include valid simultaneous choices of penalty functions, uniform rates of convergence, and confidence regions that are simultaneously valid. We also derive analogous results for PQGMs, which include new results for penalized quantile regressions in high-dimensional settings to handle misspecification, many controls, and a continuum of additional conditioning events.
{"title":"Quantile graphical models: Prediction and conditional independence with applications to systemic risk","authors":"Alexandre Belloni , Mingli Chen , Victor Chernozhukov","doi":"10.1016/j.jeconom.2025.106100","DOIUrl":"10.1016/j.jeconom.2025.106100","url":null,"abstract":"<div><div>We propose two types of Quantile Graphical Models: (i) Conditional Independence Quantile Graphical Models (CIQGMs) characterize the conditional independence by evaluating the distributional dependence structure at each quantile index, as such, those can be used for validation of the graph structure in the causal graphical models; (ii) Prediction Quantile Graphical Models (PQGMs) characterize the statistical dependencies through the graphs of the best linear predictors under asymmetric loss functions. PQGMs make weaker assumptions than CIQGMs as they allow for misspecification. One advantage of these models is that we can apply them to large collections of variables driven by non-Gaussian and non-separable shocks. Because of QGMs’ ability to handle large collections of variables and focus on specific parts of the distributions, we could apply them to quantify tail interdependence. The resulting tail risk network can be used for measuring systemic risk contributions that help make inroads in understanding international financial contagion and dependence structures of returns under downside market movements.</div><div>We develop estimation and inference methods focusing on the high-dimensional case, where the number of nodes in the graph is large as compared to the number of observations. For CIQGMs, these results include valid simultaneous choices of penalty functions, uniform rates of convergence, and confidence regions that are simultaneously valid. We also derive analogous results for PQGMs, which include new results for penalized quantile regressions in high-dimensional settings to handle misspecification, many controls, and a continuum of additional conditioning events.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"252 ","pages":"Article 106100"},"PeriodicalIF":4.0,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}