Pub Date: 2025-11-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.jeconom.2025.106111
Gregory Fletcher Cox
When parameters are weakly identified, bounds on the parameters may provide a valuable source of information. Existing weak-identification estimation and inference results are unable to combine weak identification with bounds. Within a class of minimum distance models, this paper proposes identification-robust inference that incorporates information from bounds when parameters are weakly identified. The value of the bounds and of the identification-robust inference is demonstrated in a simple latent factor model and a simple GARCH model. The identification-robust inference is also demonstrated in an empirical application: a factor model for parental investments in children.
"Weak identification with bounds in a class of minimum distance models." Journal of Econometrics, vol. 252, Article 106111.
Pub Date: 2025-11-01 | Epub Date: 2025-10-30 | DOI: 10.1016/j.jeconom.2025.106121
Hongfei Wang , Ping Zhao , Long Feng , Zhaojun Wang
In this article, we address the challenge of identifying well-performing mutual funds among a large pool of candidates, utilizing the linear factor pricing model. Assuming observable factors and a weak correlation structure for the idiosyncratic error, we propose a spatial-sign based multiple testing procedure (SS-BH). When latent factors are present, we first extract them using the elliptical principal component method (He et al., 2022) and then propose a factor-adjusted spatial-sign based multiple testing procedure (FSS-BH). Simulation studies demonstrate that our proposed FSS-BH procedure performs exceptionally well across various applications and exhibits robustness to variations in the covariance structure and the distribution of the error term. Additionally, a real data application further highlights the superiority of the FSS-BH procedure.
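Both SS-BH and FSS-BH finish with the Benjamini–Hochberg step-up rule applied to per-fund p-values to control the false discovery rate. A minimal sketch of that final step (the p-values below are invented, and this omits the spatial-sign test statistics the procedure actually builds):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the (sorted) indices of rejected hypotheses: find the largest
    rank k with p_(k) <= q*k/m, then reject the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank  # keep the largest rank that clears the BH line
    return sorted(order[:k])

# Toy example: 10 "funds"; the two smallest p-values mimic genuine alpha.
pvals = [0.001, 0.009, 0.04, 0.20, 0.35, 0.50, 0.62, 0.75, 0.88, 0.95]
print(benjamini_hochberg(pvals, q=0.05))
```

Note the step-up character: fund 1's p-value (0.009) exceeds q/m = 0.005 but is still rejected because it clears its own rank's threshold 2q/m = 0.01.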
"Robust mutual fund selection with false discovery rate control." Journal of Econometrics, vol. 252, Article 106121.
Pub Date: 2025-11-01 | Epub Date: 2025-10-21 | DOI: 10.1016/j.jeconom.2025.106118
Ruike Wu , Yanrong Yang , Han Lin Shang , Huanjun Zhu
Robust estimation for modern portfolio selection over a large set of assets has become increasingly important, because empirical inference can deviate substantially from its target in big-data settings. We propose a distributionally robust methodology for high-dimensional mean–variance portfolio problems, aiming to select an optimal conservative portfolio allocation by accounting for distributional uncertainty. With the help of a factor structure, we extend the distributionally robust mean–variance problem investigated by Blanchet et al. (2022) to the high-dimensional scenario and transform it into a new penalized risk minimization problem. Furthermore, we propose a data-adaptive method to quantify both the uncertainty size and the lowest acceptable target return. Since the selection of these quantities requires knowledge of certain unknown population parameters, we further develop an estimation procedure and establish its asymptotic consistency. Our Monte Carlo simulation results show that the estimated uncertainty size and target return from the proposed procedure are very close to the corresponding oracle levels, and the newly proposed robust portfolio achieves a high out-of-sample Sharpe ratio. Finally, we conduct empirical studies based on the components of the S&P 500 index and the Russell 2000 index to demonstrate the superior return–risk performance of our proposed portfolio selection in comparison with various existing strategies.
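The equivalence in Blanchet et al. (2022) turns distributional robustness into a norm penalty on the portfolio weights. A two-asset toy version of the resulting penalized minimum-variance problem has a closed form, which shows the key qualitative effect: as the penalty (uncertainty size) grows, the weights shrink toward the equal-weighted portfolio. This is an illustrative derivation under fully-invested weights, not the paper's high-dimensional estimator:

```python
def robust_two_asset_weight(s1, s2, s12, delta):
    """Weight on asset 1 minimizing w'Sw + delta*||w||^2 subject to
    w1 + w2 = 1, for a 2x2 covariance with variances s1, s2 and
    covariance s12. First-order condition gives the closed form below.
    """
    return (s2 - s12 + delta) / (s1 + s2 - 2 * s12 + 2 * delta)

# Unpenalized: classic minimum-variance tilt toward the low-variance asset.
print(robust_two_asset_weight(0.04, 0.09, 0.0, 0.0))
# Heavy penalty: weights pushed toward 1/2 (the conservative allocation).
print(robust_two_asset_weight(0.04, 0.09, 0.0, 10.0))
```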
"Making distributionally robust portfolios feasible in high dimension." Journal of Econometrics, vol. 252, Article 106118.
Pub Date: 2025-11-01 | Epub Date: 2025-10-04 | DOI: 10.1016/j.jeconom.2025.106103
James A. Duffy , Sophocles Mavroeidis , Sam Wycherley
In the literature on nonlinear cointegration, a long-standing open problem relates to how a (nonlinear) vector autoregression, which provides a unified description of the short- and long-run dynamics of a vector of time series, can generate ‘nonlinear cointegration’ in the profound sense of those series sharing common nonlinear stochastic trends. We consider this problem in the setting of the censored and kinked structural VAR (CKSVAR), which provides a flexible yet tractable framework within which to model time series that are subject to threshold-type nonlinearities, such as those arising due to occasionally binding constraints, of which the zero lower bound (ZLB) on short-term nominal interest rates provides a leading example. We provide a complete characterisation of how common linear and nonlinear stochastic trends may be generated in this model, via unit roots and appropriate generalisations of the usual rank conditions, providing the first extension to date of the Granger–Johansen representation theorem to a nonlinearly cointegrated setting, and thereby giving the first successful treatment of the open problem. The limiting common trend processes include regulated, censored and kinked Brownian motions, none of which have previously appeared in the literature on cointegrated VARs. Our results and running examples illustrate that the CKSVAR is capable of supporting a far richer variety of long-run behaviour than is a linear VAR, in ways that may be particularly useful for the identification of structural parameters.
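A censored random walk is the simplest discrete-time analogue of the censored Brownian trends appearing in the paper's representation theory: a latent "shadow" process evolves freely while the observed state is truncated at the bound, much like a policy rate at the ZLB. A univariate simulation sketch (illustrative only — the CKSVAR itself is multivariate and structural):

```python
import random

def censored_random_walk(n, lower=0.0, seed=0):
    """Simulate x_t = x_{t-1} + e_t (a shadow random walk) and the
    observed y_t = max(lower, x_t), mimicking an occasionally binding
    lower bound; the scaled limit of y is a censored Brownian motion."""
    rng = random.Random(seed)
    x, observed = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)       # latent shadow process, unconstrained
        observed.append(max(lower, x))  # observation censored at the bound
    return observed

path = censored_random_walk(500)
print(len(path), min(path))
```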
"Cointegration with occasionally binding constraints." Journal of Econometrics, vol. 252, Article 106103.
This paper is about the nonparametric regression of a choice variable on a nonlinear budget set under utility maximization with general heterogeneity, i.e. in the random utility model (RUM). We show that utility maximization and convex budget sets make this regression three-dimensional, with a more parsimonious specification than previously derived. We show that nonconvexities in the budget set will have little effect on these results in important cases. We characterize all the restrictions of utility maximization on the budget set regression and show how to check these restrictions in applications. We formulate budget set effects that can be identified by this regression and give automatic debiased machine learners of these effects. We consider the use of control functions to allow for endogeneity. Throughout, we take as the main example the effect of taxes on taxable income, including accounting for productivity growth. In an application to Swedish data we find the taxable income elasticity of a change in the slope of each segment to be 0.52, that the regression satisfies the restrictions of utility maximization at the values chosen for over 95% of observations, and that a productivity growth rate we estimate is close to other estimates.
"Nonlinear budget set regressions in random utility models: Theory and application to taxable income." Soren Blomquist, Anil Kumar, Che-Yuan Liang, Whitney K. Newey. Journal of Econometrics, vol. 252, Article 105859. DOI: 10.1016/j.jeconom.2024.105859.
Pub Date: 2025-11-01 | Epub Date: 2025-11-11 | DOI: 10.1016/j.jeconom.2025.106115
Denis Chetverikov , Yukun Liu , Aleh Tsyvinski
In this paper, we introduce the weighted-average quantile regression model. We argue that this model is of interest in many applied settings and develop an estimator for its parameters. We show that our estimator is √T-consistent and asymptotically normal under weak conditions, where T is the sample size. We demonstrate the usefulness of our estimator in two empirical settings. First, we study the factor structures of the expected shortfalls of industry portfolios. Second, we study how inequality and social welfare depend on individual characteristics.
"Weighted-average quantile regression." Journal of Econometrics, vol. 252, Article 106115.
Pub Date: 2025-11-01 | Epub Date: 2024-04-19 | DOI: 10.1016/j.jeconom.2024.105736
Jinyong Hahn , Hyungsik Roger Moon , Ruoyao Shi
We develop a Lagrange Multiplier (LM) test of neglected heterogeneity in dyadic models. The test statistic is derived by modifying the test of Breusch and Pagan (1980). We establish the asymptotic distribution of the test statistic under the null using a novel martingale construction. We also consider the power of the LM test in generic panel models: even though the test is motivated by random effects, we show that it has power to detect fixed effects as well. Finally, we examine how the estimation noise of the maximum likelihood estimator affects the asymptotic distribution of the test under the null, and show that such noise may be ignored in large samples.
"Test of neglected heterogeneity in dyadic models." Journal of Econometrics, vol. 252, Article 105736.
Pub Date: 2025-11-01 | Epub Date: 2025-09-23 | DOI: 10.1016/j.jeconom.2025.106101
Luis A.F. Alvarez , Chang Chiann , Pedro A. Morettin
This paper studies parameter estimation using L-moments, an alternative to traditional moments with attractive statistical properties. The estimation of model parameters by matching sample L-moments is known to outperform maximum likelihood estimation (MLE) in small samples from popular distributions. The choice of the number of L-moments used in estimation remains ad hoc, though: researchers typically set the number of L-moments equal to the number of parameters, which is inefficient in larger samples. In this paper, we show that, by properly choosing the number of L-moments and weighting them accordingly, one is able to construct an estimator that outperforms MLE in finite samples, and yet retains asymptotic efficiency. We do so by introducing a generalised method of L-moments estimator and deriving its properties in an asymptotic framework where the number of L-moments varies with sample size. We then propose methods to automatically select the number of L-moments in a sample. Monte Carlo evidence shows our approach can provide mean-squared-error improvements over MLE in smaller samples, whilst performing as well as MLE in larger samples. We consider extensions of our approach to the estimation of conditional models and a class of semiparametric models. We apply the latter to study expenditure patterns in a ridesharing platform in Brazil.
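The first two sample L-moments have simple unbiased estimators built from the order statistics: the L-location l1 is the mean, and the L-scale l2 is half the Gini mean difference. A sketch of the standard formulas via probability-weighted moments (the matching/weighting machinery of the paper is not shown):

```python
def sample_l_moments(xs):
    """First two sample L-moments via probability-weighted moments:
    b0 = sample mean, b1 = (1/n) * sum_i (i/(n-1)) * x_(i) over 0-based
    ranks i = 1..n-1; then l1 = b0 and l2 = 2*b1 - b0."""
    x = sorted(xs)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * x[i] for i in range(1, n)) / n
    l1 = b0            # L-location: the mean
    l2 = 2 * b1 - b0   # L-scale: half the Gini mean difference
    return l1, l2

print(sample_l_moments([1.0, 2.0, 3.0, 4.0]))
```

For [1, 2, 3, 4] the mean pairwise absolute difference is 10/6, so l2 = 5/6, consistent with the Gini-mean-difference interpretation.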
"Inference on model parameters with many L-moments." Journal of Econometrics, vol. 252, Article 106101.
Pub Date: 2025-11-01 | Epub Date: 2025-10-28 | DOI: 10.1016/j.jeconom.2025.106106
Yanli Lin , Yichun Song
This paper develops a novel, instrument-free semiparametric copula framework for a spatial autoregressive (SAR) model to address endogeneity arising from an endogenous spatial weights matrix, endogenous regressors, or both. Moving beyond conventional Gaussian copulas, we develop a flexible estimator based on the Student's t copula with an unknown degrees-of-freedom (df) parameter, which nests the Gaussian case and allows the data to determine the presence of tail dependence. We propose a sieve maximum likelihood estimator (SMLE) that jointly estimates all structural, copula, and nonparametric marginal parameters, and establish that this joint estimator is consistent, asymptotically normal, and – unlike prevailing multi-stage copula-correction methods – semiparametrically efficient. Monte Carlo simulations highlight the flexibility of our approach, showing that copula misspecification inflates bias and variance, whereas joint estimation improves efficiency. In an empirical application to regional productivity spillovers, we find evidence of tail dependence and demonstrate that our method offers a credible alternative to approaches that rely on hard-to-verify excluded instruments.
"Addressing endogeneity issues in a spatial autoregressive model using copulas." Journal of Econometrics, vol. 252, Article 106106.
Pub Date: 2025-11-01 | Epub Date: 2025-09-26 | DOI: 10.1016/j.jeconom.2025.106105
Clifford Lam , Zetai Cen
We introduce the matrix-valued time-varying Main Effects Factor Model (MEFM), a generalization of the traditional matrix-valued factor model (FM). We give rigorous definitions of MEFM and its identification conditions, and propose estimators for the time-varying grand mean, the row and column main effects, and the row and column factor loading matrices of the common component. Rates of convergence for the different estimators are spelt out, and asymptotic normality is shown. An estimator of the core rank of the common component is also proposed and shown to be consistent. As time series, the row and column main effects {α_t} and {β_t} can be non-stationary without affecting the estimation accuracy of our estimators. The number of factors contributing to the row or column main effects is also consistently estimated by our proposed estimators. We propose a test of whether FM is sufficient against the alternative that MEFM is necessary, and demonstrate its power in various simulation settings. We also demonstrate numerically the accuracy of our estimators in extended simulation experiments. A set of NYC Taxi traffic data is analyzed, and our test suggests that MEFM is indeed necessary for analyzing the data against a traditional FM.
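The main-effects structure can be illustrated with a two-way ANOVA-style decomposition of a single matrix observation X_t into grand mean, row effects, and column effects; MEFM lets these vary over t and models the remaining interaction with a low-rank factor term. A sketch of the deterministic decomposition only (the factor estimation itself is not shown):

```python
def main_effects_decompose(X):
    """Decompose a p x q matrix (one time slice X_t) into
    grand mean + row main effects + column main effects + residual.
    The residual (interaction) is what a factor model would describe.
    Effects are centered, so they sum to zero by construction."""
    p, q = len(X), len(X[0])
    grand = sum(sum(row) for row in X) / (p * q)
    row_eff = [sum(X[i]) / q - grand for i in range(p)]
    col_eff = [sum(X[i][j] for i in range(p)) / p - grand for j in range(q)]
    resid = [[X[i][j] - grand - row_eff[i] - col_eff[j] for j in range(q)]
             for i in range(p)]
    return grand, row_eff, col_eff, resid

# A purely additive matrix has zero residual: all variation is main effects.
g, r, c, e = main_effects_decompose([[1.0, 2.0], [3.0, 4.0]])
print(g, r, c)
```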
"Matrix-valued factor model with time-varying main effects." Journal of Econometrics, vol. 252, Article 106105.