Deep learning based residuals in non-linear factor models: Precision matrix estimation of returns with low signal-to-noise ratio
Journal of Econometrics, Volume 251, Article 106083
Pub Date: 2025-08-19 | DOI: 10.1016/j.jeconom.2025.106083
Mehmet Caner, Maurizio Daniele
This paper introduces a consistent estimator, and its rate of convergence, for the precision matrix of asset returns in large portfolios using a non-linear factor model within the deep learning framework. Our estimator remains valid even in the low signal-to-noise ratio environments typical of financial markets and is compatible with the weak factor framework. Our theoretical analysis establishes uniform bounds on the expected estimation risk based on deep neural networks for an expanding number of assets. Additionally, we provide a new consistent, data-dependent estimator of the error covariance in deep neural networks. Our models demonstrate superior accuracy in extensive simulations and in an empirical application.
{"title":"Deep learning based residuals in non-linear factor models: Precision matrix estimation of returns with low signal-to-noise ratio","authors":"Mehmet Caner , Maurizio Daniele","doi":"10.1016/j.jeconom.2025.106083","DOIUrl":"10.1016/j.jeconom.2025.106083","url":null,"abstract":"<div><div>This paper introduces a consistent estimator and rate of convergence for the precision matrix of asset returns in large portfolios using a non-linear factor model within the deep learning framework. Our estimator remains valid even in low signal-to-noise ratio environments typical for financial markets and is compatible with the weak factor framework. Our theoretical analysis establishes uniform bounds on expected estimation risk based on deep neural networks for an expanding number of assets. Additionally, we provide a new consistent data-dependent estimator of error covariance in deep neural networks. Our models demonstrate superior accuracy in extensive simulations and the empirical application.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106083"},"PeriodicalIF":4.0,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144864697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An order-invariant score-driven dynamic factor model
Journal of Econometrics, Volume 251, Article 106073
Pub Date: 2025-08-14 | DOI: 10.1016/j.jeconom.2025.106073
Mariia Artemova
This paper introduces a novel score-driven dynamic factor model designed for filtering cross-sectional co-movements in panels of time series. The model is formulated using an elliptical distribution for the noise terms, which allows the update of the time-varying parameter to be potentially nonlinear and robust to outliers. We derive stochastic properties of time series generated by the model, such as stationarity and ergodicity, and establish the invertibility of the filter. We prove that identification of the factors and loadings is achieved by imposing an orthogonality constraint on the loadings that is invariant to the order of the series in the panel. Given the nonlinearity of the constraint, we propose maximum likelihood estimation on Stiefel manifolds. This approach ensures that the identification constraint is satisfied numerically, enabling joint estimation of the static and time-varying parameters. Furthermore, the asymptotic properties of the constrained estimator are derived. In a series of Monte Carlo experiments, we find evidence of good finite-sample properties of the estimator and of the resulting score filter for the time-varying parameters. We demonstrate the empirical usefulness of our factor model by constructing indices of economic activity from a set of macroeconomic and financial variables over the period 1981–2022. The empirical application highlights the importance of robustness, particularly in the presence of V-shaped recessions such as the COVID-19 recession.
{"title":"An order-invariant score-driven dynamic factor model","authors":"Mariia Artemova","doi":"10.1016/j.jeconom.2025.106073","DOIUrl":"10.1016/j.jeconom.2025.106073","url":null,"abstract":"<div><div>This paper introduces a novel score-driven dynamic factor model designed for filtering cross-sectional co-movements in panels of time series. The model is formulated using elliptical distribution for noise terms, allowing the update of the time-varying parameter to be potentially nonlinear and robust to outliers. We derive stochastic properties of time series generated by the model, such as stationarity and ergodicity, and establish the invertibility of the filter. We prove that the identification of the factors and loadings is achieved by incorporating an orthogonality constraint on the loadings, which is invariant to the order of the series in the panel. Given the nonlinearity of the constraint, we propose exploiting a maximum likelihood estimation on Stiefel manifolds. This approach ensures that the identification constraint is satisfied numerically, enabling joint estimation of the static and time-varying parameters. Furthermore, the asymptotic properties of the constrained estimator are derived. In a series of Monte Carlo experiments, we find evidence of appropriate finite sample properties of the estimator and resulting score filter for the time-varying parameters. We demonstrate the empirical usefulness of our factor model in constructing indices of economic activity from a set of macroeconomic and financial variables during the period 1981–2022. An empirical application highlights the importance of robustness, particularly in the presence of V-shaped recessions, such as the COVID-19 recession.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106073"},"PeriodicalIF":4.0,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bregman model averaging for forecast combination
Journal of Econometrics, Volume 251, Article 106076
Pub Date: 2025-08-13 | DOI: 10.1016/j.jeconom.2025.106076
Yi-Ting Chen, Chu-An Liu, Jiun-Hua Su
We propose a unified model averaging (MA) approach for a broad class of forecasting targets. This approach is established by minimizing an asymptotic risk based on the expected Bregman divergence of a combined forecast, relative to the optimal forecast of the forecasting target, under local(-to-zero) asymptotics. It can be flexibly applied to develop effective MA methods across various forecasting contexts, including but not limited to univariate and multivariate mean forecasting, volatility forecasting, probabilistic forecasting, and density forecasting. As illustrative examples, we present a series of simulation experiments and empirical cases that demonstrate strong numerical performance of our approach in forecasting.
{"title":"Bregman model averaging for forecast combination","authors":"Yi-Ting Chen , Chu-An Liu , Jiun-Hua Su","doi":"10.1016/j.jeconom.2025.106076","DOIUrl":"10.1016/j.jeconom.2025.106076","url":null,"abstract":"<div><div>We propose a unified model averaging (MA) approach for a broad class of forecasting targets. This approach is established by minimizing an asymptotic risk based on the expected Bregman divergence of a combined forecast, relative to the optimal forecast of the forecasting target, under local(-to-zero) asymptotics. It can be flexibly applied to develop effective MA methods across various forecasting contexts, including but not limited to univariate and multivariate mean forecasting, volatility forecasting, probabilistic forecasting, and density forecasting. As illustrative examples, we present a series of simulation experiments and empirical cases that demonstrate strong numerical performance of our approach in forecasting.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106076"},"PeriodicalIF":4.0,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144827730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taking advantage of biased proxies for forecast evaluation
Journal of Econometrics, Volume 251, Article 106068
Pub Date: 2025-08-04 | DOI: 10.1016/j.jeconom.2025.106068
Giuseppe Buccheri, Roberto Renò, Giorgio Vocalelli
This paper rehabilitates biased proxies for the assessment of the predictive accuracy of competing forecasts. By relaxing the ubiquitous assumption of proxy unbiasedness adopted in the theoretical and empirical literature, we show how to optimally combine (possibly) biased proxies to maximize the probability of inferring the ranking that would be obtained using the true latent variable, a property that we dub proxy reliability. Our procedure still preserves the robustness of the loss function, in the sense of Patton (2011b), and allows testing for equal predictive accuracy, as in Diebold and Mariano (1995). We demonstrate the usefulness of the method with compelling empirical applications on GDP growth, financial market volatility forecasting, and sea surface temperature of the Niño 3.4 region.
{"title":"Taking advantage of biased proxies for forecast evaluation","authors":"Giuseppe Buccheri , Roberto Renò , Giorgio Vocalelli","doi":"10.1016/j.jeconom.2025.106068","DOIUrl":"10.1016/j.jeconom.2025.106068","url":null,"abstract":"<div><div>This paper rehabilitates biased proxies for the assessment of the predictive accuracy of competing forecasts. By relaxing the ubiquitous assumption of proxy unbiasedness adopted in the theoretical and empirical literature, we show how to optimally combine (possibly) biased proxies to maximize the probability of inferring the ranking that would be obtained using the true latent variable, a property that we dub proxy reliability. Our procedure still preserves the robustness of the loss function, in the sense of Patton (2011b), and allows testing for equal predictive accuracy, as in Diebold and Mariano (1995). We demonstrate the usefulness of the method with compelling empirical applications on GDP growth, financial market volatility forecasting, and sea surface temperature of the Niño 3.4 region.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106068"},"PeriodicalIF":4.0,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144766564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High dimensional binary choice model with unknown heteroskedasticity or instrumental variables
Journal of Econometrics, Volume 251, Article 106069
Pub Date: 2025-08-01 | DOI: 10.1016/j.jeconom.2025.106069
Fu Ouyang, Thomas T. Yang
This paper proposes a new method for estimating high-dimensional binary choice models. We consider a semiparametric model that places no distributional assumptions on the error term, allows for heteroskedastic errors, and permits endogenous regressors. Our approaches extend the special regressor estimator originally proposed by Lewbel (2000). This estimator becomes impractical in high-dimensional settings due to the curse of dimensionality associated with high-dimensional conditional density estimation. To overcome this challenge, we introduce an innovative data-driven dimension reduction method for nonparametric kernel estimators, which constitutes the main contribution of this work. The method combines distance covariance-based screening with cross-validation (CV) procedures, making special regressor estimation feasible in high dimensions. Using this new feasible conditional density estimator, we address variable and moment (instrumental variable) selection problems for these models. We apply penalized least squares (LS) and generalized method of moments (GMM) estimators with an L1 penalty. A comprehensive analysis of the oracle and asymptotic properties of these estimators is provided. Finally, through Monte Carlo simulations and an empirical study on the migration intentions of rural Chinese residents, we demonstrate the effectiveness of our proposed methods in finite sample settings.
{"title":"High dimensional binary choice model with unknown heteroskedasticity or instrumental variables","authors":"Fu Ouyang, Thomas T. Yang","doi":"10.1016/j.jeconom.2025.106069","DOIUrl":"10.1016/j.jeconom.2025.106069","url":null,"abstract":"<div><div>This paper proposes a new method for estimating high-dimensional binary choice models. We consider a semiparametric model that places no distributional assumptions on the error term, allows for heteroskedastic errors, and permits endogenous regressors. Our approaches extend the special regressor estimator originally proposed by Lewbel (2000). This estimator becomes impractical in high-dimensional settings due to the curse of dimensionality associated with high-dimensional conditional density estimation. To overcome this challenge, we introduce an innovative data-driven dimension reduction method for nonparametric kernel estimators, which constitutes the main contribution of this work. The method combines distance covariance-based screening with cross-validation (CV) procedures, making special regressor estimation feasible in high dimensions. Using this new feasible conditional density estimator, we address variable and moment (instrumental variable) selection problems for these models. We apply penalized least squares (LS) and generalized method of moments (GMM) estimators with an <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> penalty. A comprehensive analysis of the oracle and asymptotic properties of these estimators is provided. Finally, through Monte Carlo simulations and an empirical study on the migration intentions of rural Chinese residents, we demonstrate the effectiveness of our proposed methods in finite sample settings.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106069"},"PeriodicalIF":4.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144749614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Conformal Inference for jump diffusion processes
Journal of Econometrics, Volume 251, Article 106061
Pub Date: 2025-07-31 | DOI: 10.1016/j.jeconom.2025.106061
Hyeong Jin Hyun, Xiao Wang
Bayesian inference for jump diffusion processes (JDPs) remains challenging due to intractable transition densities and the latent nature of jump times and intensities. This paper introduces Neural Conformal Inference for JDPs (NCoin-JDP), a novel likelihood-free approach that leverages the power of deep neural networks (DNNs). NCoin-JDP bypasses the limitations of traditional methods by establishing a direct mapping between observed data and model parameters using a DNN. This approach eliminates the discretization errors inherent in likelihood-based methods, leading to more accurate inference. Despite the black-box nature of DNNs, we establish asymptotic theory to quantify the approximation error of our algorithm. Additionally, we calibrate the uncertainty of our estimates using conformal prediction, providing theoretical guarantees of equivalence with the Bayesian posterior. NCoin-JDP demonstrates competitive performance compared to state-of-the-art methods. We showcase its effectiveness through numerical simulations and apply it to real-world data (S&P 500 and NASDAQ, 1993–2024) to investigate the impact of COVID-19 on the US economy. All numerical studies are reproducible at https://github.com/anonymous1116/NCoin-JDP.
{"title":"Neural Conformal Inference for jump diffusion processes","authors":"Hyeong Jin Hyun, Xiao Wang","doi":"10.1016/j.jeconom.2025.106061","DOIUrl":"10.1016/j.jeconom.2025.106061","url":null,"abstract":"<div><div>Bayesian inference for jump diffusion processes (JDPs) remains challenging due to intractable transition densities and the latency of jump times and intensities. This paper introduces Neural Conformal Inference for JDPs (NCoin-JDP), a novel likelihood-free approach that leverages the power of deep neural networks (DNNs). NCoin-JDP bypasses the limitations of traditional methods by establishing a direct mapping between observed data and model parameters using a DNN. This approach eliminates the discretization errors inherent in likelihood-based methods, leading to more accurate inference. Despite the black-box nature of DNNs, we establish the asymptotic theory to quantify the approximation error of our algorithm. Additionally, we calibrate the uncertainty of our estimations using conformal prediction, providing theoretical guarantees of equivalence with the Bayesian posterior. NCoin-JDP demonstrates competitive performance compared to state-of-the-art methods. We showcase its effectiveness through numerical simulations and apply it to real-world data (S&P 500 and NASDAQ, 1993–2024) to investigate the impact of COVID-19 on the US economy. All numerical studies are reproducible in <span><span>https://github.com/anonymous1116/NCoin-JDP</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106061"},"PeriodicalIF":4.0,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144739500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalized Lee bounds
Journal of Econometrics, Volume 251, Article 106055
Pub Date: 2025-07-25 | DOI: 10.1016/j.jeconom.2025.106055
Vira Semenova
The approach of Lee (2009) is commonly used to bound the average causal effect in the presence of selection bias, assuming the treatment effect on selection has the same sign for all subjects. This paper generalizes Lee bounds to allow the sign of this effect to be identified by pretreatment covariates, relaxing the standard (unconditional) monotonicity assumption to its conditional analog. Asymptotic theory for generalized Lee bounds is developed in low-dimensional smooth and high-dimensional sparse designs. The paper also generalizes Lee bounds to accommodate multiple outcomes. Focusing on the Job Corps job training program, I first show that unconditional monotonicity is unlikely to hold, and then demonstrate the use of covariates to tighten the bounds.
{"title":"Generalized Lee bounds","authors":"Vira Semenova","doi":"10.1016/j.jeconom.2025.106055","DOIUrl":"10.1016/j.jeconom.2025.106055","url":null,"abstract":"<div><div>Lee (2009) is a common approach to bound the average causal effect in the presence of selection bias, assuming the treatment effect on selection has the same sign for all subjects. This paper generalizes Lee bounds to allow the sign of this effect to be identified by pretreatment covariates, relaxing the standard (unconditional) monotonicity to its conditional analog. Asymptotic theory for generalized Lee bounds is proposed in low-dimensional smooth and high-dimensional sparse designs. The paper also generalizes Lee bounds to accommodate multiple outcomes. Focusing on JobCorps job training program, I first show that unconditional monotonicity is unlikely to hold, and then demonstrate the use of covariates to tighten the bounds.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106055"},"PeriodicalIF":9.9,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144702752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative analysis of two-way fixed effects estimators in staggered treatment designs
Journal of Econometrics, Volume 251, Article 106059
Pub Date: 2025-07-25 | DOI: 10.1016/j.jeconom.2025.106059
Jhordano Aguilar-Loyo
Two-way fixed effects (TWFE) is a flexible and widely used approach for estimating treatment effects, and several TWFE estimators have been proposed for staggered treatment designs. This paper focuses on the extended TWFE estimator, introduced by Borusyak et al. (2024) and Wooldridge (2021), and compares it with alternative TWFE estimators. The main contribution is the derivation of an equation that connects the extended TWFE estimator with the difference-in-differences estimator. This equivalence provides a transparent decomposition of the components of the extended TWFE estimand. The results show that the extended TWFE estimand consists of two distinct components: one that captures meaningful comparisons and a residual term. The paper outlines the assumptions required to identify treatment effects. In line with previous literature, the findings show that the extended TWFE estimator relies on a parallel trends assumption that extends across multiple periods. Additionally, illustrative examples compare the TWFE estimators under violations of the parallel trends assumption. The results suggest that no single estimator outperforms the others. The choice of the TWFE estimator depends on the parameter of interest and the characteristics of the empirical application.
{"title":"A comparative analysis of two-way fixed effects estimators in staggered treatment designs","authors":"Jhordano Aguilar-Loyo","doi":"10.1016/j.jeconom.2025.106059","DOIUrl":"10.1016/j.jeconom.2025.106059","url":null,"abstract":"<div><div>Two-way fixed effects (TWFE) is a flexible and widely used approach for estimating treatment effects, and several TWFE estimators have been proposed for staggered treatment designs. This paper focuses on the extended TWFE estimator, introduced by Borusyak et al. (2024) and Wooldridge (2021), and compares it with alternative TWFE estimators. The main contribution is the derivation of an equation that connects the extended TWFE estimator with the difference-in-differences estimator. This equivalence provides a transparent decomposition of the components of the extended TWFE estimand. The results show that the extended TWFE estimand consists of two distinct components: one that captures meaningful comparisons and a residual term. The paper outlines the assumptions required to identify treatment effects. In line with previous literature, the findings show that the extended TWFE estimator relies on a parallel trends assumption that extends across multiple periods. Additionally, illustrative examples compare the TWFE estimators under violations of the parallel trends assumption. The results suggest that no single estimator outperforms the others. The choice of the TWFE estimator depends on the parameter of interest and the characteristics of the empirical application.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106059"},"PeriodicalIF":9.9,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144702751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust estimation for dynamic spatial autoregression models with nearly optimal rates
Journal of Econometrics, Volume 251, Article 106065
Pub Date: 2025-07-24 | DOI: 10.1016/j.jeconom.2025.106065
Yin Lu, Chunbai Tao, Di Wang, Gazi Salah Uddin, Libo Wu, Xuening Zhu
Spatial autoregression has been extensively studied in various applications, yet robust estimation methods for it have received limited attention. In this work, we introduce two dynamic spatial autoregression (DSAR) models aimed at capturing temporal trends and depicting the asymmetric network effects of the units. For both DSAR models, we propose a truncated Yule–Walker estimation method, which is tailored to achieve robust estimation in the presence of heavy-tailed data. Additionally, we extend this robust estimation procedure to a constrained estimation framework using the Dantzig selector, enabling the identification of sparse network effects observed in real-world applications. Theoretically, the minimax optimality of the proposed estimators is derived under certain conditions on the weighting matrix. Empirical studies, including an analysis of financial contagion in the Chinese stock market and the dynamics of live streaming popularity, demonstrate the practical efficacy of our methods.
{"title":"Robust estimation for dynamic spatial autoregression models with nearly optimal rates","authors":"Yin Lu , Chunbai Tao , Di Wang , Gazi Salah Uddin , Libo Wu , Xuening Zhu","doi":"10.1016/j.jeconom.2025.106065","DOIUrl":"10.1016/j.jeconom.2025.106065","url":null,"abstract":"<div><div>Spatial autoregression has been extensively studied in various applications, yet its robust estimation methods have received limited attention. In this work, we introduce two dynamic spatial autoregression (DSAR) models aimed at capturing temporal trends and depicting the asymmetric network effects of the units. For both DSAR models, we propose a truncated Yule–Walker estimation method, which is tailored to achieve robust estimation in the presence of heavy-tailed data. Additionally, we extend this robust estimation procedure to a constrained estimation framework using the Dantzig selector, enabling the identification of sparse network effects observed in real-world applications. Theoretically, the minimax optimality of proposed estimators is derived under certain conditions on the weighting matrix. Empirical studies, including an analysis of financial contagion in the Chinese stock market and the dynamics of live streaming popularity, demonstrate the practical efficacy of our methods.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106065"},"PeriodicalIF":9.9,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144696381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sieve estimation of state-varying factor models
Journal of Econometrics, Volume 251, Article 106064
Pub Date: 2025-07-23 | DOI: 10.1016/j.jeconom.2025.106064
Liangjun Su, Sainan Jin, Xia Wang
In this paper, we propose a sieve approach to estimate state-varying factor models, where the factor loadings vary over specific state variables. Our methodology consists of a two-step estimation procedure for the parameters of interest. In the first step, we obtain preliminary consistent estimates of the factors and factor loadings via nuclear norm regularization (NNR). In the second step, we perform post-NNR iterative least squares estimation of the factors and factor loadings. We establish the asymptotic properties of these estimators. Based on the estimation theory, we propose a test for the null hypothesis of constant factor loadings and examine the asymptotic properties of the test statistic. Monte Carlo simulations demonstrate the favorable performance of the proposed estimation procedure and testing method in finite samples. An application to a U.S. macroeconomic dataset suggests potential state-dependency within the U.S. economy.
{"title":"Sieve estimation of state-varying factor models","authors":"Liangjun Su , Sainan Jin , Xia Wang","doi":"10.1016/j.jeconom.2025.106064","DOIUrl":"10.1016/j.jeconom.2025.106064","url":null,"abstract":"<div><div>In this paper, we propose a sieve approach to estimate state-varying factor models, where the factor loadings vary over specific state variables. Our methodology consists of a two-step estimation procedure for the parameters of interest. In the first step, we achieve preliminary consistent estimates of the factors and factor loadings via nuclear norm regularization (NNR). In the second step, we perform post-NNR iterative least squares estimations for the factors and factor loadings. We establish the asymptotic properties of these estimators. Based on the estimation theory, we propose a test for the null hypothesis of constant factor loadings and examine the asymptotic properties of the test statistic. Monte Carlo simulations demonstrate the favorable performance of the proposed estimation procedure and testing method in finite samples. An application to a U.S. macroeconomic dataset suggests potential state-dependency within the U.S. economy.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"251 ","pages":"Article 106064"},"PeriodicalIF":9.9,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144686202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}