Pub Date: 2026-01-01 | Epub Date: 2026-01-16 | DOI: 10.1016/j.jeconom.2026.106184
Yao Luo , Peijun Sang
We propose a class of sieve-based efficient estimators for structural models (SEES), which approximate the solution using a linear combination of basis functions and impose equilibrium conditions as a penalty to determine the best-fitting coefficients. Our estimators circumvent repeated solution of the structural model, apply to a broad class of models, and are consistent, asymptotically normal, and asymptotically efficient. Moreover, they solve unconstrained optimization problems with fewer unknowns and offer convenient standard error calculations. As an illustration, we apply our method to an entry game between Walmart and Kmart.
Title: Efficient estimation of structural models via sieves. Journal of Econometrics, Vol. 253, Article 106184.
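The core idea of the abstract above, approximating an unknown solution by a linear combination of basis functions and picking coefficients that minimize the violation of an equilibrium condition, can be sketched on a toy problem (this is an illustration of the general sieve-plus-penalty idea, not the paper's SEES estimator; the functional equation and basis are made up for the example):

```python
import numpy as np

# Toy sketch: approximate the solution of the "equilibrium" condition
#   f(x) = x + 0.5 * f(x / 2)   on [0, 1]
# by a polynomial sieve, choosing coefficients to minimize the squared
# violation of the condition at collocation points. The exact solution
# of this toy equation is f(x) = 4x/3.

def basis(x, K=4):
    """Polynomial sieve basis [1, x, ..., x^(K-1)] evaluated at x."""
    return np.vander(x, K, increasing=True)

x = np.linspace(0.0, 1.0, 50)          # collocation points
A = basis(x) - 0.5 * basis(x / 2.0)    # condition residual is A @ c - x
c, *_ = np.linalg.lstsq(A, x, rcond=None)

f_hat = basis(np.array([0.6])) @ c     # evaluate the sieve approximation
print(float(f_hat[0]))                 # close to 4*0.6/3 = 0.8
```

Because the penalty is quadratic in the coefficients here, the "best-fitting" coefficients come from one unconstrained least-squares solve, with no repeated solution of the model.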
Pub Date: 2026-01-01 | Epub Date: 2025-11-17 | DOI: 10.1016/j.jeconom.2025.106149
Shouxia Wang , Hua Liu , Jinhong You , Tao Huang
Inspired by a real data example illustrating the periodicity in hog price data, this study aims to analyze time series that exhibit an unknown period alongside complex covariate effects. To address these complexities and effectively handle the data structures, we incorporate the partial functional varying-coefficient single-index model into the classical time series decomposition model. We propose a two-stage estimation procedure designed to accurately estimate the unknown periodic component and the associated covariate functions. In the first stage, the unknown period is estimated using a penalized least squares approach, where the covariate functions are approximated via B-splines rather than being ignored. In the second stage, given the estimated period, we employ B-splines to estimate key components, including the amplitude of the periodic component, the varying-coefficient functions, the single-index link function, and the functional slope function. Asymptotic results for the proposed estimators are derived, encompassing the consistency of the period estimator as well as the asymptotic properties of the estimated periodic sequence and covariate functions. Furthermore, we conduct simulations to validate the superior performance of the proposed method and demonstrate its practical applicability through the aforementioned empirical example.
Title: Functional semiparametric modeling for nonstationary and periodic time series data. Journal of Econometrics, Vol. 253, Article 106149.
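The first-stage idea, estimating an unknown period by penalized least squares, can be illustrated with a stripped-down toy (assumptions: an integer period, periodic means in place of the paper's B-spline covariate adjustment, and an arbitrary penalty weight):

```python
import numpy as np

# Toy sketch of period estimation by penalized least squares: for each
# candidate period P, fit periodic means by averaging observations that
# share a phase, then pick the P minimizing RSS + lam * P. The penalty
# discourages multiples of the true period (e.g. 24 or 36 when the truth
# is 12), which fit essentially as well without it.

rng = np.random.default_rng(0)
true_period = 12
t = np.arange(600)
y = np.sin(2 * np.pi * t / true_period) + 0.3 * rng.standard_normal(t.size)

def penalized_rss(y, P, lam=0.5):
    phases = np.arange(y.size) % P
    fitted = np.array([y[phases == p].mean() for p in range(P)])[phases]
    return np.sum((y - fitted) ** 2) + lam * P

candidates = range(2, 37)
est = min(candidates, key=lambda P: penalized_rss(y, P))
print(est)  # recovers the true period 12
```

The paper's second stage would then, given the estimated period, fit the remaining components (amplitude, varying coefficients, link and slope functions) by B-splines.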
Pub Date: 2026-01-01 | Epub Date: 2025-11-14 | DOI: 10.1016/j.jeconom.2025.106123
Sylvain Barde
The paper proposes a new algorithm for finding the confidence set of a collection of forecasts or prediction models. Existing numerical implementations use an elimination approach, where one starts with the full collection of models and successively eliminates the worst performing until the null of equal predictive ability is no longer rejected at a given confidence level. The intuition behind the proposed implementation lies in reversing the process, i.e. starting with a collection of two models and updating both the model rankings and p-values as models are successively added to the collection. The first benefit of this approach is a reduction of one polynomial order in both the time complexity and memory cost of finding the confidence set of a collection of M models using the R rule, falling respectively from O(M^3) to O(M^2) and from O(M^2) to O(M). The second key benefit is that it allows for further models to be added at a later point in time, thus enabling collaborative efforts using the model confidence set procedure. The paper proves that this implementation is equivalent to the elimination approach, demonstrates the improved performance on a multivariate GARCH collection consisting of 4800 models, and discusses possible use cases where this improved performance could prove useful.
Title: Large-scale model comparison with fast model confidence sets. Journal of Econometrics, Vol. 253, Article 106123.
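The elimination baseline that the paper speeds up can be sketched as follows (a heavily simplified toy: i.i.d. losses and a plain normal approximation stand in for the bootstrap used in actual model confidence set implementations):

```python
import numpy as np

# Toy elimination-based model confidence set. Starting from all M models,
# the worst performer is dropped until the largest pairwise t-statistic on
# mean loss differentials no longer exceeds a critical value. Recomputing
# all pairwise statistics and re-ranking at every round is what gives the
# elimination approach its higher polynomial cost in M.

def confidence_set(losses, crit=3.0):
    """losses: (T, M) array of per-period losses for M models."""
    T, M = losses.shape
    keep = list(range(M))
    while len(keep) > 1:
        sub = losses[:, keep]
        d = sub[:, :, None] - sub[:, None, :]   # pairwise loss differentials
        tstat = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(T) + 1e-12)
        if tstat.max() <= crit:                 # equal ability not rejected
            break
        worst = int(np.argmax(sub.mean(0)))     # eliminate worst average loss
        keep.pop(worst)
    return keep

rng = np.random.default_rng(1)
losses = rng.standard_normal((500, 5)) ** 2
losses[:, 4] += 1.0                             # model 4 is clearly worse
cs = confidence_set(losses)
print(cs)                                       # model 4 is eliminated
```

The paper's contribution reverses this loop: models are added one at a time and the rankings and p-values are updated incrementally, cutting one polynomial order from both time and memory.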
Pub Date: 2026-01-01 | Epub Date: 2025-12-10 | DOI: 10.1016/j.jeconom.2025.106164
Adam Dearing
We provide new non-parametric identification results for stationary dynamic discrete choice models, where both the flow utilities and the distribution of unobserved shocks are fully non-parametric. Our main identification result establishes that a multinomial choice model is non-parametrically identified when there is a special regressor that (i) has a known derivative in the utility function (e.g., enters utility quasi-linearly); (ii) only affects the evolution of the other variables indirectly through the policy function; and (iii) exhibits a type of bounded persistence. To our knowledge, this is the first non-parametric identification result for stationary models that does not require any state variable to exhibit a form of serial independence. Our identification arguments map conditional choice probabilities and the state transition process into structural primitives, and they can be applied to models with persistent unobserved heterogeneity. Our identification results have broad applicability in practice, since candidate variables for the special regressor are already common in the empirical literature.
Title: Non-parametric identification of stationary dynamic discrete choice models. Journal of Econometrics, Vol. 253, Article 106164.
Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.jeconom.2026.106188
Shiwei Huang , Yu Chen , Jie Hu , Weiping Zhang
This paper introduces a dynamic panel data quantile regression model with network-linked fixed effects, named DQR-NFE, in which unobserved individual heterogeneity is structured through an underlying network. The corresponding estimator is derived by incorporating a quantile network cohesion (QNC) penalty into the dynamic panel quantile regression framework. This penalty encourages connected units within the network to exhibit similar conditional quantiles, with a particularly increased capacity to capture tail network dependence. Relative to conventional fixed-effects specifications, the proposed framework improves the estimation of unobserved heterogeneity and enables more accurate prediction in cold-start settings where training data are unavailable. We establish the consistency and asymptotic normality of the DQR-NFE estimators within a general nonlinear structural framework. These theoretical guarantees hold under both correctly specified and misspecified network structures, with an explicit characterization of their dependence on the network topology. Simulation studies and empirical applications reveal that the proposed estimator outperforms competing approaches in terms of both estimation accuracy and out-of-sample forecasting.
Title: Dynamic panel data quantile regression with network-linked fixed effects. Journal of Econometrics, Vol. 253, Article 106188.
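The quantile network cohesion penalty can be illustrated in miniature (an illustration of the penalty idea only, not the paper's DQR-NFE estimator; the network, quantile level, and penalty weight are arbitrary choices for the example):

```python
import numpy as np
from scipy.optimize import minimize

# Toy quantile regression with a network cohesion penalty: unit effects
# alpha_i minimize the check loss at quantile tau plus a penalty shrinking
# effects of network-connected units toward each other. The check loss is
# nondifferentiable, so a derivative-free method (Nelder-Mead) is used.

def check_loss(u, tau):
    return np.sum(u * (tau - (u < 0)))

rng = np.random.default_rng(2)
n_units, T, tau = 4, 200, 0.5
alpha_true = np.array([1.0, 1.1, -1.0, -0.9])
edges = [(0, 1), (2, 3)]                  # two tightly connected pairs
y = alpha_true[:, None] + rng.standard_normal((n_units, T))

def objective(alpha, lam):
    fit = sum(check_loss(y[i] - alpha[i], tau) for i in range(n_units))
    cohesion = sum((alpha[i] - alpha[j]) ** 2 for i, j in edges)
    return fit + lam * cohesion

a_hat = minimize(objective, np.zeros(n_units), args=(50.0,),
                 method="Nelder-Mead").x
print(np.round(a_hat, 2))  # connected units' effects are pulled together
```

With a large penalty weight the effects of connected units nearly coincide, which is the cohesion behavior the abstract describes; at tail quantiles the same penalty disciplines otherwise noisy tail estimates.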
Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.jeconom.2025.106173
Serafin Grundl , Yu Zhu
This paper proposes a new approach to the identification of first-price auctions that is robust to overbidding, but at the same time remains contiguous with the canonical point-identification approach of Guerre et al. (2000) (GPV) and its simple estimators. We show that a weak identifying restriction allows us to reinterpret the GPV estimates as a bound. We demonstrate that the identifying restriction holds in a set of commonly used auction models that can generate overbidding and is satisfied in the bid data from a laboratory experiment. We illustrate the approach in applications to laboratory data and field data. We recommend that practitioners continue to follow the GPV approach, but interpret the estimates as a bound in applications where they are concerned about overbidding.
Title: A simple, robust identification approach for first-price auctions. Journal of Econometrics, Vol. 253, Article 106173.
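The GPV first-stage inversion that the paper reinterprets as a bound recovers each private value from its bid via v = b + G(b) / ((I - 1) g(b)), where G and g are the bid CDF and density. A simulation sketch (the uniform-value design and the estimators used are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch of the GPV inversion: estimate the bid CDF G by the empirical CDF
# and the bid density g by a Gaussian KDE, then map bids back to values.
# With U[0,1] values and I symmetric bidders, equilibrium bids are
# b = v * (I - 1) / I, so the inversion should recover v.

rng = np.random.default_rng(3)
I = 3                                   # bidders per auction
values = rng.uniform(0, 1, 5000)
bids = values * (I - 1) / I             # equilibrium bidding for U[0,1]

sorted_bids = np.sort(bids)
G = lambda b: np.searchsorted(sorted_bids, b, side="right") / bids.size
g = gaussian_kde(bids)

b = bids[:100]
v_hat = b + G(b) / ((I - 1) * g(b))
print(np.corrcoef(v_hat, values[:100])[0, 1])  # near 1: values recovered
```

Under the paper's weak identifying restriction, the same estimates are read as a (lower) bound on values when overbidding is a concern, rather than as point estimates.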
Pub Date: 2026-01-01 | Epub Date: 2025-12-26 | DOI: 10.1016/j.jeconom.2025.106151
Daniel Ober-Reynolds
Missing data is pervasive in econometric applications, and rarely is it plausible that the data are missing (completely) at random. This paper proposes a methodology for studying the robustness of results drawn from incomplete datasets. Selection is measured as the divergence from the distribution of complete observations to the distribution of incomplete observations. The breakdown point is defined as the minimal amount of selection needed to overturn a given result. Reporting point estimates and lower confidence intervals of the breakdown point is a simple, concise way to communicate the robustness of a result. An estimator of the breakdown point is proposed and shown to be √n-consistent and asymptotically normal. This estimator can be applied directly to conclusions drawn from any model identified with the generalized method of moments (GMM) that satisfies mild assumptions. Simulations demonstrate the finite sample performance of the breakdown point estimator on averages, linear regression, and logistic regression. The methodology is illustrated by estimating the breakdown point of conclusions drawn from several randomized controlled trials suffering from missing data due to attrition.
Title: Robustness to missing data: breakdown point analysis. Journal of Econometrics, Vol. 253, Article 106151.
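The flavor of a breakdown-point calculation can be conveyed with a crude hand-rolled sensitivity check for a mean (this toy measures selection as a shift in the missing observations' mean, not as the distributional divergence the paper uses):

```python
import numpy as np

# Toy breakdown-style sensitivity: with a fraction q of outcomes missing,
# how far below the complete-case mean would the mean of the missing
# observations have to lie to overturn the conclusion "mean > 0"?
# The overall mean is (1 - q) * m_obs + q * m_miss, which crosses zero at
# m_miss = -(1 - q) / q * m_obs.

rng = np.random.default_rng(4)
y = rng.normal(0.5, 1.0, 1000)            # full sample (unobserved in practice)
miss = rng.uniform(size=y.size) < 0.2     # ~20% missing, here at random
y_obs = y[~miss]

q = miss.mean()
m_obs = y_obs.mean()
m_miss_breakdown = -(1 - q) / q * m_obs   # missing-data mean that flips the sign
shift_in_sd = (m_obs - m_miss_breakdown) / y_obs.std(ddof=1)
print(round(shift_in_sd, 2))              # selection this large flips the result
```

Reporting such a quantity (here, how many standard deviations of selection the conclusion can absorb) is the communication device the abstract advocates, with the paper supplying the formal divergence-based measure and its √n-consistent estimator.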
Pub Date: 2026-01-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.jeconom.2025.106120
Artūras Juodis , Simon Reese
This article distills the vast literature on Common Correlated Effects (CCE), initiated by the seminal contribution of Pesaran (2006), into five practical lessons. We provide a concise overview of the CCE framework and describe the reasons for its popularity in empirical (macro-) panel data research. The lessons we draw focus on aspects that have received substantial methodological attention, but remain underappreciated in empirical work.
Title: Five lessons for applied researchers from twenty years of common correlated effects estimation. Journal of Econometrics, Vol. 253, Article 106120.
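The CCE idea the article surveys is that cross-sectional averages of the observables proxy for unobserved common factors. A minimal pooled-CCE simulation sketch (a textbook illustration of Pesaran's (2006) estimator on made-up data, not material from the article):

```python
import numpy as np

# Pooled CCE sketch: y_it = beta * x_it + lam_y_i * f_t + e_it, with x_it
# also loading on the unobserved factor f_t. Partialling out the
# cross-sectional averages (ybar_t, xbar_t) unit by unit removes the
# factor, so the pooled slope on the residuals is nearly unbiased.

rng = np.random.default_rng(5)
N, T, beta = 100, 50, 2.0
f = rng.standard_normal(T)                      # unobserved common factor
lam_y = rng.uniform(0.5, 1.5, N)                # heterogeneous loadings
lam_x = rng.uniform(0.5, 1.5, N)
x = lam_x[:, None] * f + rng.standard_normal((N, T))
y = beta * x + lam_y[:, None] * f + rng.standard_normal((N, T))

W = np.column_stack([np.ones(T), y.mean(0), x.mean(0)])

def partial_out(v, W):
    """Residuals of v after OLS projection on the columns of W."""
    coef, *_ = np.linalg.lstsq(W, v, rcond=None)
    return v - W @ coef

x_t = np.vstack([partial_out(x[i], W) for i in range(N)])
y_t = np.vstack([partial_out(y[i], W) for i in range(N)])
beta_hat = (x_t.ravel() @ y_t.ravel()) / (x_t.ravel() @ x_t.ravel())
print(round(beta_hat, 2))                       # close to the true beta = 2.0
```

A pooled regression of y on x alone would be biased upward here, since x and the error share the factor f; the augmentation with cross-sectional averages is what removes that bias.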
Pub Date: 2026-01-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.jeconom.2025.106125
Miaomiao Yu , Jiaxuan Li , Yong Zhou
In the modern era of big data, the vast amount of available data has opened new ways to analyze important economic and financial issues. For example, predicting the probability of individual default has become more accurate: as data volumes grow, the number of recorded defaults increases year on year, allowing a more detailed characterization of the defaulted population. However, this setting presents new challenges; one is that samples are stored separately on different machines and cannot be transferred directly, owing to privacy considerations and limited data storage capacity. This paper develops an improved communication-efficient distributed algorithm in which more local summarized information is used to estimate the higher-order derivatives of the loss function at lower communication cost. Furthermore, to protect the privacy of the transmitted vector, we design a privacy-preserving algorithm under a differential privacy constraint, adding a Laplace-distributed noise term to the parameters; this construction extends to settings beyond distributed architectures. Both the non-private and private schemes, in which only local estimators are passed from the local machines to the central machine, are more accurate and efficient, theoretically and practically, than their counterparts. We then suggest a bootstrap scheme to estimate the covariance matrix of the parameter estimators, which supports effective inference. Finally, we find that the proposed method effectively handles practical tasks such as accurate probabilistic prediction of default risk and of climate activity.
Title: Enhancements of communication-efficient distributed statistical inference and its privacy preservation. Journal of Econometrics, Vol. 253, Article 106125.
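The privacy step described above is, in textbook form, the Laplace mechanism: before a local machine shares a summary vector, it adds Laplace noise with scale sensitivity/epsilon. A minimal sketch (the sensitivity, epsilon, and the "local gradient" vector are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Laplace mechanism sketch: releasing vec + Laplace(0, sensitivity/epsilon)
# noise makes the released vector epsilon-differentially private with
# respect to the stated sensitivity of the summary.

def privatize(vec, sensitivity, epsilon, rng):
    """Release vec under the Laplace mechanism."""
    scale = sensitivity / epsilon
    return vec + rng.laplace(0.0, scale, size=vec.shape)

rng = np.random.default_rng(6)
local_gradient = np.array([0.8, -1.2, 0.3])   # summary a machine transmits
released = privatize(local_gradient, sensitivity=1.0, epsilon=2.0, rng=rng)
print(released.shape)                         # same shape as the input, (3,)
```

The central machine aggregates the noisy summaries as usual; a smaller epsilon gives stronger privacy at the cost of noisier aggregates.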
Pub Date: 2026-01-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.jeconom.2025.106171
Z. Merrick Li , Xiye Yang
We test for the presence of market frictions that induce transitory deviations of observed asset prices from the underlying efficient prices. Our test is based on the joint inference of return covariances across multiple horizons. We demonstrate that a small set of horizons suffices to identify a broad spectrum of frictions, both theoretically and practically. Our method works for high- and low-frequency data under different asymptotic regimes. Extensive simulations show our method outperforms widely used state-of-the-art tests. Our empirical studies indicate that intraday transaction prices from recent years can be considered effectively friction-free at significantly higher frequencies.
Title: Multi-horizon test for market frictions. Journal of Econometrics, Vol. 253, Article 106171.
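The mechanism such a test exploits can be seen in a simulation (an illustration of the friction signature in return covariances, not the paper's statistic): i.i.d. microstructure noise on top of an efficient random walk makes one-period returns negatively autocorrelated, with first-order autocovariance about minus the noise variance, while longer-horizon covariances are much less affected.

```python
import numpy as np

# Simulate an efficient log-price (random walk) observed with i.i.d. noise.
# For observed returns r_t = e_t + u_t - u_{t-1}, the lag-1 autocovariance
# is Cov(r_t, r_{t-1}) = -Var(u), so a nonzero value signals frictions.

rng = np.random.default_rng(7)
n, sig_eff, sig_noise = 200_000, 0.01, 0.005
p_eff = np.cumsum(sig_eff * rng.standard_normal(n))    # efficient log-price
p_obs = p_eff + sig_noise * rng.standard_normal(n)     # observed price
r = np.diff(p_obs)

acov1 = np.mean(r[1:] * r[:-1])   # about -sig_noise**2 = -2.5e-05
print(acov1)
```

A joint test across several horizons of such covariances is what lets the paper distinguish a broad range of friction types from the friction-free null.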