Decomposing informed trading in equity options
Pub Date: 2025-11-20 | DOI: 10.1016/j.jeconom.2025.106131 | Journal of Econometrics 253, Article 106131
Felipe Asencio, Alejandro Bernales, Daniel González, Richard Holowczak, Thanos Verousis
We develop a multi-asset model to decompose informed trading in equity options into two components: one concerning the value of the underlying stock and one concerning its volatility. We isolate the stock-value and volatility components by characterizing their distinct intraday price responses in contracts with different option deltas and vegas, respectively. The stock-value (volatility) component represents, on average, 41% (19%) of the option spread, and remains substantial under various statistical validity analyses and robustness checks. In daily empirical applications, we also show that volatility-informed trading anticipates the 'Volmageddon' high-volatility event and that straddle trades are positively associated with volatility-informed trading.
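A minimal sketch of the identification idea, not the authors' structural model: with contract-level half-spreads and Greeks in hand, a cross-sectional regression on |delta| and vega attributes the spread to stock-value and volatility channels. All data and loadings below are simulated and invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000  # hypothetical cross-section of option quotes

# Simulated contract characteristics (illustrative only).
abs_delta = rng.uniform(0.05, 0.95, n)   # |delta| of each contract
vega = rng.uniform(0.01, 0.40, n)        # vega of each contract

# Invented data-generating process: the informed-trading part of the
# half-spread loads on |delta| (stock-value information) and on vega
# (volatility information), plus a constant order-processing component.
half_spread = 0.41 * abs_delta + 0.19 * vega + 0.05 + 0.02 * rng.standard_normal(n)

# Cross-sectional regression of spreads on |delta| and vega.
X = np.column_stack([np.ones(n), abs_delta, vega])
beta, *_ = np.linalg.lstsq(X, half_spread, rcond=None)

fitted = X @ beta
share_value = np.mean(beta[1] * abs_delta) / np.mean(fitted)
share_vol = np.mean(beta[2] * vega) / np.mean(fitted)
print(f"stock-value share ~ {share_value:.2f}, volatility share ~ {share_vol:.2f}")
```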
{"title":"Decomposing informed trading in equity options","authors":"Felipe Asencio , Alejandro Bernales , Daniel González , Richard Holowczak , Thanos Verousis","doi":"10.1016/j.jeconom.2025.106131","DOIUrl":"10.1016/j.jeconom.2025.106131","url":null,"abstract":"<div><div>We develop a multi-asset model to decompose informed trading into the components concerning the underlying stock-value and the volatility in equity options. We isolate the stock-value and volatility components by characterizing their distinct intraday price responses in contracts with different option <em>deltas</em> and <em>vegas</em>, respectively. The stock-value (volatility) component represents on average 41 % (19 %) of the option spread, which remains substantial under various statistical validity analyses and robustness checks. In daily empirical applications, we also show that volatility-informed trading anticipates a 'Volmageddon' high-volatility event, and straddle trades are positively associated with volatility-informed trading.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106131"},"PeriodicalIF":4.0,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145577888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portfolio selection in modern finance involves constructing optimal asset-allocation strategies that balance risk and return. Traditional portfolio selection, however, faces new challenges from systemic events, as exemplified by the global financial crisis and the COVID-19 pandemic. In response, we introduce a nonparametric, systemic-risk-driven portfolio selection approach that models market and portfolio losses using kernel density estimation. In the event of market underperformance, we aim to minimize the conditional expected shortfall (CoES) of portfolio losses while targeting a specific return. Directly estimating CoES with nonparametric kernel methods does not produce an objective function that is convex in the portfolio weights. To address this, we propose an augmentation of the objective function that ensures convexity, guaranteeing a unique solution for the optimal portfolio weights regardless of the sample size. Through simulations, we demonstrate the proposed approach's consistency and its out-of-sample performance relative to benchmark portfolio criteria and CoES-based parametric models. Applying the method to a real dataset showcases its superior risk-return performance relative to existing approaches.
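A minimal sketch of the estimation target, on simulated returns: a kernel-smoothed CoES of portfolio losses given market distress, minimized subject to budget and return-target constraints. Note that the raw smoothed objective is generally non-convex in the weights, which is exactly the problem the paper's convex augmentation addresses; this sketch simply optimizes numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
T, d = 750, 5                                         # hypothetical sample
R = 0.0005 + 0.01 * rng.standard_normal((T, d))       # asset returns (invented)
Rm = R.mean(axis=1) + 0.005 * rng.standard_normal(T)  # market return (invented)

alpha = 0.95
var_m = np.quantile(-Rm, alpha)           # VaR of market losses
h = 1.06 * (-Rm).std() * T ** (-0.2)      # rule-of-thumb bandwidth

def coes(w):
    """Kernel-smoothed CoES of portfolio losses, given market underperformance."""
    Lp = -R @ w                           # portfolio losses
    wts = norm.cdf((-Rm - var_m) / h)     # smooth stand-in for 1{market loss >= VaR}
    return np.sum(wts * Lp) / np.sum(wts)

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},          # budget
        {"type": "ineq", "fun": lambda w: R.mean(axis=0) @ w - 0.0005}]  # return target
res = minimize(coes, np.full(d, 1 / d), bounds=[(0, 1)] * d, constraints=cons)
print("weights:", np.round(res.x, 3), " CoES:", round(float(res.fun), 5))
```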
{"title":"Statistical inference for systemic risk-driven portfolio selection","authors":"Tsz Chai Fung , Yinhuan Li , Liang Peng , Linyi Qian","doi":"10.1016/j.jeconom.2025.106127","DOIUrl":"10.1016/j.jeconom.2025.106127","url":null,"abstract":"<div><div>Portfolio selection in modern finance involves constructing optimal asset allocation strategies that balance risk and return. However, traditional portfolio selection faces new challenges due to systemic events, as exemplified by the financial crisis and the COVID-19 pandemic. In response, we introduce a nonparametric systemic risk-driven portfolio selection approach that models market and portfolio losses using kernel density estimation. In the event of market underperformance, we aim to minimize the conditional expected shortfall (CoES) of portfolio losses while targeting a specific return. We observe that directly estimating CoES using nonparametric kernel methods does not produce a convex objective function with respect to portfolio weights. To address this, we propose an augmentation of the objective function to ensure convexity, guaranteeing a unique solution for optimal portfolio weights regardless of the sample size. Through simulations, we demonstrate our proposed approach’s consistency and out-of-sample performance compared to benchmark portfolio criteria and CoES-based parametric models. Applying this method to a real dataset showcases its superior risk–return performance relative to existing approaches.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106127"},"PeriodicalIF":4.0,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145577892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Difference-in-Differences with compositional changes
Pub Date: 2025-11-19 | DOI: 10.1016/j.jeconom.2025.106147 | Journal of Econometrics 253, Article 106147
Pedro H.C. Sant’Anna, Qi Xu
This paper studies Difference-in-Differences (DiD) setups with repeated cross-sectional data and potential compositional changes across time periods. We begin our analysis by deriving the efficient influence function and the semiparametric efficiency bound for the average treatment effect on the treated (ATT). We introduce nonparametric estimators that attain the semiparametric efficiency bound under mild rate conditions on the estimators of the nuisance functions, exhibiting a type of rate doubly robust (DR) property. Additionally, we document a trade-off related to compositional changes: we derive the asymptotic bias of DR DiD estimators that erroneously exclude compositional changes, as well as the efficiency loss incurred when one fails to rule out compositional changes that are in fact absent. Based on this trade-off, we propose a nonparametric Hausman-type test for compositional changes. The finite-sample performance of the proposed DiD tools is evaluated through Monte Carlo experiments and an empirical application. We consider extensions of our framework that accommodate double machine learning procedures with cross-fitting, as well as setups in which some units are observed in both the pre- and post-treatment periods. As a by-product of our analysis, we present a new uniform stochastic expansion of the local polynomial multinomial logit estimator, which may be of independent interest.
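The paper's efficient estimators are not reproduced here; as a baseline, the sketch below implements a textbook Abadie (2005)-type IPW DiD for repeated cross-sections under the assumption of no compositional changes, with an invented data-generating process.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000                                    # pooled repeated cross-sections
X = rng.standard_normal((n, 2))               # covariates
T = rng.binomial(1, 0.5, n)                   # 1 = post-treatment period
p_true = 1 / (1 + np.exp(-0.5 * X[:, 0]))     # treatment-group membership prob.
D = rng.binomial(1, p_true)
# Parallel trends hold; the true ATT is 1.0.
Y = X @ np.array([1.0, 0.5]) + 2 * D + T + 1.0 * D * T + rng.standard_normal(n)

# Propensity of belonging to the treated group, fitted on covariates.
p_hat = LogisticRegression(C=1e6, max_iter=1000).fit(X, D).predict_proba(X)[:, 1]
lam = T.mean()

# Abadie (2005)-style IPW DiD weights for repeated cross-sections.
w = (T - lam) / (lam * (1 - lam)) * (D - p_hat) / (D.mean() * (1 - p_hat))
print("ATT estimate:", round(float(np.mean(w * Y)), 3))   # close to 1.0
```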
{"title":"Difference-in-Differences with compositional changes","authors":"Pedro H.C. Sant’Anna , Qi Xu","doi":"10.1016/j.jeconom.2025.106147","DOIUrl":"10.1016/j.jeconom.2025.106147","url":null,"abstract":"<div><div>This paper studies Difference-in-Differences (DiD) setups with repeated cross-sectional data and potential compositional changes across time periods. We begin our analysis by deriving the efficient influence function and the semiparametric efficiency bound for the average treatment effect on the treated (ATT). We introduce nonparametric estimators that attain the semiparametric efficiency bound under mild rate conditions on the estimators of the nuisance functions, exhibiting a type of rate doubly robust (DR) property. Additionally, we document a trade-off related to compositional changes: We derive the asymptotic bias of DR DiD estimators that erroneously exclude compositional changes and the efficiency loss when one fails to correctly rule out compositional changes. We propose a nonparametric Hausman-type test for compositional changes based on these trade-offs. The finite sample performance of the proposed DiD tools is evaluated through Monte Carlo experiments and an empirical application. We consider extensions of our framework that accommodate double machine learning procedures with cross-fitting, and setups when some units are observed in both pre- and post-treatment periods. As a by-product of our analysis, we present a new uniform stochastic expansion of the local polynomial multinomial logit estimator, which may be of independent interest.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106147"},"PeriodicalIF":4.0,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145577891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A jackknife bias correction for nonlinear network data models with fixed effects
Pub Date: 2025-11-19 | DOI: 10.1016/j.jeconom.2025.106130 | Journal of Econometrics 253, Article 106130
David W. Hughes
I introduce a new method for bias correction in dyadic models with agent-specific fixed effects, including the dyadic link-formation model with homophily and degree heterogeneity. The proposed approach uses a jackknife procedure to deal with the incidental parameters problem. The method can be applied to both directed and undirected networks, allows for non-binary outcome variables, and can be used to bias-correct estimates of average effects and counterfactual outcomes. I also show how the jackknife can be used to bias-correct fixed-effect averages over functions that depend on multiple nodes, e.g., triads or tetrads in the network. As an example, I implement specification tests for dependence across dyads, such as reciprocity or transitivity. Finally, I demonstrate the usefulness of the estimator in an application to a gravity model of import/export relationships across countries.
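A generic delete-one jackknife bias correction, shown on a toy biased estimator; the paper's network version deletes nodes (and all dyads involving them) from the estimation, which this sketch only gestures at.

```python
import numpy as np

def jackknife_bias_correct(estimator, blocks):
    """Delete-one jackknife: theta_bc = n * theta_hat - (n - 1) * mean(theta_(-i)).

    `blocks` is a list of per-node data blocks; deleting block i mimics
    removing node i (and, in a network, every dyad involving it).
    """
    n = len(blocks)
    full = estimator(blocks)
    loo = np.array([estimator(blocks[:i] + blocks[i + 1:]) for i in range(n)])
    return n * full - (n - 1) * loo.mean(axis=0)

# Toy check: the variance MLE (divisor n) has a first-order bias that the
# jackknife removes exactly, recovering the unbiased divisor-(n-1) estimate.
rng = np.random.default_rng(3)
sample = list(rng.standard_normal(50))
mle_var = lambda xs: np.var(np.asarray(xs))
print(jackknife_bias_correct(mle_var, sample))     # approximately 1.0
```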
{"title":"A jackknife bias correction for nonlinear network data models with fixed effects","authors":"David W. Hughes","doi":"10.1016/j.jeconom.2025.106130","DOIUrl":"10.1016/j.jeconom.2025.106130","url":null,"abstract":"<div><div>I introduce a new method for bias correction of dyadic models with agent-specific fixed effects, including the dyadic link formation model with homophily and degree heterogeneity. The proposed approach uses a jackknife procedure to deal with the incidental parameters problem. The method can be applied to both directed and undirected networks, allows for non-binary outcome variables, and can be used to bias correct estimates of average effects and counterfactual outcomes. I also show how the jackknife can be used to bias correct fixed-effect averages over functions that depend on multiple nodes, e.g. triads or tetrads in the network. As an example, I implement specification tests for dependence across dyads, such as reciprocity or transitivity. Finally, I demonstrate the usefulness of the estimator in an application to a gravity model for import/export relationships across countries.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106130"},"PeriodicalIF":4.0,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145577889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Functional semiparametric modeling for nonstationary and periodic time series data
Pub Date: 2025-11-17 | DOI: 10.1016/j.jeconom.2025.106149 | Journal of Econometrics 253, Article 106149
Shouxia Wang, Hua Liu, Jinhong You, Tao Huang
Motivated by a real-data example illustrating the periodicity of hog prices, this study analyzes time series that exhibit an unknown period alongside complex covariate effects. To handle these data structures, we incorporate the partial functional varying-coefficient single-index model into the classical time series decomposition model. We propose a two-stage estimation procedure designed to accurately estimate the unknown periodic component and the associated covariate functions. In the first stage, the unknown period is estimated using a penalized least squares approach in which the covariate functions are approximated via B-splines rather than ignored. In the second stage, given the estimated period, we employ B-splines to estimate the key components: the amplitude of the periodic component, the varying-coefficient functions, the single-index link function, and the functional slope function. We derive asymptotic results for the proposed estimators, encompassing the consistency of the period estimator as well as the asymptotic properties of the estimated periodic sequence and covariate functions. Simulations validate the superior performance of the proposed method, and the hog-price example demonstrates its practical applicability.
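A toy version of the first stage, under strong simplifying assumptions: grid search over integer candidate periods, fitting phase means and penalizing model size. The paper's actual first stage uses penalized least squares with B-spline approximations of the covariate functions rather than plain phase means.

```python
import numpy as np

rng = np.random.default_rng(4)
n, true_period = 600, 21
t = np.arange(n)
season = np.sin(2 * np.pi * t / true_period)           # periodic component
y = 0.002 * t + season + 0.3 * rng.standard_normal(n)  # trend + period + noise

def penalized_ssr(y, period, lam=2.0):
    """Fit phase means for a candidate period; penalize the number of phases."""
    phases = np.arange(len(y)) % period
    fit = np.array([y[phases == p].mean() for p in range(period)])[phases]
    return np.sum((y - fit) ** 2) + lam * period

candidates = range(2, 60)
est = min(candidates, key=lambda p: penalized_ssr(y, p))
print("estimated period:", est)    # 21 with high probability
```

The penalty term is what keeps integer multiples of the true period (here 42) from winning: they fit no better but cost twice as many phase means.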
{"title":"Functional semiparametric modeling for nonstationary and periodic time series data","authors":"Shouxia Wang , Hua Liu , Jinhong You , Tao Huang","doi":"10.1016/j.jeconom.2025.106149","DOIUrl":"10.1016/j.jeconom.2025.106149","url":null,"abstract":"<div><div>Inspired by a real data example illustrating the periodicity in hog price data, this study aims to analyze time series that exhibit an unknown period alongside complex covariate effects. To address these complexities and effectively handle the data structures, we incorporate the partial functional varying-coefficient single-index model into the classical time series decomposition model. We propose a two-stage estimation procedure designed to accurately estimate the unknown periodic component and the associated covariate functions. In the first stage, the unknown period is estimated using a penalized least squares approach, where the covariate functions are approximated via B-splines rather than being ignored. In the second stage, given the estimated period, we employ B-splines to estimate key components, including the amplitude of the periodic component, the varying-coefficient functions, the single-index link function, and the functional slope function. Asymptotic results for the proposed estimators are derived, encompassing the consistency of the period estimator as well as the asymptotic properties of the estimated periodic sequence and covariate functions. Furthermore, we conduct simulations to validate the superior performance of the proposed method and demonstrate its practical applicability through the aforementioned empirical example.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106149"},"PeriodicalIF":4.0,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145577890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-scale model comparison with fast model confidence sets
Pub Date: 2025-11-14 | DOI: 10.1016/j.jeconom.2025.106123 | Journal of Econometrics 253, Article 106123
Sylvain Barde
The paper proposes a new algorithm for finding the confidence set of a collection of forecasts or prediction models. Existing numerical implementations use an elimination approach, where one starts with the full collection of models and successively eliminates the worst performing until the null of equal predictive ability is no longer rejected at a given confidence level. The intuition behind the proposed implementation lies in reversing the process, i.e. starting with a collection of two models and updating both the model rankings and p-values as models are successively added to the collection. The first benefit of this approach is a reduction of one polynomial order in both the time complexity and memory cost of finding the confidence set of a collection of $M$ models using the $R$ rule, falling respectively from $O(M^3)$ to $O(M^2)$ and from $O(M^2)$ to $O(M)$. The second key benefit is that it allows for further models to be added at a later point in time, thus enabling collaborative efforts using the model confidence set procedure. The paper proves that this implementation is equivalent to the elimination approach, demonstrates the improved performance on a multivariate GARCH collection consisting of 4800 models, and discusses possible use-cases where this improved performance could prove useful.
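For contrast with the paper's incremental algorithm, here is a compact sketch of the classical elimination approach it improves upon, using the max deviation-from-average statistic and an iid bootstrap; the paper works with the $R$ (range) rule and time-series losses, so treat this as illustrative only.

```python
import numpy as np

def mcs_elimination(L, alpha=0.10, B=500, seed=0):
    """Simplified model confidence set by elimination.

    L: (T, M) array of losses for M models. Uses the max standardized
    deviation-from-average statistic with an iid bootstrap.
    """
    rng = np.random.default_rng(seed)
    keep = list(range(L.shape[1]))
    n_obs = L.shape[0]
    while len(keep) > 1:
        d = L[:, keep] - L[:, keep].mean(axis=1, keepdims=True)
        dbar = d.mean(axis=0)
        idx = rng.integers(0, n_obs, (B, n_obs))
        boot = d[idx].mean(axis=1) - dbar            # centered bootstrap means
        se = boot.std(axis=0, ddof=1)
        t_obs = dbar / se
        t_max = np.abs(boot / se).max(axis=1)
        if np.mean(t_max >= t_obs.max()) >= alpha:   # cannot reject equal ability
            break
        keep.pop(int(np.argmax(t_obs)))              # eliminate the worst model
    return keep

rng = np.random.default_rng(1)
L = rng.standard_normal((500, 10)) + np.linspace(0.0, 0.5, 10)
print(mcs_elimination(L))    # typically retains only the low-loss models
```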
{"title":"Large-scale model comparison with fast model confidence sets","authors":"Sylvain Barde","doi":"10.1016/j.jeconom.2025.106123","DOIUrl":"10.1016/j.jeconom.2025.106123","url":null,"abstract":"<div><div>The paper proposes a new algorithm for finding the confidence set of a collection of forecasts or prediction models. Existing numerical implementations use an elimination approach, where one starts with the full collection of models and successively eliminates the worst performing until the null of equal predictive ability is no longer rejected at a given confidence level. The intuition behind the proposed implementation lies in reversing the process, i.e. starting with a collection of two models and updating both the model rankings and p-values as models are successively added to the collection. The first benefit of this approach is a reduction of one polynomial order in both the time complexity and memory cost of finding the confidence set of a collection of <span><math><mi>M</mi></math></span> models using the <span><math><mi>R</mi></math></span> rule, falling respectively from <span><math><mrow><mi>O</mi><mfenced><mrow><msup><mrow><mi>M</mi></mrow><mrow><mn>3</mn></mrow></msup></mrow></mfenced></mrow></math></span> to <span><math><mrow><mi>O</mi><mfenced><mrow><msup><mrow><mi>M</mi></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfenced></mrow></math></span> and from <span><math><mrow><mi>O</mi><mfenced><mrow><msup><mrow><mi>M</mi></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfenced></mrow></math></span> to <span><math><mrow><mi>O</mi><mfenced><mrow><mi>M</mi></mrow></mfenced></mrow></math></span>. The second key benefit is that it allows for further models to be added at a later point in time, thus enabling collaborative efforts using the model confidence set procedure. The paper proves that this implementation is equivalent to the elimination approach, demonstrates the improved performance on a multivariate GARCH collection consisting of 4800 models, and discusses possible use-cases where this improved performance could prove useful.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106123"},"PeriodicalIF":4.0,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing for peer effects without specifying the network structure
Pub Date: 2025-11-14 | DOI: 10.1016/j.jeconom.2025.106124 | Journal of Econometrics 253, Article 106124
Hyunseok Jung, Xiaodong Liu
This paper proposes an Anderson–Rubin (AR) test for the presence of peer effects in panel data that does not require specifying the network structure. The unrestricted model of our test is a linear panel data model of social interactions with dyad-specific peer-effect coefficients for all potential peers. The proposed AR test evaluates whether these peer-effect coefficients are all zero. As the number of peer-effect coefficients increases with the sample size, so does the number of instrumental variables (IVs) employed to test the restrictions under the null, yielding the many-IV environment of Bekker (1994). By extending existing many-IV asymptotic results to panel data, we establish the asymptotic validity of the proposed AR test. Our Monte Carlo simulations show the robustness and improved performance of the proposed test relative to existing tests that rely on possibly misspecified networks. We provide two applications to demonstrate its empirical relevance.
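A sketch of the textbook fixed-q Anderson–Rubin test; the paper's version lets the number of instruments grow with the sample size (Bekker asymptotics) and handles the panel structure, neither of which is captured here.

```python
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin_test(y, X, Z, beta0):
    """Fixed-q AR test of H0: beta = beta0 in y = X @ beta + u, instruments Z."""
    n, q = Z.shape
    u = y - X @ beta0                               # structural residuals under H0
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u)      # projection of u onto the IVs
    rss_fit = u @ Pu
    rss_res = u @ u - rss_fit
    stat = (rss_fit / q) / (rss_res / (n - q))      # F-statistic for Z on u
    return stat, f_dist.sf(stat, q, n - q)

rng = np.random.default_rng(5)
n = 400
Z = rng.standard_normal((n, 5))                     # instruments (hypothetical)
X = Z.sum(axis=1, keepdims=True) + rng.standard_normal((n, 1))
y = X[:, 0] + rng.standard_normal(n)                # true beta = 1
stat, p = anderson_rubin_test(y, X, Z, np.array([1.0]))
print(f"AR stat = {stat:.2f}, p-value = {p:.3f}")   # H0 true here
```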
{"title":"Testing for peer effects without specifying the network structure","authors":"Hyunseok Jung , Xiaodong Liu","doi":"10.1016/j.jeconom.2025.106124","DOIUrl":"10.1016/j.jeconom.2025.106124","url":null,"abstract":"<div><div>This paper proposes an Anderson–Rubin (AR) test for the presence of peer effects in panel data without the need to specify the network structure. The unrestricted model of our test is a linear panel data model of social interactions with dyad-specific peer effect coefficients for all potential peers. The proposed AR test evaluates if these peer effect coefficients are all zero. As the number of peer effect coefficients increases with the sample size, so does the number of instrumental variables (IVs) employed to test the restrictions under the null, rendering a many-IV environment of Bekker (1994). By extending existing many-IV asymptotic results to panel data, we establish the asymptotic validity of the proposed AR test. Our Monte Carlo simulations show the robustness and improved performance of the proposed test compared to some existing tests with misspecified networks. We provide two applications to demonstrate its empirical relevance.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106124"},"PeriodicalIF":4.0,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Five lessons for applied researchers from twenty years of common correlated effects estimation
Pub Date: 2025-11-13 | DOI: 10.1016/j.jeconom.2025.106120 | Journal of Econometrics 253, Article 106120
Artūras Juodis, Simon Reese
This article distills the vast literature on Common Correlated Effects (CCE), initiated by the seminal contribution of Pesaran (2006), into five practical lessons. We provide a concise overview of the CCE framework and describe the reasons for its popularity in empirical (macro-) panel data research. The lessons we draw focus on aspects that have received substantial methodological attention, but remain underappreciated in empirical work.
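As a reminder of what CCE estimation does in practice, here is a minimal CCE-pooled estimator: augment each unit's regression with cross-sectional averages of the dependent variable and regressors, partial them out, and pool. This is a bare-bones sketch of Pesaran (2006), not of the article's lessons themselves.

```python
import numpy as np

def cce_pooled(Y, X):
    """CCE-pooled estimator: defactor with cross-sectional averages, then pool OLS.

    Y: (N, T) outcomes; X: (N, T, k) regressors.
    """
    N, T, k = X.shape
    # Cross-sectional averages at each t, plus an intercept: (T, k+2) factor proxies.
    F = np.column_stack([np.ones(T), Y.mean(axis=0), X.mean(axis=0)])
    M = np.eye(T) - F @ np.linalg.pinv(F)       # annihilator of the proxies
    xx, xy = np.zeros((k, k)), np.zeros(k)
    for i in range(N):
        Xi = M @ X[i]                           # defactored regressors for unit i
        xx += Xi.T @ Xi
        xy += Xi.T @ (M @ Y[i])
    return np.linalg.solve(xx, xy)

rng = np.random.default_rng(6)
N, T = 100, 50
f = rng.standard_normal(T)                      # unobserved common factor
X = rng.standard_normal((N, T, 1)) + (rng.standard_normal((N, 1)) * f)[..., None]
Y = 0.5 * X[..., 0] + rng.standard_normal((N, 1)) * f + rng.standard_normal((N, T))
print(cce_pooled(Y, X))                         # close to the true slope 0.5
```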
{"title":"Five lessons for applied researchers from twenty years of common correlated effects estimation","authors":"Artūras Juodis , Simon Reese","doi":"10.1016/j.jeconom.2025.106120","DOIUrl":"10.1016/j.jeconom.2025.106120","url":null,"abstract":"<div><div>This article distills the vast literature on Common Correlated Effects (CCE), initiated by the seminal contribution of Pesaran (2006), into five practical lessons. We provide a concise overview of the CCE framework and describe the reasons for its popularity in empirical (macro-) panel data research. The lessons we draw focus on aspects that have received substantial methodological attention, but remain underappreciated in empirical work.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106120"},"PeriodicalIF":4.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancements of communication-efficient distributed statistical inference and its privacy preservation
Pub Date: 2025-11-13 | DOI: 10.1016/j.jeconom.2025.106125 | Journal of Econometrics 253, Article 106125
Miaomiao Yu, Jiaxuan Li, Yong Zhou
In the modern era of big data, the vast amount of available data has opened new ways to analyze important economic and financial questions. For example, predicting the probability of individual default has become more accurate: as data volumes grow, the number of observed defaults increases year on year, allowing a more detailed characterization of the defaulting population. However, this setting presents new challenges; one is that samples are stored separately on different machines and cannot be transferred directly, owing to privacy considerations and limited data-storage capacity. This paper develops an improved communication-efficient distributed algorithm in which more local summary information is used to estimate the higher-order derivatives of the loss function at lower communication cost. Furthermore, to protect the privacy of the vectors exchanged between machines, we design a privacy-preserving algorithm under a differential privacy constraint by adding Laplace-distributed noise to the transmitted parameters; the construction extends to settings beyond distributed architectures. Both the non-private and private schemes, in which only local estimators are passed from the local machines to the central machine, are theoretically and practically more accurate and efficient than their counterparts. We also suggest a bootstrap scheme to estimate the covariance matrix of the parameter estimators, which supports valid inference. Finally, we find that the proposed method handles practical tasks effectively, namely accurate probabilistic prediction of default risk and of climate activity.
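The Laplace mechanism the abstract alludes to is standard and easy to sketch: perturb the vector a local machine transmits with Laplace noise scaled by sensitivity/epsilon. The function name and numbers below are hypothetical, not the paper's implementation.

```python
import numpy as np

def privatize(vector, sensitivity, epsilon, rng):
    """Laplace mechanism: add Laplace(sensitivity / epsilon) noise per coordinate,
    giving epsilon-differential privacy for a query with the given L1 sensitivity."""
    return vector + rng.laplace(scale=sensitivity / epsilon, size=vector.shape)

# Hypothetical use: a local machine perturbs its summary statistic (e.g., a
# gradient or local estimator) before sending it to the central machine.
rng = np.random.default_rng(7)
local_grad = np.array([0.12, -0.08, 0.31])
print(privatize(local_grad, sensitivity=0.5, epsilon=1.0, rng=rng))
```

Smaller epsilon means stronger privacy but noisier communication, which is the accuracy-privacy trade-off the paper's theory quantifies.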
{"title":"Enhancements of communication-efficient distributed statistical inference and its privacy preservation","authors":"Miaomiao Yu , Jiaxuan Li , Yong Zhou","doi":"10.1016/j.jeconom.2025.106125","DOIUrl":"10.1016/j.jeconom.2025.106125","url":null,"abstract":"<div><div>In the modern era of big data, the vast amount of available data has brought more ways to analyze important economic and financial issues. For example, predicting the probability of individual default has become more accurate, as the number of defaulted individuals has increased year-on-year with the increase in data volume, leading to a more detailed characterization of the defaulted population. However, it presents new challenges and one of them is that all samples are separately stored in different machines and cannot be transferred directly for privacy considerations and limited data storage capacity. This paper develops an improved communication-efficient distributed algorithm in which more local summarized information is used to estimate the high-order derivatives of the loss function with lower communication cost. Furthermore, to protect the privacy in the interacted vector, we design a privacy-preserving algorithm based on the differential privacy constraint by adding a Laplace-distributed noise term in the parameters that can be extended to other cases beyond distributed architectures. Both non-private and private schemes, in which only local estimators are passed from the local machine to the central machine, are more theoretically and practically accurate and efficient than their counterparts. Then we suggest a bootstrap scheme to estimate the covariance matrix of the parametric estimators that is beneficial to effective inference. Finally, we find that the proposed method can effectively handle the practical activities that are, accurate probabilistic predictions of default risk and climate activity.</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106125"},"PeriodicalIF":4.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quasi-Bayesian estimation and inference with control functions
Pub Date: 2025-11-13 | DOI: 10.1016/j.jeconom.2025.106126 | Journal of Econometrics 253, Article 106126
Ruixuan Liu, Zhengfei Yu
This paper introduces a quasi-Bayesian method that integrates frequentist nonparametric estimation with Bayesian inference in a two-stage process. Applied to an endogenous discrete choice model, the approach first uses kernel or sieve estimators to estimate the control function nonparametrically, followed by Bayesian methods to estimate the structural parameters. This combination leverages the advantages of both frequentist tractability for nonparametric estimation and Bayesian computational efficiency for complicated structural models. We analyze the asymptotic properties of the resulting quasi-posterior distribution, finding that its mean provides a consistent estimator of the parameters of interest, although its quantiles do not yield valid confidence intervals. However, bootstrapping the quasi-posterior mean accounts for the estimation uncertainty from the first stage, thereby producing asymptotically valid confidence intervals.
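A compressed sketch of the two-stage logic on invented data: a frequentist series first stage produces control-function residuals, and a random-walk Metropolis sampler then explores the quasi-posterior of the structural parameters. Per the paper's result, the quasi-posterior mean is the object to bootstrap; the draw quantiles are not valid intervals.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 1_000
z = rng.uniform(-2, 2, n)                         # instrument
u, e = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], n).T
d = np.sin(z) + u                                 # endogenous regressor
y = (0.8 * d + e > 0).astype(float)               # binary outcome

# Stage 1 (frequentist, nonparametric): polynomial-series estimate of E[d | z],
# a stand-in for the paper's kernel/sieve first step.
Z = np.vander(z, 5)
v_hat = d - Z @ np.linalg.lstsq(Z, d, rcond=None)[0]   # control function

def log_post(theta):
    """Probit log-likelihood with the control function; flat prior."""
    idx = theta[0] * d + theta[1] * v_hat
    p = np.clip(norm.cdf(idx), 1e-10, 1 - 1e-10)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Stage 2: random-walk Metropolis on the quasi-posterior.
theta, lp, draws = np.zeros(2), log_post(np.zeros(2)), []
for s in range(4_000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if s >= 1_000:                                # discard burn-in
        draws.append(theta)
print("quasi-posterior mean:", np.round(np.mean(draws, axis=0), 2))
# For valid confidence intervals, bootstrap this posterior mean over resampled
# data; the quantiles of the draws themselves are not valid, per the paper.
```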
{"title":"Quasi-Bayesian estimation and inference with control functions","authors":"Ruixuan Liu , Zhengfei Yu","doi":"10.1016/j.jeconom.2025.106126","DOIUrl":"10.1016/j.jeconom.2025.106126","url":null,"abstract":"<div><div>This paper introduces a quasi-Bayesian method that integrates frequentist nonparametric estimation with Bayesian inference in a two-stage process. Applied to an endogenous discrete choice model, the approach first uses kernel or sieve estimators to estimate the control function nonparametrically, followed by Bayesian methods to estimate the structural parameters. This combination leverages the advantages of both frequentist tractability for nonparametric estimation and Bayesian computational efficiency for complicated structural models. We analyze the asymptotic properties of the resulting quasi-posterior distribution, finding that its mean provides a consistent estimator for the parameters of interest, although its quantiles do not yield valid confidence intervals. However, bootstrapping the quasi-posterior mean accounts for the estimation uncertainty from the first stage, thereby producing asymptotically valid confidence intervals</div></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"253 ","pages":"Article 106126"},"PeriodicalIF":4.0,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145527584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}