We develop a generally applicable full‐information inference method for heterogeneous agent models, combining aggregate time series data and repeated cross‐sections of micro data. To handle unobserved aggregate state variables that affect cross‐sectional distributions, we compute a numerically unbiased estimate of the model‐implied likelihood function. Employing the likelihood estimate in a Markov Chain Monte Carlo algorithm, we obtain fully efficient and valid Bayesian inference. Evaluation of the micro part of the likelihood lends itself naturally to parallel computing. Numerical illustrations in models with heterogeneous households or firms demonstrate that the proposed full‐information method substantially sharpens inference relative to using only macro data, and for some parameters micro data is essential for identification.
{"title":"Full‐information estimation of heterogeneous agent models using macro and micro data","authors":"Laura Liu, Mikkel Plagborg-Moller","doi":"10.3982/qe1810","DOIUrl":"https://doi.org/10.3982/qe1810","url":null,"abstract":"We develop a generally applicable full‐information inference method for heterogeneous agent models, combining aggregate time series data and repeated cross‐sections of micro data. To handle unobserved aggregate state variables that affect cross‐sectional distributions, we compute a numerically unbiased estimate of the model‐implied likelihood function. Employing the likelihood estimate in a Markov Chain Monte Carlo algorithm, we obtain fully efficient and valid Bayesian inference. Evaluation of the micro part of the likelihood lends itself naturally to parallel computing. Numerical illustrations in models with heterogeneous households or firms demonstrate that the proposed full‐information method substantially sharpens inference relative to using only macro data, and for some parameters micro data is essential for identification.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136092733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper analyzes risk aversion in discriminatory share auctions. I generalize the k‐step share auction model of Kastl (2011, 2012) and establish that marginal profits are set‐identified for any given coefficient of constant absolute risk aversion. I also derive necessary conditions for best‐response behavior, which allow risk preferences to be inferred from bidding data. Further, I show how the bidders' optimality conditions allow computing bounds on the marginal profits that are tighter than those currently available. I use my results to estimate import rents from Swiss tariff‐rate quotas on high‐quality beef. Rents are overestimated when risk aversion is ignored, and rent extraction is underestimated. Small bidders (small, privately owned butcheries) are more risk averse than large bidders (general retailers). Best‐response violations are few and uniform across bidder sizes.
{"title":"Risk aversion in share auctions: Estimating import rents from TRQs in Switzerland","authors":"Samuel Häfner","doi":"10.3982/qe1907","DOIUrl":"https://doi.org/10.3982/qe1907","url":null,"abstract":"This paper analyzes risk aversion in discriminatory share auctions. I generalize the k ‐step share auction model of Kastl (2011, 2012) and establish that marginal profits are set‐identified for any given coefficient of constant absolute risk aversion. I also derive necessary conditions for best‐response behavior, which allows determining risk preferences from bidding data. Further, I show how the bidders' optimality conditions allow computing bounds on the marginal profits that are tighter than those currently available. I use my results to estimate import rents from Swiss tariff‐rate quotas on high‐quality beef. Rents are overestimated when ignoring risk aversion, and rent extraction is underestimated. Small bidders (small, privately owned butcheries) are more risk averse than large bidders (general retailers). Best response violations are few and uniform across bidder sizes.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"81 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135180971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce a new approach to estimating differentiated product demand systems that allows for products with zero sales in the data. Zeroes in demand are a common problem in differentiated product markets, but fall outside the scope of existing demand estimation techniques. We show that with a lower bound imposed on the expected sales quantities, we can construct upper and lower bounds for the conditional expectation of the inverse demand. These bounds can be translated into moment inequalities that are shown to yield consistent and asymptotically normal point estimators for demand parameters under natural conditions. In Monte Carlo simulations, we demonstrate that the new approach works well even when the fraction of zeroes is as high as 95%. We apply our estimator to supermarket scanner data and find that correcting the bias caused by zeroes has important empirical implications: for example, price elasticities become twice as large when zeroes are properly accounted for.
{"title":"Estimating demand for differentiated products with zeroes in market share data","authors":"Amit Gandhi, Zhentong Lu, Xiaoxia Shi","doi":"10.3982/qe1593","DOIUrl":"https://doi.org/10.3982/qe1593","url":null,"abstract":"In this paper, we introduce a new approach to estimating differentiated product demand systems that allows for products with zero sales in the data. Zeroes in demand are a common problem in differentiated product markets, but fall outside the scope of existing demand estimation techniques. We show that with a lower bound imposed on the expected sales quantities, we can construct upper and lower bounds for the conditional expectation of the inverse demand. These bounds can be translated into moment inequalities that are shown to yield consistent and asymptotically normal point estimators for demand parameters under natural conditions. In Monte Carlo simulations, we demonstrate that the new approach works well even when the fraction of zeroes is as high as 95%. We apply our estimator to supermarket scanner data and find that correcting the bias caused by zeroes has important empirical implications, for example, price elasticities become twice as large when zeroes are properly controlled.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136297932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We estimate sectoral spillovers around the Great Moderation with the help of forecast error variance decomposition tables. Obtaining such tables in high dimensions is challenging because they are functions of the estimated vector autoregressive coefficients and the residual covariance matrix. In a simulation study, we compare various regularization methods on both and conduct a comprehensive analysis of their performance. We show that standard estimators of large connectedness tables lead to biased results and high estimation uncertainty, both of which are mitigated by regularization. To explore possible causes for the Great Moderation, we apply a cross‐validated estimator to sectoral spillovers of industrial production in the US from 1972 to 2019. We find that the spillover network has considerably weakened, which hints at structural change, for example, through improved inventory management, as a critical explanation for the Great Moderation.
{"title":"Estimating large‐dimensional connectedness tables: The great moderation through the lens of sectoral spillovers","authors":"Felix Brunner, R. Hipp","doi":"10.3982/qe1947","DOIUrl":"https://doi.org/10.3982/qe1947","url":null,"abstract":"We estimate sectoral spillovers around the Great Moderation with the help of forecast error variance decomposition tables. Obtaining such tables in high dimensions is challenging because they are functions of the estimated vector autoregressive coefficients and the residual covariance matrix. In a simulation study, we compare various regularization methods on both and conduct a comprehensive analysis of their performance. We show that standard estimators of large connectedness tables lead to biased results and high estimation uncertainty, both of which are mitigated by regularization. To explore possible causes for the Great Moderation, we apply a cross‐validated estimator on sectoral spillovers of industrial production in the US from 1972 to 2019. We find that the spillover network has considerably weakened, which hints at structural change, for example, through improved inventory management, as a critical explanation for the Great Moderation.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70361872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cars Hommes, Kostas Mavromatis, Tolga Özden, Mei Zhu
We introduce Behavioral Learning Equilibria (BLE) into a multivariate linear framework and apply it to New Keynesian DSGE models. In a BLE, boundedly rational agents use simple but optimal AR(1) forecasting rules whose parameters are consistent with the observed sample mean and autocorrelation of past data. We study the BLE concept in a standard 3‐equation New Keynesian model and develop an estimation methodology for the canonical Smets and Wouters (2007) model. A horse race between Rational Expectations (REE), BLE, and constant gain learning models shows that the BLE model outperforms the REE benchmark and is competitive with constant gain learning models in terms of in‐sample and out‐of‐sample fit. Sample‐autocorrelation learning of optimal AR(1) beliefs provides the best fit when short‐term survey data on inflation expectations are taken into account in the estimation. As a policy application, we show that optimal Taylor rules under AR(1) expectations inherit history dependence and require a lower degree of interest rate smoothing than under REE.
{"title":"Behavioral learning equilibria in New Keynesian models","authors":"Cars Hommes, Kostas Mavromatis, Tolga Özden, Mei Zhu","doi":"10.3982/qe1533","DOIUrl":"https://doi.org/10.3982/qe1533","url":null,"abstract":"We introduce Behavioral Learning Equilibria (BLE) into a multivariate linear framework and apply it to New Keynesian DSGE models. In a BLE, boundedly rational agents use simple, but optimal AR(1) forecasting rules whose parameters are consistent with the observed sample mean and autocorrelation of past data. We study the BLE concept in a standard 3‐equation New Keynesian model and develop an estimation methodology for the canonical Smets and Wouters (2007) model. A horse race between Rational Expectations (REE), BLE, and constant gain learning models shows that the BLE model outperforms the REE benchmark and is competitive with constant gain learning models in terms of in‐sample and out‐of‐sample fitness. Sample‐autocorrelation learning of optimal AR(1) beliefs provides the best fit when short‐term survey data on inflation expectations are taken into account in the estimation. As a policy application, we show that optimal Taylor rules under AR(1) expectations inherit history dependence and require a lower degrees of interest rate smoothing than REE.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135562259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timothy G. Conley, Sílvia Gonçalves, Min Seong Kim, B. Perron
In this paper, we introduce a method of generating bootstrap samples with unknown patterns of cross‐sectional/spatial dependence, which we call the spatial dependent wild bootstrap. This method is a spatial counterpart to the wild dependent bootstrap of Shao (2010) and generates data by multiplying a vector of independently and identically distributed external variables by the eigendecomposition of a bootstrap kernel. We prove the validity of our method for studentized and unstudentized statistics under a linear array representation of the data. Simulation experiments document the potential for improved inference with our approach. We illustrate our method in a firm‐level regression application investigating the relationship between firms' sales growth and the import activity in their local markets, using unique firm‐level and import data for Canada.
{"title":"Bootstrap inference under cross‐sectional dependence","authors":"Timothy G. Conley, Sílvia Gonçalves, Min Seong Kim, B. Perron","doi":"10.3982/qe1626","DOIUrl":"https://doi.org/10.3982/qe1626","url":null,"abstract":"In this paper, we introduce a method of generating bootstrap samples with unknown patterns of cross‐ sectional/spatial dependence, which we call the spatial dependent wild bootstrap. This method is a spatial counterpart to the wild dependent bootstrap of Shao (2010) and generates data by multiplying a vector of independently and identically distributed external variables by the eigendecomposition of a bootstrap kernel. We prove the validity of our method for studentized and unstudentized statistics under a linear array representation of the data. Simulation experiments document the potential for improved inference with our approach. We illustrate our method in a firm‐level regression application investigating the relationship between firms' sales growth and the import activity in their local markets using unique firm‐level and imports data for Canada.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70360674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expectations affect economic decisions, and inaccurate expectations are costly. Expectations can be wrong due to either bias (systematic mistakes) or noise (unsystematic mistakes). We develop a framework for quantifying the level of noise in survey expectations. The method is based on the insight that theoretical models of expectation formation predict a factor structure for individual expectations. Using data from professional forecasters, we find that the magnitude of noise is large (10%–30% of forecast MSE) and comparable to bias. We illustrate how our estimates can be applied to calibrate models with incomplete information and bound the effects of measurement error.
{"title":"Quantifying noise in survey expectations","authors":"Artūras Juodis, S. Kucinskas","doi":"10.3982/qe1633","DOIUrl":"https://doi.org/10.3982/qe1633","url":null,"abstract":"Expectations affect economic decisions, and inaccurate expectations are costly. Expectations can be wrong due to either bias (systematic mistakes) or noise (unsystematic mistakes). We develop a framework for quantifying the level of noise in survey expectations. The method is based on the insight that theoretical models of expectation formation predict a factor structure for individual expectations. Using data from professional forecasters, we find that the magnitude of noise is large (10%–30% of forecast MSE) and comparable to bias. We illustrate how our estimates can be applied to calibrate models with incomplete information and bound the effects of measurement error.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70360774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Victor H. Aguiar, Maria Jose Boccardi, Nail Kashaev, Jeongbin Kim
The random utility model (RUM, McFadden and Richter (1990)) has been the standard tool to describe the behavior of a population of decision makers. RUM assumes that decision makers behave as if they maximize a rational preference over a choice set. This assumption may fail when consideration of all alternatives is costly. We provide a theoretical and statistical framework that unifies well‐known models of random (limited) consideration and generalizes them to allow for preference heterogeneity. We apply this methodology to a novel stochastic choice data set that we collected in a large‐scale online experiment. Our data set is unique in that it exhibits both choice set and (attention) frame variation. We run a statistical survival race between competing models of random consideration and RUM. We find that RUM cannot explain the population behavior. In contrast, we cannot reject the hypothesis that decision makers behave according to the logit attention model (Brady and Rehbeck (2016)).
{"title":"Random utility and limited consideration","authors":"Victor H. Aguiar, Maria Jose Boccardi, Nail Kashaev, Jeongbin Kim","doi":"10.3982/qe1861","DOIUrl":"https://doi.org/10.3982/qe1861","url":null,"abstract":"The random utility model (RUM, McFadden and Richter (1990)) has been the standard tool to describe the behavior of a population of decision makers. RUM assumes that decision makers behave as if they maximize a rational preference over a choice set. This assumption may fail when consideration of all alternatives is costly. We provide a theoretical and statistical framework that unifies well‐known models of random (limited) consideration and generalizes them to allow for preference heterogeneity. We apply this methodology in a novel stochastic choice data set that we collected in a large‐scale online experiment. Our data set is unique since it exhibits both choice set and (attention) frame variation. We run a statistical survival race between competing models of random consideration and RUM. We find that RUM cannot explain the population behavior. In contrast, we cannot reject the hypothesis that decision makers behave according to the logit attention model (Brady and Rehbeck (2016)).","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"74 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136298137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantitative models of sovereign default predict that governments reduce borrowing during recessions to avoid debt crises. A prominent implication of this behavior is that the resulting interest rate spread volatility is counterfactually low. We propose that governments borrow into debt crises because of frictions in the adjustment of their expenditures. We develop a model of government good production, which uses public employment and intermediate consumption as inputs. The inputs have varying degrees of downward rigidity, which means that it is costly to reduce them. Facing an adverse income shock, the government borrows to smooth out the reduction in public employment, which results in rising debt and higher spreads. We quantify this rigidity using the OECD Government Accounts data and show that it explains about 70% of the missing bond spread volatility.
{"title":"Borrowing into debt crises","authors":"Radoslaw Paluszynski, G. Stefanidis","doi":"10.3982/qe1797","DOIUrl":"https://doi.org/10.3982/qe1797","url":null,"abstract":"Quantitative models of sovereign default predict that governments reduce borrowing during recessions to avoid debt crises. A prominent implication of this behavior is that the resulting interest rate spread volatility is counterfactually low. We propose that governments borrow into debt crises because of frictions in the adjustment of their expenditures. We develop a model of government good production, which uses public employment and intermediate consumption as inputs. The inputs have varying degrees of downward rigidity, which means that it is costly to reduce them. Facing an adverse income shock, the government borrows to smooth out the reduction in public employment, which results in increasing debt and higher spread. We quantify this rigidity using the OECD Government Accounts data and show that it explains about 70% of the missing bond spread volatility.","PeriodicalId":46811,"journal":{"name":"Quantitative Economics","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70360974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}