{"title":"Robust Comparative Statics with Misspecified Bayesian Learning","authors":"Aniruddha Ghosh","doi":"arxiv-2407.17037","DOIUrl":null,"url":null,"abstract":"We present novel monotone comparative statics results for steady state\nbehavior in a dynamic optimization environment with misspecified Bayesian\nlearning. We consider a generalized framework, based on Esponda and Pouzo\n(2021), wherein a Bayesian learner facing a dynamic optimization problem has a\nprior on a set of parameterized transition probability functions (models) but\nis misspecified in the sense that the true process is not within this set. In\nthe steady state, the learner infers the model that best-fits the data\ngenerated by their actions, and in turn, their actions are optimally chosen\ngiven their inferred model. We characterize conditions on the primitives of the\nenvironment, and in particular, over the set of models under which the steady\nstate distribution over states and actions and inferred models exhibit\nmonotonic behavior. Further, we offer a new theorem on the existence of a\nsteady state on the basis of a monotonicity argument. Lastly, we provide an\nupper bound on the cost of misspecification, again in terms of the primitives\nof the environment. We demonstrate the utility of our results for several\nenvironments of general interest, including forecasting models, dynamic\neffort-task, and optimal consumption-savings problems.","PeriodicalId":501188,"journal":{"name":"arXiv - ECON - Theoretical Economics","volume":"45 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - ECON - Theoretical Economics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.17037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We present novel monotone comparative statics results for steady-state behavior in a dynamic optimization environment with misspecified Bayesian learning. We consider a generalized framework, based on Esponda and Pouzo (2021), in which a Bayesian learner facing a dynamic optimization problem has a prior over a set of parameterized transition probability functions (models) but is misspecified in the sense that the true process lies outside this set. In the steady state, the learner infers the model that best fits the data generated by their actions, and in turn their actions are optimally chosen given the inferred model.
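For concreteness, here is a minimal sketch of the steady-state fixed point in the style of Esponda and Pouzo (2021); the notation ($S$, $X$, $Q_\theta$, $m_\sigma$) is our illustration and need not match the paper's. Given the true transition kernel $Q(\cdot \mid s, x)$ and a parameterized family $\{Q_\theta\}_{\theta \in \Theta}$, define the weighted Kullback-Leibler divergence

\[
K(m, \theta) \;=\; \sum_{(s,x) \in S \times X} m(s,x)\, D_{\mathrm{KL}}\bigl(Q(\cdot \mid s, x) \,\big\|\, Q_\theta(\cdot \mid s, x)\bigr).
\]

A strategy-belief pair $(\sigma, \mu)$ is then a steady state if (i) $\mu$ puts probability one on $\arg\min_{\theta \in \Theta} K(m_\sigma, \theta)$, where $m_\sigma$ is the stationary distribution over states and actions induced by $\sigma$ under $Q$, and (ii) $\sigma$ is optimal in the dynamic program with the subjective kernel $\bar{Q}_\mu(\cdot \mid s, x) = \int_\Theta Q_\theta(\cdot \mid s, x)\, d\mu(\theta)$.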
We characterize conditions on the primitives of the environment, and in particular on the set of models, under which the steady-state distribution over states and actions, together with the inferred models, exhibits monotone behavior.
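To convey the flavor of such conditions, a hedged sketch using standard monotone-comparative-statics ingredients (our illustration, not necessarily the paper's exact assumptions): suppose the model family is stochastically ordered in its parameter,

\[
\theta \ge \theta' \;\Longrightarrow\; Q_\theta(\cdot \mid s, x) \succeq_{\mathrm{FOSD}} Q_{\theta'}(\cdot \mid s, x),
\]

and the per-period payoff exhibits increasing differences in states and actions. Under assumptions of this kind, one expects the induced steady-state distribution $m_\sigma$ and the inferred parameter to shift monotonically with ordered shifts in the primitives.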
Further, we offer a new theorem on the existence of a steady state, established via a monotonicity argument. Lastly, we provide an upper bound on the cost of misspecification, again in terms of the primitives of the environment. We demonstrate the utility of our results in several environments of general interest, including forecasting, dynamic effort-task, and optimal consumption-savings problems.
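One natural formalization of the cost of misspecification (an illustrative definition on our part, not necessarily the one used in the paper) is the value lost relative to a planner who knows the true kernel:

\[
\mathcal{C}(\sigma) \;=\; \sup_{\sigma'} V_Q(\sigma') \;-\; V_Q(\sigma),
\]

where $V_Q(\sigma)$ is the expected discounted payoff of strategy $\sigma$ evaluated under the true kernel $Q$, and $\sigma$ is a steady-state strategy. Intuitively, an upper bound on $\mathcal{C}(\sigma)$ in terms of primitives should shrink as the minimized divergence $\min_{\theta \in \Theta} K(m_\sigma, \theta)$ shrinks, and vanish when the true process lies in the model set.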