Robust Data-Driven Decisions Under Model Uncertainty

Xiaoyu Cheng
Published in: Proceedings of the 23rd ACM Conference on Economics and Computation, May 9, 2022
DOI: 10.1145/3490486.3538356

Abstract

In data-driven decisions, while the extrapolation from sample data is often based on observable similarities within a population, one also needs to take into account possible unobserved heterogeneity across individuals in the population. Because of this unobservability, a decision-maker (DM) can be uncertain not only about how the individuals might vary but also about how different individuals are going to be sampled from the population. As a result, she might worry that the sample data come mostly from one type of individual, whereas the future draws that determine the payoff of her decisions may come mostly from a different type. This paper captures this concern by considering a decision environment where the underlying data-generating process (DGP) is a sequence of independent but possibly non-identical distributions. Specifically, the DM observes sample data given by realizations of marginal distributions of a DGP and then makes a decision whose payoff depends only on future realizations of the same DGP. In addition, the DM faces model uncertainty in the form of ambiguity, i.e., she only knows there is a set of possible DGPs but cannot form any probabilistic assessment over them. To make decisions, I suppose the DM applies the maxmin expected-utility (MEU) criterion [1] to cope with ambiguity. That is, she makes an optimal decision under the worst possible DGP that she contemplates. The DM can either make a data-free decision based on her initial belief, or a data-driven decision based on her updated belief taking into account the sample data. Given these two types of decisions, I study updating rules in terms of how to guarantee that the data-driven decisions are better than the data-free decisions according to objective payoff, i.e., the expected utility under the true DGP that governs the future uncertainty. In other words, while the DM makes decisions considering the worst case, the quality of her decisions will be evaluated against the ground truth.
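The MEU criterion above can be sketched in a few lines; this is a minimal toy illustration in which the actions, payoffs, and candidate DGPs are hypothetical examples of my own, not the paper's formal model:

```python
# Maxmin expected utility (MEU): choose the action whose worst-case
# expected utility over the set of candidate DGPs is largest.

def expected_utility(action, dgp):
    """Expected utility of an action under one DGP, here a discrete
    distribution given as {outcome: probability}; the action maps each
    outcome to a payoff."""
    return sum(p * action[outcome] for outcome, p in dgp.items())

def meu_decision(actions, dgp_set):
    """Return the action maximizing the minimum expected utility over
    all DGPs the decision-maker considers possible."""
    return max(actions,
               key=lambda a: min(expected_utility(a, d) for d in dgp_set))

# Hypothetical example: payoffs of two actions over outcomes {0, 1},
# and two candidate distributions over those outcomes.
safe  = {0: 1.0, 1: 1.0}   # constant payoff of 1
risky = {0: 0.0, 1: 3.0}   # pays 3 only on outcome 1
dgps = [{0: 0.8, 1: 0.2}, {0: 0.3, 1: 0.7}]

best = meu_decision([safe, risky], dgps)
```

Under the first candidate DGP the risky action's expected utility drops to 0.6, so the worst-case comparison selects the safe action even though the risky one does better under the second DGP.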
When an updating rule can guarantee improvement for all possible DGPs in the initial belief, the data-driven decisions are said to robustly improve upon the data-free decisions. In this paper, I formalize two achievable notions of how data-driven decisions can robustly improve upon data-free decisions across decision problems. I show that these two notions are both equivalent to the intuitive requirement that the updated set of DGPs should accommodate (i.e., contain, with some technical generalization) the true DGP that generates the data (Theorems 4.2 and 4.6, Corollary 4.7). Based on this equivalence, I further study updating rules in terms of this property. In Section 2 of the paper, I make a critical observation: in the presence of independent but non-identical distributions, common updating rules such as maximum likelihood and Bayesian updating can almost surely rule out the true DGP. Thus, by the previous result, they can almost surely lead to strictly worse decisions than simply ignoring the data. To achieve robustly better decisions in such a decision environment, I develop two new updating rules in this paper. When the sample size can grow unboundedly, I propose the average-then-update rule, which is guaranteed to accommodate the true DGP asymptotically almost surely (Theorem 5.4). When the sample size is finite, I propose the robust i.i.d. statistical tests, which are guaranteed to accommodate the true DGP with a pre-specified probability (Theorem 5.7). Under the robust i.i.d. statistical tests, one effectively obtains a confidence region for the true DGP. While constructing confidence regions for non-identically distributed DGPs can be computationally challenging, the robust i.i.d. statistical tests require constructing confidence regions only for independent and identically distributed (i.i.d.) DGPs. Therefore, they are tractable and easy to implement.
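The idea of testing as if the data were i.i.d. can be illustrated in a Bernoulli setting: build a confidence interval, centered on the empirical average, for the average success probability of the (possibly non-identical) sequence. The Hoeffding-style bound and all names below are my own illustrative choices, not the paper's construction:

```python
import math

def robust_iid_confidence_interval(samples, alpha=0.05):
    """Confidence interval for the *average* success probability of a
    sequence of independent Bernoulli draws, built as if the draws were
    i.i.d. A two-sided Hoeffding bound gives: with probability at least
    1 - alpha, the average parameter lies within +/- eps of the sample
    mean (Hoeffding's inequality holds for independent, non-identical
    bounded variables, so the coverage survives heterogeneity)."""
    n = len(samples)
    mean = sum(samples) / n
    eps = math.sqrt(math.log(2 / alpha) / (2 * n))
    return max(0.0, mean - eps), min(1.0, mean + eps)

# Hypothetical data: 100 draws from possibly non-identical Bernoulli laws.
data = [1, 0, 1, 1, 0] * 20   # empirical average 0.6
lo, hi = robust_iid_confidence_interval(data)
```

The interval only ever involves the i.i.d.-style statistic (the sample mean), which is what makes this kind of test easy to compute even when the underlying sequence is heterogeneous.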
Finally, the decision framework studied in this paper is often used to model economic problems such as dynamic portfolio choice, asset pricing, and social learning under model uncertainty and ambiguity. For those problems, the existing literature obtains conclusions primarily by assuming that the DM applies full Bayesian updating (sometimes called prior-by-prior updating). However, the asymptotic behavior of learning under full Bayesian updating is often hard to characterize. In a commonly studied model of learning from Gaussian signals with ambiguous variances, I show that applying the average-then-update rule reduces to a simple and intuitive step. More importantly, applying the average-then-update rule also implies that learning is significantly more effective than applying full Bayesian updating (Proposition 6.2), a learning outcome that is more intuitive in such a model. In addition, I provide a more concrete illustration of both proposed updating rules in Section 6 by studying a Bernoulli model with ambiguous nuisance parameters. There, I show that the updated sets of DGPs have tractable expressions and intuitive interpretations.
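The averaging idea behind the average-then-update rule can be sketched in the Bernoulli setting as well: summarize the sample by its empirical average, then retain the candidate DGPs whose average marginal mean is close to it, instead of likelihood-updating period by period. Representing a DGP by its sequence of per-period success probabilities and using a fixed tolerance are my simplifications for illustration, not the paper's construction:

```python
def average_then_update(samples, candidates, tol=0.1):
    """Average-then-update, illustrative sketch: keep every candidate
    DGP whose average marginal mean over the sampled periods is within
    `tol` of the empirical average of the data."""
    n = len(samples)
    sample_mean = sum(samples) / n
    kept = []
    for means in candidates:   # each candidate: per-period Bernoulli means
        avg = sum(means[:n]) / n
        if abs(avg - sample_mean) <= tol:
            kept.append(means)
    return kept

# Hypothetical candidates: three sequences of per-period success probabilities.
steady   = [0.5] * 100
high     = [0.9] * 100
seasonal = [0.4, 0.8] * 50    # non-identical, with average 0.6

data = [1, 0, 1, 1, 0] * 20   # empirical average 0.6
updated = average_then_update(data, [steady, high, seasonal])
```

Note that the non-identical `seasonal` candidate survives because only its *average* behavior is compared against the data, which is exactly what a period-by-period likelihood comparison would not do.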