Individual Fairness in Hindsight

Swati Gupta, Vijay Kamble
{"title":"Individual Fairness in Hindsight","authors":"Swati Gupta, Vijay Kamble","doi":"10.1145/3328526.3329605","DOIUrl":null,"url":null,"abstract":"Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF where treatment of individuals is required to be individually fair relative to the past as well as future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm: Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to take good decisions in the long-run.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"2023 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"50","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM Conference on Economics and Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3328526.3329605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 50

Abstract

Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant to online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF, requiring that the treatment of individuals be individually fair relative to both past and future decisions; FH requires a one-sided notion of individual fairness defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model: linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial instances. In contrast, we design a new algorithm, Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to make good decisions in the long run.
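The contrast between the two definitions is easiest to see concretely. The sketch below is a minimal illustration, not the paper's formal construction: it assumes the Dwork et al. Lipschitz form of individual fairness, real-valued decisions in which larger values are more conducive to the individual, a Euclidean similarity metric, and one plausible one-sided reading of FH (a new individual may be treated more conducively than similar past individuals, but not less, up to the similarity slack). All names and constants here are illustrative assumptions.

```python
import math

# Illustrative sketch of FT vs. FH under assumed definitions (see lead-in).
L = 1.0  # Lipschitz constant of the fairness constraint (assumed)

def dist(x, y):
    """Similarity metric on individuals; Euclidean, for concreteness."""
    return math.dist(x, y)

def fair_pair(x, dx, y, dy):
    """Two-sided IF constraint: |dx - dy| <= L * dist(x, y)."""
    return abs(dx - dy) <= L * dist(x, y)

def fair_across_time(records):
    """FT: every pair of decisions, past or future, must be mutually fair."""
    return all(
        fair_pair(x, dx, y, dy)
        for i, (x, dx) in enumerate(records)
        for y, dy in records[i + 1:]
    )

def fair_in_hindsight(history, x_new, d_new):
    """FH (assumed one-sided reading): the new individual may receive a MORE
    conducive decision than similar past individuals, but not a less
    conducive one beyond the similarity slack."""
    return all(d_new >= d - L * dist(x_new, x) for x, d in history)

if __name__ == "__main__":
    # A cautious early decision, then a more conducive one once learned:
    history = [((0.0, 0.0), 0.2)]
    print(fair_in_hindsight(history, (0.0, 0.1), 0.9))      # True under FH
    print(fair_across_time(history + [((0.0, 0.1), 0.9)]))  # False under FT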