The Externalities of Exploration and How Data Diversity Helps Exploitation
Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
doi:10.1145/3603195.3603199 (published 2018-06-01)

Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users for information that will lead to better decisions in the future. Recently, concerns have been raised about whether the process of exploration could be viewed as unfair, placing too much burden on certain individuals or groups. Motivated by these concerns, we initiate the study of the externalities of exploration - the undesirable side effects that the presence of one party may impose on another - under the linear contextual bandits model. We introduce the notion of a group externality, measuring the extent to which the presence of one population of users impacts the rewards of another. We show that this impact can in some cases be negative, and that, in a certain sense, no algorithm can avoid it. We then study externalities at the individual level, interpreting the act of exploration as an externality imposed on the current user of a system by future users. This drives us to ask under what conditions inherent diversity in the data makes explicit exploration unnecessary. We build on a recent line of work on the smoothed analysis of the greedy algorithm that always chooses the action that currently looks optimal, improving on prior results to show that a greedy approach almost matches the best possible Bayesian regret rate of any other algorithm on the same problem instance whenever the diversity conditions hold, and that this regret is at most $\tilde{O}(T^{1/3})$. Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm. Together, our results uncover a sharp contrast between the high externalities that exist in the worst case, and the ability to remove all externalities if the data is sufficiently diverse.
On Fairness and Calibration
Geoff Pleiss, Manish Raghavan, Felix Wu, J. Kleinberg, Kilian Q. Weinberger
doi:10.1145/3603195.3603198 (published 2017-09-06)

The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models. This has motivated a growing line of work on what it means for a classification procedure to be "fair." In this paper, we investigate the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and show that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.
{"title":"Inherent Tradeoffs in the Fair Determination of Risk Scores","authors":"","doi":"10.1145/3603195.3603197","DOIUrl":"https://doi.org/10.1145/3603195.3603197","url":null,"abstract":"","PeriodicalId":256315,"journal":{"name":"The Societal Impacts of Algorithmic Decision-Making","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130542970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithmic Monoculture and Social Welfare
J. Kleinberg, Manish Raghavan
doi:10.1145/3603195.3603211

Proof. We need to show that $F_\theta$ satisfies the differentiability, asymptotic optimality, and monotonicity conditions in Definition 6.1. Differentiability: the probability density of any realization of the $n$ noise samples $\varepsilon_i/\theta$ is $\prod_{i=1}^{n} f(\varepsilon_i/\theta)$. Let $\varepsilon = [\varepsilon_1/\theta, \ldots, \varepsilon_n/\theta]$ be the vector of noise values, and let $M(\pi) \subseteq \mathbb{R}^n$ be the region such that any $\varepsilon \in M(\pi)$ produces the ranking $\pi$. The probability of any permutation $\pi$ is then $\Pr[\pi] = \int_{M(\pi)} \prod_{i=1}^{n} f(\varepsilon_i/\theta) \, d\varepsilon$.
{"title":"How Do Classifiers Induce Agents to Behave Strategically?","authors":"","doi":"10.1145/3603195.3603201","DOIUrl":"https://doi.org/10.1145/3603195.3603201","url":null,"abstract":"","PeriodicalId":256315,"journal":{"name":"The Societal Impacts of Algorithmic Decision-Making","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131303280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inherent Tradeoffs in the Fair Determination of Risk Scores
doi:10.1145/3603195.3603206

PART V: APPENDICES
Appendix A (Chapter 1): A proof demonstrating that the integral risk assignment problem in Section 1.4.2 is NP-complete.
Appendix B (Chapter 2): Additional theoretical results and details on experiments.
Appendix C (Chapter 3): Supplementary lemmas and omitted proofs.
Appendix D (Chapter 4): Supplementary lemmas and omitted proofs.
Appendix E (Chapter 5): A characterization of strategic behavior in response to a linear mechanism.
Appendix F (Chapter 6): Supplementary lemmas, omitted proofs, and counterexamples.
Appendix G (Chapter 7): A table containing administrative information on vendors.
{"title":"The Externalities of Exploration and How Data Diversity Helps Exploitation","authors":"","doi":"10.1145/3603195.3603208","DOIUrl":"https://doi.org/10.1145/3603195.3603208","url":null,"abstract":"","PeriodicalId":256315,"journal":{"name":"The Societal Impacts of Algorithmic Decision-Making","volume":"54 81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126295655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}