
Proceedings of the Conference on Fairness, Accountability, and Transparency: Latest Publications

The Social Cost of Strategic Classification
Pub Date: 2018-08-25 | DOI: 10.1145/3287560.3287576
S. Milli, John Miller, A. Dragan, Moritz Hardt
Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule. A long line of work has therefore sought to counteract strategic behavior by designing more conservative decision boundaries in an effort to increase robustness to the effects of strategic covariate shift. We show that these efforts benefit the institutional decision maker at the expense of the individuals being classified. Introducing a notion of social burden, we prove that any increase in institutional utility necessarily leads to a corresponding increase in social burden. Moreover, we show that the negative externalities of strategic classification can disproportionately harm disadvantaged groups in the population. Our results highlight that strategy-robustness must be weighed against considerations of social welfare and fairness.
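To make the trade-off concrete, here is a minimal, self-contained sketch, not the paper's formal model: the score distributions, gaming budget, and cost function below are illustrative assumptions. It shows that as an institution raises its decision threshold to resist gaming, truly qualified individuals must pay a growing manipulation cost (the "social burden") to be accepted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1-D score distributions: qualified ~ N(1, 1), unqualified ~ N(-1, 1)
pos = rng.normal(1.0, 1.0, 10_000)
neg = rng.normal(-1.0, 1.0, 10_000)
GAMING_BUDGET = 0.5  # assumed maximum score manipulation available to every individual

def institutional_utility(threshold):
    """True-positive rate minus false-positive rate after strategic response."""
    tp = np.mean(pos + GAMING_BUDGET >= threshold)  # qualified individuals game if needed
    fp = np.mean(neg + GAMING_BUDGET >= threshold)  # unqualified individuals game too
    return tp - fp

def social_burden(threshold):
    """Average manipulation cost a qualified individual must pay to be accepted."""
    return np.mean(np.clip(threshold - pos, 0.0, None))

for t in (0.0, 0.5, 1.0, 1.5):
    print(f"threshold={t:.1f}  utility={institutional_utility(t):+.3f}  "
          f"burden={social_burden(t):.3f}")
```

Raising the threshold in this toy model monotonically increases the average cost borne by qualified individuals, which is the tension the paper formalizes.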
Citations: 144
An Empirical Study of Rich Subgroup Fairness for Machine Learning
Pub Date: 2018-08-24 | DOI: 10.1145/3287560.3287592
Michael Kearns, S. Neel, Aaron Roth, Zhiwei Steven Wu
Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzeimer, Dudik, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice.
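The "auditing" step behind rich subgroup fairness can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: it uses a simple regression heuristic to search linear-threshold subgroups over the protected attributes for a large false-positive-rate disparity, and the function and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def audit_fp_disparity(protected_attrs, y_true, y_pred):
    """Heuristically find a linear-threshold subgroup with a large false-positive-rate gap."""
    negatives = (y_true == 0)
    fp = ((y_pred == 1) & negatives).astype(float)   # false-positive indicator
    base_rate = fp[negatives].mean()                  # overall FP rate among true negatives
    # Regress the centered FP indicator on protected attributes (true negatives only);
    # the fitted linear function points toward a high-disparity region of the population.
    reg = LinearRegression().fit(protected_attrs[negatives], fp[negatives] - base_rate)
    in_group = reg.predict(protected_attrs) > 0       # candidate subgroup (linear threshold)
    members = in_group & negatives
    disparity = abs(fp[members].mean() - base_rate) if members.any() else 0.0
    return in_group, disparity
```

In the full algorithm this auditor and the learner play against each other iteratively; the sketch above only illustrates one audit pass under the stated assumptions.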
Citations: 154
Model Reconstruction from Model Explanations
Pub Date: 2018-07-13 | DOI: 10.1145/3287560.3287562
S. Milli, Ludwig Schmidt, A. Dragan, Moritz Hardt
We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself. Our results speak to a tension between the desire to keep a proprietary model secret and the ability to offer model explanations. On the theoretical side, we give an algorithm that provably learns a two-layer ReLU network in a setting where the algorithm may query the gradient of the model with respect to chosen inputs. The number of queries is independent of the dimension and nearly optimal in its dependence on the model size. Of interest not only from a learning-theoretic perspective, this result highlights the power of gradients rather than labels as a learning primitive. Complementing our theory, we give effective heuristics for reconstructing models from gradient explanations that are orders of magnitude more query-efficient than reconstruction attacks relying on prediction interfaces.
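A tiny sketch of the underlying intuition, restricted to the linear case (the paper's results go well beyond this, to two-layer ReLU networks): for a linear scoring model, the input gradient is the weight vector itself, so a single gradient explanation plus one prediction query reveals the model exactly. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w, true_b = rng.normal(size=5), 0.3           # the "proprietary" linear model

def score(x):
    """Prediction interface: the model's score on input x."""
    return float(true_w @ x + true_b)

def gradient_explanation(x):
    """Saliency-style explanation interface: d score / d x (constant for a linear model)."""
    return true_w.copy()

# Attack: one gradient query recovers the weights; one score query at the origin recovers the bias.
x0 = np.zeros(5)
recovered_w = gradient_explanation(x0)
recovered_b = score(x0)
assert np.allclose(recovered_w, true_w) and np.isclose(recovered_b, true_b)
print("model recovered from one gradient and one prediction query")
```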
Citations: 131
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
Pub Date: 2018-06-15 | DOI: 10.1145/3287560.3287586
L. E. Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
Developing classification algorithms that are fair with respect to sensitive attributes of the data is an important problem due to the increased deployment of classification algorithms in societal contexts. Several recent works have focused on studying classification with respect to specific fairness metrics, modeled the corresponding fair classification problem as constrained optimization problems, and developed tailored algorithms to solve them. Despite this, there still remain important metrics for which there are no fair classifiers with theoretical guarantees; primarily because the resulting optimization problem is non-convex. The main contribution of this paper is a meta-algorithm for classification that can take as input a general class of fairness constraints with respect to multiple non-disjoint and multi-valued sensitive attributes, and which comes with provable guarantees. In particular, our algorithm can handle non-convex "linear fractional" constraints (which includes fairness constraints such as predictive parity) for which no prior algorithm was known. Key to our results is an algorithm for a family of classification problems with convex constraints along with a reduction from classification problems with linear fractional constraints to this family. Empirically, we observe that our algorithm is fast, can achieve near-perfect fairness with respect to various fairness metrics, and the loss in accuracy due to the imposed fairness constraints is often small.
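The following is not the paper's meta-algorithm; it is a much smaller sketch of the same goal, trading a little accuracy for a group-fairness constraint by sweeping group-specific decision thresholds on a score. The threshold grid, tolerance, and variable names are assumptions.

```python
import numpy as np

def fair_thresholds(scores, y_true, group, eps=0.02):
    """Pick per-group thresholds maximizing accuracy subject to |acceptance-rate gap| <= eps."""
    grid = np.linspace(0.0, 1.0, 101)
    best = None
    for t0 in grid:
        for t1 in grid:
            pred = np.where(group == 0, scores >= t0, scores >= t1)
            gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
            if gap <= eps:
                acc = (pred == y_true).mean()
                if best is None or acc > best[0]:
                    best = (acc, t0, t1)
    return best  # (accuracy, threshold_group0, threshold_group1), or None if infeasible
```

This brute-force sweep only handles one binary attribute and one statistical constraint; the paper's contribution is handling many overlapping, multi-valued attributes and non-convex linear-fractional constraints with provable guarantees.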
Citations: 255
A comparative study of fairness-enhancing interventions in machine learning
Pub Date: 2018-02-13 | DOI: 10.1145/3287560.3287589
Sorelle A. Friedler, C. Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth
Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption. We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservations, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
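A hedged sketch of the kind of split-sensitivity check described above: retrain one pipeline over many random train/test splits and measure how much a single fairness metric fluctuates. The dataset, model choice, and metric here are illustrative assumptions, not the benchmark's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def parity_gap(y_pred, group):
    """Statistical parity difference between two groups' positive-prediction rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def split_sensitivity(X, y, group, n_splits=20):
    """Re-fit the same classifier across random splits and track the fairness metric."""
    gaps = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
            X, y, group, test_size=0.3, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        gaps.append(parity_gap(clf.predict(X_te), g_te))
    return np.mean(gaps), np.std(gaps)   # a large std suggests the fairness estimate is brittle
```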
Citations: 492