Fairkit, fairkit, on the wall, who’s the fairest of them all? Supporting fairness-related decision-making

EURO Journal on Decision Processes · Pub Date: 2023-01-01 · DOI: 10.1016/j.ejdp.2023.100031 · IF: 2.3 · JCR Quartile: Q3 (Management)
Brittany Johnson, Jesse Bartola, Rico Angell, Sam Witty, Stephen Giguere, Yuriy Brun
{"title":"Fairkit, fairkit, on the wall, who’s the fairest of them all? Supporting fairness-related decision-making","authors":"Brittany Johnson ,&nbsp;Jesse Bartola ,&nbsp;Rico Angell ,&nbsp;Sam Witty ,&nbsp;Stephen Giguere ,&nbsp;Yuriy Brun","doi":"10.1016/j.ejdp.2023.100031","DOIUrl":null,"url":null,"abstract":"<div><p>Modern software relies heavily on data and machine learning, and affects decisions that shape our world. Unfortunately, recent studies have shown that because of biases in data, software systems frequently inject bias into their decisions, from producing more errors when transcribing women’s than men’s voices to overcharging people of color for financial loans. To address bias in software, data scientists and software engineers need tools that help them understand the trade-offs between model quality and fairness in their specific data domains. Toward that end, we present fairkit-learn, an interactive toolkit for helping engineers reason about and understand fairness. Fairkit-learn supports over 70 definition of fairness and works with state-of-the-art machine learning tools, using the same interfaces to ease adoption. It can evaluate thousands of models produced by multiple machine learning algorithms, hyperparameters, and data permutations, and compute and visualize a small Pareto-optimal set of models that describe the optimal trade-offs between fairness and quality. Engineers can then iterate, improving their models and evaluating them using fairkit-learn. We evaluate fairkit-learn via a user study with 54 students, showing that students using fairkit-learn produce models that provide a better balance between fairness and quality than students using scikit-learn and IBM AI Fairness 360 toolkits. With fairkit-learn, users can select models that are up to 67% more fair and 10% more accurate than the models they are likely to train with scikit-learn.</p></div>","PeriodicalId":44104,"journal":{"name":"EURO Journal on Decision Processes","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EURO Journal on Decision Processes","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2193943823000043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 1

Abstract

Modern software relies heavily on data and machine learning, and affects decisions that shape our world. Unfortunately, recent studies have shown that because of biases in data, software systems frequently inject bias into their decisions, from producing more errors when transcribing women’s than men’s voices to overcharging people of color for financial loans. To address bias in software, data scientists and software engineers need tools that help them understand the trade-offs between model quality and fairness in their specific data domains. Toward that end, we present fairkit-learn, an interactive toolkit for helping engineers reason about and understand fairness. Fairkit-learn supports over 70 definitions of fairness and works with state-of-the-art machine learning tools, using the same interfaces to ease adoption. It can evaluate thousands of models produced by multiple machine learning algorithms, hyperparameters, and data permutations, and compute and visualize a small Pareto-optimal set of models that describes the optimal trade-offs between fairness and quality. Engineers can then iterate, improving their models and evaluating them using fairkit-learn. We evaluate fairkit-learn via a user study with 54 students, showing that students using fairkit-learn produce models that provide a better balance between fairness and quality than students using the scikit-learn and IBM AI Fairness 360 toolkits. With fairkit-learn, users can select models that are up to 67% more fair and 10% more accurate than the models they are likely to train with scikit-learn.
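The core mechanism the abstract describes, sweeping many candidate models and keeping only the Pareto-optimal fairness/quality trade-offs, can be sketched in a few lines. The sketch below is a hypothetical illustration, not fairkit-learn's actual API: it uses scikit-learn for the hyperparameter sweep, a hand-rolled demographic-parity gap as the fairness metric, and a simple non-dominated filter. The helper names (demographic_parity_difference, pareto_front) and the toy data are invented for this example.

```python
# Minimal sketch (NOT the fairkit-learn API): sweep models, score each on
# accuracy and a fairness gap, then keep the Pareto-optimal trade-offs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def pareto_front(points):
    """Indices of (accuracy, unfairness) points not dominated by any other:
    higher accuracy is better, lower unfairness is better."""
    front = []
    for i, (acc_i, unf_i) in enumerate(points):
        dominated = any(
            acc_j >= acc_i and unf_j <= unf_i and (acc_j > acc_i or unf_j < unf_i)
            for j, (acc_j, unf_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Toy data: X features, y labels, g a binary protected attribute whose
# correlation with y injects a measurable disparity.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
g = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)

# Sweep a small hyperparameter grid, scoring each model on both axes.
results = []
for C in [0.01, 0.1, 1.0, 10.0]:
    pred = LogisticRegression(C=C).fit(X_tr, y_tr).predict(X_te)
    results.append((accuracy_score(y_te, pred),
                    demographic_parity_difference(pred, g_te)))

for i in pareto_front(results):
    acc, unf = results[i]
    print(f"Pareto model {i}: accuracy={acc:.3f}, dem-parity gap={unf:.3f}")
```

In fairkit-learn itself, the search reportedly runs through scikit-learn-style estimator interfaces and the resulting front is visualized interactively; the point of this sketch is only the dominance filter at the heart of the trade-off analysis.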

Source journal: EURO Journal on Decision Processes
CiteScore: 2.70
Self-citation rate: 10.00%
Articles published: 15
Latest articles in this journal:
Editorial: Feature issue on fair and explainable decision support systems
Corrigendum to “Multi-objective optimization in real-time operation of rainwater harvesting systems” [EURO Journal on Decision Processes, Volume 11 (2023), 100039]
Multiobjective combinatorial optimization with interactive evolutionary algorithms: The case of facility location problems
Performance assessment of waste sorting: Component-based approach to incorporate quality into data envelopment analysis