Fair Generalized Linear Models with a Convex Penalty.

Hyungrok Do, Preston Putzel, Axel Martin, Padhraic Smyth, Judy Zhong
{"title":"Fair Generalized Linear Models with a Convex Penalty.","authors":"Hyungrok Do, Preston Putzel, Axel Martin, Padhraic Smyth, Judy Zhong","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>Despite recent advances in algorithmic fairness, methodologies for achieving fairness with generalized linear models (GLMs) have yet to be explored in general, despite GLMs being widely used in practice. In this paper we introduce two fairness criteria for GLMs based on equalizing expected outcomes or log-likelihoods. We prove that for GLMs both criteria can be achieved via a convex penalty term based solely on the linear components of the GLM, thus permitting efficient optimization. We also derive theoretical properties for the resulting fair GLM estimator. To empirically demonstrate the efficacy of the proposed fair GLM, we compare it with other wellknown fair prediction methods on an extensive set of benchmark datasets for binary classification and regression. In addition, we demonstrate that the fair GLM can generate fair predictions for a range of response variables, other than binary and continuous outcomes.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"162 ","pages":"5286-5308"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10069982/pdf/nihms-1880290.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of machine learning research","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Although GLMs are widely used in practice, methodologies for achieving fairness with generalized linear models (GLMs) have yet to be explored in general, despite recent advances in algorithmic fairness. In this paper we introduce two fairness criteria for GLMs based on equalizing expected outcomes or log-likelihoods. We prove that for GLMs both criteria can be achieved via a convex penalty term based solely on the linear components of the GLM, thus permitting efficient optimization. We also derive theoretical properties for the resulting fair GLM estimator. To empirically demonstrate the efficacy of the proposed fair GLM, we compare it with other well-known fair prediction methods on an extensive set of benchmark datasets for binary classification and regression. In addition, we demonstrate that the fair GLM can generate fair predictions for a range of response variables beyond binary and continuous outcomes.
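The core idea of a convex fairness penalty on the linear components can be sketched for the logistic-regression special case. This is a minimal illustration, not the authors' implementation: the `fair_logistic_loss` function, the squared gap in group-mean linear predictors, the regularization weight, and the synthetic data are all assumptions for demonstration; the paper's criteria apply to general GLMs.

```python
import numpy as np
from scipy.optimize import minimize


def fair_logistic_loss(beta, X, y, s, lam):
    """Logistic negative log-likelihood plus a convex fairness penalty.

    The penalty (illustrative, not the paper's exact form) acts only on
    the linear predictor X @ beta: it squares the gap between the mean
    linear components of the two groups given by the binary sensitive
    attribute s, so the whole objective stays convex in beta.
    """
    z = X @ beta
    # Standard logistic NLL with labels mapped to {-1, +1}; logaddexp avoids overflow.
    nll = np.mean(np.logaddexp(0.0, -(2 * y - 1) * z))
    gap = z[s == 1].mean() - z[s == 0].mean()
    return nll + lam * gap ** 2


# Synthetic data where the outcome depends on the sensitive attribute.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
s = (rng.random(n) < 0.5).astype(int)
y = (X[:, 0] + 0.8 * s + rng.normal(size=n) > 0).astype(int)

# Fit with and without the fairness penalty (lam = 5.0 vs. 0.0).
beta_fair = minimize(fair_logistic_loss, np.zeros(d),
                     args=(X, y, s, 5.0), method="L-BFGS-B").x
beta_plain = minimize(fair_logistic_loss, np.zeros(d),
                      args=(X, y, s, 0.0), method="L-BFGS-B").x

z_fair, z_plain = X @ beta_fair, X @ beta_plain
gap_fair = abs(z_fair[s == 1].mean() - z_fair[s == 0].mean())
gap_plain = abs(z_plain[s == 1].mean() - z_plain[s == 0].mean())
print(f"group gap without penalty: {gap_plain:.3f}, with penalty: {gap_fair:.3f}")
```

Because the penalty is a convex function of the linear predictor alone, any smooth convex solver applies; here the penalized fit shrinks the between-group gap in the linear components relative to the unpenalized fit.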
