Fairness Measures for Machine Learning in Finance

Sanjiv Ranjan Das, Michele Donini, J. Gelman, Kevin Haas, Mila Hardt, Jared Katzman, K. Kenthapadi, Pedro Larroy, Pinar Yilmaz, Bilal Zafar
Journal: The Journal of Financial Data Science, Vol. 37, No. 1
DOI: 10.3905/jfds.2021.1.075
Published: 2021-09-14
Citations: 20

Abstract

The authors present a machine learning pipeline for fairness-aware machine learning (FAML) in finance that encompasses metrics for fairness (and accuracy). Whereas accuracy metrics are well understood and the principal ones are used frequently, there is no consensus as to which of several available measures of fairness should be used generically in the financial services industry. The authors explore these measures, discuss which ones to focus on at the pre-training and post-training stages of the ML pipeline, and examine simple bias mitigation approaches. Using a standard dataset, they show that the sequencing of steps in their FAML pipeline offers a cogent approach to arriving at a fair and accurate ML model. The authors discuss the intersection of bias metrics with legal considerations in the United States, and the entanglement of explainability and fairness is exemplified in the case study. They also discuss possible approaches for training ML models subject to constraints imposed by various fairness metrics, and the role of causality in assessing fairness.

Key Findings

▪ Sources of bias are presented and a range of metrics is considered for machine learning applications in finance, both pre-training and post-training of models.
▪ A process of using the metrics to arrive at fair models is discussed.
▪ Various considerations for the choice of specific metrics are also analyzed.
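The abstract distinguishes metrics applied pre-training (computed on the data alone) from those applied post-training (computed on model predictions). As a minimal sketch of what such measures look like, the snippet below computes two commonly used ones: class imbalance between demographic groups, and the difference in positive proportions of predicted labels across groups. The binary group encoding, function names, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def class_imbalance(group):
    """Pre-training metric: (n_a - n_d) / (n_a + n_d), where n_a and n_d
    are the counts of the advantaged (1) and disadvantaged (0) groups.
    Ranges from -1 to 1; 0 means the groups are equally represented."""
    n_a = np.sum(group == 1)
    n_d = np.sum(group == 0)
    return (n_a - n_d) / (n_a + n_d)

def positive_rate_difference(y_pred, group):
    """Post-training metric: difference in the proportion of positive
    predictions (e.g., loan approvals) between the two groups."""
    p_a = y_pred[group == 1].mean()
    p_d = y_pred[group == 0].mean()
    return p_a - p_d

# Toy example: 6 advantaged and 4 disadvantaged applicants
group  = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])

print(class_imbalance(group))              # 0.2
print(positive_rate_difference(y_pred, group))  # 4/6 - 1/4 ≈ 0.417
```

Values near zero on both measures would suggest balanced representation in the data and parity in model outcomes; the paper's point is that which of the many such metrics to prioritize, and at which pipeline stage, is itself a design decision.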