Fair and optimal prediction via post-processing

IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Artificial Intelligence) | AI Magazine | Pub Date: 2024-09-22 | DOI: 10.1002/aaai.12191
Han Zhao
{"title":"通过后处理实现公平和优化预测","authors":"Han Zhao","doi":"10.1002/aaai.12191","DOIUrl":null,"url":null,"abstract":"<p>With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the <i>fairness</i> of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online advertising, hiring process, and so forth. To mitigate the potential bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance. Understanding such tradeoffs, therefore, is crucial to the design of optimal and fair algorithms. My research focuses on characterizing the inherent tradeoff between fairness and accuracy in machine learning, and developing algorithms that can achieve both fairness and optimality. In this article, I will discuss our recent work on designing post-processing algorithms for fair classification, which can be applied to a wide range of fairness criteria, including statistical parity, equal opportunity, and equalized odds, under both attribute-aware and attribute-blind settings, and is particularly suited to large-scale foundation models where retraining is expensive or even infeasible. I will also discuss the connections between our work and other related research on trustworthy machine learning, including the connections between algorithmic fairness and differential privacy as well as adversarial robustness.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 3","pages":"411-418"},"PeriodicalIF":2.5000,"publicationDate":"2024-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12191","citationCount":"0","resultStr":"{\"title\":\"Fair and optimal prediction via post-processing\",\"authors\":\"Han Zhao\",\"doi\":\"10.1002/aaai.12191\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the <i>fairness</i> of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online advertising, hiring process, and so forth. To mitigate the potential bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance. Understanding such tradeoffs, therefore, is crucial to the design of optimal and fair algorithms. 
My research focuses on characterizing the inherent tradeoff between fairness and accuracy in machine learning, and developing algorithms that can achieve both fairness and optimality. In this article, I will discuss our recent work on designing post-processing algorithms for fair classification, which can be applied to a wide range of fairness criteria, including statistical parity, equal opportunity, and equalized odds, under both attribute-aware and attribute-blind settings, and is particularly suited to large-scale foundation models where retraining is expensive or even infeasible. I will also discuss the connections between our work and other related research on trustworthy machine learning, including the connections between algorithmic fairness and differential privacy as well as adversarial robustness.</p>\",\"PeriodicalId\":7854,\"journal\":{\"name\":\"Ai Magazine\",\"volume\":\"45 3\",\"pages\":\"411-418\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12191\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ai Magazine\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/aaai.12191\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ai Magazine","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aaai.12191","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the fairness of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online advertising, hiring process, and so forth. To mitigate the potential bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance. Understanding such tradeoffs, therefore, is crucial to the design of optimal and fair algorithms. My research focuses on characterizing the inherent tradeoff between fairness and accuracy in machine learning, and developing algorithms that can achieve both fairness and optimality. In this article, I will discuss our recent work on designing post-processing algorithms for fair classification, which can be applied to a wide range of fairness criteria, including statistical parity, equal opportunity, and equalized odds, under both attribute-aware and attribute-blind settings, and is particularly suited to large-scale foundation models where retraining is expensive or even infeasible. I will also discuss the connections between our work and other related research on trustworthy machine learning, including the connections between algorithmic fairness and differential privacy as well as adversarial robustness.
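
To make the post-processing idea concrete, the sketch below shows one common baseline in the attribute-aware setting: given scores from a pre-trained classifier, pick a separate decision threshold for each demographic group so that every group's positive-prediction rate matches a common target (statistical parity). This is an illustrative sketch only, not the algorithm developed in the article; the function names (fit_group_thresholds, predict_fair) and the synthetic data are hypothetical.

```python
import numpy as np

def fit_group_thresholds(scores, groups, target_rate):
    """Choose one threshold per group so that each group's positive-prediction
    rate is approximately target_rate (a simple statistical-parity baseline,
    not the article's algorithm)."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile of a group's scores is exceeded by
        # roughly a target_rate fraction of that group.
        thresholds[g] = np.quantile(group_scores, 1.0 - target_rate)
    return thresholds

def predict_fair(scores, groups, thresholds):
    """Apply the group-specific thresholds (attribute-aware setting)."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])

# Toy usage: a biased scorer gives group "b" systematically lower scores.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(5, 2, 500), rng.beta(2, 5, 500)])
groups = np.array(["a"] * 500 + ["b"] * 500)

thresholds = fit_group_thresholds(scores, groups, target_rate=0.3)
y_hat = predict_fair(scores, groups, thresholds)
for g in ("a", "b"):
    print(g, y_hat[groups == g].mean())  # both rates are close to 0.30
```

Analogous post-processing can target equal opportunity or equalized odds by matching group-wise true-positive (and false-positive) rates on held-out labeled data; the attribute-blind setting, where the group attribute is unavailable at prediction time, requires a different mechanism and is one of the settings the article addresses.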

Source journal
AI Magazine (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 3.90
Self-citation rate: 11.10%
Articles published: 61
Review time: >12 weeks
About the journal: AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes the contribution of articles on the theory and practice of AI as well as general survey articles, tutorial articles on timely topics, conference or symposia or workshop reports, and timely columns on topics of interest to AI scientists.
Latest articles from this journal
Issue Information
AI fairness in practice: Paradigm, challenges, and prospects
Toward the confident deployment of real-world reinforcement learning agents
Towards robust visual understanding: A paradigm shift in computer vision from recognition to reasoning
Efficient and robust sequential decision making algorithms