Bringing practical statistical science to AI and predictive model fairness testing

Victor S. Y. Lo, Sayan Datta, Youssouf Salami
{"title":"Bringing practical statistical science to AI and predictive model fairness testing","authors":"Victor S. Y. Lo,&nbsp;Sayan Datta,&nbsp;Youssouf Salami","doi":"10.1007/s43681-024-00518-2","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial Intelligence, Machine Learning, Statistical Modeling and Predictive Analytics have been widely used in various industries for a long time. More recently, AI Model Governance including AI Ethics has received significant attention from academia, industry, and regulatory agencies. To minimize potential unjustified treatment disfavoring individuals based on demographics, an increasingly critical task is to assess group fairness through some established metrics. Many commercial and open-source tools are now available to support the computations of these fairness metrics. However, this area is largely based on rules, e.g., metrics within a prespecified range would be considered satisfactory. These metrics are statistical estimates and are often based on limited sample data and therefore subject to sampling variability. For instance, if a fairness criterion is barely met or missed, it is often uncertain if it should be a “pass” or “failure,” if the sample size is not large. This is where statistical science can help. Specifically, statistical hypothesis testing enables us to determine whether the sample data can support a particular hypothesis (e.g., falling within an acceptable range) or the observations may have happened by chance. Drawing upon the bioequivalence literature from medicine and advanced hypothesis testing in statistics, we propose a practical statistical significance testing method to enhance the current rule-based process for model fairness testing and its associated power calculation, followed by an illustration with a realistic example.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 3","pages":"2149 - 2164"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00518-2.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00518-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial Intelligence, Machine Learning, Statistical Modeling, and Predictive Analytics have been widely used across industries for a long time. More recently, AI Model Governance, including AI Ethics, has received significant attention from academia, industry, and regulatory agencies. To minimize potential unjustified treatment that disfavors individuals based on demographics, an increasingly critical task is to assess group fairness through established metrics. Many commercial and open-source tools are now available to support the computation of these fairness metrics. However, this area is largely rule-based: for example, a metric falling within a prespecified range is considered satisfactory. These metrics are statistical estimates, often based on limited sample data, and are therefore subject to sampling variability. For instance, when the sample size is not large, it is often uncertain whether a criterion that is barely met or barely missed should count as a “pass” or a “fail.” This is where statistical science can help. Specifically, statistical hypothesis testing enables us to determine whether the sample data support a particular hypothesis (e.g., that a metric falls within an acceptable range) or whether the observations may have occurred by chance. Drawing upon the bioequivalence literature in medicine and advanced hypothesis testing in statistics, we propose a practical statistical significance testing method to enhance the current rule-based process for model fairness testing, together with its associated power calculation, followed by an illustration with a realistic example.
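The abstract does not spell out the test construction, so the sketch below is an illustration only: it applies a bioequivalence-style two one-sided tests (TOST) procedure, the general approach the authors draw on, to one common group-fairness metric, the difference in selection rates between two demographic groups. The function name, the equivalence margin delta = 0.05, and the example counts are all hypothetical, not taken from the paper.

```python
# A minimal sketch (not the authors' exact method) of a bioequivalence-style
# two one-sided tests (TOST) check for a group fairness metric: the difference
# in selection rates between a protected group and a reference group.
# The equivalence margin `delta` is a hypothetical policy choice.
import math
from scipy.stats import norm

def tost_selection_rate_diff(x_a, n_a, x_b, n_b, delta=0.05, alpha=0.05):
    """Test whether the selection-rate difference p_a - p_b lies within
    (-delta, +delta) using two one-sided z-tests. Returns the observed
    difference, the TOST p-value (the larger of the two one-sided p-values),
    and whether equivalence ("fair within the margin") is declared."""
    p_a, p_b = x_a / n_a, x_b / n_b
    diff = p_a - p_b
    # Standard error of the difference in sample proportions.
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    # H01: diff <= -delta, rejected when z_lower is large.
    z_lower = (diff + delta) / se
    # H02: diff >= +delta, rejected when z_upper is very negative.
    z_upper = (diff - delta) / se
    p_lower = 1 - norm.cdf(z_lower)
    p_upper = norm.cdf(z_upper)
    p_tost = max(p_lower, p_upper)
    return diff, p_tost, p_tost < alpha

# Hypothetical example: 420 of 1,000 approvals in group A vs 400 of 1,000
# in group B. Here the point estimate (0.02) is inside the margin, but the
# TOST p-value exceeds alpha, so equivalence is not demonstrated.
diff, p_value, within_margin = tost_selection_rate_diff(420, 1000, 400, 1000)
print(f"observed difference={diff:.3f}, TOST p-value={p_value:.4f}, "
      f"equivalent within margin: {within_margin}")
```

Under this formulation, fairness within the margin is the alternative hypothesis, so it must be demonstrated by the data rather than assumed when a point estimate merely lands inside the band, which is what distinguishes the equivalence-testing view from a simple rule-based range check. The paper pairs the test with an associated power calculation for choosing sample sizes, which is not reproduced here.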
